Show simple item record

dc.contributor.advisor: Nguyen, Van Hop
dc.contributor.author: HUYNH, HOANG MY DUNG
dc.date.accessioned: 2025-02-12T06:27:34Z
dc.date.available: 2025-02-12T06:27:34Z
dc.date.issued: 2024
dc.identifier.uri: http://keep.hcmiu.edu.vn:8080/handle/123456789/6446
dc.description.abstract: This study investigates the application of a Double Deep Q-Network (DDQN), a Deep Reinforcement Learning (DRL) algorithm, to the Flexible Job Shop Problem under job-arrival and machine-breakdown uncertainties. The primary objective is to minimize total tardiness while guaranteeing real-time computation (within 10 seconds) to mitigate production disruptions. The proposed algorithm designs an action space comprising 16 diverse decision-making techniques, including heuristics, metaheuristics, and heuristic-guided metaheuristics. Results demonstrate that this DDQN-based approach outperforms previous methods, achieving a win rate of at least 60%, and highlight the efficacy of combining DRL with a diverse set of scheduling strategies to tackle dynamic and stochastic manufacturing environments. This work contributes valuable insights for researchers and practitioners in production planning and control.
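The DDQN named in the abstract decouples action selection from action evaluation to reduce the value overestimation of plain DQN: the online network picks the greedy next action, and a separate target network scores it. The following is a minimal sketch of that target computation only, not the thesis's implementation; the function name, the list-of-floats Q representation, and the default discount factor are all illustrative assumptions.

```python
def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN bootstrap target (hypothetical helper, for illustration).

    next_q_online / next_q_target: Q-values for the next state, one entry per
    action (e.g. one per scheduling technique in a 16-action space), from the
    online and target networks respectively.
    """
    if done:
        # Terminal transition: no bootstrapping beyond the final reward.
        return reward
    # Online network selects the greedy action...
    best_action = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    # ...but the target network evaluates it, which curbs overestimation.
    return reward + gamma * next_q_target[best_action]

# Example: online net prefers action 1, target net values that action at 0.3.
target = ddqn_target(1.0, next_q_online=[0.2, 0.8], next_q_target=[0.5, 0.3])
print(target)  # 1.0 + 0.99 * 0.3 = 1.297
```

In training, this target would serve as the regression label for the online network's Q-value of the action actually taken, with the target network's weights periodically synced from the online network.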
dc.subject: DQN
dc.subject: FJSP
dc.title: Real-Time Flexible Job Shop Problem Under Uncertainties Using Deep Reinforcement Learning
dc.type: Thesis


