dc.description.abstract | This study investigates the application of a Double Deep Q-Network (DDQN), a
Deep Reinforcement Learning (DRL) algorithm, to the Flexible Job Shop Problem
under job arrival and machine breakdown uncertainties. The primary objective is to
minimize total tardiness while guaranteeing real-time computation (within 10
seconds) to mitigate production disruptions. The proposed algorithm
designs an action space comprising 16 diverse decision-making techniques,
including heuristics, metaheuristics, and heuristic-guided metaheuristics. Results
demonstrate that this DDQN-based approach outperforms previous methods with a
win rate of at least 60%, highlighting the efficacy of leveraging DRL and a
diverse set of scheduling strategies to tackle the challenges of dynamic and
stochastic manufacturing environments. This work contributes valuable insights for
researchers and practitioners in production planning and control. | en_US |