MATHEMATICAL MODELING OF OPTIMIZATION METHODS IN ARTIFICIAL INTELLIGENCE

Authors

  • Axmedova Feruza G‘ayrat qizi
  • Yo‘ldashev Javohir Rasulbek o‘g‘li
  • Xoljigitov Dilmurod Xolmurod o‘g‘li

Keywords:

Artificial intelligence, neural networks, optimization algorithms, gradient descent, Newton's method, genetic algorithms, mathematical modeling, convergence, global minimum, adaptive learning rate, loss function, computational efficiency.

Abstract

This article mathematically investigates the optimization methods used in training artificial intelligence (AI) models. Popular methods such as gradient descent and its variants, Newton and quasi-Newton methods, and genetic algorithms are compared. Particular attention is paid to convergence toward the global minimum, escaping local minima, convergence speed, and efficient use of computational resources. The article presents the advantages and disadvantages of these optimization methods and proposes an optimal solution via a "hybrid" approach.
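The article itself provides no code; as an illustration only, the following minimal Python sketch (all names and the quadratic loss are assumptions chosen here, not taken from the article) contrasts the convergence behavior of plain gradient descent with Newton's method on a simple convex loss:

```python
# Illustrative sketch: compare first-order gradient descent with
# second-order Newton's method on the convex loss f(w) = (w - 3)^2,
# whose unique (global) minimizer is w* = 3.

def grad(w):
    # Gradient of f: f'(w) = 2(w - 3)
    return 2.0 * (w - 3.0)

def hess(w):
    # Second derivative of f: f''(w) = 2 (constant for a quadratic)
    return 2.0

def gradient_descent(w0, lr=0.1, steps=50):
    # First-order update w <- w - lr * f'(w): cheap per step,
    # but only converges linearly toward w*.
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def newton(w0, steps=1):
    # Second-order update w <- w - f'(w) / f''(w): exact in one step
    # on a quadratic, but requires the (generally costly) Hessian.
    w = w0
    for _ in range(steps):
        w -= grad(w) / hess(w)
    return w

print(gradient_descent(0.0))  # close to w* = 3 after 50 steps
print(newton(0.0))            # exactly 3.0 after a single step
```

This toy case shows the trade-off the abstract describes: Newton-type methods converge in far fewer iterations, while gradient descent trades convergence speed for a much lower per-step computational cost.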

References


1. Kingma, D. P., & Ba, J. (2015). Adam: A Method for Stochastic Optimization. Proceedings of the 3rd International Conference on Learning Representations (ICLR). https://arxiv.org/abs/1412.6980

2. Ruder, S. (2017). An Overview of Gradient Descent Optimization Algorithms. arXiv preprint. https://arxiv.org/abs/1609.04747

3. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. (A foundational source on the theoretical basis of AI optimization algorithms)

4. Nocedal, J., & Wright, S. J. (2006). Numerical Optimization (2nd ed.). Springer Science & Business Media. (A fundamental source on mathematical modeling and Newton methods)

5. Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley. (The classic primary source on genetic algorithm theory)

6. Mirjalili, S. (2019). Evolutionary Algorithms and Neural Networks: Theory and Applications. Springer. (A modern study of evolutionary computation and its integration with AI)

7. Bottou, L. (2012). Stochastic Gradient Descent Tricks. In Neural Networks: Tricks of the Trade (pp. 421–436). Springer. (A practical treatment of SGD optimization techniques)

8. Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., & Talwalkar, A. (2017). Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization. Journal of Machine Learning Research, 18(1), 6765–6816. (A method for managing hyperparameters in optimization)

9. Schmidhuber, J. (2015). Deep Learning in Neural Networks: An Overview. Neural Networks, 61, 85–117. (A general analysis of optimization processes in neural networks)

10. Das, S., & Suganthan, P. N. (2011). Differential Evolution: A Survey of the State-of-the-Art. IEEE Transactions on Evolutionary Computation, 15(1), 4–31. (An in-depth survey of evolutionary algorithms and their role in optimization)

Published

2025-10-12

How to Cite

Axmedova Feruza G‘ayrat qizi, Yo‘ldashev Javohir Rasulbek o‘g‘li, & Xoljigitov Dilmurod Xolmurod o‘g‘li. (2025). SUN’IY INTELLEKTDA OPTIMALLASHTIRISH USULLARINING MATEMATIK MODELLASHTIRILISHI. Ta’lim Innovatsiyasi Va Integratsiyasi, 55(1), 120–123. https://journalss.org/index.php/tal/article/view/2343