CS 4365/CS 5354 Deep Learning Demystified
Fall 2026 Syllabus

Class time: MW 3-4:20 pm

Room: CCSB 1.0202

Instructor: Vladik Kreinovich, email vladik at utep.edu, office CCSB 3.0404,
office phone (915) 747-6951.

Main Objective: to teach the foundations of deep learning in AI.

Contents

Current deep learning has had many spectacular successes, and new successes appear all the time. Naive readers of media reports may be under the false impression that all you need to succeed is to apply the currently available tools to new problems -- and then rush to publish the results. This happens, but very rarely.

In reality, many ideas have been and are being proposed; some of these ideas lead to empirical successes, but many do not. It is therefore desirable to have some theoretical guidance that would help researchers and practitioners focus their ideas on successful directions and better avoid blind alleys.

A natural way to come up with such guidance is to analyze the existing empirical successes. This will be the main focus of this course.

In this course, students will learn possible theoretical explanations for many successful features of deep learning:

Students will:

As part of the class, students will work on projects. Possible projects range:

Since this course is focused on theoretical explanations, it will use some mathematics. Not to worry: we will review the needed mathematics in class.

Sources: On this topic, there is no up-to-date textbook yet, so we will use several papers. For topics for which there are no easy-to-read papers, we will try to post easier-to-read summaries of the not-so-easy-to-read papers on the class website.

Projects: An important part of the class is a project. There are three possible types of projects:

One of the most important aspects of the project is that it should be useful and/or interesting to you.

Assignments: Reading and homework assignments will be announced on the class website. You should expect to spend at least 10 hours/week outside of class on reading and homework.

Homework Assignments: Each topic comes with homework assignments. Homeworks will be due by the day of the next class. To submit a homework, send it to me by email. If it is not electronic, scan it and send me the scanned version.

One week after a homework is assigned, I will post the correct solutions. I will be glad to answer questions if needed.

If you have a legitimate reason to be late, let me know; you can then submit the homework until the solutions are posted. If you were simply late, you can still submit until the solutions are posted, but points will be taken off for submitting late.

Since I will be posting correct solutions to homeworks, it does not make sense to accept very late assignments: once the solution is posted, copying it in your own handwriting does not indicate any understanding. So, please try to submit your assignments on time.

Things happen. If there is an emergency and you cannot submit on time, let me know; you will then not be penalized -- and I will come up with a similar but different assignment that you can submit when you become available again.

Homework must be done individually. While you may discuss the problem in general terms with other people, your answers and your code should be written and tested by you alone. If you need help, consult the instructor.

Exams: There will be two tests and a final exam.

As with homeworks, I will post solutions, send you the grades, and answer questions if something is not clear.

As usual, if you are unable to attend a test, let me know, and I will organize a different version of the test at a time convenient for you.

Grades: Each topic comes with homework assignments (mainly on paper, but some on a computer). Maximum number of points:

A good project can help, but it cannot completely cover possible deficiencies of knowledge as shown on the tests and the homeworks. In general, up to 80 points come from tests and homework assignments. So:

Special Accommodations: If you have a disability and need classroom accommodations, please contact the Center for Accommodations and Support Services (CASS) at 747-5148 or by email to cass@utep.edu, or visit their office located in UTEP Union East, Room 106. For additional information, please visit the CASS website at http://www.sa.utep.edu/cass. CASS's staff are the only individuals who can validate and, if need be, authorize accommodations for students.

Scholastic Dishonesty: Any student who commits an act of scholastic dishonesty is subject to discipline. Scholastic dishonesty includes, but is not limited to, cheating, plagiarism, collusion, and submission for credit of any work or materials that are attributable to another person.

Cheating is:

Plagiarism is:

Collusion is unauthorized collaboration with another person in preparing academic assignments.

Instructors are required to -- and will -- report academic dishonesty and any other violation of the Standards of Conduct to the Dean of Students.

NOTE: When in doubt on any of the above, please contact your instructor to check if you are following authorized procedure.

Daily schedule (tentative and subject to change)

August 24: topics to cover:

August 26: topics to cover:

Additional material for those who want to learn more: [14]; [27]; [35].

August 31: topics to cover:

Additional material for those who want to learn more: [30], Section 2; [28], Sections 3-7.

September 2: topics to cover:

Additional material for those who want to learn more: [30], Section 2; [28], Sections 3-8.

September 7: topics to cover -- why ReLU?

September 9: topics to cover -- why ReLU?

Additional material for those who want to learn more: [7]; [20]; [40].

September 14: topics to cover -- why ReLU?

Additional material for those who want to learn more: [18].

September 16: topics to cover -- why ReLU?

Additional material for those who want to learn more: [24]; [33], Section 4; [50].

September 21: topics to cover -- why ReLU?

Additional material for those who want to learn more: [17]; [21].

September 23: topics to cover -- why ReLU?

Additional material for those who want to learn more: [1]; [3]; [5]; [6]; [8]; [49].
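For students who want to try the ReLU-themed material on a computer, here is a minimal numpy sketch (an illustration, not part of the assigned readings; the function name `relu` is mine):

```python
import numpy as np

# ReLU (Rectified Linear Unit) activation: s(x) = max(0, x).
def relu(x):
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
y = relu(x)  # negative inputs are clipped to 0; positive inputs pass through

# One property often used to explain ReLU's empirical success is scale
# invariance (positive homogeneity): s(c * x) = c * s(x) for every c > 0,
# so rescaling the inputs simply rescales the outputs.
c = 7.3
assert np.allclose(relu(c * x), c * relu(x))
```

Scale invariance is one of several symmetry-based arguments that appear in the readings listed above.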

September 28: topics to cover -- why sigmoid activation function?
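A similar minimal numpy sketch for the sigmoid lecture (an illustration, not part of the assigned readings; the function name `sigmoid` is mine). The sigmoid squashes any real input into the interval (0, 1), which is why it is natural when outputs are interpreted as probabilities or degrees of confidence:

```python
import numpy as np

# Sigmoid activation: s(x) = 1 / (1 + exp(-x)).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 11)
y = sigmoid(x)

assert np.all((0 < y) & (y < 1))        # outputs always stay in (0, 1)
assert np.isclose(sigmoid(0.0), 0.5)    # value 0.5 at the origin
assert np.allclose(sigmoid(-x), 1 - y)  # symmetry: s(-x) = 1 - s(x)
```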

September 30: preview for Test 1

October 5: Test 1

October 7: work on your project day

October 12: work on your project day

October 14: work on your project day

October 19: overview of Test 1 results

October 21: topics to cover -- which pooling operation should we use:

Additional material for those who want to learn more: [50].
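To make the pooling question concrete, here is a minimal sketch of 2x2 max-pooling, the operation discussed in [50] (an illustration, not part of the assigned readings; it assumes even input dimensions, and the function name `max_pool_2x2` is mine). Each 2x2 block of the input is replaced by its largest value:

```python
import numpy as np

# 2x2 max-pooling: split the array into 2x2 blocks and keep each block's max.
def max_pool_2x2(a):
    h, w = a.shape
    # Row-major reshape maps element (i, j) to (i//2, i%2, j//2, j%2),
    # so taking the max over axes 1 and 3 gives the per-block maximum.
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [0, 5, 4, 1]])
assert max_pool_2x2(a).tolist() == [[4, 8], [9, 4]]
```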

October 26: topics to cover:

Additional material for those who want to learn more: [34]; [36]; [51].
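References [34] and [36] in the list above deal with the softmax operation; as a minimal numeric illustration (not part of the assigned readings; the function name `softmax` is mine), softmax turns arbitrary real-valued scores into probabilities:

```python
import numpy as np

# Softmax: p_i = exp(z_i) / sum_j exp(z_j). Subtracting max(z) first is the
# standard trick to avoid overflow; it does not change the result.
def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
assert np.isclose(p.sum(), 1.0)   # outputs form a probability distribution
assert np.all(np.diff(p) < 0)     # larger score -> larger probability

# Shift invariance: adding a constant to all scores leaves p unchanged.
assert np.allclose(softmax(np.array([12.0, 11.0, 10.1])), p)
```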

October 28: topics to cover:

November 2: topics to cover:

November 4: topics to cover:

Additional material for those who want to learn more: [44].

November 7: UTEP/NMSU Workshop on Mathematics, Computer Science, and Computational Science

November 9: topics to cover:

November 11: topics to cover:

Additional material for those who want to learn more: [3]; [5]; [6]; [8].

November 16: topics to cover:

Additional material for those who want to learn more: [4]; [9]; [10]; [12]; [15]; [16]; [19]; [22]; [23]; [25]; [26]; [29]; [32]; [38]; [39]; [41]; [42]; [43]; [45]; [46]; [47]; [48].

November 18: project presentations.

November 23: preview for Test 2

November 25: Test 2

November 30: topics to cover:

December 2: topics to cover:

References:

    
    [1] Kevin Alvarez, Julio C. Urenda, Orsolya Csiszar, Gabor Csiszar,
      Jozsef Dombi, Gyorgy Eigner, and Vladik Kreinovich, "Towards Fast
      and Understandable Computations: Which `And'- and `Or'-Operations
      Can Be Represented by the Fastest (i.e., 1-Layer) Neural Networks?
      Which Activations Functions Allow Such Representations?", Acta
      Polytechnica Hungarica, 2021, Vol. 18, No. 2, pp. 27-45.
    
    [2] Kittawit Autchariyapanikul, Olga Kosheleva, and Vladik Kreinovich,
      "Shapley Value under Interval Uncertainty and Partial Information",
      In: Vladik Kreinovich, Woraphon Yamaka, and Supanika Leurcharusmee
      (eds.), Data Science for Econometrics and Related Topics, Springer,
      Cham, Switzerland, to appear.
    
    [3] Chitta Baral and Vladik Kreinovich, "Why Sigmoid Transformation
      Helps Incorporate Logic Into Deep Learning: A Theoretical
      Explanation", In: Martine Ceberio and Vladik Kreinovich (eds.),
      Uncertainty, Constraints, AI, and Decision Making, Springer, Cham,
      Switzerland, to appear.
    
    [4] Chitta Baral and Vladik Kreinovich, "How to Make a Neural Network
      Learn from a Small Number of Examples -- and Learn Fast: An Idea",
      Proceedings of the 9th World Conference on Soft Computing, Baku,
      Azerbaijan, September 24-27, 2024.
    
    [5] Barnabas Bede, Olga Kosheleva, and Vladik Kreinovich, "Every
      ReLU-Based Neural Network Can Be Described by a System of
      Takagi-Sugeno Fuzzy Rules: A Theorem", In: Hung T. Nguyen,
      Janusz Kacprzyk, and Vladik Kreinovich (eds.), Contributions of
      Fuzzy Techniques to Systems and Control: A Tribute to Michio Sugeno,
      Springer, Cham, Switzerland, to appear.
    
    [6] Barnabas Bede, Olga Kosheleva, and Vladik Kreinovich, "For Which
      Activation Functions, Any Neural Network Is Equivalent to a
      Takagi-Sugeno Fuzzy System with Constant or Linear Outputs?", In:
      Marek Z. Reformat, Sabrina Senatore, Yusuke Nojima, and Vladik
      Kreinovich (eds.), Fuzzy Systems 60 Years Later: Past, Present, and
      Future. Proceedings of the Joint 2025 World Congress on the
      International Fuzzy Systems Association and the 2025 Annual
      Conference of the North American Fuzzy Information Processing
      Society IFSA-NAFIPS 2025, Banff, Canada, August 16-19, 2025,
      Springer, Cham, Switzerland.
    
    [7] Barnabas Bede, Vladik Kreinovich, and Uyen Pham, "Why Rectified
      Linear Unit is Efficient in Machine Learning: One More
      Explanation", In: Vladik Kreinovich, Songsak Sriboonchitta, and
      Woraphon Yamaka (eds.), Machine Learning for Econometrics and
      Related Topics, Springer, Cham, Switzerland, 2024, pp. 161-167.
    
    [8] Barnabas Bede, Vladik Kreinovich, and Peter Toth, "On equivalence
      between Takagi-Sugeno-Kang fuzzy systems with triangular membership
      functions and Neural Networks with ReLU activation in two or more
      dimensions", International Journal of Computers Communications &
      Control, 2025, Vol. 20, No. 4, Paper 7127.
    
    [9] Michael Beer, Julio Urenda, Olga Kosheleva, and Vladik Kreinovich,
      "Why Spiking Neural Networks Are Efficient: A Theorem", In:
      Marie-Jeanne Lesot, Susana Vieira, Marek Z. Reformat,
      Joao Paulo Carvalho, Anna Wilbik, Bernadette Bouchon-Meunier, and
      Ronald R. Yager (eds.), Proceedings of the 18th International
      Conference on Information Processing and Management of Uncertainty
      in Knowledge-Based Systems IPMU'2020, Lisbon, Portugal,
      June 15-19, 2020, pp. 59-69.
    
    [10] Laxman Bokati, Olga Kosheleva, Vladik Kreinovich, and Anibal Sosa,
      "Why Deep Learning Is More Efficient than Support Vector Machines,
      and How It Is Related to Sparsity Techniques in Signal Processing",
      Proceedings of the 2020 4th International Conference on Intelligent
      Systems, Metaheuristics & Swarm Intelligence ISMSI'2020, Thimpu,
      Bhutan, April 18-19, 2020.
    
    [11] Laxman Bokati, Olga Kosheleva, Vladik Kreinovich, and Nguyen Ngoc
      Thach, "Why Shapley Value and Its Variants Are Useful in Machine
      Learning (and in Other Applications)", In: Vladik Kreinovich,
      Songsak Sriboonchitta, and Woraphon Yamaka (eds.), Machine
      Learning for Econometrics and Related Topics, Springer, Cham,
      Switzerland, 2024, pp. 169-174.
    
    [12] Laxman Bokati, Vladik Kreinovich, Joseph Baca, and Natasha Rovelli,
      "Why Rectified Power (RePU) Activation Functions Are Efficient in
      Deep Learning: A Theoretical Explanation", In: Martine Ceberio and
      Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision
      Making, Springer, Cham, Switzerland, 2023, pp. 7-13.
    
    [13] Kelly Cohen, Laxman Bokati, Martine Ceberio, Olga Kosheleva, and
      Vladik Kreinovich, "Why Fuzzy Techniques in Explainable AI? Which
      Fuzzy Techniques in Explainable AI?", In: Julia Rayz, Victor Raskin,
      Scott Dick, and Vladik Kreinovich (eds.), "Explainable AI and Other
      Applications of Fuzzy Techniques, Proceedings of the Annual
      Conference of the North American Fuzzy Information Processing
      Society NAFIPS'2021, West Lafayette, Indiana, June 7-9, 2021",
      Springer, Cham, Switzerland, 2022, pp. 74-78.
    
    [14] Jonatan Contreras, Martine Ceberio, Olga Kosheleva, and Vladik
      Kreinovich, "Why neural networks in the first place: a theoretical
      explanation", Journal of Intelligent and Fuzzy Systems, 2022,
      Vol. 43, No. 6, pp. 6947-6951.
    
    [15] Jonatan Contreras, Martine Ceberio, Olga Kosheleva, and Vladik
      Kreinovich, "Why Gradient Descent -- Not the Best Optimization
      Technique -- Works Best in Neural Networks: Qualitative Explanation",
      Journal of Combinatorics, Information, and System Sciences JCISS,
      2021, Vol. 45, No. 1-4, pp. 1-10.
    
    [16] Jonatan Contreras, Martine Ceberio, Olga Kosheleva, Vladik
      Kreinovich, and Nguyen Hoang Phuong, "Computational Paradox of
      Deep Learning: A Qualitative Explanation", In: Nguyen Hoang Phuong
      and Vladik Kreinovich (eds.), Deep Learning and Other Soft
      Computing Techniques: Biomedical and Related Applications,
      Springer, Cham, Switzerland, 2023, pp. 245-252.
    
    [17] Jonatan Contreras, Martine Ceberio, Olga Kosheleva,
      Vladik Kreinovich, and Nguyen Hoang Phuong, "Why Rectified
      Linear Neurons: Two Convexity-Related Explanations", In:
      Nguyen Hoang Phuong and Vladik Kreinovich (eds.), "Biomedical
      and Other Applications of Soft Computing", Springer, Cham,
      Switzerland, 2023, pp. 41-47.
    
    [18] Jonatan Contreras, Martine Ceberio, and Vladik Kreinovich, "Why
      rectified linear neurons: a possible interval-based explanation",
      In: Nguyen Ngoc Thach, Nguyen Duc Trung, Doan Thanh Ha, and Vladik
      Kreinovich, Artificial Intelligence and Machine Learning for
      Econometrics: Applications and Regulation (and Related Topics),
      Springer, Cham, Switzerland, to appear.
    
    [19] Jonatan Contreras, Martine Ceberio, and Vladik Kreinovich, "Why
      Dilated Convolutional Neural Networks: A Proof of Their Optimality",
      Entropy, 2021, Vol. 23, Paper 767.
    
    [20] Jonatan Contreras, Martine Ceberio, and Vladik Kreinovich, "One
      More Physics-Based Explanation for Rectified Linear Neurons", In:
      Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty,
      Constraints, and Decision Making, Springer, Cham, Switzerland,
      2023, pp. 195-198.
    
    [21] Daniel Cruz, Ricardo Godoy, and Vladik Kreinovich,
      "Why, in Deep Learning, Non-Smooth Activation Function Works
      Better Than Smooth Ones", In: Martine Ceberio and
      Vladik Kreinovich (eds.), "Decision Making under Uncertainty
      and Constraints: A Why-Book", Springer, Cham, Switzerland,
      2023, pp. 111-115.
    
    [22] Lehel Denes-Fazakas, Laszlo Szilagyi, Gyorgy Eigner, Olga Kosheleva,
      Martine Ceberio, and Vladik Kreinovich, "Which Activation Function
      Works Best for Training Artificial Pancreas: Empirical Fact and Its
      Theoretical Explanation", Proceedings of the IEEE Series of
      Symposia on Computational Intelligence SSCI 2023, Mexico City,
      Mexico, December 6-8, 2023.
    
    [23] Lehel Denes-Fazakas, Laszlo Szilagyi, and Vladik Kreinovich, "Why
      Linear and Sigmoid Last Layers Work Better in Classification", In:
      Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty,
      Constraints, AI, and Decision Making, Springer, Cham, Switzerland,
      to appear.
    
    [24] Damian Lorenzo Gallegas Espinosa, Olga Kosheleva, and Vladik
      Kreinovich, "Why Skew-Normal Distributions and How They Are
      Related to ReLU Activation Function in Deep Learning", In: Hien
      Thi Thu Nguyen, Hai Hong Phan, and Van Nam Huynh (eds.), Data
      Science in Finance and Accounting, Springer, Cham, Switzerland, to
      appear.
    
    [25] Sofia Holguin and Vladik Kreinovich, "Why Residual Neural
      Networks", In: Martine Ceberio and Vladik Kreinovich (eds.),
      "Decision Making under Uncertainty and Constraints: A Why-Book",
      Springer, Cham, Switzerland, 2023, pp. 117-120.
    
    [26] Olga Kosheleva and Vladik Kreinovich, "Why Semi-Supervised
      Learning Makes Sense: A Pedagogical Note", In: Martine Ceberio
      and Vladik Kreinovich (eds.), "Decision Making under
      Uncertainty and Constraints: A Why-Book", Springer, Cham,
      Switzerland, 2023, pp. 121-124.
    
    [27] Olga Kosheleva, Vladik Kreinovich, Victor Timchenko, and Yuriy
      Kondratenko, "Two Is Enough, but Three (or More) Is Better: in AI
      and Beyond", In: A. I. Shevchenko and Yuriy P. Kondratenko (eds.),
      Artificial Intelligence: Achievements and Recent Developments,
      River Publishers, Aalborg, Denmark, 2025, pp. 343-356.
    
    [28] Vladik Kreinovich, "Which AI/ML Techniques to Select for
      Applications to Decision Making: Towards Theoretical Explanations
      of Empirical Discoveries", In: Nicholas Daras, Antonios
      Fytopoulos, and Panos Pardalos (eds.), Handbook of Artificial
      Intelligence and Machine Learning in Decision Making, Springer,
      Cham, Switzerland, 2026, to appear.
    
    [29] Vladik Kreinovich and Olga Kosheleva, "Fuzzy Or Neural,
      Type-1 Or Type-2 -- When Each Is Better: First-Approximation
      Analysis", In: Yuriy P. Kondratenko, Vladik Kreinovich, Witold
      Pedrycz, Arkadiy A. Chikrii, Anna M. Gil Lafuente (eds.),
      Artificial Intelligence in Control and Decision-Making Systems,
      Springer, 2023, pp. 67-74.
    
    [30] Vladik Kreinovich and Olga Kosheleva, "Optimization under
      uncertainty explains empirical success of deep learning heuristics",
      In: Panos Pardalos, Varvara Rasskazova, and Michael N. Vrahatis
      (eds.), Black Box Optimization, Machine Learning and No-Free Lunch
      Theorems, Springer, Cham, Switzerland, 2021, pp. 195-220.
    
    [31] Vladik Kreinovich and Olga Kosheleva, "Deep Learning (Partly)
      Demystified", Proceedings of the 2020 4th International Conference
      on Intelligent Systems, Metaheuristics & Swarm Intelligence
      ISMSI'2020, Thimpu, Bhutan, April 18-19, 2020.
    
    [32] Vladik Kreinovich and Chon Van Le, "Predicting (Economic)
      Trends: Why Signature Method in Machine Learning", In:
      Songsak Sriboonchitta, Vladik Kreinovich, Woraphon Yamaka
      (eds.), "Credible Asset Allocation, Optimal Transport Methods,
      and Related Topics", Springer, Cham, Switzerland, 2022,
      pp. 185-193.
    
    [33] Vladik Kreinovich, Olga Kosheleva, and Andres Ortiz-Munoz,
      "Need for Simplicity and Everything Is a Matter of Degree:
      How Zadeh's Philosophy is Related to Kolmogorov Complexity,
      Quantum Physics, and Deep Learning", In: Shahnaz N. Shahbazova,
      Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar
      Batyrshin (eds.), "Recent Developments and the New Directions of
      Research, Foundations, and Applications", Springer, Cham,
      Switzerland, 2023, Vol. 1, pp. 203-215.
    
    [34] Anatole Lokshin, Olga Kosheleva, and Vladik Kreinovich, "Why
      Softmax? Because It Is the Only Consistent Approach to
      Probability-Based Classification", In: Martine Ceberio and Vladik
      Kreinovich (eds.), Uncertainty, Constraints, AI, and Decision
      Making, Springer, Cham, Switzerland, to appear.
    
    [35] Ricardo Lozano, Ivan Montoya Sanchez, and Vladik Kreinovich, "Why
      Deep Neural Networks: Yet Another Explanation", In: Martine
      Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints,
      and Decision Making, Springer, Cham, Switzerland, 2023, pp. 199-202.
    
    [36] Dinh Tuan Nguyen, Vladik Kreinovich, Olga Kosheleva, and Nguyen
      Hoang Phuong, "How to Generalize Softmax to the Case When an
      Object May Not Belong to Any Given Class", In: Nguyen Hoang Phuong,
      Nguyen Thi Huyen Chau, and Vladik Kreinovich (eds.), AI and
      Computational Intelligence, Springer, Cham, Switzerland, to
      appear.
    
    [37] Hung T. Nguyen, Vladik Kreinovich, and Olga Kosheleva, "Why
      Kolmogorov-Arnold Networks (KAN) Work So Well: A Qualitative
      Explanation", In: Nguyen Hoang Phuong, Nguyen Thi Huyen Chau, and
      Vladik Kreinovich (eds.), Explainable Artificial Intelligence and
      Other Soft Computing Techniques: Biomedical and Related
      Applications, Springer, Cham, Switzerland, to appear.
    
    [38] Salvador Robles Herrera, Martine Ceberio, and Vladik Kreinovich,
      "Foundations of Neural Networks Explain the Empirical Success of
      the "Surrogate" Approach to Ordinal Regression -- and Recommend
      What Next", In: Martine Ceberio and Vladik Kreinovich (eds.),
      Uncertainty, Constraints, AI, and Decision Making, Springer, Cham,
      Switzerland, to appear.
    
    [39] Salvador Robles Herrera, Martine Ceberio, and
      Vladik Kreinovich, "When Is Deep Learning Better and When Is
      Shallow Learning Better: Qualitative Analysis",
      International Journal of Parallel, Emergent and Distributed
      Systems, 2022, DOI: 10.1080/17445760.2022.2070748.
    
    [40] Christian Servin, Olga Kosheleva, and Vladik Kreinovich, "From
      Aristotle to Newton, from Sets to Fuzzy Sets, and from Sigmoid to
      ReLU: What Do All These Transitions Have in Common?", Proceedings
      of the NAFIPS International Conference on Fuzzy Systems, Soft
      Computing, and Explainable AI NAFIPS'2024, South Padre Island,
      Texas, May 27-29, 2024.
    
    [41] Miroslav Svitek, Olga Kosheleva, Vladik Kreinovich, "Education in
      the Era of Google, Wikipedia, and Deep Learning: Are We Humans
      Still Needed and If Yes for What?", In: Evgeny Dantsin and Vladik
      Kreinovich (eds.), Uncertainty Quantification and Uncertainty
      Propagation under Traditional and AI-Based Data Processing (and
      Related Topics): Legacy of Grigory Tseytin, Springer, Cham,
      Switzerland, to appear.
    
    [42] Miroslav Svitek, Olga Kosheleva, Vladik Kreinovich, and Nguyen
      Hoang Phuong, "Why embedding-decoder arrangement helps machine
      learning", In: Nguyen Hoang Phuong, Nguyen Thi Huyen Chau, and
      Vladik Kreinovich (eds.), AI and Computational Intelligence,
      Springer, Cham, Switzerland, to appear.
    
    [43] Miroslav Svitek, Olga Kosheleva, Vladik Kreinovich, and Nguyen
      Hoang Phuong, "Bohr's Observation about Deep Truths, Quantum
      Computing, Multiple-Valued Logic, Neural Networks, LLMs, Hegel's
      Negation of Negation (and Maybe Even Laws of Sexual Attraction)",
      In: Nguyen Hoang Phuong, Nguyen Thi Huyen Chau, and Vladik
      Kreinovich (eds.), AI and Computational Intelligence, Springer,
      Cham, Switzerland, to appear.
    
    [44] Miroslav Svitek, Niklas Winnewisser, Michael Beer, Olga Kosheleva,
      and Vladik Kreinovich, "Why Shapley Value and Its Generalizations
      Are Effective in Economics and Finance, Machine Learning, and
      Systems Engineering", In: Hien Thi Thu Nguyen, Hai Hong Phan, and
      Van Nam Huynh (eds.), Data Science in Finance and Accounting,
      Springer, Cham, Switzerland, to appear.
    
    [45] Tho M. Nguyen, Saeid Tizpaz-Niari, and Vladik Kreinovich, "How to
      Make Machine Learning Financial Recommendations More Fair:
      Theoretical Explanation of Empirical Results", In: Nguyen Ngoc
      Thach, Nguyen Duc Trung, Doan Thanh Ha, and Vladik Kreinovich
      (eds.), Partial Identification in Econometrics and Related Topics,
      Springer, Cham, Switzerland, 2024, pp. 147-152.
    
    [46] Saeid Tizpaz-Niari and Vladik Kreinovich, "How to Best Retrain a
      Neural Network If We Added One More Input Variable",
      In: Nguyen Hoang Phuong, Nguyen Thi Huyen Chau, and Vladik
      Kreinovich (eds.), Machine Learning and Other Soft Computing
      Techniques: Biomedical and Related Applications, Springer, Cham,
      Switzerland, 2024, pp. 23-38.
    
    [47] Julio C. Urenda, Orsolya Csiszar, Gabor Csiszar, Jozsef Dombi,
      Olga Kosheleva, Vladik Kreinovich, and Gyorgy Eigner, "Why
      Squashing Functions in Multi-Layer Neural Networks", Proceedings of
      the 2020 IEEE International Conference on Systems, Man, and
      Cybernetics SMC'2020, Toronto, Canada, October 11-14, 2020,
      pp. 296-300.
    
    [48] Julio C. Urenda, Olga Kosheleva, and Vladik Kreinovich, "Why Deep
      Learning Is Under-Determined? Why Usual Numerical Methods for
      Solving Partial Differential Equations Do Not Preserve Energy? The
      Answers May Be Related to Chevalley-Warning Theorem (and Thus to
      Fermat Last Theorem)", In: Martine Ceberio and Vladik Kreinovich
      (eds.), Uncertainty, Constraints, AI, and Decision Making,
      Springer, Cham, Switzerland, to appear.
    
    [49] Julio C. Urenda, Olga Kosheleva, and Vladik Kreinovich, "Fuzzy
      Techniques Explain the Effectiveness of ReLU Activation Function
      in Deep Learning", In: Patricia Melin and Oscar Castillo (eds.),
      Proceedings of the International Seminar on Computational
      Intelligence ISCI'2023, Tijuana, Mexico, August 30-31, 2023.
    
    [50] Julio C. Urenda and Vladik Kreinovich, "Why Rectified Linear
      Activation Functions? Why Max-Pooling? A Possible Explanation",
      In: Oscar Castillo and Patricia Melin (eds.), New Perspectives
      on Hybrid Intelligent System Design based on Fuzzy Logic,
      Neural Networks and Metaheuristics, Springer, 2023, pp. 459-463.
    
    [51] Min Xian, Olga Kosheleva, Martine Ceberio, and Vladik Kreinovich,
      "Why drop-max is effective in making convolutional neural networks
      (CNNs) more robust", Proceedings of the 2026 Annual Conference of
      North American Fuzzy Information Processing Society NAFIPS 2026,
      El Paso, Texas, March 14-16, 2026, to appear.
    
    [52] Sobita Alam, Arman Hossain, Samin Islam, Arin Rahman, and Vladik
      Kreinovich, "Attention in machine learning: how to explain the
      empirical formula", Abstracts of the NMSU/UTEP Workshop on
      Mathematics, Computer Science, and Computational Science, Las
      Cruces, New Mexico, April 12, 2025.