MSc Artificial Intelligence at Texas Business School
The Master of Science in Artificial Intelligence at Texas Business School is a 12–18 month hybrid graduate programme that combines advanced AI theory, hands-on experience, and ethical leadership training to prepare students for interdisciplinary roles in AI-driven industries. The curriculum is aligned with leading industry certifications and places a strong emphasis on practical application and industry relevance.
Programme Overview
The Master of Science in Artificial Intelligence at Texas Business School is a 12–18 month graduate programme blending rigorous academic training with real-world application. The curriculum equips students with a deep understanding of AI principles—spanning machine learning, deep learning, NLP, computer vision, reinforcement learning, and AI systems engineering—while emphasising ethical leadership, strategic alignment with business goals, and socially responsible deployment. Learning is delivered through a mix of online lectures, remote labs, hands-on projects, and optional in-person workshops. Students gain experience with tools like Python, TensorFlow, PyTorch, and cloud ML platforms, and may complete the programme in as little as 12 months or up to 18 months with internship or extended project work. A required AI Lab Practicum and a Capstone or Thesis ensure experiential learning and industry relevance, while dedicated courses on AI ethics prepare graduates for responsible innovation and interdisciplinary leadership.

To earn the degree, students complete a minimum of 36 credit hours, including nine core courses (27 credits), two electives (6 credits), and either a 3-credit Capstone or 6-credit Research Thesis. The structure also includes a mandatory, non-credit AI Lab Practicum for practical experience. Electives allow focus in areas like Intelligent Robotics or AI in Business, and the programme’s alignment with top certifications (e.g. TensorFlow, AWS ML, Azure AI) gives students a competitive advantage. Coursework and lab components are mapped to certification standards, helping students build a project portfolio demonstrating skills in predictive modelling, computer vision, and NLP. The hybrid format accommodates working professionals and international learners, while the programme’s comprehensive scope ensures that graduates are not only technically proficient but also ready to lead in a fast-evolving AI-driven world.

Core Courses
All students must complete the following core courses, which provide comprehensive coverage of the fundamental knowledge areas in artificial intelligence. Each course is 3 credits. Core courses include both technical depth and applied assignments to ensure mastery of both theory and practice.
Core Course Title | Credits | Prerequisites
Mathematics for Artificial Intelligence | 3 | Undergraduate calculus & linear algebra (recommended)
Foundations of AI and Intelligent Systems | 3 | Basic programming; corequisite: Mathematics for AI
Machine Learning | 3 | Mathematics for AI (or equiv. background)
Deep Learning | 3 | Machine Learning (recommended)
Natural Language Processing | 3 | Machine Learning (recommended)
Computer Vision | 3 | Machine Learning or Foundations of AI (recommended)
AI Systems Design and Deployment (MLOps) | 3 | Machine Learning (recommended)
Ethics and Policy in Artificial Intelligence | 3 | Foundations of AI (recommended)
AI Strategy and Business Applications | 3 | None (background in business helpful)
1. Mathematics for Artificial Intelligence
This course provides the mathematical foundations necessary to understand and develop AI and machine learning models. It covers four key areas of mathematics: linear algebra (vectors, matrices, eigenvalues) for representing and manipulating data; probability and statistics (random variables, distributions, inferential statistics) which underpin many learning algorithms; optimization theory (convex optimization, gradient-based methods) for training models; and numerical methods for implementing algorithms efficiently on computers. Students will learn how concepts like matrix decompositions or gradient descent are applied in AI – for example, how eigenvalues relate to principal component analysis, or how Lagrange multipliers inform machine learning loss minimization. The course balances theory with practice: students solve problem sets by hand to derive results (e.g. proving convergence of an optimization algorithm) and use Python libraries (NumPy/SciPy) to implement computations. By the end, students will have a “toolkit” of mathematical techniques for subsequent AI courses, enabling them to understand algorithmic details and troubleshoot model behavior.
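By way of illustration (not part of the official syllabus), the short NumPy sketch below shows two of the connections the course draws: principal component analysis obtained from an eigendecomposition of the covariance matrix, and gradient descent applied to a least-squares objective. The data, dimensions, and step size are arbitrary choices for the example.

```python
import numpy as np

# --- PCA via eigendecomposition (illustrative random data) ---
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [0.5, 1.0, 0.0],
                                          [0.0, 0.2, 0.3]])
Xc = X - X.mean(axis=0)                      # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)              # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # eigendecomposition of a symmetric matrix
components = eigvecs[:, np.argsort(eigvals)[::-1]]  # principal axes, largest variance first
Z = Xc @ components[:, :2]                   # project onto the top two components

# --- Gradient descent on a least-squares objective ---
A = rng.normal(size=(50, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
w = np.zeros(3)
lr = 0.01
for _ in range(500):
    grad = 2 * A.T @ (A @ w - b) / len(b)    # gradient of the mean squared error
    w -= lr * grad                           # gradient-descent update
print("estimated weights:", w)
```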
Learning Outcomes
Upon completion, students will be able to:
I. Apply linear algebra concepts (matrix operations, vector spaces, projections) to formalize data and transformations in AI algorithms.
II. Explain and compute probabilistic concepts (conditional probability, Bayes’ rule, expectations) and use them to model uncertainty in AI systems.
III. Utilize optimization methods (gradient descent, linear programming, etc.) to formulate and solve model training problems, including setting up objective functions and constraints.
IV. Analyse the numerical stability and complexity of AI algorithms, leveraging knowledge of numerical methods (floating-point errors, convergence rates) to choose appropriate solution techniques.
V. Read and understand the mathematical portions of AI research papers or documentation, providing a basis for continual learning of new AI techniques.
2. Foundations of AI and Intelligent Systems
This course offers a broad introduction to the field of Artificial Intelligence, covering its history, core concepts, and classic techniques beyond machine learning. Students learn about the definition of AI and primary subfields within AI, gaining context for how topics like ML, reasoning, and robotics interrelate. Key focus areas include state-space search algorithms (uninformed search like BFS/DFS, heuristic search like A*, game search/minimax) for problem solving, knowledge representation and reasoning (logic, inference, expert systems), and planning under uncertainty. Additional topics include constraint satisfaction problems, basics of agent architectures, and an overview of evolutionary algorithms. The course often uses examples from game AI (e.g. solving puzzles, playing chess) and decision support systems to illustrate concepts. Programming assignments involve implementing classic AI algorithms such as A* pathfinding or rule-based inference. This course lays the groundwork for understanding AI approaches that are not solely data-driven, complementing the statistical learning techniques covered in other courses.
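As a flavour of the programming assignments described above, the following sketch implements A* pathfinding on a small grid using a Manhattan-distance heuristic. The grid, start, and goal cells are invented for the example.

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 2-D grid of 0 (free) / 1 (blocked) cells.

    Uses Manhattan distance as an admissible heuristic and returns the
    list of cells on a shortest path, or None if no path exists.
    """
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, None)]        # (f = g + h, g, cell, parent)
    came_from, best_g = {}, {start: 0}
    while frontier:
        f, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue                               # already expanded with a better cost
        came_from[cell] = parent
        if cell == goal:                           # reconstruct path by walking parents
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
```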
Learning Outcomes
Upon completion, students will be able to:
I. Describe the scope of AI as a discipline, identifying its major subfields (search, knowledge representation, machine learning, robotics, etc.) and how they contribute to building intelligent agents.
II. Implement and analyze search algorithms for problem solving, including formulating problems as state spaces and applying appropriate search strategies (such as heuristic search for optimal solutions).
III. Utilize formal logic representations (propositional and first-order logic) to encode knowledge and apply inference rules/algorithms to draw conclusions (e.g. using resolution or forward/backward chaining).
IV. Explain techniques for planning and decision-making under uncertainty, such as Markov decision processes (introductory concept, which is further built on in electives like Reinforcement Learning).
V. Articulate the differences between symbolic AI and statistical AI approaches, discussing the types of problems each is suited for.
VI. Develop simple intelligent systems (e.g. a rule-based expert system or an AI that plays a game) and evaluate their performance, setting the stage for more advanced AI system development in later courses.
3. Machine Learning
This core course delves into the fundamental algorithms and techniques of machine learning – enabling computers to learn from data. It covers both supervised learning (learning from labeled examples) and unsupervised learning (finding patterns in unlabeled data). Key topics include: regression and classification methods (linear regression, logistic regression, k-Nearest Neighbors, decision trees, ensemble methods like random forests and boosting), fundamental statistical learning theory concepts (bias-variance tradeoff, model evaluation, cross-validation), support vector machines, and clustering algorithms (k-means, hierarchical clustering, density-based methods). Students also learn about model selection and dimensionality reduction techniques such as principal component analysis. The course emphasizes practical implementation – students will use Python libraries (scikit-learn) to experiment with algorithms on real datasets (e.g. building a classifier for a business dataset or a predictor for housing prices) and will complete programming assignments and written analyses of results. Mathematical foundations from the “Mathematics for AI” course are applied here, for example using calculus for gradient-based optimization in training models and linear algebra for understanding algorithms like PCA. By providing a strong ML foundation, this course prepares students for advanced AI topics like deep learning and acts as a prerequisite for several electives.
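The snippet below is a minimal example of the scikit-learn workflow the course practises: splitting data, building a preprocessing-plus-model pipeline, estimating performance with cross-validation, and reporting a held-out test score. It uses a built-in toy dataset as a stand-in for the business datasets mentioned above.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Load a small built-in classification dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Pipeline: standardise features, then fit a regularised logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validation on the training split to estimate generalisation.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
print("cross-validated accuracy:", cv_scores.mean().round(3))

# Fit on the full training set and evaluate once on the held-out test set.
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test).round(3))
```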
Learning Outcomes
Upon completion, students will be able to:
I. Understand and differentiate common machine learning algorithms for regression, classification, and clustering, including knowing the assumptions, strengths, and limitations of each approach.
II. Preprocess and analyze datasets, handling tasks like feature engineering, normalization, and splitting data for training/validation, in order to feed data appropriately into ML models.
III. Implement and train various ML models using Python (from scratch for simple algorithms and using libraries for more complex ones) and tune their hyperparameters to improve performance.
IV. Evaluate model performance using appropriate metrics (accuracy, precision/recall, RMSE, etc.) and techniques like cross-validation, and interpret the results to make informed decisions about model selection.
V. Compare model performance and complexity to choose a suitable algorithm for a given problem, justifying the choice based on data characteristics and use-case requirements.
VI. Understand basic theoretical concepts such as overfitting vs. underfitting and how techniques like regularization or ensemble methods help improve generalization. This includes being able to articulate the bias-variance trade-off in the context of model complexity.
4. Deep Learning
This course builds upon Machine Learning to focus on deep neural networks and modern representation-learning techniques. Students explore the concepts and architectures that have driven recent breakthroughs in AI. The course begins with an introduction to neural network fundamentals: perceptrons, activation functions, multilayer feed-forward networks, and backpropagation for training. It then progresses to advanced architectures such as Convolutional Neural Networks (CNNs) for image data, Recurrent Neural Networks (RNNs) and sequence models (including LSTM and GRU networks) for sequential data, and Transformer architectures for sequence transduction and attention mechanisms (the basis of modern NLP). Topics like autoencoders, generative models (e.g. GANs and variational autoencoders), and deep reinforcement learning basics may be introduced as well. Practical skills are heavily emphasized: students will implement neural networks using frameworks like TensorFlow or PyTorch, and train models on GPUs for tasks such as image classification (e.g. recognizing objects in images) and text classification or generation. The course covers training techniques (optimization algorithms beyond basic SGD, like Adam; initialization; batch normalization), as well as issues of tuning deep models and preventing overfitting (dropout, regularization). Recent research papers may be discussed to highlight state-of-the-art developments (for example, breakthroughs in generative AI like GPT for language or diffusion models for image generation). By the end, students will be able to design, train, and deploy deep neural network models for a variety of data types.
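To give a concrete sense of the framework-level work involved, here is a minimal PyTorch sketch of a small convolutional network and a single training step on random tensors; a real assignment would iterate over a DataLoader of actual images.

```python
import torch
import torch.nn as nn

# A small CNN for 28x28 grayscale images (e.g. digit classification).
class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)                  # (N, 32, 7, 7) feature maps
        return self.classifier(x.flatten(1))  # logits of shape (N, num_classes)

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch, purely to show the mechanics.
images = torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 10, (8,))
logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()                               # backpropagation computes gradients
optimizer.step()                              # Adam updates the weights
print("loss:", loss.item())
```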
Learning Outcomes
Upon completion, students will be able to:
I. Explain the architecture of neural networks, including how neurons are organized into layers and how backpropagation updates model weights through gradient descent.
II. Construct and train deep learning models for different tasks: CNNs for vision tasks (e.g. image recognition), RNN/Transformer models for sequence data (e.g. language modeling), and others, using industry-standard libraries.
III. Configure training processes effectively – selecting appropriate loss functions and optimization algorithms, tuning hyperparameters like learning rate and network depth, and using regularization techniques to improve generalization.
IV. Analyze training outcomes via learning curves and performance metrics, and troubleshoot common issues in deep learning (such as vanishing/exploding gradients, overfitting, or data imbalance).
V. Understand and implement basic generative modeling techniques (e.g. designing an autoencoder or a simple GAN) and discuss their applications in creating new data (such as image synthesis or text generation).
VI. Stay abreast of current research trends in deep learning, demonstrating the ability to read a contemporary research paper (for instance on a new model or optimization method) and summarize its contributions. (This “learning to learn” skill is cultivated to help graduates keep up with AI advances).
5. Natural Language Processing
This course provides a comprehensive introduction to Natural Language Processing (NLP) – the branch of AI concerned with understanding and generating human language. It covers both foundational linguistic concepts and the modern, machine learning-based approaches to NLP. Major topics include: text preprocessing (tokenization, stemming, embeddings), language models (from n-grams to neural language models), syntactic parsing (POS tagging, context-free grammars, dependency parsing), and semantic processing (word sense disambiguation, ontology use). The course places significant emphasis on deep learning for NLP, including word embeddings (word2vec, GloVe) and advanced architectures like Recurrent Neural Networks and Transformer-based models (such as BERT and GPT) which enable tasks like named entity recognition, machine translation, question answering, sentiment analysis, and chatbot dialogue systems. Students will engage in lab-style assignments to build NLP pipelines – for example, constructing a sentiment analyzer for social media text or a simple machine translation system – using Python NLP libraries (NLTK, spaCy) and deep learning frameworks. The interplay between NLP and cognitive science is touched upon, and ethical considerations (such as bias in language models or responsible use of generative text like ChatGPT) are discussed. By completing this course, students gain the ability to develop systems that process and derive meaning from natural language data.
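The sketch below shows one minimal form such an NLP pipeline might take, using scikit-learn's TF-IDF vectoriser and a logistic regression classifier rather than a full spaCy or deep learning setup; the example sentences and labels are invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus; an assignment would use real labelled social-media text.
texts = ["loved the new phone, battery lasts forever",
         "terrible service and the app keeps crashing",
         "great value, would happily recommend",
         "worst purchase I have made this year",
         "absolutely fantastic experience",
         "disappointed, arrived broken and late"]
labels = [1, 0, 1, 0, 1, 0]                   # 1 = positive, 0 = negative

# TF-IDF handles tokenisation and feature weighting; logistic regression classifies.
sentiment = make_pipeline(TfidfVectorizer(), LogisticRegression())
sentiment.fit(texts, labels)

print(sentiment.predict(["the battery is fantastic"]))        # likely positive
print(sentiment.predict(["the app is terrible and broken"]))  # likely negative
```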
Learning Outcomes
Upon completion, students will be able to:
I. Apply linguistic preprocessing techniques to raw text (tokenizing sentences/words, normalizing text, extracting features) as a necessary step before feeding data into NLP models.
II. Understand and implement core NLP algorithms for several tasks: e.g. a part-of-speech tagger (using sequence labeling methods), a basic parser for sentence structure, and a named entity recognition system.
III. Utilize machine learning and deep learning methods in NLP, including training and using recurrent or transformer models for tasks like language modeling, text classification, and translation. For instance, students might train a simple transformer-based classifier for news topics or fine-tune a pre-trained BERT model for an NLP task.
IV. Represent text data through embeddings (learning vector representations for words or sentences) and explain how these representations capture semantic meaning (e.g. similar words having vectors close together).
V. Evaluate NLP models with appropriate metrics (BLEU scores for translation, F1 for named entity recognition, etc.) and error analysis, and iterate to improve their performance.
VI. Appreciate the ethical and societal implications of NLP technologies – including issues of bias in language models or misinformation – and articulate strategies to mitigate harm (this ties in with the Ethics course).
VII. (Stretch goal) Experiment with a large language model (LLM) or generative model and understand at a high level how such models are enabling the current “Generative AI” trend, as well as limitations like hallucination or large compute requirements.
6. Computer Vision
This course covers the principles and techniques of Computer Vision, enabling computers to interpret and understand visual information from images or videos. It starts with fundamentals of image processing and classic vision algorithms: image formation and representation, color spaces, filtering and convolution, edge detection, feature extraction (SIFT, SURF), and segmentation methods. Students learn about geometric computer vision concepts such as camera models, stereoscopic vision, and 3D reconstruction basics. Building on these fundamentals, the course then focuses on modern deep learning approaches to vision, including Convolutional Neural Networks (CNNs) for image classification and object detection (architectures like ResNet, YOLO, or Mask R-CNN are discussed). Topics like image classification, object recognition, face detection, motion and tracking in video, and perhaps an introduction to image generation (GANs) are included. Throughout the course, students gain practical experience via programming projects: for example, implementing an edge detector, building an image classifier for a set of images, or using OpenCV and PyTorch to create a real-time object detection system. The course also surveys applications of computer vision in industry (e.g. in autonomous vehicles, medical image analysis, or retail analytics). By combining foundational vision knowledge with hands-on neural network training for images, students acquire the skills to build and deploy vision-based AI solutions.
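As an illustration of the classical image-processing side of the course, the following OpenCV sketch applies Gaussian smoothing and Canny edge detection to a synthetic image and reports bounding boxes of the detected contours; in practice the input would come from cv2.imread or a webcam stream.

```python
import cv2
import numpy as np

# Synthetic test image (a white rectangle and circle on black) so the script
# runs without an external image file; replace with cv2.imread(...) in practice.
img = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(img, (40, 40), (140, 180), 255, thickness=-1)
cv2.circle(img, (230, 110), 50, 255, thickness=-1)

blurred = cv2.GaussianBlur(img, (5, 5), 1.0)   # smoothing suppresses noise
edges = cv2.Canny(blurred, 50, 150)            # Canny edge detection

# Find external contours in the edge map and report their bounding boxes.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(f"object at x={x}, y={y}, size={w}x{h}")
```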
Learning Outcomes
Upon completion, students will be able to:
I. Understand the image formation process and basic image properties (pixels, color channels), and perform fundamental image processing operations (smoothing, edge detection, segmentation) to extract useful features.
II. Implement and apply feature detection and description algorithms (e.g. edge detectors like Canny, corner detectors, SIFT features) to identify salient points or regions in images for further analysis.
III. Design and train deep CNN models for vision tasks – such as classifying images into categories or detecting objects within images – using frameworks like TensorFlow/PyTorch. This includes understanding how convolution, pooling, and deep layers enable hierarchical feature learning in vision.
IV. Evaluate the performance of vision systems using metrics like accuracy, Intersection-over-Union (for object detection), etc., and improve models by techniques such as data augmentation or transfer learning from pre-trained models.
V. Solve a real-world computer vision problem end-to-end (for instance, build a prototype that takes webcam input and identifies a set of objects), demonstrating integration of vision algorithms into an application.
VI. Discuss advanced or emerging topics in vision (e.g. the role of vision in autonomous systems or video analytics), showing awareness of how classical methods and deep learning combine in state-of-the-art solutions.
VII. Recognize the challenges and limitations of computer vision systems (lighting variation, viewpoint variation, need for large training data, etc.) and propose methods to address these in practical deployments.
7. AI Systems Design and Deployment (MLOps)
This course focuses on the engineering aspects of building and deploying AI systems in real-world settings. Students learn how to take AI models from the research/prototype stage to production-ready systems – a process often referred to as MLOps (Machine Learning Operations). Key topics include: data pipeline design (ingesting, cleaning, and processing data at scale, e.g. using distributed frameworks or databases), model deployment architectures (embedding AI models in applications, building REST APIs or microservices for model serving), and scalable computing for AI (using cloud services, containers like Docker, and orchestration tools such as Kubernetes). The course also covers model monitoring and maintenance: setting up systems to monitor performance drift, automate retraining, and ensure reliability and security of AI services. Concepts of AI system architecture are discussed, for instance choosing between batch processing vs. real-time streaming for data, or edge vs. cloud deployment for inference. Students get hands-on experience with tools like MLflow for managing experiments, and CI/CD pipelines adapted for ML (continuous integration/continuous deployment of data and model changes). Cloud AI platforms (such as AWS SageMaker, Google AI Platform, or Azure ML) are introduced, and students might practice deploying a model on a cloud instance with proper environment and dependencies. The course often includes a mini-project where students, in teams, design an end-to-end AI application – for example, developing a web service that hosts a trained model (from a prior course) and exposes it via an API, complete with logging and basic user interface. Through this course, students gain the practical know-how to bridge the gap between data science and software engineering, ensuring their AI solutions are robust, scalable, and maintainable in production.
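A minimal sketch of the model-serving pattern described above is shown below, using FastAPI to expose a prediction endpoint. The file name serve.py, the model artefact model.joblib, and the endpoint paths are hypothetical choices for the example.

```python
# serve.py - minimal model-serving sketch; file and model names are hypothetical.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-model-service")
model = joblib.load("model.joblib")           # a previously trained scikit-learn model

class Features(BaseModel):
    values: list[float]                       # raw feature vector sent by the client

@app.get("/health")
def health():
    return {"status": "ok"}                   # used by load balancers / monitoring

@app.post("/predict")
def predict(features: Features):
    x = np.array(features.values).reshape(1, -1)
    prediction = model.predict(x)
    return {"prediction": prediction.tolist()}

# Run locally with:  uvicorn serve:app --port 8000
# A Dockerfile would copy serve.py and model.joblib, install dependencies,
# and launch the same uvicorn command as its entrypoint.
```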
Learning Outcomes
Upon completion, students will be able to:
I. Design an AI/ML pipeline for a given problem, selecting appropriate tools for each stage: data collection, storage (SQL/NoSQL, data lakes), processing (ETL jobs, feature stores), model training, validation, and deployment.
II. Containerize and deploy an AI model as a web service or API, using technologies like Docker (for packaging code and model) and Flask/FastAPI or cloud functions (to serve predictions). They will understand how to expose model predictions to other software components or end-users securely.
III. Utilize cloud computing resources for AI: e.g. launching and configuring a cloud VM or using a managed service to train a model on GPU/TPU, then deploying the model with auto-scaling. They will be familiar with one major cloud provider’s AI ecosystem (AWS, Azure, or GCP) in the context of deploying machine learning.
IV. Implement monitoring and logging for AI systems in production – capturing metrics like latency, throughput, and prediction accuracy over time – and set up alerts or triggers for when models degrade or data drifts.
V. Explain and apply DevOps principles to ML (MLOps), including version control for datasets and models, continuous integration (automated testing of model code), and continuous deployment (automated rollout of model updates).
VI. Address important practical considerations such as scalability (understanding how to handle increasing loads or larger data via distributed computing), privacy and security (ensuring data is handled securely, models are protected from abuse, and compliance with regulations like GDPR when deploying AI), and cost optimization (balancing computational cost with performance needs in cloud deployments).
VII. Work effectively in a multidisciplinary team (data scientists, software engineers, domain experts) to deliver an AI system, communicating requirements and results – this mimics the real-world scenario of deploying AI in a business or organizational context.
8. Ethics and Policy in Artificial Intelligence
This core course examines the ethical, legal, and policy implications of AI technologies. As AI systems become increasingly integrated into society, professionals must be equipped to address questions of fairness, accountability, transparency, and social impact.
The course is structured in three modules: (1) Ethics of AI – covering fundamental ethical frameworks (deontology, utilitarianism, etc.) as applied to AI, and key issues such as algorithmic bias/discrimination, privacy and surveillance, autonomy (e.g. lethal autonomous weapons) and the implications of AI on labor and the economy; (2) AI Policy and Governance – exploring how governments and institutions are responding to AI (national AI strategies, emerging regulations like the EU AI Act, guidelines for AI ethics), including topics like data protection law, intellectual property issues for AI-generated content, and governance of AI in specific sectors (healthcare, finance, criminal justice); (3) Responsible AI Practice – focusing on practical approaches to developing and deploying AI ethically, such as fairness-aware machine learning, explainability techniques, model auditing, and the role of AI ethics committees or review boards. Case studies (e.g. facial recognition in law enforcement, use of AI in hiring decisions, autonomous vehicle accidents, large language models like ChatGPT generating misinformation) are analyzed to illustrate the real-world dilemmas and decision-making processes. Students will engage in discussions, write reflective essays or policy memos, and possibly collaborate on creating an “AI ethics impact assessment” for a hypothetical AI product. The course may feature guest lectures from experts in AI law or industry ethics panels. By the end, students gain a nuanced understanding of how to balance innovation with responsibility and are prepared to make informed judgments about AI use in their careers.
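To make the idea of a bias audit concrete, the short sketch below computes two commonly used group-fairness metrics (demographic parity difference and equal opportunity difference) on entirely made-up predictions; the numbers illustrate the calculation only, not any real system.

```python
import numpy as np

# Made-up predictions, outcomes, and a binary protected attribute (group A vs B).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model's positive/negative decision
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])   # actual outcome
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def selection_rate(pred, mask):
    return pred[mask].mean()

def true_positive_rate(pred, true, mask):
    positives = mask & (true == 1)
    return pred[positives].mean()

# Demographic parity difference: gap in positive-decision rates between groups.
dp_gap = selection_rate(y_pred, group == "A") - selection_rate(y_pred, group == "B")

# Equal opportunity difference: gap in true-positive rates between groups.
eo_gap = (true_positive_rate(y_pred, y_true, group == "A")
          - true_positive_rate(y_pred, y_true, group == "B"))

print(f"demographic parity gap: {dp_gap:+.2f}")
print(f"equal opportunity gap:  {eo_gap:+.2f}")
```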
Learning Outcomes
Upon completion, students will be able to:
I. Identify and analyze ethical issues in AI scenarios, applying principles of ethical reasoning to questions like: Is an AI decision fair? Is it transparent? Does it respect user privacy and autonomy?
II. Evaluate the societal impact of AI systems, including potential benefits and harms to different stakeholder groups, and articulate these in the context of case studies (e.g. evaluating an AI system’s impact on employment or on marginalized communities).
III. Demonstrate knowledge of current and emerging AI regulations and policies – for instance, students should be able to summarize key points of guidelines like the OECD AI Principles or provisions of the EU’s draft AI Act, and understand how these affect AI deployment in practice.
IV. Implement basic practices of ethical AI development, such as conducting bias audits on a dataset/model (and interpreting bias metrics), improving model transparency (through documentation and explainable AI techniques), and considering accessibility and inclusivity in design.
V. Understand the concept of AI governance within organizations, including the roles of ethics committees, frameworks for AI accountability, and how companies can self-regulate through standards (like IEEE’s AI ethics standards or NIST’s AI risk management framework).
VI. Formulate a policy or management recommendation concerning an AI issue (for example, writing a memo on whether a city should adopt predictive policing software, weighing ethical concerns and evidence). This demonstrates the ability to bridge technical and policy perspectives, a skill valuable in leadership roles.
VII. Appreciate the long-term and global challenges of AI (such as AI’s effect on democracy, the environment, or the future of work) and discuss them critically – aligning with the programme’s goal of producing responsible leaders in AI.
9. AI Strategy and Business Applications
This capstone-core course connects the technical facets of AI with business strategy and industry applications. It is tailored for a business school context, focusing on how organizations can generate value from AI and how to lead AI-driven innovation. The course covers frameworks for developing an AI strategy aligned with business objectives, including identifying suitable AI use cases, conducting cost-benefit and ROI analysis for AI projects, and managing data as a strategic asset. Students survey AI applications across key industries – e.g. predictive analytics in finance, personalization in marketing, diagnostic AI in healthcare, manufacturing automation, supply chain optimization, etc. – analyzing what has made certain AI implementations successful or why some have failed. Concepts of AI product management are introduced: how to take an AI idea from concept to deployment, including gathering requirements, prototyping, testing with users, and iterating. The course also deals with change management and organizational readiness for AI: addressing how to build AI talent teams, foster data-driven culture, and manage the ethical and workforce implications of introducing AI into an enterprise. Students will learn about governance and oversight in a corporate context (complementing the Policy course, but here from a managerial perspective), such as establishing AI ethics guidelines within a company and ensuring compliance. Another component is understanding the landscape of AI solutions and vendors (e.g. evaluating when to use cloud AI services vs. custom solutions). Guest speakers from industry (AI project managers, consultants, or executives) may share real-world insights. Assessment may involve students developing an AI strategy proposal for an existing company or a startup idea, including business case, implementation roadmap, and risk analysis. By blending technical understanding with business insight, this course prepares students to take on roles interfacing between technical teams and business leadership.
Learning Outcomes
Upon completion, students will be able to:
I. Identify high-impact AI opportunities in a business context by analyzing business processes and challenges, and recognizing where AI techniques (like predictive modeling, computer vision, NLP, etc.) could provide innovative solutions or efficiency gains.
II. Develop a comprehensive AI project proposal or business plan that includes problem definition, solution approach (which AI technologies to use), resource estimation, cost-benefit analysis, and KPIs for success. Students will practice articulating the business value of the technical solution in language suitable for executives.
III. Understand how to integrate AI systems into existing operations, including considerations for IT infrastructure, data governance, and cross-department collaboration. They will be able to outline how an AI pilot project can be scaled up to organization-wide deployment.
IV. Discuss and plan for the organizational implications of AI adoption: for example, training/upskilling employees, redefining job roles (rather than simply automating tasks, how to use AI to augment human workers productively), and addressing employee or customer concerns about AI (trust, transparency).
V. Apply principles of AI product management, such as user-centered design for AI (ensuring the AI tool is usable and solves the right problem), iterative development with user feedback, and maintenance planning (updates as data changes or new features are needed).
VI. Evaluate commercial AI solutions and platforms – given a problem, decide whether to build a custom model in-house or use an existing API/service, considering factors like time-to-market, cost, data sensitivity, and competitive advantage.
VII. Present and communicate AI-driven initiatives effectively to non-technical stakeholders, demonstrating the ability to be an “AI translator” between data scientists and business leaders – a skill strongly sought after in modern enterprises. This includes being conversant in both technical terms and business strategy terms, thereby leading cross-functional teams in executing AI projects.
Elective Courses
In addition to the core courses, students will choose elective courses (2 or more) to tailor the curriculum to their interests and career goals. A total of at least 6 elective credits (typically two courses) is required. Electives allow deeper exploration of specialized AI topics or application domains beyond the core coverage. Students may select electives freely or follow suggested specialization tracks such as: Advanced AI Techniques, AI in Industry Domains, or Interactive AI. Each elective is 3 credits.
1. Reinforcement Learning and Decision Making (3 Credits)
Reinforcement Learning and Decision Making explores advanced methods for sequential decision-making in uncertain environments. The course begins with foundational concepts such as Markov Decision Processes (MDPs) and dynamic programming techniques like policy and value iteration. It then covers model-free reinforcement learning algorithms including Q-learning and SARSA, before advancing to modern deep reinforcement learning methods such as Deep Q Networks (DQN) and policy gradient approaches like actor-critic models, which integrate neural networks with reinforcement learning. Practical applications in areas like game AI (including the AlphaGo case), robotics control, and recommendation systems are examined. Students engage in simulations and hands-on coding using environments like OpenAI Gym, training intelligent agents that learn and adapt through feedback and trial-and-error. Prior completion of Machine Learning is required; Deep Learning is recommended.
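The sketch below gives a minimal example of tabular Q-learning of the kind practised in the course, using the Gymnasium fork of OpenAI Gym and its FrozenLake environment; it assumes Gymnasium's current reset/step signature, and the hyperparameters are illustrative only.

```python
import numpy as np
import gymnasium as gym   # the maintained fork of OpenAI Gym; its current API is assumed

env = gym.make("FrozenLake-v1", is_slippery=False)
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1        # learning rate, discount, exploration rate

for episode in range(2000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Q-learning update: move Q(s, a) toward the bootstrapped target.
        target = reward + gamma * np.max(Q[next_state]) * (not terminated)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

print("greedy policy:", np.argmax(Q, axis=1).reshape(4, 4))
```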
Learning Outcomes
By the end of the course, students will be able to:
I. Model decision-making problems using Markov Decision Processes.
II. Implement dynamic programming algorithms for solving MDPs.
III. Apply model-free reinforcement learning techniques like Q-learning and SARSA.
IV. Use deep reinforcement learning methods such as DQN and policy gradients.
V. Train agents using simulation platforms like OpenAI Gym.
VI. Analyse and optimise agent behaviour in real-world and synthetic environments.
VII. Evaluate RL strategies in applications such as games, robotics, and personalised systems.
2. Robotics and Autonomous Systems (3 Credits)
Robotics and Autonomous Systems examines the intersection of artificial intelligence and robotics, focusing on the development of autonomous machines capable of perceiving, navigating, and interacting with their environment. Key topics include robotic kinematics, sensor perception using cameras and LIDAR, and sensor fusion techniques that enhance environmental understanding. Students learn core algorithms in planning and navigation, such as SLAM (simultaneous localization and mapping) and obstacle avoidance, as well as decision-making strategies for autonomous control. The course may also introduce reinforcement learning and control theory within the robotics context. Through practical assignments, students gain hands-on experience programming virtual or physical robots using the Robot Operating System (ROS) to execute tasks like maze navigation or object manipulation. Prerequisites include Foundations of AI; Computer Vision is recommended.
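As a small taste of the kinematics material, the sketch below computes the end-effector position of a two-link planar arm by composing 2-D homogeneous transforms; the link lengths and joint angles are arbitrary example values.

```python
import numpy as np

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.7):
    """End-effector position of a 2-link planar arm (joint angles in radians).

    Each link contributes a rotation followed by a translation along its
    length, composed as 2-D homogeneous transforms.
    """
    def link(theta, length):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, length * c],
                         [s,  c, length * s],
                         [0,  0, 1.0]])

    T = link(theta1, l1) @ link(theta2, l2)   # base frame -> end-effector frame
    return T[0, 2], T[1, 2]                   # (x, y) position of the end effector

# Example: shoulder at 45 degrees, elbow bent a further 30 degrees.
x, y = forward_kinematics(np.deg2rad(45), np.deg2rad(30))
print(f"end effector at x={x:.3f}, y={y:.3f}")
```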
Learning Outcomes
By the end of the course, students will be able to:
I. Understand and apply the fundamentals of robotic kinematics.
II. Integrate perception systems using LIDAR and camera-based vision.
III. Perform sensor fusion to enhance robot perception and localisation.
IV. Implement SLAM and obstacle avoidance algorithms for autonomous navigation.
V. Design decision-making systems for autonomous robots.
VI. Apply reinforcement learning or control theory in robotic applications.
VII. Use ROS to program robots for real-world or simulated tasks.
VIII. Evaluate the performance of autonomous systems in dynamic environments.
3. Advanced Deep Learning and Generative AI (3 credits)
Advanced Deep Learning and Generative AI offers an in-depth exploration of state-of-the-art neural network architectures and generative modelling techniques. The course covers Generative Adversarial Networks (GANs) and their variants for creating realistic images and videos, as well as variational autoencoders (VAEs) and advanced sequence models such as Transformers. Students will examine the inner workings of large-scale models, including building simplified versions of models like GPT or experimenting with diffusion models for image generation. Emerging areas such as multimodal AI, model compression techniques like knowledge distillation and quantisation, and the practical challenges of training large models are also addressed. A key component of the course is a research-based project where students replicate or extend a recent innovation in AI. Deep Learning is a prerequisite.
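To illustrate the attention mechanism at the heart of the Transformer architectures discussed above, here is a minimal PyTorch sketch of scaled dot-product attention with an optional causal mask; the tensor sizes are arbitrary.

```python
import torch
import torch.nn.functional as F

def attention(Q, K, V, mask=None):
    """Core Transformer operation: softmax(Q K^T / sqrt(d)) V."""
    d = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d ** 0.5     # pairwise similarity of queries and keys
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))  # e.g. causal masking
    weights = F.softmax(scores, dim=-1)             # attention distribution over positions
    return weights @ V, weights

# One batch, 5 token positions, a 16-dimensional attention head.
Q = torch.randn(1, 5, 16)
K = torch.randn(1, 5, 16)
V = torch.randn(1, 5, 16)

# Causal mask: position i may only attend to positions <= i (as in GPT-style models).
causal = torch.tril(torch.ones(5, 5))
out, attn = attention(Q, K, V, mask=causal)
print(out.shape, attn.shape)   # torch.Size([1, 5, 16]) torch.Size([1, 5, 5])
```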
Learning Outcomes
By the end of the course, students will be able to:
I. Design and implement GANs, VAEs, and other generative models.
II. Build and analyse Transformer-based architectures for text and image tasks.
III. Understand and apply diffusion models in generative image creation.
IV. Explore and experiment with multimodal models combining vision and language.
V. Apply model compression techniques for efficient deployment of large AI models.
VI. Analyse the challenges and strategies in large-scale model training.
VII. Critically read, replicate, and extend recent research in advanced deep learning.
VIII. Develop a project that demonstrates mastery of cutting-edge generative AI techniques.
4. Data Science for AI (Big Data and Cloud Computing) (3 credits)
Data Science for AI (Big Data and Cloud Computing) focuses on the tools and techniques required to manage and process large-scale datasets in AI projects. Students learn to work with distributed computing frameworks such as Hadoop and Apache Spark for scalable data processing and explore the use of relational (SQL) and non-relational (NoSQL) databases based on data type and project needs. The course also covers the design and implementation of cloud-based data pipelines capable of ingesting massive datasets, training AI models, and deploying results efficiently. Through hands-on labs, students build complete data pipelines using cloud infrastructure, with a strong emphasis on scalability, speed, and resource optimisation. Prerequisite: AI Systems Design and Deployment.
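The sketch below shows the shape of a typical Spark step in such a pipeline: reading raw events, filtering and aggregating them into features, and writing the result out for model training. The file paths and column names (events.csv, user_id, event_date, value) are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session; on a cluster the same code scales out unchanged.
spark = SparkSession.builder.appName("ai-data-pipeline-demo").getOrCreate()

# Hypothetical input path; in practice this might point at cloud object storage.
events = spark.read.csv("events.csv", header=True, inferSchema=True)

# Typical pipeline step: clean, aggregate, and write out features for model training.
daily_features = (events
                  .filter(F.col("value").isNotNull())
                  .groupBy("user_id", "event_date")
                  .agg(F.count("*").alias("n_events"),
                       F.avg("value").alias("avg_value")))

daily_features.write.mode("overwrite").parquet("features/")
spark.stop()
```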
Learning Outcomes
By the end of the course, students will be able to:
I. Apply distributed computing tools like Hadoop and Spark for big data processing.
II. Differentiate between SQL and NoSQL databases for storing and retrieving AI-relevant data.
III. Design scalable cloud-based data pipelines for AI model training and deployment.
IV. Optimise data handling processes for efficiency and performance on large datasets.
V. Integrate cloud platforms into end-to-end AI workflows.
VI. Build and test full AI pipelines from data ingestion to model output.
VII. Address scalability and resource management challenges in big data AI projects.
5. AI in Healthcare (3 credits)
AI in Healthcare is an application-oriented elective that explores the role of artificial intelligence in transforming the healthcare and biotechnology sectors. The course covers key topics such as medical image analysis using convolutional neural networks (CNNs), predictive modelling for clinical outcomes and personalised medicine, and natural language processing techniques for extracting insights from electronic health records and clinical notes. It also addresses the unique regulatory and ethical issues in healthcare AI, including FDA approval processes and compliance with patient privacy laws like HIPAA. Students engage in hands-on projects using de-identified healthcare datasets to develop diagnostic models, patient risk scoring systems, or tools for optimising hospital operations. Prerequisites include Machine Learning and Ethics and Policy in Artificial Intelligence.
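As a minimal illustration of the risk-scoring workflow mentioned above, the sketch below fits a logistic regression to synthetic, entirely made-up patient features and reports predicted probabilities as risk scores together with an AUC; no real clinical data is involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for de-identified patient features (e.g. age, BMI, blood
# pressure, lab value); the data is generated, not taken from any real records.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
true_risk = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.5 * X[:, 2] - 0.3)))
y = rng.binomial(1, true_risk)                    # 1 = adverse outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
model = LogisticRegression().fit(X_train, y_train)

# Risk score = predicted probability of an adverse outcome, summarised with AUC.
scores = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, scores), 3))
print("highest-risk patients (test-set indices):", np.argsort(scores)[-5:])
```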
Learning Outcomes
By the end of the course, students will be able to:
I. Apply CNNs for analysing medical images such as MRI scans.
II. Build predictive models using patient health data for diagnosis or risk assessment.
III. Use NLP techniques to extract insights from clinical text data.
IV. Navigate ethical and legal considerations in healthcare AI applications.
V. Develop AI-based tools tailored to healthcare environments.
VI. Work with real-world de-identified clinical datasets.
VII. Understand regulatory pathways for AI approval in medical settings.
VIII. Evaluate the impact and limitations of AI in clinical decision-making and healthcare delivery.
Note: Not all electives will be available every semester; some may be offered once per year or as demand dictates. Students will be required to plan their elective choices with academic advisors to ensure a coherent and beneficial specialization path.
AI Lab Practicum (Experiential Learning)
The AI Lab Practicum is a core experiential component of the programme, designed to help students apply classroom knowledge to real-world AI problems. Typically undertaken after completing foundational courses, the practicum offers two pathways: an Industry Practicum or a Research Lab Practicum. Both require students to engage deeply with practical AI challenges, build working solutions, and reflect critically on the process.
In the Industry Practicum, students work with companies, government bodies, or non-profits on real AI projects—such as building predictive models or developing computer vision tools. These placements, facilitated by the school’s corporate office, usually last 8 to 10 weeks and may be completed remotely or on-site. Deliverables include a functional output and are evaluated by both a faculty supervisor and the host organisation. The experience equips students with essential workplace skills such as teamwork, project management, and navigating technical constraints in real environments.
The Research Lab Practicum allows students to join university research teams and contribute to ongoing AI projects. Ideal for those interested in academic or research careers, students may work on tasks like developing algorithms, analysing datasets, or co-authoring demos and papers. This path often leads to research publications or future doctoral study.
Regardless of the path chosen, students complete a Practicum Report and Presentation, demonstrating the AI solution built, outcomes achieved, and ethical or business considerations addressed. Graded on a pass/fail basis, the practicum builds job-ready skills and often leads directly to job offers or strong portfolio projects, ensuring graduates leave the programme with both theoretical depth and proven practical competence.
Capstone Project or Research Thesis
As a final requirement, each student must complete either a Capstone Project or a Master’s Thesis, both designed to consolidate learning and demonstrate the ability to tackle complex AI challenges independently.
The Capstone Project (3 credits) is an application-focused project typically completed in teams under faculty guidance. Students solve a real-world AI problem from start to finish—often in collaboration with an industry partner or based on a self-sourced idea. Projects span one semester and involve problem definition, data handling, model development, solution deployment, and ethical analysis. Deliverables include a report, a final presentation to a review panel, and technical artefacts like code and models. Projects are evaluated on technical quality, creativity, relevance, and presentation. Many capstones evolve into startup ventures or real-world implementations.
The Master’s Thesis (6 credits) is a research-intensive option for students interested in exploring theoretical or applied AI problems at greater depth, typically over two semesters. Under a faculty supervisor, students define a research question, review relevant literature, and design and execute a methodology that yields original insights. The process culminates in a formal written thesis and an oral defence. Topics may be theoretical (e.g. algorithm design) or applied (e.g. AI in healthcare or public policy). The thesis substitutes both the capstone and one elective, and suits students considering Ph.D. study or academic publishing.
Industry Certification Alignment (Optional)
To enhance professional readiness, the programme is aligned with key industry certifications in artificial intelligence and machine learning. Although optional, students are encouraged to use the curriculum to prepare for these certifications, with additional support provided through study resources and exam preparation guidance.
The TensorFlow Developer Certificate aligns closely with the Deep Learning course, where students build and train neural networks using TensorFlow/Keras for image and text-based tasks. By completing hands-on projects and understanding key concepts like CNNs and RNNs, students are well-prepared to take the certification with little extra study.
The AWS Certified Machine Learning – Specialty exam is supported through courses like AI Systems Design and Deployment, which cover cloud-based model training, deployment with AWS tools such as SageMaker, and data engineering using platforms like S3 and Lambda. Core algorithm knowledge is reinforced through Machine Learning and Deep Learning courses, while practicum experience helps develop real-world deployment skills.
The Microsoft Certified: Azure AI Engineer Associate certification is covered through a combination of foundational ML courses and the AI Systems Design and AI Strategy courses. These teach students how to build, deploy, and optimise AI models using Azure services. Access to Azure for Education credits allows students to gain practical experience with Azure AI tools, making them well-prepared for scenario-based certification questions.
Through this optional certification alignment, students graduate not only with academic credentials but also with the practical skills and readiness to pursue globally recognised AI certifications.