
🚀 The Complete AI Engineer Roadmap 2026

Agentic & Generative AI: From Zero to Deployment


A structured roadmap from absolute beginner fundamentals in Python and mathematics, through machine learning and deep learning, to building and deploying advanced agentic AI systems.



📋 About This Roadmap

Before we dive in, I'm Romani Nasrat, AI Engineer @ Penny Software; I've been freelancing since 2023. See my full bio below.

💡 This isn't just another roadmap; I'm sharing my actual personal learning journey, from college all the way to professional AI work. Every resource here is personally tested and verified, based on what I studied and used to reach where I am today. You'll find both theoretical and practical topics in mathematics, computer science, and AI, along with carefully chosen learning resources. Everything is organized step by step, so just keep learning and don't worry. This roadmap represents a 4-year journey, including 2 years of professional experience across freelancing and full-time work, so don't rush. Take your time, okay? You won't become great in two months, and that's totally fine.

let me say it in Arabic-Egyptian 😊

💡 الرود ماب دي مش مجرد رود ماب عادية — أنا بشارك رحلتي الشخصية الحقيقية في التعلم، من أيام الكلية لحد الشغل في مجال الـ AI بشكل احترافي. كل مصدر هنا أنا جربته وذاكرته بنفسي، بناءً على اللي اتعلمته واستخدمته عشان أوصل للمستوى اللي أنا فيه دلوقتي. هتلاقي فيها توبيكس نظرية وعملية في الرياضيات وعلوم الحاسب والـ AI، مع مصادر تعليمية مختارة بعناية. كل حاجة منظمة خطوة بخطوة، الرود ماب دي بتمثل رحلة 4 سنين، منهم سنتين خبرة عملية فريلانسنج وشغل فل تايم، فما تستعجلش. خد وقتك، تمام؟ مش هتبقى محترف في شهرين ودي حاجة عادية تماماً.

🎯 Who is this for?

  • Beginners starting from scratch with Python and math
  • Students transitioning into AI/ML careers
  • Developers who want to build AI models, deploy systems, or create intelligent AI agents
  • Anyone looking for a clear, structured path from fundamentals all the way to production-ready systems

📚 Table of Contents

Phase 0 – Foundations

Note: this is just a starting point. For me, it came after two years of college study and before I began learning AI, so I already had a background in computer science and programming. However, if you’re a complete beginner, you can start here as well by learning Python from scratch.

0.1 Python Programming

To build robust AI agents, you need more than just "scripts"; you need an engineering-first approach to Python. In this phase you will learn the fundamentals of Python programming, including data structures, OOP, and modern Python features that are essential for developing AI systems. This will give you the tools to write clean, efficient, and maintainable code for your AI agents.

🔧 1. Foundations & Core Mastery
  1. Absolute Basics: Variables, Data Types (int, float, str, bool), and Basic Operators.
  2. Functions: Defining functions, parameters, *args/**kwargs, return values, and scope.
  3. Syntax & Logic: Loops, list comprehensions, and Python 3.10+ match statements.
  4. Data Structures: Deep dive into list, dict, set, tuple, plus collections.deque and dataclasses.
  5. Functional Tools: lambda, map, filter, and the itertools module.
  6. Exception Handling: Try/Except/Finally, custom exceptions, and context-aware error handling.
🏗️ 2. Advanced OOP & Design Patterns
  1. OOP 4 pillars: Inheritance, Abstraction, Encapsulation, Polymorphism.
  2. OOP Fundamentals: Mixins (critical for framework customization), and Composition.
  3. Magic Methods: Understanding __init__, __call__, __repr__, and __getattr__.
  4. Decorators: Building custom decorator logic (logging, timing, agent retry logic, etc.); see the sketch after this list.
  5. Context Managers: Resource management using with statements and contextlib.
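The decorator and context-manager items above are easiest to grasp with a tiny example. This is a minimal sketch of my own (not from any specific resource): a retry decorator of the kind you would wrap around a flaky LLM or tool call, plus a timing context manager; `call_flaky_api` is a hypothetical stand-in.

```python
import time
import functools
from contextlib import contextmanager

def retry(max_attempts=3, delay=1.0):
    """Retry a flaky call (e.g. an LLM or tool API) with a fixed delay."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

@contextmanager
def timer(label):
    """Context manager that prints how long the wrapped block took."""
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {time.perf_counter() - start:.2f}s")

@retry(max_attempts=3)
def call_flaky_api():   # hypothetical stand-in for a real tool call
    ...

with timer("agent step"):
    call_flaky_api()
```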
3. Modern Python (The "Agentic" Stack)
  1. Type Hinting: Using typing (Annotated, Optional, Union) for robust, self-documenting code. Python Typing Docs
  2. Pydantic V2: Data validation and structured outputs, the backbone of LLM communication (see the sketch after this list).
  3. Asyncio: async/await, event loops, and concurrent task execution for responsive agents.
  4. Testing: Writing unit and integration tests with pytest to ensure agent reliability.
  5. Threading & Multiprocessing: Understanding the differences between threading and multiprocessing and when to use each.
  6. Generators & Iterators: Understanding how to use generators and iterators to process large datasets.
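A small sketch tying the "Modern Python" items together: a type-hinted Pydantic V2 model validating a structured LLM output inside an async function. The schema and field names are invented for illustration.

```python
import asyncio
from typing import Optional
from pydantic import BaseModel, Field

class ToolCall(BaseModel):
    """Hypothetical structured output we might ask an LLM to produce."""
    tool_name: str
    arguments: dict = Field(default_factory=dict)
    reasoning: Optional[str] = None

async def parse_llm_output(raw_json: str) -> ToolCall:
    # Pydantic V2: model_validate_json raises a ValidationError on bad data
    return ToolCall.model_validate_json(raw_json)

async def main() -> None:
    call = await parse_llm_output('{"tool_name": "search", "arguments": {"query": "python"}}')
    print(call.model_dump())

asyncio.run(main())
```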
🌐 4. Connectivity & Data Extraction
  1. HTTP Clients: requests and httpx (for async) to interact with external tools and APIs.
  2. Data Serialization: Handling JSON, YAML, and Markdown programmatically.
  3. Web Scraping: Utilizing BeautifulSoup4 and Playwright/Selenium for browser automation.
📦 5. Environment & Package Management
  1. High-Performance Tooling: Using uv for ultra-fast dependency management.
  2. Standard Tooling: pip, venv, and pyproject.toml configurations.
  3. Observability: Implementing structured logging to trace agent reasoning steps.
📚 6. Resources: the books I used to learn Python
  1. Book: Python Crash Course
  2. Book: Python All-in-One For Dummies
  3. Docs: [Python Docs](https://www.python.org/doc/)
  4. Docs: [The Hitchhiker’s Guide to Python](http://docs.python-guide.org/en/latest/)
  5. Docs: [Python 3 Quick Reference](https://pewscorner.github.io/programming/python3_quick_ref.html)
  6. Extra resources I studied while learning basic Python

    Intermediate Python

    Advanced Python



0.2 Math for AI (The "Intuition" Level)

You don't need a PhD, but you need to understand the mechanics.

📐 Mathematical Foundations

AI is fundamentally built on math and statistics, so having a strong understanding of these foundations is essential to becoming a good AI engineer. If you're looking for a great place to learn, I highly recommend the Arabic YouTube channel of my college professor, Dr. Ahmed Hagag. He's excellent at explaining both academic and practical concepts in a simple way. Honestly, he's one of those rare teachers who truly leaves a lasting impact on his students.

Phase 1 – Introduction to AI

1.1 Introduction to AI and AI Agents

The goal of this phase is to give you a high-level understanding of what AI is and why it matters, and to introduce the concept of AI agents. This will provide the necessary context and motivation for diving deeper into the technical aspects of AI in the subsequent phases.

  • What is AI?
  • What is an AI agent?
  • Thinking Humanly (The Cognitive Modelling approach)
  • Acting Humanly (The Turing Test approach)
  • Thinking Rationally (The Laws-of-Thought approach)
  • Acting Rationally (The Rational Agent approach)
  • Why is AI important?
  • Applications of AI in various industries (healthcare, finance, transportation, etc.)

1.2 Search Algorithms for Problem Solving and AI Planning

The goal of this phase is to introduce search algorithms and how they are used in AI for problem solving and planning. This will give you a solid foundation in the fundamental techniques that underlie many AI systems, including agentic AI.

🔍 Search Strategies
  • Search Problem Components: State, Actions, Transition Model, Goal Test, Path Cost
  • Uninformed (Blind) Search: Depth-First Search (DFS), Breadth-First Search (BFS), Uniform Cost Search (UCS), Iterative Deepening Search, Bidirectional Search
  • Informed (Heuristic) Search: A* Search, Greedy Best-First Search
  • Adversarial Search (Game Search): Minimax Algorithm, Alpha-Beta Pruning
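To ground the uninformed-search terminology above (states, goal test, transition model, path), here is a minimal Breadth-First Search over a toy graph; the graph itself is made up.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-First Search: returns the shortest path by number of edges."""
    frontier = deque([[start]])          # queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:                # goal test
            return path
        for neighbor in graph.get(state, []):   # expand via the transition model
            if neighbor not in explored:
                explored.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"], "E": []}
print(bfs(graph, "A", "E"))   # ['A', 'C', 'E']
```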
🧬 Genetic Algorithms (GAs)
  • Definition
  • Main Motivation for Heuristic Techniques (Including GAs)
  • GA Overview and Principle
  • Stochastic Operators / Evolutionary Cycle Steps
    • Initialization
    • Parent Selection
    • Recombination (Crossover)
    • Mutation
    • Survivor Selection (Replacement)
    • Evaluation (Fitness Function)
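To see how the evolutionary-cycle steps fit together, here is a toy genetic algorithm of my own (no library) that evolves bit strings toward all ones; initialization, parent selection, crossover, mutation, and survivor replacement map directly onto the steps above.

```python
import random

GENES, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(ind):                     # evaluation: count of 1-bits
    return sum(ind)

def crossover(a, b):                  # recombination: single-point crossover
    point = random.randint(1, GENES - 1)
    return a[:point] + b[point:]

def mutate(ind):                      # mutation: flip bits with small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in ind]

# initialization
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # parent selection: tournament of 3
    parents = [max(random.sample(population, 3), key=fitness) for _ in range(POP_SIZE)]
    # offspring creation + survivor selection: full generational replacement
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("best fitness:", fitness(max(population, key=fitness)))
```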
🧠 Knowledge and Experts
  • Knowledge
  • Experts
  • Rules as Knowledge Representation
  • Expert Systems
  • Components of a Rule-Based Expert System
  • Expert Systems vs. Conventional Programs

Resources:


Phase 2 – AI, Machine Learning, Deep Learning & Data Science Fundamentals

This phase provides a clear overview of the core AI fields and branches, and introduces the key techniques and algorithms used across the discipline — including machine learning, deep learning, reinforcement learning, and data science. Together these topics form a solid foundation for understanding how AI agents work and for designing and building them.

2.1 Machine Learning

In this section, you will learn the fundamentals of machine learning, including core concepts, common algorithms, model evaluation, and optimization techniques used to build reliable predictive models.

🧠 1. Core Concepts
  • Foundations: Supervised vs Unsupervised learning, train/validation/test split, bias–variance tradeoff, overfitting vs underfitting.
  • Supervised Learning: Linear & Logistic Regression, KNN, SVM, Naive Bayes, Decision Trees, Random Forests, XGBoost, LDA/QDA.
  • Unsupervised Learning: K-Means, DBSCAN, Hierarchical Clustering, PCA, ICA, Anomaly Detection.
  • Model Evaluation: Accuracy, Precision/Recall, F1-score, ROC-AUC, MSE/MAE, Confusion Matrix, Cross-Validation.
  • Model Improvement: Feature engineering, regularization, hyperparameter tuning (Grid/Random Search), handling imbalanced data.
  • Ensemble Methods: Bagging, Boosting, Stacking.
  • Reinforcement Learning: Q-Learning, DQN, Policy Gradients, Actor-Critic.
  • Tools & Libraries: Python, NumPy, Pandas, Scikit-learn, Matplotlib/Seaborn (visualization), Jupyter (see the workflow sketch after this list).
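A small end-to-end sketch of that workflow (stratified train/test split, a baseline supervised model, cross-validation, and a few of the listed metrics) using scikit-learn's built-in breast cancer dataset so it runs as-is.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# scaling + logistic regression as a simple supervised baseline
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validation on the training set to estimate generalization
print("CV accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())

model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))  # precision / recall / F1
```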
📚 2. Resources

These are the resources I personally used to learn Machine Learning, and I highly recommend them.
I was kind of a clever student 😂: I mostly studied from my college lectures and followed the course sections step by step, then added Andrew Ng's specialization plus random helpful articles and videos.
So there isn't one fixed source… learning ML is more like exploring from multiple places.

2.2 Neural Networks & Deep Learning

In this section, you will learn how neural networks work from the ground up, then move to advanced deep learning techniques used in real-world AI systems.

🔹 Neural Networks — Basics

Learn the core building blocks of deep learning:

  • What a neural network is and how it works
  • Perceptron, neurons, layers (input / hidden / output)
  • Activation functions (ReLU, Sigmoid, Tanh, Softmax)
  • Forward & backward propagation
  • Loss functions
  • Gradient descent
  • Backpropagation fundamentals
  • Overfitting vs underfitting basics

Goal: Understand how a neural network learns mathematically before using frameworks.
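Before reaching for a framework, it helps to write forward and backward propagation out by hand. This is a tiny NumPy sketch of a one-hidden-layer network trained with plain gradient descent on XOR; it is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)               # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)               # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)                          # MSE loss

    # backward pass (chain rule, written out by hand)
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2, d_b2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1, d_b1 = X.T @ d_h, d_h.sum(axis=0)

    # gradient descent update
    W2 -= lr * d_W2; b2 -= lr * d_b2
    W1 -= lr * d_W1; b1 -= lr * d_b1

print("final loss:", loss, "predictions:", out.round(2).ravel())
```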
🚀 Deep Learning — Advanced & Practical

Apply neural networks to real-world problems and large-scale models:

Training & Optimization

  • Epochs, batch size, iterations
  • Gradient descent variants (SGD, Momentum, Nesterov)
  • Optimizers: SGD, Adam, RMSProp, AdaGrad
  • Regularization: L1/L2, Dropout
  • Batch normalization, Layer normalization
  • Learning rate schedules, Early stopping

Architectures & Models

  • Computer Vision: CNNs, ResNet, EfficientNet, VGG, MobileNet
  • Sequential / Time Series: RNNs, LSTM, GRU, BiLSTM
  • Attention & Transformers: Transformers, BERT, GPT, T5
  • Generative Models: Autoencoders, Variational Autoencoders (VAE), GANs, Diffusion Models
  • Graph Neural Networks (GNNs)

Practical Skills

  • Model building, training, evaluation, debugging
  • Model saving/loading (checkpoints, serialization)
  • Fine-tuning pretrained models
  • Transfer learning
  • Multi-GPU / TPU training
  • Deployment basics (ONNX, TorchScript)
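A short transfer-learning sketch with torchvision: load a pretrained ResNet-18, freeze the backbone, and swap the final layer for a new task. The 10-class head and the single training step are placeholders, so treat this as the pattern rather than a full training script.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# load an ImageNet-pretrained backbone
model = resnet18(weights=ResNet18_Weights.DEFAULT)

# freeze the pretrained feature extractor
for param in model.parameters():
    param.requires_grad = False

# replace the classification head for a hypothetical 10-class problem
model.fc = nn.Linear(model.fc.in_features, 10)

# only the new head's parameters are optimized (fine-tuning just the top)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on a fake batch
images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, 10, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```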

Frameworks & Tools

  • TensorFlow / Keras
  • PyTorch
  • Hugging Face Transformers
  • CUDA & GPU acceleration
📚 Resources

2.3 Natural Language Processing (NLP)

Natural Language Processing (NLP) is the field that focuses on enabling machines to understand, interpret, generate, and interact using human language. This section summarizes core concepts, tasks, models, tooling, evaluation, and practical considerations for building NLP systems.

🔹 Core Concepts
  • Linguistic Levels: Morphology, Syntax (POS, parsing), Semantics, Pragmatics, Discourse.
  • Representation: Tokens, subwords (BPE/WordPiece/SentencePiece), characters.
  • Classical Methods: Bag-of-words, n-grams, TF-IDF.
  • Embeddings: word2vec, GloVe, fastText, contextual embeddings (ELMo, BERT-style).
  • Text Cleaning: lowercasing, normalization, stopword removal, stemming/lemmatization, de-duplication, cleaning noisy text.
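A quick sketch of the classical pipeline (TF-IDF features feeding a simple classifier) with scikit-learn; the three example sentences and labels are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I love this product", "Terrible experience, never again", "Absolutely fantastic service"]
labels = [1, 0, 1]   # toy sentiment labels

# TF-IDF turns raw text into sparse numeric vectors; bigrams capture short phrases
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["great service"]))
```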
🔧 NLP Tasks
  • Text Classification: Sentiment analysis, intent detection.
  • Sequence Tagging: POS tagging, Named Entity Recognition (NER).
  • Parsing & Syntax: Dependency & constituency parsing.
  • Coreference & Anaphora Resolution.
  • Information Extraction: Slot filling, relation extraction.
  • Question Answering (QA): Extractive, abstractive, open-domain.
  • Machine Translation (MT).
  • Summarization: Extractive and abstractive.
  • Dialogue Systems & Conversational AI.
  • Text Generation & Controlled Generation.
🧠 Models & Architectures
  • Sequence Models: RNN, LSTM, GRU, seq2seq with attention.
  • Transformers: modern replacement for many sequence models; see the dedicated Foundations: Transformers & LLMs section for transformer specifics (self-attention, encoder/decoder variants, pretraining, PEFT, RAG, multilingual models).
🧰 Tooling & Libraries
  • Hugging Face Transformers & Datasets: model hub, tokenizers, pipelines.
  • sentence-transformers: semantic embeddings and retrieval.
  • spaCy, NLTK, Stanza, Flair: preprocessing, tagging, parsing.
  • Vector DBs & Search: FAISS, Chroma, Milvus, Pinecone.
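A semantic-search sketch with sentence-transformers: embed a few documents, embed a query, and rank by cosine similarity. The model name is a commonly used small embedding model, but treat it and the documents as assumptions.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # small general-purpose embedding model

docs = [
    "The cat sat on the mat.",
    "Transformers use self-attention to model sequences.",
    "Vector databases enable fast similarity search.",
]
doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode("How do attention models work?", convert_to_tensor=True)

# cosine similarity between the query and every document
scores = util.cos_sim(query_emb, doc_emb)[0]
best = scores.argmax().item()
print(docs[best], float(scores[best]))
```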
📚 Resources

2.4 Introduction to Computer Vision

Computer Vision (CV) is the field of AI that enables machines to interpret and understand visual information from the world, such as images and videos. This section covers core concepts, tasks, models, tooling, and practical considerations for building CV systems.

Actually, I'm not a big fan of computer vision, so I'll just share the basics I've learned. If you want to go deeper, check online courses and resources.

🔹 Core Concepts
  • Image Representation: Pixels, channels (RGB, grayscale), resolution, aspect ratio.
  • Feature Extraction: Edges, corners, textures, shapes, keypoints.
  • Image Processing: Filtering, convolution, morphological operations, color spaces (RGB, HSV, LAB).
  • Geometric Transformations: Scaling, rotation, translation, affine transformations.
  • Image Enhancement: Contrast adjustment, noise reduction, sharpening.
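A few lines of OpenCV covering several basics from the list above (color spaces, filtering, edges, resizing); the file paths are placeholders.

```python
import cv2

img = cv2.imread("example.jpg")                  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # OpenCV loads images as BGR
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # smoothing filter (convolution)
edges = cv2.Canny(blurred, 50, 150)              # edge detection
resized = cv2.resize(edges, (320, 240))          # geometric transformation

cv2.imwrite("edges.jpg", resized)
```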
🔧 CV Tasks
  • Image Classification: Assigning labels to images (e.g., cat vs dog).
  • Object Detection: Locating and classifying objects in images (bounding boxes).
  • Segmentation: Pixel-level classification (semantic, instance, panoptic).
  • Image Generation: Creating new images from scratch or editing existing ones.
  • Pose Estimation: Detecting human or object poses and keypoints.
  • Optical Character Recognition (OCR): Extracting text from images.
  • Face Recognition & Analysis: Face detection, verification, emotion recognition.
  • Video Analysis: Action recognition, tracking, scene understanding.
🧠 Models & Architectures
  • Classical Methods: Haar cascades, HOG + SVM, template matching.
  • Deep Learning Models: CNNs, ResNet, EfficientNet, VGG, MobileNet, YOLO, SSD, Faster R-CNN.
  • Generative Models: GANs, VAEs, Diffusion Models for image generation.
  • Vision Transformers (ViT): Transformer-based architectures for vision tasks.
  • Foundation Models: CLIP, DALL-E, Stable Diffusion.
🧰 Tooling & Libraries
  • OpenCV: Core library for image processing and computer vision tasks.
  • PyTorch & TensorFlow: Deep learning frameworks with vision modules.
  • Hugging Face Transformers: Pretrained models for vision tasks.
  • Scikit-Image: Image processing in Python.
📚 Resources

2.5 Data Preparation

In this section, you will learn how to convert raw, messy data into clean, structured, model-ready data. Good data preparation often improves your model more than changing the algorithm itself; it is one of the most important topics to study.

🧹 Data Cleaning (First Step Always)
  • Handling missing values (**drop**, **fill**, **interpolate**)
  • Removing duplicates
  • Fixing incorrect data types (string → number/date)
  • Detecting and treating outliers
  • Cleaning inconsistent labels (e.g., Male/male/M)
🔄 Data Transformation
  • Feature scaling: Normalization, Standardization
  • Encoding categorical variables: Label Encoding, One-Hot Encoding
  • Basic feature engineering: Creating new features, combining columns, feature selection
⚖️ Handling Imbalanced Data
  • Oversampling
  • Undersampling
  • SMOTE (synthetic sampling)
  • Using class weights
📊 Data Splitting & Validation
  • Train / Validation / Test split
  • Stratified splitting
  • K-Fold Cross-Validation
  • Avoiding data leakage
🎨 Data Augmentation
  • Image augmentation (flip, rotate, crop, noise)
  • Text augmentation (synonyms, paraphrasing)
  • Audio augmentation (noise, pitch shift, time stretch)
💾 Working with Large Datasets
  • Data generators / batch loading
  • Efficient file formats (Parquet or HDF5 rather than plain CSV)

Libraries: Pandas & NumPy, Scikit-learn, TensorFlow data loaders
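Pulling the steps above together, here is a sketch of a typical scikit-learn preprocessing pipeline: label cleanup, imputation, scaling, one-hot encoding, and a stratified split fitted on the training set only to avoid leakage. The columns and values are invented.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "age": [25, None, 40, 31], "income": [50_000, 60_000, None, 45_000],
    "city": ["Cairo", "cairo", "Giza", "Cairo"], "target": [1, 0, 1, 0],
})
df["city"] = df["city"].str.title()        # fix inconsistent labels (cairo -> Cairo)

numeric = ["age", "income"]
categorical = ["city"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="target"), df["target"],
    test_size=0.5, stratify=df["target"], random_state=0)

X_train_prepared = preprocess.fit_transform(X_train)   # fit on train only: avoids leakage
X_test_prepared = preprocess.transform(X_test)
print(X_train_prepared.shape, X_test_prepared.shape)
```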

2.6 Data Science, Analysis & Visualization

Data Science involves extracting insights from data through analysis and visualization. This section covers the basics of exploring, analyzing, and visualizing data to inform decision-making. If you've reached this point, you're probably already familiar with most of these topics.

🔍 Exploratory Data Analysis (EDA)
  • Data Inspection: Checking data types, missing values, duplicates, summary statistics.
  • Univariate Analysis: Distributions, outliers, central tendency (mean, median, mode).
  • Bivariate/Multivariate Analysis: Correlations, relationships between variables.
  • Data Profiling: Understanding data structure and quality.
Data Types & Categorical Variables
  • Numerical Data: Continuous vs discrete, scaling, normalization.
  • Categorical Data: Nominal vs ordinal, encoding techniques (label, one-hot).
  • Datetime Data: Parsing, extracting features (year, month, day), handling time zones.
📈 Data Visualization
  • Chart Types: Bar charts, line plots, scatter plots, histograms, box plots, heatmaps.
  • Best Practices: Choosing the right visualization, avoiding misleading charts, color theory.
  • Interactive Visualizations: Basic dashboards with libraries like Plotly.
🧰 Tools & Libraries
  • Pandas: Data manipulation and analysis.
  • NumPy: Numerical computing.
  • Matplotlib & Seaborn: Static visualizations.
  • Plotly & Bokeh: Interactive visualizations.
  • Jupyter Notebooks: For exploratory analysis.
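A minimal EDA sketch with pandas and seaborn covering the inspection and visualization steps above, using seaborn's built-in "titanic" dataset so it runs without extra files.

```python
import seaborn as sns
import matplotlib.pyplot as plt

df = sns.load_dataset("titanic")

# data inspection
df.info()
print(df.describe())
print(df.isna().sum())
print(df.duplicated().sum())

# univariate view: age distribution
sns.histplot(df["age"].dropna(), bins=30)
plt.show()

# bivariate view: correlations between numeric columns
sns.heatmap(df.select_dtypes("number").corr(), annot=True, cmap="coolwarm")
plt.show()
```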
📚 Resources

Phase 3 – Agentic AI Systems

3.1 Foundations: Transformers & LLMs

Understand the core technology behind modern AI systems like ChatGPT, Gemini, Claude, and open-source LLMs.

🔹 Transformers — Basics
  • Tokens & tokenization
  • Embeddings & positional encoding
  • Self-attention & multi-head attention
  • Encoder vs Decoder architecture
  • Feed-forward layers
  • Why Transformers replaced RNNs/CNNs in NLP
  • Popular models: BERT, GPT, T5
🚀 Large Language Models (LLMs)
  • Pretraining (next-token prediction)
  • Instruction tuning
  • RLHF / alignment techniques
  • Scaling laws & model sizes
  • Context windows & long-context models
  • Inference basics (sampling, temperature, top-k/top-p)
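To make the inference knobs above (sampling, temperature, top-k/top-p) concrete, a small Hugging Face sketch with GPT-2, chosen only because it is tiny enough to run on CPU; in practice you would swap in a larger instruction-tuned model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The key idea behind agentic AI is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,        # enable sampling instead of greedy decoding
    temperature=0.8,       # higher = more random
    top_p=0.9,             # nucleus sampling
    top_k=50,              # restrict to the 50 most likely tokens
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```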
⚙️ Fine-Tuning & Customization
  • Prompt engineering vs fine-tuning
  • Full fine-tuning
  • Parameter-efficient methods (LoRA, QLoRA, adapters)
  • Instruction / supervised fine-tuning (SFT)
  • Retrieval-Augmented Generation (RAG)
  • Domain adaptation & evaluation
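A sketch of parameter-efficient fine-tuning with the peft library (LoRA). The base model and `target_modules` names are assumptions that differ per architecture, so read this as the shape of the API rather than exact settings.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],   # attention projection for GPT-2; differs per model
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()   # only a small fraction of weights are trainable
```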
🛠️ Applications & Practical Usage
  • Chatbots & assistants
  • RAG systems & knowledge search
  • Summarization & question answering
  • Code generation
  • Agents & tool calling
  • Deployment & optimization (quantization, batching, GPUs)
  • Open-source ecosystem: Hugging Face, vLLM, Ollama

3.2 AI Agents: Concepts & Architectures

Learn how to build intelligent systems that can reason, plan, use tools, and interact autonomously with users and environments.

📝 Prompt Engineering
  • Understanding prompts & instructions
  • Prompt structure (system / user / tools)
  • Types of prompting: zero-shot, few-shot, chain-of-thought
  • Role prompting & persona design
  • Output formatting (JSON, structured outputs)
  • Prompt optimization & evaluation
  • Guardrails & prompt safety
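A sketch of the prompt structure above (system role, few-shot example, structured JSON output) using the OpenAI Python client; the model name and schema are placeholders, and in practice you may also use structured-output features to guarantee valid JSON.

```python
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system",
     "content": "You are a strict classifier. Reply ONLY with JSON: "
                '{"sentiment": "positive" | "negative", "confidence": 0-1}'},
    # few-shot example
    {"role": "user", "content": "I absolutely loved it."},
    {"role": "assistant", "content": '{"sentiment": "positive", "confidence": 0.95}'},
    # actual query
    {"role": "user", "content": "The checkout flow kept crashing."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)   # expected to be the JSON described above
```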
🧠 Agentic Concepts (Core Building Blocks)
  • Environment & context handling
  • Short-term vs long-term memory
  • Memory stores (vector DB, cache, databases)
  • Persistence & state management
  • Tool / function calling (including auto tool choice in vLLM and SGLang)
  • Planning & reasoning loops
  • Reflection & self-correction
  • Social ability & human-in-the-loop
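A deliberately oversimplified reasoning-and-tool loop showing how these building blocks fit together. `call_llm` is a hypothetical stub standing in for any LLM API; frameworks like LangChain or CrewAI wrap this same pattern with much more care.

```python
import json

def call_llm(messages):
    """Hypothetical LLM call; a real implementation would hit an API here."""
    return '{"action": "final_answer", "input": "Done."}'

TOOLS = {"calculator": lambda expr: str(eval(expr))}   # toy tool registry (eval is unsafe in real systems)

def run_agent(task, max_steps=5):
    messages = [{"role": "system", "content": "Decide an action as JSON."},
                {"role": "user", "content": task}]            # short-term memory
    for _ in range(max_steps):                                # planning / reasoning loop
        decision = json.loads(call_llm(messages))
        if decision["action"] == "final_answer":
            return decision["input"]
        result = TOOLS[decision["action"]](decision["input"])  # tool call
        messages.append({"role": "tool", "content": result})   # feed observation back
    return "Stopped: step limit reached."

print(run_agent("What is 2 + 2?"))
```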
⚙️ Agent Architectures & Patterns
📚 Knowledge & Retrieval (RAG)
  • Document chunking strategies
  • Embeddings & vector databases
  • Similarity search
  • Retrieval-Augmented Generation (RAG)
  • Hybrid search (keyword + vector)
  • Context injection techniques
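A minimal sketch of fixed-size chunking with overlap, usually the first step of a RAG pipeline; the sizes are arbitrary here, and in practice you would tune them or chunk by sentences/headings.

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split text into overlapping character chunks ready for embedding."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap    # overlap preserves context across boundaries
    return chunks

document = "Retrieval-Augmented Generation grounds an LLM in your own documents. " * 30
pieces = chunk_text(document)
print(len(pieces), "chunks; first chunk:", pieces[0][:60], "...")
```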
🛠️ Tools, Frameworks & Deployment
  • LangChain, LlamaIndex
  • CrewAI / AutoGen / multi-agent frameworks
  • OpenAI function calling / tool APIs
  • FastAPI / backend integration
  • Async workflows & task queues
  • Monitoring, logging, evaluation
  • Cost optimization & latency control
🤖 AI agent frameworks

These are the most popular frameworks for building AI agents, and they are all open-source and free to use. I highly recommend learning at least one of them, as they will make your life much easier when building AI agents.

🖥️ LLM Servers & API Providers

🔒 Safety, Evaluation & Reliability
  • Hallucination reduction (model temperature)
  • Output validation
  • Guardrails & constraints
  • Agent testing & benchmarking
  • Human feedback loops
  • Security & prompt injection protection

3.3 Building Agentic Systems (Tools & Infrastructure)

3.4 Monitoring, Evaluation & Cost

  • Agent Monitoring & Evaluation
  • Evaluation Metrics: Measuring AI Agent Performance
  • Monitoring Tools: Keeping an Eye on AI Agents in Action
  • Cost Management: Optimizing Expenses for AI Agent Operations

3.5 Deployment: Production-Ready Systems

(Saying goodbye to localhost and welcoming the world)

⚙️ Deployment Optimization
  • Inference: batching, streaming, caching, quantization (int8, 4-bit), fp16
  • Latency & Cost: model size vs latency trade-offs, distillation, pruning.
  • Safety: input validation, hallucination mitigation, output filters, user feedback loops.

🐳 Containerization & Orchestration
  • Docker: Containerizing AI agents for scalable deployment, Docker Foundations Professional Certificate
  • Nginx: Load balancing and reverse proxy for AI agent APIs (if building your own API from scratch without using a managed service).

🚀 High-Performance Inference Engines
  • vLLM & SGLang: High-performance inference engines for AI agents - VLLM Docs, SGLang Docs
    • Custom inference templates: Enabling auto tool choice in vLLM (tool calling) and SGLang.
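A short offline-inference sketch with vLLM's Python API; the model name is a placeholder, and in production you would more likely run vLLM's OpenAI-compatible server instead.

```python
from vllm import LLM, SamplingParams

# placeholder model; any Hugging Face causal LM your GPU can hold works
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)
outputs = llm.generate(["Explain continuous batching in one paragraph."], params)

for out in outputs:
    print(out.outputs[0].text)
```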
☁️ Cloud Platforms & GPU Providers

Actually, there are thousands of cloud providers, but these are the ones I've used over the past 3 years and recommend.

  • GCP: Google Cloud Platform and its Vertex AI services (Google Generative AI, Model Garden for open-source models)
  • DigitalOcean: I use its VPS offerings for hosting small projects and APIs
  • Heroku: Platform as a Service; good for hosting small projects
  • Hosting.com & Hostinger: the cheapest hosting providers
  • Vast.ai & RunPod: GPU cloud providers for when you need to serve your LLM or other models
  • But please, when dealing with hosting, keep your eyes on 3 things: 1. your visa 😭😭, 2. security and firewalls, 3. resource usage.

  • GPUs: Accelerating AI agents with graphics processing units

    📂 Additional Resources

    🌟 Open to contributions — feel free to suggest improvements or additional resources!

    This repository includes additional files with extra learning materials:

    ⚠️ Note: Resources in these additional files may include community contributions and materials I haven't personally verified or watched yet. The main roadmap above contains only resources I've personally used and recommend.


    🤝 Contributing

    Contributions are welcome! If you have suggestions, resources, or improvements:

    1. 🍴 Fork the repository
    2. 🔨 Create your feature branch (git checkout -b feature/AmazingResource)
    3. 💾 Commit your changes (git commit -m 'Add some AmazingResource')
    4. 📤 Push to the branch (git push origin feature/AmazingResource)
    5. 🎯 Open a Pull Request

    ⭐ Show Your Support

    If this roadmap helped you in your AI learning journey, please consider:

    • Starring this repository
    • 🔄 Sharing it with others
    • 💬 Providing feedback or suggestions

    👨‍💻 About Me

I'm Romani. I've been freelancing since 2023 and now build AI systems at Penny Software. Navigating the overwhelming world of AI learning pushed me to create this roadmap: a clear, practical path built on hands-on projects and verified resources, not endless theory.

Romani Nasrat

    AI Engineer @ Penny Software | AI/ML & Agentic AI Specialist

    🎓 Graduate of Faculty of Computers and Artificial Intelligence, Benha University
    💼 AI Engineer at Penny Software - Building production-grade AI solutions
    🤖 Specializing in LLMs, AI Agents, RAG Pipelines, Vector Databases
    ⚡ Building scalable agentic systems with LangChain, CrewAI, PyTorch, FastAPI, Django
    🌐 Portfolio: romaninasrat.com
    🎻 Violin player | ♟️ Chess enthusiast | 🎮 Minecraft Expert haha


    📬 Contact & Connect

    Have questions? Want to collaborate? Let's connect!

    Portfolio LinkedIn GitHub Twitter Kaggle Telegram Email


    Made with ❤️ by Romani Nasrat

    © 2026 Agentic AI Roadmap • MIT License

