
Hi 👋, I'm Atikul Islam Sajib

🌍 Machine Learning Engineer | Research Assistant | AI Innovator

Advancing Computer Science research and innovation through Machine Learning and Artificial Intelligence.


I am currently pursuing advanced studies in Computer Science as an international student in Germany. I hold a B.Sc. in Computer Science and Engineering (CSE) from United International University (UIU), Bangladesh.

My expertise lies in Machine Learning Theory and Algorithms, Bioinformatics, and Computer Systems. I am deeply passionate about developing efficient and interpretable algorithms that expand the frontiers of modern AI.

With 7+ years of experience as a Machine Learning Engineer and Research Assistant, I specialize in designing and implementing advanced models that drive data-driven decision-making across academic and industrial domains.


💼 Professional Experience

  • Machine Learning Engineer (Part-Time), Siemens AG, Berlin, Germany
    Applied AI for predictive maintenance and industrial automation — focused on model reliability, multimodal data integration, and scalable deployment.
  • Machine Learning Engineer, Againsoft, Bangladesh
    Developed and deployed scalable ML pipelines for financial analytics and automation, enhancing prediction accuracy and model efficiency.
  • Research Assistant, Physikalisch-Technische Bundesanstalt (PTB), Germany
    Contributed to AI-based sensor fusion, precision measurement, and time-series signal modeling for metrological applications.
  • Research Assistant, Hochschule für Wirtschaft und Recht Berlin (HWR Berlin)
    Focused on Explainable AI (XAI), deep learning interpretability, and multimodal model optimization for research projects.
  • Research Assistant, Technische Universität Berlin (TU Berlin)
    Worked on Transformer-based architectures for vision-language integration and multimodal learning systems.
  • Research Assistant, Fraunhofer Institute, Germany
    Contributed to industrial AI systems research, focusing on cloud-based deployment, model compression, and real-time inference.

🧠 Research & Technical Expertise

  • Core Domains: Machine Learning Theory, Deep Learning, Multimodal AI, Bioinformatics, Explainable AI (XAI)
  • Key Architectures: CNN, RNN, Transformer, ViT, GAN, GPT, LLaMA3, Gemma
  • Frameworks & Tools: PyTorch, TensorFlow, Scikit-learn, MLflow, DVC, Docker, FastAPI, Flask
  • Cloud & MLOps: AWS, Azure, Model Versioning, CI/CD Pipelines
  • Programming: Python, Java, SQL, Bash


🚀 Research Interests

  • Designing efficient Transformer architectures for multimodal understanding
  • Developing foundation models and vision-language systems from scratch using PyTorch
  • Improving explainability and interpretability in modern AI models
  • Applying AI in Bioinformatics and scientific data modeling


🌐 Connect with Me

LinkedIn WhatsApp Email

atikul-islam-sajib


📊 GitHub Analytics

Top Languages

GitHub Stats

GitHub Streak

Pinned

  1. FakeImageGenerate

    This repo generates fake images from the CelebA dataset using GANs. It can also generate domain-specific images such as anime faces, skin-cancer lesion images, or other custom datasets. The model arc…

    Jupyter Notebook

  2. TransUNet

    TransUNet is a hybrid deep learning model that integrates Transformers with the U-Net architecture for medical image segmentation.

    Jupyter Notebook

  3. ViT-Scratch

    *An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale* introduces the Vision Transformer (ViT), which applies Transformer architectures directly to image patches. It splits an …

    Jupyter Notebook
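    The patch-splitting step described above is easy to sketch. Below is an illustrative NumPy version (the repository itself uses PyTorch) with the paper's default 224×224 RGB input and 16×16 patches; the function name `patchify` is mine, not the repo's:

    ```python
    import numpy as np

    # Illustrative sketch of ViT-style patchification: split a 224x224 RGB
    # image into non-overlapping 16x16 patches and flatten each patch into
    # a vector. In the full model, each flattened patch is then linearly
    # projected to the embedding dimension before entering the Transformer.
    def patchify(image: np.ndarray, patch: int = 16) -> np.ndarray:
        H, W, C = image.shape
        assert H % patch == 0 and W % patch == 0
        # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (num_patches, p*p*C)
        grid = image.reshape(H // patch, patch, W // patch, patch, C)
        grid = grid.transpose(0, 2, 1, 3, 4)
        return grid.reshape(-1, patch * patch * C)

    seq = patchify(np.zeros((224, 224, 3)))
    print(seq.shape)  # (196, 768): 14x14 patches, each 16*16*3 values
    ```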

  4. LLaMA3

    This is a simple unofficial implementation of the LLaMA3 transformer language model using PyTorch. It replicates the core architecture, including multi-head self-attention, feed-forward networks, a…

    Python
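    At the heart of the multi-head self-attention mentioned above is scaled dot-product attention. Here is a toy single-head NumPy sketch (not the repo's PyTorch code; the inputs and shapes are purely illustrative):

    ```python
    import numpy as np

    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    # Each output row is a weighted average of the rows of V, with weights
    # given by the softmaxed similarity between queries and keys.
    def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    Q = K = np.eye(4)                      # 4 tokens, d_k = 4
    V = np.arange(16.0).reshape(4, 4)
    out = attention(Q, K, V)
    print(out.shape)  # (4, 4)
    ```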

  5. tinyMultiModalClassifier

    This repository is for learning purposes, showcasing how a multi-modal classification model works with both images and text. It demonstrates combining visual features from CNNs or Vision Transformers with text…

    Python
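    The feature-combining idea it demonstrates, concatenating an image embedding with a text embedding before a shared classification head, can be sketched as follows. All dimensions, the random stand-in features, and the 3-class head are illustrative assumptions, not the repository's actual configuration:

    ```python
    import numpy as np

    # Late fusion for multimodal classification: concatenate per-modality
    # feature vectors (e.g. from a CNN/ViT image backbone and a text
    # encoder) and feed the joint vector to one linear classification head.
    rng = np.random.default_rng(0)
    img_feat = rng.standard_normal(512)    # stand-in image embedding
    txt_feat = rng.standard_normal(256)    # stand-in text embedding

    fused = np.concatenate([img_feat, txt_feat])        # shape (768,)
    W = rng.standard_normal((3, fused.size)) * 0.01     # toy 3-class head
    logits = W @ fused
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                # softmax over classes
    print(fused.shape, probs.shape)  # (768,) (3,)
    ```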

  6. TransformerScratch

    "Attention Is All You Need" is a landmark 2017 research paper by eight scientists at Google that introduced a new deep learning architecture known as the Transformer, based on atte…

    Jupyter Notebook