
Ian E. Nielsen (PhD)

Conducting eXplainable Artificial Intelligence (XAI) Research with an Emphasis on Computer Vision, Object Detection, and Image Generation [LinkedIn] [Scholar]

About Me

I am a machine learning researcher and engineer with over six years of experience implementing, developing, and training state-of-the-art machine learning models. My expertise is in computer vision, LLMs, and generative image/video models. Much of my research focuses on eXplainable Artificial Intelligence (XAI), which gives me unique insight into the inner workings of black-box machine learning algorithms. I enjoy using visually intuitive explanations of complex machine learning models to build AI systems that are interpretable, trustworthy, and reliable. This includes creating inherently interpretable models and using XAI as a tool to debug and enhance models through novel architectures and training schemes.
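As a minimal illustration of what a gradient-based attribution method computes, here is a sketch of vanilla-gradient saliency on a toy linear classifier. The model, its weights, and the input are all hypothetical, chosen so the gradient can be written analytically without an autograd library; real attribution methods apply the same idea to deep networks via backpropagation.

```python
import numpy as np

def model(x, W):
    """Toy linear classifier: logits = W @ x (hypothetical model)."""
    return W @ x

def saliency(x, W, target_class):
    """Vanilla-gradient attribution: |d(logit_c)/d(x)|.

    For a linear model, the gradient of the target logit with respect
    to the input is simply that class's weight row, so each input
    feature's attribution is the magnitude of its weight.
    """
    grad = W[target_class]   # analytic gradient of logit_c w.r.t. x
    return np.abs(grad)      # magnitude as feature importance

# Hypothetical weights: two classes, three input features.
W = np.array([[0.5, -2.0, 0.1],
              [1.0,  0.3, 0.0]])
x = np.ones(3)

attr = saliency(x, W, target_class=0)
print(attr)  # feature 1 dominates the class-0 prediction
```

The attribution map here reduces to the weight magnitudes, which is exactly why gradient-based saliency is exact for linear models and only a local approximation for deep nonlinear ones.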

Education

  • 2020-2025 Henry M. Rowan College of Engineering, Rowan University Ph.D. in Electrical and Computer Engineering
  • 2016-2020 Henry M. Rowan College of Engineering, Rowan University B.S. in Electrical and Computer Engineering

Publications (Google Scholar, ORCID)

  • [DOI] Transformers in Time-Series Analysis: A Tutorial.
    Circuits, Systems, and Signal Processing, vol. 42, no. 12, pp. 7433-7466, (2023).
    Sabeen Ahmed, Ian E. Nielsen, Aakash Tripathi, Shamoon Siddiqui, Ravi P. Ramachandran, and Ghulam Rasool.

  • [DOI] Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks.
    IEEE Signal Processing Magazine, vol. 39, no. 4, pp. 73-84, (2022).
    Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Ravi P. Ramachandran, and Nidhal Carla Bouaynaya.

  • [DOI] EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models.
    IEEE Access, vol. 11, pp. 82556-82569, (2023).
    Ian E. Nielsen, Ravi P. Ramachandran, Nidhal Carla Bouaynaya, H. M. Fathallah-Shaykh, and Ghulam Rasool.

  • [DOI] Targeted Background Removal Creates Interpretable Feature Visualizations.
    2023 IEEE 66th International Midwest Symposium on Circuits and Systems (MWSCAS), Tempe, AZ, USA, pp. 1050-1054, (2023).
    Ian E. Nielsen, Ravi P. Ramachandran, Nidhal Carla Bouaynaya, H. M. Fathallah-Shaykh, and Ghulam Rasool.

Popular repositories

  1. Robust_Explainability_Experiments (Python)

  2. yolov7_mavrc (Jupyter Notebook)
     Forked from naddeok96/yolov7_mavrc. Implementation of the paper "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors."

  3. ADV-EMRG-TPCS-IN-CI-ML-AND-DM (Jupyter Notebook)

  4. EvalAttAI (Python)

  5. U-2-Net (Python)
     Forked from xuebinqin/U-2-Net. Code for the paper accepted in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."

  6. ShiftSmoothedAttributions (Python)
     Systems, Devices, and Algorithms in Bioinformatics final project, Spring 2021.