CV

Basics

Name: Howard Prioleau
Email: howarddoesai@gmail.com
Location: Washington, DC

Education

Work

  • JAN 2024 - Present
    Machine Learning Researcher
    Artificial Intelligence for Positive Change Lab @ Howard University
    Conducted research on large language model adaptation, alignment, and instruction tuning for clinical NLP, leading cross-lab collaborations on multimodal and agentic AI systems.
    • Advanced research on LLM adaptation, supervised fine-tuning, and instruction tuning, achieving state-of-the-art adverse drug event (ADE) detection across biomedical corpora.
    • Improved LLM alignment and robustness using RLHF and DPO, reducing factual inconsistency and overprediction by 30% in clinical NLP.
    • Led a team of five researchers in developing multimodal fusion models and agentic reasoning systems integrating speech, text, and knowledge-grounded inference, with publications in ACL, IEEE, AAAI, and PSB.
    • Directed the National Geospatial-Intelligence Agency (NGA) team on multimodal and geospatial AI, delivering ML solutions for GIS and signal-based data analysis.
  • MAY 2025 - AUG 2025
    Machine Learning Engineer Intern
    Reddit
    Delivered Reddit’s first production-scale engagement forecasting model, built on a novel time-aware Transformer architecture for long-horizon prediction.
    • Designed and implemented a time-aware Transformer architecture for long-horizon engagement forecasting across billions of interactions, achieving 85% prediction accuracy.
    • Built a distributed training and inference pipeline (PyTorch Lightning + Ray) reducing training latency by 40% and enabling a 600× faster inference pipeline for near real-time forecasts.
    • Conducted LLM interpretability analyses via attention and embedding probing, deploying learned embeddings as user representations to enhance personalization and recommendation quality.
  • MAY 2024 - AUG 2024
    Machine Learning Engineer Intern
    Reddit
    Built an LLM-driven onboarding recommender combining retrieval-augmented generation, structured prompting, and multi-agent orchestration to reduce cold-start friction and improve recommendation variety.
    • Developed an LLM-driven onboarding recommender system using RAG and structured prompting, reducing cold-start friction by 25% in internal A/B testing.
    • Designed a prompt orchestration layer (Go + gRPC + GraphQL) for multi-agent content workflows, enabling scalable real-time personalization.
    • Built LLM-as-a-Judge evaluation pipelines combining human preference scoring and reward modeling, improving recommendation accuracy by 14%.
  • MAY 2023 - AUG 2023
    Software Engineer Intern
    Reddit
    Built developer SDKs and APIs for Reddit’s Developer Platform, improving integration workflows and adoption across internal and external teams.
    • Developed React/TypeScript SDKs and REST APIs for Reddit’s Developer Platform, enabling third-party integrations and adoption by over 300 developers.
    • Created frontend tools improving usability, documentation, and integration workflows across partner engineering teams.
    • Optimized API performance and scalability in collaboration with backend teams, improving response latency and platform reliability.
  • JUN 2021 - JAN 2024
    Undergraduate Machine Learning Researcher
    Human Centered Artificial Intelligence Institute @ Howard University
    Led multilingual NLP, computer vision, and acoustic analysis projects on code-switched and low-resource tasks, resulting in publications at ACL, ICLR, and PSB.
    • Achieved state-of-the-art performance in code-switched sentiment analysis, language identification, and dementia MMSE prediction.
    • Designed language-specific transformer fusion systems that improved multilingual classification accuracy by 21%, leading to publications at ACL (SemEval), ICLR, and PSB.
    • Organized and taught an undergraduate NLP Bootcamp covering model interpretability, evaluation design, and reproducible ML workflows.
  • JAN 2021 - JUN 2021
    Machine Learning Research Intern
    Excella
    Built scalable ML experimentation frameworks and synthetic data generation pipelines leveraging GPT and BERT embeddings, enhancing data efficiency and model robustness.
    • Built and deployed cloud-native ML experimentation frameworks (AWS/GCP) reducing development time and supporting distributed experimentation.
    • Designed synthetic data augmentation pipelines leveraging GPT and BERT embeddings, improving classification robustness by 18%.
    • Automated dataset normalization and benchmarking workflows to enhance reproducibility and data quality across ML deployments.

Awards

  • 2024
    Google PhD Fellowship Recipient
    Google
    The Google PhD Fellowship recognizes exceptional graduate students conducting innovative and impactful research in computer science and related fields, with the NLP Fellowship specifically awarded for outstanding contributions to Natural Language Processing.
  • 2024
    NSF Fellowship Honorable Mention
    National Science Foundation
    The NSF Fellowship Honorable Mention recognizes promising graduate students who are pursuing research-based master's and doctoral degrees in NSF-supported science, technology, engineering, and mathematics disciplines.
  • 2024
    AIM-AHEAD Research Fellowship Recipient
    AIM-AHEAD
    The AIM-AHEAD Research Fellowship supports researchers leveraging AI/ML to advance health equity, fostering innovation, diversity, and collaboration in addressing healthcare disparities.

Publications

Skills

Programming Languages
Python
TypeScript
JavaScript
Java
HTML/CSS
C++
C
PHP
Swift
Technologies
Pandas
Keras
TensorFlow
scikit-learn
PyTorch
NumPy
Hugging Face Transformers
React
SQL
Flask