As artificial intelligence evolves, my current research focuses on the privacy risks and ethical implications of Large Language Models (LLMs) such as GPT, along with related pretrained models like BERT. These models can generate highly realistic text, but they may also unintentionally expose private information memorized from their training data.
- Data Leakage: Risk of reproducing memorized personal or confidential training data.
- Membership Inference: Attacks that infer whether a user’s data was part of the training set (a loss-threshold variant is sketched after this list).
- Re-identification Threats: De-anonymizing users through model outputs.
- Ethical Deployment: Study of secure and responsible use in sensitive contexts.
- Adaptive Mitigation: Exploring differential privacy, federated learning, and prompt filtering as defensive strategies (a DP-SGD-style noising step is sketched below).
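To make the membership-inference risk concrete, here is a minimal sketch of the classic loss-threshold attack: texts the model scores with unusually low loss are flagged as likely training-set members. The `model_loss` callable and the calibration data are assumptions for illustration, not a specific system's API.

```python
# Minimal loss-threshold membership inference sketch.
# `model_loss(text)` is an assumed callable returning the model's
# average per-token loss (negative log-likelihood) for a string;
# any LLM wrapper that exposes a loss can supply it.

from typing import Callable, List

def calibrate_threshold(
    non_members: List[str],
    model_loss: Callable[[str], float],
    percentile: float = 5.0,
) -> float:
    """Choose a loss threshold under which only ~`percentile`% of
    known non-member texts fall, bounding the false-positive rate."""
    losses = sorted(model_loss(t) for t in non_members)
    k = max(0, int(len(losses) * percentile / 100) - 1)
    return losses[k]

def infer_membership(
    candidates: List[str],
    model_loss: Callable[[str], float],
    threshold: float,
) -> List[bool]:
    """Flag a candidate as a likely training-set member when the
    model assigns it suspiciously low loss (it looks memorized)."""
    return [model_loss(text) < threshold for text in candidates]
```

Stronger attacks calibrate against reference models rather than a single global threshold, but the low-loss signal above is the core idea.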
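On the mitigation side, this is a hedged sketch of the per-example gradient clipping and Gaussian-noise step at the core of DP-SGD, the standard recipe for differentially private training. Shapes, parameter names, and the NumPy representation are illustrative; a real training loop would rely on a dedicated DP library.

```python
# DP-SGD-style update sketch: clip each example's gradient, average,
# and add Gaussian noise calibrated to the clipping norm so that no
# single example dominates the update.

import numpy as np

def dp_noisy_mean_gradient(per_example_grads, clip_norm=1.0,
                           noise_multiplier=1.1, rng=None):
    """per_example_grads: list of same-shape NumPy arrays, one per example."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0, noise_multiplier * clip_norm / len(clipped), size=mean.shape
    )
    return mean + noise
```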
My Ph.D. addressed a pressing cybersecurity challenge: detecting and mitigating the spread of fake news, misinformation, and disinformation on online social networks.
- Multimodal Content Analysis: Combining text and image signals to identify deception (a late-fusion baseline is sketched after this list).
- Contextual Analysis: Considering metadata, post interactions, and social cues.
- External Evidence: Using web scraping to validate claims against trusted sources (see the retrieval sketch below).
- Explainability (XAI): Producing interpretable predictions with LIME applied on top of multimodal models such as ViLBERT (illustrated below).
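As a minimal illustration of the multimodal idea, the sketch below late-fuses precomputed text and image embeddings by concatenation and trains a linear classifier on top. The upstream encoders that produce the embeddings are assumed to exist; scikit-learn keeps the example short.

```python
# Late-fusion baseline for multimodal deception detection:
# concatenate text and image feature vectors, then fit a linear model.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_fusion_classifier(text_feats, image_feats, labels):
    """text_feats: (n, d_text), image_feats: (n, d_image), labels: (n,)."""
    fused = np.concatenate([text_feats, image_feats], axis=1)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(fused, labels)
    return clf
```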
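For the external-evidence step, here is a deliberately simple retrieval sketch: fetch a page from a trusted source and test whether a claim's key phrases appear in its text. The URL and exact-match logic are placeholders; real claim validation would pair retrieval with entailment or stance classification.

```python
# Toy evidence check: does a trusted page mention the claim's keywords?

import requests
from bs4 import BeautifulSoup

def claim_appears_on_page(claim_keywords, url, timeout=10):
    """Return True if every keyword occurs in the page's visible text."""
    resp = requests.get(url, timeout=timeout)
    resp.raise_for_status()
    text = BeautifulSoup(resp.text, "html.parser").get_text(" ").lower()
    return all(kw.lower() in text for kw in claim_keywords)
```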
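For explainability, a short sketch of attributing one prediction to input tokens with LIME's text explainer. The `predict_proba` function is a stand-in for the deployed detector: it must map a list of raw strings to an (n, 2) array of class probabilities.

```python
# Explain a single detector prediction with LIME.

from lime.lime_text import LimeTextExplainer

def explain_prediction(text, predict_proba, num_features=6):
    """Return the tokens (with weights) that most drove the decision."""
    explainer = LimeTextExplainer(class_names=["real", "fake"])
    exp = explainer.explain_instance(text, predict_proba,
                                     num_features=num_features)
    return exp.as_list()  # [(token, weight), ...]
```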
My Master's research focused on optimizing virtual machine (VM) placement in cloud computing environments where multiple VMs share physical resources.
I designed methods to mitigate performance interference between co-hosted VMs by:
- Modeling resource contention dynamics.
- Implementing smart placement algorithms in CloudSim (an interference-aware heuristic is sketched after this list).
- Improving energy efficiency and allocation fairness.
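Since CloudSim is a Java framework, the following Python sketch shows the shape of an interference-aware placement heuristic rather than actual simulator code; the contention estimate is a deliberately simple stand-in for a real resource-contention model.

```python
# Interference-aware VM placement sketch: among hosts with enough
# residual CPU and memory, pick the one where the estimated increase
# in contention from adding this VM is smallest.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VM:
    cpu: float
    mem: float

@dataclass
class Host:
    cpu_cap: float
    mem_cap: float
    vms: List[VM] = field(default_factory=list)

    def fits(self, vm: VM) -> bool:
        return (sum(v.cpu for v in self.vms) + vm.cpu <= self.cpu_cap and
                sum(v.mem for v in self.vms) + vm.mem <= self.mem_cap)

    def contention_delta(self, vm: VM) -> float:
        # Toy interference model: new CPU demand times the CPU
        # pressure already on the host.
        return vm.cpu * sum(v.cpu for v in self.vms)

def place(vm: VM, hosts: List[Host]) -> Optional[Host]:
    feasible = [h for h in hosts if h.fits(vm)]
    if not feasible:
        return None  # no host can take the VM
    best = min(feasible, key=lambda h: h.contention_delta(vm))
    best.vms.append(vm)
    return best
```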