<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://blenwbegashaw.github.io//feed.xml" rel="self" type="application/atom+xml" /><link href="https://blenwbegashaw.github.io//" rel="alternate" type="text/html" /><updated>2026-03-11T22:55:22+00:00</updated><id>https://blenwbegashaw.github.io//feed.xml</id><title type="html">Blen Begashaw</title><subtitle>Computer science graduate </subtitle><entry><title type="html">DramaBuddy</title><link href="https://blenwbegashaw.github.io//work/2025/12/20/Dramabuddy.html" rel="alternate" type="text/html" title="DramaBuddy" /><published>2025-12-20T10:00:00+00:00</published><updated>2025-12-20T10:00:00+00:00</updated><id>https://blenwbegashaw.github.io//work/2025/12/20/Dramabuddy</id><content type="html" xml:base="https://blenwbegashaw.github.io//work/2025/12/20/Dramabuddy.html"><![CDATA[<blockquote>
  <p>Flutter, Dart, TMDb API, WebView, URL Launcher, SharedPreferences, Git, GitHub</p>
</blockquote>

<p>DramaBuddy was inspired by my love for Korean dramas and the need for a single, mobile-first app to track, rate, and discover content. The goal is to make drama tracking fun, simple, and visually engaging.</p>

<p><a href="https://github.com/BlenWBegashaw/dramabuddy/">GitHub Repo</a></p>

<h1 id="overview">Overview</h1>
<p>DramaBuddy is a K-Drama tracking and discovery application built with a strong focus on clarity, usability, and mobile-first design. The app allows users to manage their watchlist, track viewing progress, and explore new content through structured metadata and social-inspired discovery.</p>

<p>The project is designed as a scalable product with future App Store release in mind.</p>

<h1 id="what-it-does">What it does</h1>
<p>DramaBuddy provides tools to help users manage their drama experience:</p>

<ul>
  <li>Organize shows by <strong>Watching</strong>, <strong>Completed</strong>, and <strong>Planned</strong></li>
  <li>Track episode progress per show</li>
  <li>Add private ratings and notes</li>
  <li>Discover dramas using curated metadata</li>
  <li>Explore trending drama-related content</li>
  <li>View a visual <strong>Monthly Wrapped</strong> summary</li>
</ul>

<h1 id="screens">Screens</h1>
<!-- Replace with final App Store–ready screenshots -->

<h3 id="core-experience">Core Experience</h3>
<div style="display: flex; gap: 16px; flex-wrap: wrap;">
  <img src="/assets/images/9.png" width="200" />
  <img src="/assets/images/2.png" width="200" />
  <img src="/assets/images/6.png" width="200" />
  <img src="/assets/images/5.png" width="200" />
</div>

<h3 id="discovery--profile">Discovery &amp; Profile</h3>
<div style="display: flex; gap: 16px; flex-wrap: wrap;">
  <img src="/assets/images/12.png" width="200" />
  <img src="/assets/images/8.png" width="200" />
</div>

<h3 id="details--insights">Details &amp; Insights</h3>
<div style="display: flex; gap: 16px; flex-wrap: wrap;">
  <img src="/assets/images/10.png" width="200" />
  <img src="/assets/images/11.png" width="200" />
  <img src="/assets/images/13.png" width="200" />
</div>

<h1 id="product--design-focus">Product &amp; Design Focus</h1>
<p>DramaBuddy emphasizes:</p>

<ul>
  <li>A <strong>minimal, iOS-inspired interface</strong></li>
  <li>Clear information hierarchy</li>
  <li>Smooth navigation between sections</li>
  <li>Visual consistency across screens</li>
  <li>Thoughtful empty and loading states</li>
</ul>

<p>The design prioritizes approachability and ease of use over complexity.</p>

<h1 id="technical-approach">Technical Approach</h1>
<p>The application is built using modern mobile development practices:</p>

<ul>
  <li>Cross-platform development with <strong>Flutter</strong></li>
  <li>Modular UI components for reusability</li>
  <li>External API integration for metadata</li>
  <li>Local persistence for user data</li>
  <li>Separation between presentation and data layers</li>
</ul>

<p>Implementation details are intentionally abstracted at this stage.</p>

<h1 id="architecture-overview">Architecture Overview</h1>
<p>DramaBuddy follows a layered architecture:</p>

<ul>
  <li><strong>Presentation Layer:</strong> Screens and reusable UI components</li>
  <li><strong>Domain Layer:</strong> Models representing dramas and user state</li>
  <li><strong>Data Layer:</strong> Services responsible for fetching and storing data</li>
</ul>

<p>This structure supports maintainability and future feature expansion.</p>

<h1 id="what-i-focused-on">What I focused on</h1>
<ul>
  <li>Designing a product-ready mobile experience</li>
  <li>Balancing feature richness with simplicity</li>
  <li>Creating a foundation suitable for App Store deployment</li>
  <li>Thinking beyond a demo toward a real consumer app</li>
</ul>

<h1 id="future-direction">Future Direction</h1>
<p>Planned enhancements include:</p>

<ul>
  <li>User authentication and cloud sync</li>
  <li>Smart recommendations</li>
  <li>Push notifications for new content</li>
  <li>Shareable viewing summaries</li>
  <li>App Store release</li>
</ul>

<h1 id="acknowledgments">Acknowledgments</h1>
<ul>
  <li><strong>Korean Drama Industry:</strong> For the creative work that inspired the product concept</li>
  <li><strong>Flutter Ecosystem:</strong> For tools and open-source packages</li>
  <li><strong>Mobile Design Community:</strong> For iOS UI and UX inspiration</li>
</ul>]]></content><author><name></name></author><category term="work" /><category term="Mobile Development" /><category term="Full-Stack" /><summary type="html"><![CDATA[DramaBuddy is a mobile application designed to help users track Korean dramas, manage watch progress, and discover new content in a clean, intuitive interface.]]></summary></entry><entry><title type="html">Web Vulnerability Scanner</title><link href="https://blenwbegashaw.github.io//work/2025/07/10/webvulnerability.html" rel="alternate" type="text/html" title="Web Vulnerability Scanner" /><published>2025-07-10T12:00:00+00:00</published><updated>2025-07-10T12:00:00+00:00</updated><id>https://blenwbegashaw.github.io//work/2025/07/10/webvulnerability</id><content type="html" xml:base="https://blenwbegashaw.github.io//work/2025/07/10/webvulnerability.html"><![CDATA[<blockquote>
  <p>Python, Flask, BeautifulSoup, Requests, HTML/CSS, Render</p>
</blockquote>

<p>I developed this web-based educational vulnerability scanner to provide a safe and controlled environment for demonstrating common web security flaws. Built with Flask and Python, the tool allows users to scan predefined demo targets for vulnerabilities like SQL Injection (SQLi) and Cross-Site Scripting (XSS).</p>

<p><a href="https://web-vulnerability-scanner-ma2w.onrender.com">Live Demo</a></p>

<p><a href="https://github.com/BlenWBegashaw/web-vulnerability-scanner">GitHub Repo</a></p>

<h1 id="inspiration">Inspiration</h1>
<p>I wanted to create a hands-on tool that helps students and aspiring security researchers understand how vulnerabilities are detected at the code level. I realized that many people find “black-box” scanning intimidating, so I built this scanner to be transparent and ethical—restricting it to safe demo environments so users can learn without the risk of harming live production websites.</p>
<h2 id="demo">Demo</h2>
<div class="video-container">
    <video id="iomt-video" width="800" height="450" controls="">
      <source src="/assets/webvul.mp4" type="video/mp4" />
      Your browser does not support the video tag.
    </video>
</div>

<h1 id="what-it-does">What it does</h1>
<p>I designed the scanner to focus on the most common web-based attack vectors found in the OWASP Top 10:</p>

<ul>
  <li><strong>SQL Injection (SQLi) Detection:</strong> I implemented logic to identify forms that are susceptible to malicious SQL queries.</li>
  <li><strong>Cross-Site Scripting (XSS) Scanning:</strong> The tool analyzes how web pages handle user-supplied data to detect potential script injection points.</li>
  <li><strong>Target Verification:</strong> For safety, I built a verification system that restricts scans to predefined targets like “Juice Shop” or “Localhost.”</li>
  <li><strong>Real-time Results:</strong> I created a clean, intuitive web interface that displays simulated vulnerability results immediately after a scan.</li>
</ul>

<h1 id="how-i-built-it">How I built it</h1>
<ul>
  <li><strong>Core Logic:</strong> I wrote the scanning engine in Python, utilizing <strong>BeautifulSoup</strong> to parse HTML and identify input forms and <strong>Requests</strong> to simulate interactions with the target server.</li>
  <li><strong>Backend:</strong> I used <strong>Flask</strong> to handle the web routing and the communication between the UI and the scanning core.</li>
  <li><strong>Security layer:</strong> I developed a <code class="language-plaintext highlighter-rouge">security.py</code> module specifically to act as a whitelist, ensuring that the scanner cannot be used for unauthorized activities.</li>
  <li><strong>Deployment:</strong> I deployed the final application on <strong>Render</strong>, configuring it to automatically bind to the correct ports for seamless web access.</li>
</ul>
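<p>To illustrate the core of the scanning engine, here is a minimal sketch of the form-discovery and reflection-check logic. It is a simplified stand-in for the actual code: the function names and probe string are illustrative, and the real tool also submits payloads with Requests before inspecting the response.</p>

```python
from bs4 import BeautifulSoup

def extract_forms(html):
    """Parse a page and list each form's action, method, and named inputs."""
    soup = BeautifulSoup(html, "html.parser")
    forms = []
    for form in soup.find_all("form"):
        forms.append({
            "action": form.get("action", ""),
            "method": form.get("method", "get").lower(),
            "inputs": [i.get("name") for i in form.find_all("input") if i.get("name")],
        })
    return forms

# Reflected-XSS check: if a submitted marker string comes back verbatim in the
# response body, the page did not escape user-supplied input.
XSS_PROBE = "<script>alert('xss-probe')</script>"

def is_reflected(response_text):
    return XSS_PROBE in response_text
```

The same form inventory then drives the SQLi checks: each named input becomes a candidate injection point.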

<h1 id="challenges-i-ran-into">Challenges I ran into</h1>
<ul>
  <li><strong>Safe Scoping:</strong> One of my biggest challenges was ensuring the scanner couldn’t be “tricked” into scanning non-whitelisted sites. I had to refine my URL parsing logic to prevent bypass attempts.</li>
  <li><strong>Parsing Complex Forms:</strong> I found that many modern web applications use dynamic forms that are difficult for basic scrapers to read. I spent extra time optimizing my BeautifulSoup selectors to ensure accurate form detection.</li>
  <li><strong>Deployment Port Mapping:</strong> I initially struggled with getting the Flask app to communicate correctly with Render’s environment variables, which required a deep dive into gunicorn and port binding.</li>
</ul>
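<p>The whitelist idea behind <code class="language-plaintext highlighter-rouge">security.py</code> can be sketched with the standard library alone. The host list here is illustrative; the key point is comparing the <em>parsed hostname</em> rather than the raw URL string, which is what defeats bypass attempts:</p>

```python
from urllib.parse import urlparse

# Hosts the scanner is allowed to touch (illustrative list).
ALLOWED_HOSTS = {"localhost", "127.0.0.1", "juice-shop.herokuapp.com"}

def is_allowed_target(url):
    """Reject any URL whose hostname is not explicitly whitelisted.

    Parsing the URL, instead of substring-matching, defeats tricks like
    https://evil.com/?x=localhost or https://localhost.evil.com/.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    return parsed.hostname in ALLOWED_HOSTS
```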

<h1 id="accomplishments-that-i-am-proud-of">Accomplishments that I am proud of</h1>
<ul>
  <li>I successfully created a functional, ethical hacking tool that is accessible directly through a web browser.</li>
  <li>I maintained a strict security posture by successfully implementing a robust whitelist system.</li>
  <li>I delivered a project that serves as a practical educational resource for understanding web security.</li>
</ul>

<h1 id="what-i-learned">What I learned</h1>
<ul>
  <li>I gained a much deeper understanding of how automated vulnerability scanners “see” the web.</li>
  <li>I learned how to manage security-sensitive Python applications in a cloud-hosted environment.</li>
  <li>I improved my skills in ethical software design, specifically how to build tools that prevent misuse while still being useful for learning.</li>
</ul>

<h1 id="what-is-next-for-my-scanner">What is next for my Scanner</h1>
<ul>
  <li>I plan to add more vulnerability checks, such as <strong>Insecure Security Headers</strong> and <strong>Command Injection</strong>.</li>
  <li>I want to enhance the UI/UX with better result visualization, perhaps using D3.js for interactive threat reports.</li>
  <li>I am looking to <strong>Dockerize</strong> the entire application to allow for more consistent and isolated local deployments for classroom settings.</li>
</ul>]]></content><author><name></name></author><category term="work" /><category term="Cybersecurity" /><summary type="html"><![CDATA[Python, Flask, BeautifulSoup, Requests, HTML/CSS, Render]]></summary></entry><entry><title type="html">IoMT Secure Dashboard</title><link href="https://blenwbegashaw.github.io//work/2025/04/25/IOMTSecure.html" rel="alternate" type="text/html" title="IoMT Secure Dashboard" /><published>2025-04-25T12:00:00+00:00</published><updated>2025-04-25T12:00:00+00:00</updated><id>https://blenwbegashaw.github.io//work/2025/04/25/IOMTSecure</id><content type="html" xml:base="https://blenwbegashaw.github.io//work/2025/04/25/IOMTSecure.html"><![CDATA[<blockquote>
  <p>Python, Streamlit, Scikit-learn, Random Forest, SVM, FastAPI, Matplotlib, Joblib, Pandas</p>
</blockquote>

<p>This project is a real-time anomaly detection dashboard built to monitor and detect cyber threats in Internet of Medical Things (IoMT) environments. By leveraging machine learning, it provides a vital layer of security for smart hospital systems.</p>

<p><a href="https://github.com/BlenWBegashaw/IoMT-Anomaly-Detection2/">GitHub Repo</a></p>

<h1 id="inspiration">Inspiration</h1>
<p>As healthcare systems become increasingly connected, they also become more vulnerable to cyberattacks. I created this dashboard to provide healthcare IT professionals with a real-time tool to detect and visualize anomalies—such as spoofing, unauthorized access, and ransomware—before they can compromise patient safety or data.</p>

<h1 id="what-it-does">What it does</h1>
<p>The IoMT Anomaly Detection Dashboard offers a suite of tools for real-time monitoring and model evaluation:</p>

<ul>
  <li><strong>Real-time Simulation:</strong> Simulates IoMT data streams to provide live predictions on network health.</li>
  <li><strong>Dual-Model Detection:</strong> Utilizes both <strong>Random Forest</strong> and <strong>Support Vector Machine (SVM)</strong> models to identify anomalies.</li>
  <li><strong>Model Evaluation:</strong> Displays live confusion matrices to help researchers understand model performance.</li>
  <li><strong>Automated Logging:</strong> Maintains a <code class="language-plaintext highlighter-rouge">detection_log.csv</code> of all predictions for post-incident analysis and performance auditing.</li>
</ul>
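<p>The automated logging can be sketched as a small append-only CSV writer. The column names here are illustrative, not necessarily the exact schema of <code class="language-plaintext highlighter-rouge">detection_log.csv</code>:</p>

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("detection_log.csv")
FIELDS = ["timestamp", "model", "prediction", "confidence"]

def log_prediction(model_name, prediction, confidence, path=LOG_PATH):
    """Append one prediction to the CSV log, writing a header on first use."""
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": model_name,
            "prediction": prediction,
            "confidence": f"{confidence:.3f}",
        })
```

An append-only log like this keeps every prediction available for post-incident analysis without holding anything in memory between dashboard reruns.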

<h2 id="demo">Demo</h2>
<div class="video-container">
    <video id="iomt-video" width="800" height="450" controls="">
      <source src="/assets/IOMT.mp4" type="video/mp4" />
      Your browser does not support the video tag.
    </video>
</div>

<script>
  const video = document.getElementById('iomt-video');
  video.addEventListener('loadedmetadata', () => {
    video.currentTime = 17; // start at 17 seconds
  });
</script>

<h1 id="how-i-built-it">How I built it</h1>
<ul>
  <li><strong>Data Source:</strong> IoMT.csv dataset containing labeled network traffic patterns.</li>
  <li><strong>Machine Learning:</strong> Developed using Python and Scikit-learn, focusing on Random Forest and SVM classifiers for high-accuracy threat detection.</li>
  <li><strong>Dashboard:</strong> Built with Streamlit to create an interactive, web-based UI that handles live data processing.</li>
  <li><strong>Preprocessing:</strong> Utilized StandardScaler and Joblib for efficient model serialization and real-time feature scaling.</li>
</ul>
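<p>As a rough sketch of the training-and-serialization flow (with synthetic data standing in for IoMT.csv, and illustrative file names), scaling is wrapped into each pipeline so that inference applies it automatically:</p>

```python
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the labeled network-traffic features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + X[:, 3] > 1).astype(int)  # 1 = anomalous traffic

# StandardScaler is baked into each pipeline, so serialized models
# carry their own preprocessing.
rf = make_pipeline(StandardScaler(),
                   RandomForestClassifier(n_estimators=100, random_state=0))
svm = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", probability=True, random_state=0))

rf.fit(X, y)
svm.fit(X, y)

# Serialize for the Streamlit dashboard to load at startup.
joblib.dump(rf, "rf_model.joblib")
joblib.dump(svm, "svm_model.joblib")
```

The dashboard then only needs <code class="language-plaintext highlighter-rouge">joblib.load(...)</code> and a call to <code class="language-plaintext highlighter-rouge">predict</code> per incoming data point.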

<h3 id="architecture">Architecture</h3>

<p><img src="/assets/images/IOMT2.png" alt="Architecture" /></p>

<h1 id="challenges-i-ran-into">Challenges I ran into</h1>
<ul>
  <li><strong>Real-time Performance:</strong> Ensuring the dashboard could process and visualize data points rapidly without lag.</li>
  <li><strong>Model Accuracy:</strong> Tuning the SVM model to minimize false positives, which are critical in a healthcare setting to avoid “alert fatigue.”</li>
  <li><strong>Data Structuring:</strong> Handling the specific feature requirements of medical IoT devices while maintaining a clean preprocessing pipeline.</li>
</ul>

<h1 id="accomplishments-that-i-am-proud-of">Accomplishments that I am proud of</h1>
<ul>
  <li>Successfully integrated two distinct ML models into a single, cohesive dashboard.</li>
  <li>Developed a functional logging system that records threats automatically.</li>
  <li>Created a tool that bridges the gap between complex machine learning research and practical cybersecurity application.</li>
</ul>

<h1 id="what-i-learned">What I learned</h1>
<ul>
  <li>How to deploy machine learning models into a live Streamlit environment.</li>
  <li>The specific characteristics of IoMT network traffic and how they differ from standard IT environments.</li>
  <li>Advanced visualization techniques for displaying model evaluation metrics like confusion matrices in real-time.</li>
</ul>

<h1 id="what-is-next-for-iomt-dashboard">What is next for IoMT Dashboard</h1>
<ul>
  <li>Integrate the FastAPI backend to support remote data ingestion from actual medical devices.</li>
  <li>Implement deep learning models to better detect time-series based attack patterns.</li>
  <li>Add an automated alert system that sends notifications via email or SMS when high-severity threats are detected.</li>
</ul>

<hr />]]></content><author><name></name></author><category term="work" /><category term="Cybersecurity" /><category term="Machine Learning" /><summary type="html"><![CDATA[Python, Streamlit, Scikit-learn, Random Forest, SVM, FastAPI, Matplotlib, Joblib, Pandas]]></summary></entry><entry><title type="html">Case(y) for Salesforce</title><link href="https://blenwbegashaw.github.io//work/2024/07/01/CaseyBot.html" rel="alternate" type="text/html" title="Case(y) for Salesforce" /><published>2024-07-01T10:00:00+00:00</published><updated>2024-07-01T10:00:00+00:00</updated><id>https://blenwbegashaw.github.io//work/2024/07/01/CaseyBot</id><content type="html" xml:base="https://blenwbegashaw.github.io//work/2024/07/01/CaseyBot.html"><![CDATA[<blockquote>
  <p>Python, Salesforce API, NLP, JavaScript, Shell Scripting</p>
</blockquote>

<p>CaseyBot is an AI-driven tool that transforms case management within Salesforce by intelligently matching new cases with historical ones. It streamlines customer support processes by suggesting relevant past cases and solutions, improving response times and enhancing the overall customer experience.</p>

<p><a href="https://github.com/BlenWBegashaw/caseybot">GitHub repository</a></p>

<h2 id="what-it-does">What it does</h2>

<p>CaseyBot revolutionizes Salesforce case management through these core features:</p>

<ul>
  <li><strong>Case Matching:</strong> Automatically identifies and matches new Salesforce cases with similar, previously solved cases to accelerate resolution time.</li>
  <li><strong>Solution Suggestions:</strong> Provides relevant solutions based on past cases, helping support teams respond to customer issues more efficiently.</li>
  <li><strong>Integration with Salesforce:</strong> Seamlessly integrates into Salesforce’s case management system, leveraging existing case data to make real-time recommendations.</li>
  <li><strong>AI-Powered Insights:</strong> Uses machine learning models to ensure accurate case matching by analyzing various case attributes like issue descriptions, categories, and resolution methods.</li>
</ul>

<h2 id="how-we-built-it">How we built it</h2>

<ul>
  <li><strong>Data Collection:</strong> Salesforce data was extracted and used to train CaseyBot’s machine learning models, allowing it to recognize and match cases with high accuracy.</li>
  <li><strong>AI Model:</strong> Developed using Python, CaseyBot uses natural language processing (NLP) to parse case descriptions and apply semantic similarity algorithms.</li>
  <li><strong>Machine Learning:</strong> Implemented techniques to rank the relevance of past cases to new ones, ensuring high-quality matches.</li>
  <li><strong>Frontend and Integration:</strong> The JavaScript and HTML-based frontend is embedded within Salesforce, providing an intuitive user interface for customer service teams.</li>
  <li><strong>Salesforce API:</strong> Integration allows CaseyBot to access real-time case data and make recommendations on the fly.</li>
  <li><strong>Automation:</strong> Shell scripts were used for deployment automation, while Nushell facilitates shell scripting in the workflow.</li>
  <li><strong>Custom Styling:</strong> CSS ensures that the user interface remains consistent with Salesforce’s design standards.</li>
</ul>

<p>Building CaseyBot required a deep dive into the Salesforce API to ensure seamless data flow between the AI model and the CRM. We focused on building a robust NLP pipeline that could handle the specific technical jargon often found in support tickets.</p>
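<p>A minimal sketch of the matching idea, using TF-IDF and cosine similarity; the sample cases and function name are illustrative, and the real pipeline layers additional NLP preprocessing on top of this:</p>

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative case log; real data comes from the Salesforce API.
historical_cases = [
    "User cannot log in after password reset",
    "Invoice PDF fails to download from the billing page",
    "Dashboard widgets load slowly on mobile",
]

def match_case(new_description, past_descriptions, top_k=2):
    """Rank past cases by TF-IDF cosine similarity to a new description."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on past cases plus the new one so they share a vocabulary.
    matrix = vectorizer.fit_transform(past_descriptions + [new_description])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(enumerate(scores), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]  # (index into past cases, similarity score)

matches = match_case("Customer cannot log in after a password reset",
                     historical_cases)
```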

<h2 id="challenges-we-ran-into">Challenges we ran into</h2>

<ul>
  <li><strong>Data Variability:</strong> Matching cases accurately required handling inconsistent or incomplete data across historical logs.</li>
  <li><strong>Performance:</strong> Ensuring real-time case matching without compromising Salesforce’s native performance was critical.</li>
  <li><strong>Integration Complexity:</strong> Integrating the AI model smoothly into the Salesforce UI required careful API management and handling various edge cases.</li>
</ul>

<h2 id="accomplishments">Accomplishments</h2>

<ul>
  <li>Successfully built a fully functional AI-powered case matcher that dramatically reduces case resolution times.</li>
  <li>Improved case matching accuracy through NLP and machine learning, leading to better customer support outcomes.</li>
  <li>Created a seamless user experience that fits naturally within the existing Salesforce ecosystem.</li>
</ul>

<h2 id="what-is-next-for-caseybot">What is next for CaseyBot</h2>

<ul>
  <li>Implement automated response drafting based on the matched solutions.</li>
  <li>Expand the machine learning model to support multi-language case matching.</li>
  <li>Add advanced analytics dashboards for support managers to track solution accuracy.</li>
</ul>]]></content><author><name></name></author><category term="work" /><category term="Machine Learning" /><summary type="html"><![CDATA[Python, Salesforce API, NLP, JavaScript, Shell Scripting]]></summary></entry><entry><title type="html">ML-Driven Recycling &amp;amp; Rewards Concept</title><link href="https://blenwbegashaw.github.io//work/2024/05/15/kohls.html" rel="alternate" type="text/html" title="ML-Driven Recycling &amp;amp; Rewards Concept" /><published>2024-05-15T12:00:00+00:00</published><updated>2024-05-15T12:00:00+00:00</updated><id>https://blenwbegashaw.github.io//work/2024/05/15/kohls</id><content type="html" xml:base="https://blenwbegashaw.github.io//work/2024/05/15/kohls.html"><![CDATA[<blockquote>
  <p>Kohl’s × Sephora × PACT Case Study</p>
</blockquote>

<h2 id="project-context">Project Context</h2>
<p><strong>Role:</strong> Conceptual System Designer (Case Competition Entry)<br />
<strong>Objective:</strong> Propose a technology-backed solution to increase sustainability engagement in retail.</p>

<hr />

<p><a href="/assets/Group.pdf">View the Full Group 3 Presentation (PDF)</a></p>

<h2 id="the-challenge">The Challenge</h2>
<p>Kohl’s and Sephora have a strong partnership, but bridging the gap between physical sustainability (recycling) and digital loyalty (Kohl’s Rewards) remains a manual process. The goal was to design a system that automates this bridge to drive foot traffic and environmental impact.</p>

<h2 id="the-proposed-solution-the-smart-pact-bin">The Proposed Solution: “The Smart-Pact Bin”</h2>
<p>The core concept is an AI-assisted recycling kiosk that identifies beauty product packaging and rewards the user instantly.</p>

<h3 id="conceptual-system-architecture">Conceptual System Architecture</h3>
<p>The system would require a three-tier integration:</p>
<ol>
  <li><strong>Hardware Layer:</strong> IoT-enabled bins with weight sensors and high-resolution cameras.</li>
  <li><strong>Processing Layer:</strong> A Computer Vision (CV) model to identify Sephora-brand packaging vs. non-recyclable waste.</li>
  <li><strong>Incentive Layer:</strong> Integration with the Kohl’s Rewards API to credit accounts in real-time.</li>
</ol>

<hr />

<h2 id="role-of-ai--machine-learning">Role of AI &amp; Machine Learning</h2>
<p>In this conceptual model, AI is the “trust layer” that removes the need for store associates to manually verify recycled items.</p>

<h3 id="1-object-recognition">1. Object Recognition</h3>
<p>Using a pre-trained image classification model, the bin would verify if the deposited item matches Sephora’s accepted materials (e.g., plastic bottles, glass jars, or tubes).</p>

<h3 id="2-weight-to-point-logic">2. Weight-to-Point Logic</h3>
<p>By combining visual data with a weight sensor, the system would estimate the mass of the material and apply a conversion formula:</p>
<ul>
  <li><strong>Formula Concept:</strong> $Points = (Weight_{material} \times Material_{value}) + Loyalty_{bonus}$</li>
</ul>
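<p>A toy implementation of the conversion formula, with made-up per-material values (the real rates would come from the pilot's financial model):</p>

```python
# Illustrative points-per-kilogram values; not actual pilot pricing.
MATERIAL_VALUE = {"plastic": 10, "glass": 15, "tube": 8}

def reward_points(weight_kg, material, loyalty_bonus=5):
    """Points = (weight x per-material value) + loyalty bonus, rounded down."""
    if material not in MATERIAL_VALUE:
        raise ValueError(f"unaccepted material: {material}")
    return int(weight_kg * MATERIAL_VALUE[material]) + loyalty_bonus
```

Rejecting unrecognized materials at this layer is also where the CV model's classification would gate the reward, preventing non-recyclable waste from earning points.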

<hr />

<h2 id="feasibility--cost-analysis">Feasibility &amp; Cost Analysis</h2>
<p>A major part of the study involved high-level financial modeling for a pilot program:</p>
<ul>
  <li><strong>Total Estimated Pilot Cost:</strong> ~$862,500.</li>
  <li><strong>Hardware (Bins/Sensors):</strong> Primary cost driver.</li>
  <li><strong>Maintenance &amp; Logic:</strong> Assumed API integration costs and data handling fees.</li>
</ul>

<h2 id="key-takeaways">Key Takeaways</h2>
<p>This project allowed me to practice <strong>Technical Product Thinking</strong>—the ability to look at a business problem and break it down into modular technical components.</p>

<ul>
  <li><strong>System Design:</strong> Thinking about how IoT hardware interacts with cloud APIs.</li>
  <li><strong>Data Integrity:</strong> Addressing how to prevent “reward fraud” using sensor data.</li>
  <li><strong>User Experience:</strong> Designing a frictionless flow for the non-technical consumer.</li>
</ul>

<hr />

<h2 id="disclaimer">Disclaimer</h2>
<p>This project was developed for a case competition and is a <strong>conceptual proposal</strong> exploring technical feasibility and business strategy. No physical system was built, and no code was deployed; all cost estimates and technical architectures are theoretical. The documentation above outlines the high-level system architecture and the logic behind the proposed AI integration.</p>
  <p>R, ggplot2, dplyr, tidyverse, readr, HTML, CSS, GitHub Pages</p>
</blockquote>

<p>I analyzed NVIDIA’s stock performance from 1999 to 2024 to understand historical trends, volatility, and key price movements. Using R, I created visualizations and statistical summaries to gain insights into the company’s market behavior.</p>

<p><a href="https://blenwbegashaw.github.io/NVIDIA-Stock-Analysis/">Live Report</a></p>

<p><a href="https://github.com/BlenWBegashaw/NVIDIA-Stock-Analysis/">GitHub Repo</a></p>

<h1 id="inspiration">Inspiration</h1>
<p>I wanted to see how NVIDIA evolved from a GPU company to an AI powerhouse and how this growth impacted its stock price over 25 years. I also aimed to identify periods of high volatility and major market events.</p>

<h1 id="what-it-does">What it does</h1>
<p>The NVIDIA Stock Analysis project examines historical stock data to uncover patterns and trends over time. Key features include:</p>

<ul>
  <li><strong>Summary Statistics:</strong> Provides descriptive statistics, variance, and price ranges for closing prices.</li>
  <li><strong>Stock Price Visualizations:</strong> Line charts showing opening, closing, high, and low prices from 1999–2024.</li>
  <li><strong>Daily Returns Analysis:</strong> Tracks daily percentage changes to detect volatility and market spikes.</li>
  <li><strong>Top Price Difference Days:</strong> Highlights days with the largest difference between open and close prices.</li>
</ul>

<h1 id="how-i-built-it">How I built it</h1>
<ul>
  <li><strong>Data Source:</strong> Historical NVIDIA stock prices 1999–2024</li>
  <li><strong>Processing:</strong> R with tidyverse, dplyr, readr, and ggplot2 for cleaning, merging, and visualization</li>
  <li><strong>Visualization:</strong> Line charts, bar charts, and scatter plots to highlight trends and volatility</li>
  <li><strong>Website:</strong> GitHub Pages for hosting the interactive HTML report</li>
</ul>
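<p>Although the analysis itself was done in R, the daily-returns computation is straightforward to sketch; here is the equivalent logic in Python with pandas, using a few synthetic prices in place of the real series:</p>

```python
import pandas as pd

# Synthetic closing prices standing in for the 1999-2024 NVIDIA series.
close = pd.Series([100.0, 101.0, 95.95, 105.545],
                  index=pd.date_range("2024-01-02", periods=4, freq="B"))

# Daily return: percentage change from the previous close.
daily_returns = close.pct_change() * 100

# Flag "volatile" days where the absolute move exceeds a threshold.
volatile_days = daily_returns[daily_returns.abs() > 2]
```

In R, the same step is a <code class="language-plaintext highlighter-rouge">dplyr</code> mutate with <code class="language-plaintext highlighter-rouge">lag()</code> over the closing price.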

<h3 id="sample-r-analysis-work">Sample R Analysis Work</h3>

<ul>
  <li>
    <p><strong>Nvidia Closing Stock Price 2020-2024</strong><br />
<img src="/assets/images/NVI1.png" alt="Closing Stock Price 2020-2024" /></p>
  </li>
  <li>
    <p><strong>Top Ten Days with Highest Price Differences</strong><br />
<img src="/assets/images/NVI2.png" alt="Top Ten Price Difference Days" /></p>
  </li>
  <li>
    <p><strong>Daily Returns</strong><br />
<img src="/assets/images/NVI4.png" alt="Daily Returns" /></p>
  </li>
</ul>

<h1 id="key-features">Key Features</h1>
<ul>
  <li>Explore NVIDIA stock trends and volatility</li>
  <li>Analyze daily returns and significant price movements</li>
  <li>Interactive HTML report hosted on GitHub Pages</li>
  <li>Visualizations that clearly explain trends and anomalies</li>
</ul>

<h1 id="challenges-i-ran-into">Challenges I ran into</h1>
<ul>
  <li><strong>Data Cleaning:</strong> Combining 25 years of stock data without errors</li>
  <li><strong>Volatility Analysis:</strong> Identifying periods of high market movement</li>
  <li><strong>Visualization:</strong> Making charts readable and informative for a broad audience</li>
</ul>

<h1 id="accomplishments-that-i-am-proud-of">Accomplishments that I am proud of</h1>
<ul>
  <li>Created a comprehensive analysis of NVIDIA stock from 1999–2024</li>
  <li>Developed clear visualizations for trends, daily returns, and volatility</li>
  <li>Hosted an interactive HTML report on GitHub Pages</li>
</ul>

<h1 id="what-i-learned">What I learned</h1>
<ul>
  <li>How to analyze long-term financial data and detect patterns</li>
  <li>How to visualize stock trends and volatility using R</li>
  <li>How to combine data analysis with an accessible, interactive web presentation</li>
</ul>

<h1 id="what-is-next-for-nvidia-stock-analysis">What is next for NVIDIA Stock Analysis</h1>
<ul>
  <li>Add predictive modeling for future stock trends</li>
  <li>Include interactive dashboards for deeper data exploration</li>
  <li>Incorporate additional financial indicators such as volume and market cap</li>
</ul>]]></content><author><name></name></author><category term="work" /><category term="Data Science" /><summary type="html"><![CDATA[R, ggplot2, dplyr, tidyverse, readr, HTML, CSS, GitHub Pages]]></summary></entry><entry><title type="html">Eduowl</title><link href="https://blenwbegashaw.github.io//work/2024/01/28/Eduowl.html" rel="alternate" type="text/html" title="Eduowl" /><published>2024-01-28T19:27:22+00:00</published><updated>2024-01-28T19:27:22+00:00</updated><id>https://blenwbegashaw.github.io//work/2024/01/28/Eduowl</id><content type="html" xml:base="https://blenwbegashaw.github.io//work/2024/01/28/Eduowl.html"><![CDATA[<blockquote>
  <p>Python, OpenAI API, LangChain, Web Scraping, NLP</p>
</blockquote>

<p>Eduowl was created to simplify the university admissions journey by guiding students through the application process and helping them discover academic majors that align with their interests, strengths, and career goals.</p>

<p><a href="https://github.com/BlenWBegashaw/EduOwl">GitHub repository</a></p>

<h2 id="inspiration">Inspiration</h2>

<p>As a group of four college students from different majors and backgrounds, we vividly recall the complexities and uncertainties of the university admissions process, and that shared experience became our inspiration. We recognized a common challenge faced by high school graduates: the overwhelming task of choosing a major while navigating complex admissions requirements. Eduowl was created to turn this daunting pre-college experience into an empowering and streamlined journey, particularly for students applying to Rowan University.</p>

<h2 id="what-it-does">What it does</h2>

<p>Eduowl streamlines the university admissions process through a set of core capabilities:</p>

<ul>
  <li><strong>Admissions Guidance Chatbot:</strong> An AI-driven chat interface that answers questions about Rowan University, including academics, admissions requirements, campus life, and student resources.</li>
  <li><strong>Major Recommendation System:</strong> Evaluates students’ interests, strengths, and career goals to recommend suitable academic majors.</li>
  <li><strong>Interactive AI Conversations:</strong> Provides engaging, real-time, context-aware responses.</li>
  <li><strong>Data-Driven Advising:</strong> Uses structured admissions data sourced directly from official university resources.</li>
</ul>

<h2 id="how-we-built-it">How we built it</h2>

<ul>
  <li><strong>Backend:</strong> Python</li>
  <li><strong>Web Scraping:</strong> BeautifulSoup, Requests</li>
  <li><strong>PDF Processing:</strong> PyPDF2</li>
  <li><strong>AI &amp; NLP:</strong> OpenAI API</li>
  <li><strong>Framework:</strong> LangChain</li>
  <li><strong>Conversation Memory:</strong> ConversationBufferMemory</li>
  <li><strong>Data Handling:</strong> Text chunking and token estimation</li>
</ul>

<p>Building Eduowl began by gathering the latest admissions information from Rowan University’s official website. We implemented web scraping techniques to extract accurate and up-to-date admissions data, with a focus on international admissions requirements. To extend the chatbot’s knowledge base, we processed PDF documents using PyPDF2.</p>

<p><img src="/assets/images/eduowl2.png" alt="Architecture" /></p>

<p>The chatbot was powered by OpenAI’s language models and integrated using LangChain. ConversationBufferMemory enabled the bot to maintain context across interactions, resulting in coherent and relevant responses. Text chunking and token estimation were used to efficiently handle large volumes of scraped and document-based content.</p>
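<p>The chunking step can be sketched roughly as follows. This is a minimal, stdlib-only illustration, not the actual pipeline (which relies on LangChain's splitters); the ~4-characters-per-token heuristic is a common rough assumption, not an exact tokenizer.</p>

```python
# Sketch of the chunking step: split scraped admissions text into
# overlapping chunks sized by an estimated token count. The ~4 chars/token
# heuristic is a rough assumption, not an exact tokenizer.

def estimate_tokens(text: str) -> int:
    """Rough token estimate: roughly 4 characters per token in English."""
    return max(1, len(text) // 4)

def chunk_text(text: str, max_tokens: int = 500, overlap_tokens: int = 50) -> list[str]:
    """Split text into chunks of at most max_tokens, with a small overlap
    so sentences spanning a boundary appear in both neighboring chunks."""
    max_chars = max_tokens * 4
    overlap_chars = overlap_tokens * 4
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap_chars  # step back to create the overlap
    return chunks

sample = "Rowan University admissions. " * 200
print(len(chunk_text(sample)))
```

<p>The overlap keeps context that straddles a chunk boundary retrievable from either side, at the cost of a little duplicated text.</p>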

<h2 id="demo">Demo</h2>
<div class="video-container">
    <iframe width="560" height="315" src="https://www.youtube.com/embed/wpuzgG5y8CA" frameborder="0" allowfullscreen=""></iframe>
</div>

<h2 id="challenges-we-ran-into">Challenges we ran into</h2>

<ul>
  <li>Persistent errors when attempting to integrate JSON data into Azure AI Studio</li>
  <li>Handling large volumes of scraped and PDF-based data</li>
  <li>Maintaining response accuracy and contextual relevance</li>
</ul>

<h2 id="accomplishments">Accomplishments</h2>

<ul>
  <li>Built a functional AI-powered admissions chatbot</li>
  <li>Designed a form-based major recommendation feature</li>
  <li>Successfully pivoted from Azure AI Studio to the OpenAI API</li>
  <li>Delivered a student-focused solution addressing real admissions challenges</li>
</ul>

<h2 id="what-we-learned">What we learned</h2>

<ul>
  <li>How to build conversational AI using OpenAI and LangChain</li>
  <li>How to scrape, structure, and process real-world admissions data</li>
  <li>The importance of adaptability when encountering technical roadblocks</li>
</ul>

<h2 id="what-is-next-for-eduowl">What is next for Eduowl</h2>

<ul>
  <li>Expand major recommendations across additional academic disciplines</li>
  <li>Integrate Microsoft Azure to enhance AI capabilities</li>
  <li>Improve chatbot memory, intelligence, and response accuracy</li>
  <li>Add academic advising and writing assistance features</li>
</ul>]]></content><author><name></name></author><category term="work" /><category term="Full-Stack" /><summary type="html"><![CDATA[Python, OpenAI API, LangChain, Web Scraping, NLP]]></summary></entry><entry><title type="html">Sentishelter</title><link href="https://blenwbegashaw.github.io//work/2023/10/22/SentiShelter.html" rel="alternate" type="text/html" title="Sentishelter" /><published>2023-10-22T12:00:00+00:00</published><updated>2023-10-22T12:00:00+00:00</updated><id>https://blenwbegashaw.github.io//work/2023/10/22/SentiShelter</id><content type="html" xml:base="https://blenwbegashaw.github.io//work/2023/10/22/SentiShelter.html"><![CDATA[<blockquote>
  <p>HTML, CSS, JavaScript, Python, SpaCy, Matplotlib, Seaborn, GitHub Pages, HuggingFace Spaces, Kaggle</p>
</blockquote>

<p>I collaborated with three teammates on this project. As a group, we combined our skills in data science, NLP, web development, and visualization to explore how climate change discussions affect housing sentiment.</p>

<p><a href="https://blenwbegashaw.github.io/sentishelter/">Live Website</a></p>

<p><a href="https://huggingface.co/spaces/blenbegashaw/sentishelter-chatbots/">Chatbot</a></p>

<p><a href="https://github.com/BlenWBegashaw/sentishelter/">Github Repo</a></p>

<h1 id="inspiration">Inspiration</h1>
<p>We created SentiShelter to help people understand how climate change conversations influence the housing market. From rising insurance rates to homes becoming less sustainable, we wanted to provide a tool that visualizes sentiment trends and highlights key topics and locations.</p>

<h1 id="what-it-does">What it does</h1>
<p>SentiShelter analyzes Reddit comments from 2010-2022, tracking how sentiment around climate change and housing shifts over time. Key features include:</p>

<ul>
  <li><strong>Sentiment Analysis:</strong> Identify positive, negative, or neutral sentiment in discussions about climate and housing.</li>
  <li><strong>Topic Clusters:</strong> Extract major topics and trends from discussions.</li>
  <li><strong>Interactive Website:</strong> Users can explore data visualizations and access a chatbot to learn more about the dataset.</li>
  <li><strong>Data Visualization:</strong> Bar charts, line graphs, and scatter plots show sentiment trends over time.</li>
</ul>

<h2 id="final-product">Final Product</h2>

<video width="800" height="450" controls="" poster="/assets/posterpic.png">
  <source src="/assets/New%20Recording%20-%2012_9_2025,%2010_09_35%20PM.mp4" type="video/mp4" />
</video>

<h1 id="how-we-built-it">How we built it</h1>
<ul>
  <li><strong>Data Source:</strong> Kaggle Reddit Climate Change Dataset (2010-2022)</li>
  <li><strong>Processing:</strong> Python, SpaCy NLP for cleaning and extracting entities like people and locations</li>
  <li><strong>Visualization:</strong> Matplotlib and Seaborn for sentiment trends, topic clusters, and frequency charts</li>
  <li><strong>Website:</strong> Responsive HTML/CSS/JS site with integrated Hugging Face chatbot</li>
</ul>
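<p>The "average sentiment per month" aggregation shown below can be sketched like this. It is a stdlib-only illustration with made-up records and field layout; the real pipeline runs over the full Kaggle dataset with its own column names.</p>

```python
from collections import defaultdict
from statistics import mean

# Sketch of the monthly-average sentiment aggregation. Each record is a
# (date, sentiment_score) pair; these sample records are illustrative,
# not rows from the actual Kaggle dataset.
comments = [
    ("2015-06-14", -0.4),
    ("2015-06-20", 0.1),
    ("2015-07-02", 0.6),
    ("2016-01-09", -0.2),
]

def monthly_average(records):
    """Group sentiment scores by YYYY-MM and average each bucket."""
    buckets = defaultdict(list)
    for date, score in records:
        buckets[date[:7]].append(score)  # key on the year-month prefix
    return {month: mean(scores) for month, scores in sorted(buckets.items())}

print(monthly_average(comments))
```

<p>Sampling (mentioned under challenges below) plugs in naturally here: the same aggregation works on a random subset of rows when the full dataset is too large to process at once.</p>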

<h3 id="sample-kaggle-data-science-work">Sample Kaggle Data Science Work</h3>

<ul>
  <li>
    <p><strong>Cleaning the Dataset</strong><br />
<img src="/assets/images/kaggle4.png" alt="Topic Clusters" /></p>
  </li>
  <li>
    <p><strong>Average Sentiment Analysis per Month</strong><br />
<img src="/assets/images/kaggle2.png" alt="Top Entities" /></p>
  </li>
</ul>

<h1 id="key-features">Key Features</h1>

<ul>
  <li>Explore Reddit sentiment trends (2010-2022)</li>
  <li>Identify top entities (persons, locations) and their associated sentiment</li>
  <li>Interactive visualizations and data exploration</li>
  <li>Hugging Face chatbot integration for learning about climate and housing</li>
</ul>

<h1 id="challenges-we-ran-into">Challenges we ran into</h1>
<ul>
  <li><strong>Large Dataset:</strong> Required sampling to maintain performance without losing insights</li>
  <li><strong>Entity Analysis:</strong> Identifying meaningful relationships between people, locations, and sentiment was complex</li>
  <li><strong>Visualization:</strong> Presenting data clearly while maintaining an interactive user experience</li>
</ul>

<h1 id="accomplishments-that-we-are-proud-of">Accomplishments that we are proud of</h1>
<ul>
  <li>Successfully combined NLP, data visualization, and web development</li>
  <li>Developed a user-friendly website with clear insights</li>
  <li>Integrated a chatbot interface for exploring climate change and housing discussions</li>
</ul>

<h1 id="what-we-learned">What we learned</h1>
<ul>
  <li>How to extract insights from large datasets using NLP</li>
  <li>How to visualize and present complex sentiment data interactively</li>
  <li>How to combine Python analysis with a responsive website</li>
</ul>

<h3 id="winner-of-technica-2023">Winner of Technica 2023</h3>

<ul>
  <li>My team and I won first place in the Fannie Mae “Climate Change Sentiment Analysis and Impacts on Housing” competition, and second place in the Bloomberg Industry Group “Best AI-Powered Solution” competition.</li>
</ul>

<h1 id="what-is-next-for-sentishelter">What is next for SentiShelter</h1>
<ul>
  <li>Add more real-time sentiment tracking as new data becomes available</li>
  <li>Expand chatbot capabilities to provide more detailed explanations of trends and insights</li>
  <li>Enhance topic clustering and sentiment analysis using advanced NLP models</li>
</ul>]]></content><author><name></name></author><category term="work" /><category term="Data Science" /><summary type="html"><![CDATA[HTML, CSS, JavaScript, Python, SpaCy, Matplotlib, Seaborn, GitHub Pages, HuggingFace Spaces, Kaggle]]></summary></entry><entry><title type="html">MedLingua</title><link href="https://blenwbegashaw.github.io//work/2023/09/18/medlingua.html" rel="alternate" type="text/html" title="MedLingua" /><published>2023-09-18T19:27:22+00:00</published><updated>2023-09-18T19:27:22+00:00</updated><id>https://blenwbegashaw.github.io//work/2023/09/18/medlingua</id><content type="html" xml:base="https://blenwbegashaw.github.io//work/2023/09/18/medlingua.html"><![CDATA[<blockquote>
  <p>FastAPI, SvelteKit, NLP, SQL, Data Visualization</p>
</blockquote>

<p>MedLingua was created to bridge the communication gap between doctors and patients by transforming complex medical data into clear, patient-friendly insights.</p>

<p><a href="https://github.com/BlenWBegashaw/Medlingua">Github repository</a></p>

<h1 id="inspiration">Inspiration</h1>
<p>MedLingua addresses the challenge that nearly 2 in 3 patients struggle to understand their healthcare providers due to complex medical jargon and disconnected patient data. The platform synthesizes structured health records and unstructured clinical notes into clear explanations, personalized recommendations, and meaningful visualizations.</p>

<h1 id="what-it-does">What it does</h1>
<p>MedLingua interprets medical data to improve communication between patients and providers. Key features include:</p>

<ul>
  <li><strong>Personalized Explanations:</strong> Converts complex medical terminology into patient-friendly language.</li>
  <li><strong>Smart Recommendations:</strong> Provides tailored healthcare recommendations based on patient data.</li>
  <li><strong>Medical Visualizations:</strong> Displays connected medical histories through intuitive charts.</li>
  <li><strong>Structured &amp; Unstructured Data Integration:</strong> Merges EHR data with clinical notes seamlessly.</li>
  <li><strong>Provider-Focused Insights:</strong> Supports clinical decision-making with organized data views.</li>
</ul>
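<p>The "personalized explanations" idea can be illustrated with a simple glossary substitution. This is a deliberately minimal sketch: the actual MedLingua pipeline uses trained NLP models, and the glossary entries here are hypothetical examples, not the system's real term mappings.</p>

```python
import re

# Illustrative sketch only: map clinical jargon to plain-language phrasing.
# The real MedLingua pipeline uses custom NLP models; this glossary and the
# terms in it are hypothetical.
GLOSSARY = {
    "hypertension": "high blood pressure",
    "myocardial infarction": "heart attack",
    "edema": "swelling caused by fluid buildup",
}

def simplify(note: str) -> str:
    """Replace known jargon with patient-friendly phrasing, matching
    case-insensitively on whole words or phrases."""
    out = note
    for term, plain in GLOSSARY.items():
        out = re.sub(rf"\b{re.escape(term)}\b", plain, out, flags=re.IGNORECASE)
    return out

print(simplify("Patient has hypertension and lower-limb edema."))
```

<p>A model-based system improves on this mainly by handling terms in context, which a fixed lookup cannot do; the sketch just shows the input/output shape of the feature.</p>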

<h2 id="architecture">Architecture</h2>
<p><img src="/assets/images/medlingua2.png" alt="MedLingua Architecture" /></p>

<h1 id="how-we-built-it">How we built it</h1>
<ul>
  <li><strong>Backend:</strong> FastAPI</li>
  <li><strong>Frontend:</strong> SvelteKit</li>
  <li><strong>Machine Learning:</strong> Codebox</li>
  <li><strong>NLP:</strong> Custom NLP models</li>
  <li><strong>Database:</strong> SQL</li>
  <li><strong>Data Processing:</strong> MIMIC Dataset &amp; Clinical Data Pipelines</li>
</ul>

<h1 id="challenges-we-ran-into">Challenges we ran into</h1>
<ul>
  <li>Integrating structured and unstructured medical data</li>
  <li>Ensuring patient-friendly explanations without losing clinical accuracy</li>
  <li>Building intuitive visualizations for complex health records</li>
</ul>

<h1 id="accomplishments">Accomplishments</h1>
<ul>
  <li>Successfully combined <strong>backend, frontend, NLP, and data visualization</strong></li>
  <li>Delivered a <strong>tool that helps patients and providers communicate more effectively</strong></li>
  <li>Built a <strong>fully functional platform</strong> with clear insights</li>
</ul>

<h1 id="what-we-learned">What we learned</h1>
<ul>
  <li>How to process structured and unstructured healthcare data</li>
  <li>How to visualize complex datasets in an accessible way</li>
  <li>How to integrate Python backend with a responsive frontend</li>
</ul>

<h1 id="what-is-next-for-medlingua">What is next for MedLingua</h1>
<ul>
  <li>Expand NLP models for more nuanced medical notes</li>
  <li>Include real-time analytics and patient dashboards</li>
  <li>Improve visualizations with more interactive charts</li>
</ul>]]></content><author><name></name></author><category term="work" /><category term="Full-Stack" /><summary type="html"><![CDATA[FastAPI, SvelteKit, NLP, SQL, Data Visualization]]></summary></entry><entry><title type="html">Coffee Recommender</title><link href="https://blenwbegashaw.github.io//work/2022/08/03/coffee.html" rel="alternate" type="text/html" title="Coffee Recommender" /><published>2022-08-03T12:00:00+00:00</published><updated>2022-08-03T12:00:00+00:00</updated><id>https://blenwbegashaw.github.io//work/2022/08/03/coffee</id><content type="html" xml:base="https://blenwbegashaw.github.io//work/2022/08/03/coffee.html"><![CDATA[<blockquote>
  <p>HTML, CSS, JavaScript</p>
</blockquote>

<p><a href="https://blenwbegashaw.github.io/Coffee-Recommender/">Live Website</a></p>

<p><a href="https://github.com/BlenWBegashaw/Coffee-Recommender">GitHub repository</a></p>

<h1 id="inspiration">Inspiration</h1>
<p>The Coffee Recommender is a fun and interactive web application designed to help users discover their perfect coffee based on their preferences for strength, milk, and sweetness. The app uses a simple quiz interface to guide users and provides a coffee suggestion based on their choices.</p>

<h1 id="what-it-does">What it does</h1>
<p>The application lets users select their coffee preferences and generates a recommendation. Key features include:</p>

<ul>
  <li><strong>Interactive Quiz:</strong> Users answer three questions to determine their coffee preference.</li>
  <li><strong>Coffee Recommendations:</strong> Suggests popular coffee types like Espresso, Latte, Cappuccino, Americano, Macchiato, and Flat White.</li>
  <li><strong>Responsive Design:</strong> Works seamlessly across desktop and mobile devices.</li>
  <li><strong>Separate Files:</strong> HTML, CSS, and JavaScript are in separate files for modularity and maintainability.</li>
</ul>

<h1 id="how-it-was-built">How it was built</h1>
<ul>
  <li><strong>Frontend:</strong> HTML5 for structure, CSS3 for styling, JavaScript for quiz logic</li>
  <li><strong>Responsive Design:</strong> Ensured proper display on desktop and mobile</li>
  <li><strong>Separate Modular Files:</strong> <code class="language-plaintext highlighter-rouge">index.html</code>, <code class="language-plaintext highlighter-rouge">style.css</code>, <code class="language-plaintext highlighter-rouge">script.js</code></li>
</ul>

<h3 id="how-it-works">How It Works</h3>

<ol>
  <li>Users select their preferences for coffee strength, milk, and sweetness.</li>
  <li>Clicking <strong>Submit</strong> triggers JavaScript to determine the recommended coffee.</li>
  <li>The recommendation is displayed on the screen; <strong>Retry</strong> allows retaking the quiz.</li>
  <li>Input validation ensures all questions are answered before showing results.</li>
</ol>
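<p>The answer-to-drink mapping in step 2 can be sketched as below (written in Python for brevity; the app itself implements this in <code class="language-plaintext highlighter-rouge">script.js</code>). The specific rules are illustrative guesses about how three preferences could map to the six drinks, not the site's actual logic.</p>

```python
# Hypothetical version of the quiz's decision logic: three answers in,
# one of six drinks out. The rules below are illustrative, not the
# actual mapping used by the app.
def recommend(strength: str, milk: bool, sweet: bool) -> str:
    if strength == "strong":
        if not milk:
            return "Espresso"
        return "Flat White" if sweet else "Macchiato"
    if milk:
        return "Latte" if sweet else "Cappuccino"
    return "Americano"

print(recommend("strong", milk=False, sweet=False))  # prints "Espresso"
```

<p>Because every combination of answers falls through to exactly one branch, the validation step only needs to check that all three questions were answered before calling the function.</p>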

<h1 id="challenges">Challenges</h1>
<ul>
  <li>Validating user input and ensuring smooth quiz flow</li>
  <li>Making the layout responsive for different screen sizes</li>
  <li>Structuring modular files for maintainability</li>
</ul>

<h1 id="accomplishments">Accomplishments</h1>
<ul>
  <li>Developed a fully interactive quiz from scratch</li>
  <li>Practiced DOM manipulation and JavaScript logic</li>
  <li>Built a responsive and user-friendly design</li>
</ul>

<h1 id="what-i-learned">What I learned</h1>
<ul>
  <li>Improved JavaScript skills for handling user input and quiz logic</li>
  <li>Learned modular coding practices with separate HTML, CSS, and JS files</li>
  <li>Strengthened responsive web design skills for a better user experience</li>
</ul>]]></content><author><name></name></author><category term="work" /><category term="Frontend" /><summary type="html"><![CDATA[Interactive Coffee Quiz Web App]]></summary></entry></feed>