Projects

1. Otto: Full-stack web application (MERN stack)

Tech: MERN • JWT • Validation • Security (Helmet/CORS/rate-limit) • Socket.IO • Vite • React Router • Day.js

A React front end talking to an Express API backed by MongoDB. I’m actively expanding features and tightening quality as I go.

What’s done

  • Auth: JWT login/signup with hashed passwords (bcryptjs).

  • API: Versioned REST routes with express-validator and consistent error responses.

  • Data: Mongoose schemas/models; modular structure (config/, controllers/, middleware/, models/, routes/, utils/, websocket/).

  • Security: Helmet, CORS, rate limiting; optional CSRF protection for cookie flows.

  • Real-time: Initial Socket.IO channel for live updates.

  • Client setup: React 19 + Vite, routing via React Router; date/calendar utilities (Day.js) wired in.

2. EasyStocks: Market Analysis API tool (Python)

Tech: Python • requests • pandas • matplotlib • CLI (argparse/typer) • Jupyter

I’m building a small, composable pipeline that fetches market data, standardizes/cleans it, adds indicators, and outputs charts/CSV reports from the CLI. The data collection & cleaning layer is the backbone for EDA, indicators, backtests, and reporting.

What’s done (foundation of the pipeline)

  • Ingest & clean: fetch tickers via API/CSV, normalize timestamps/columns, handle missing values.

  • CLI entry point: simple commands that wire ingest → clean → summarize so it’s easy to extend.

  • Reporting scaffold: utilities to generate charts and tabular summaries (ready for HTML/CSV export).

Up next

  • EDA: returns, volatility, rolling stats; benchmark/sector comparisons.

  • Indicators: SMA/EMA, RSI, MACD, Bollinger Bands, rolling beta vs. SPY.

  • Backtests: cost-aware MA crossover, equity curve, Sharpe, max drawdown.

  • Reports: exportable HTML/PDF per-ticker summaries with charts + tables.
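
As a sketch of what the planned indicator layer might look like in pandas (the formulas are the standard ones; the function names and parameter defaults are my assumptions, not the project's):

```python
import pandas as pd


def sma(close: pd.Series, window: int = 20) -> pd.Series:
    """Simple moving average over `window` bars."""
    return close.rolling(window).mean()


def ema(close: pd.Series, span: int = 20) -> pd.Series:
    """Exponential moving average with the usual span parameterization."""
    return close.ewm(span=span, adjust=False).mean()


def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Wilder-style RSI: exponentially smoothed average gain vs. loss."""
    delta = close.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False).mean()
    return 100 - 100 / (1 + gain / loss)


def macd(close: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9):
    """MACD line and its signal line."""
    line = ema(close, fast) - ema(close, slow)
    return line, line.ewm(span=signal, adjust=False).mean()
```

Each indicator is a pure Series → Series function, so they compose with the cleaning stage and can be vectorized over a whole price history at once.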

3. Supervised Machine Learning for Stroke Risk Prediction

Tech: Python • pandas • seaborn • matplotlib • scikit-learn • Jupyter

I built a supervised machine learning model to predict stroke risk from patient health and demographic data. The project combines exploratory data analysis, data cleaning, and predictive modeling to uncover key factors linked to stroke occurrence.

The workflow includes preprocessing with ColumnTransformer (imputation, scaling, one-hot encoding) and model training with Logistic Regression and Random Forest classifiers. Visual analytics highlight feature correlations, class imbalance, and top risk indicators such as age, glucose level, and hypertension. Model metrics (ROC-AUC ≈ 0.84) demonstrate strong discriminative performance.
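
The preprocessing-plus-model workflow described above corresponds roughly to a scikit-learn pipeline like this sketch. The column names mirror the widely used stroke dataset but are assumptions here, as are the imputation strategies and hyperparameters.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Assumed column split; the real feature lists come from the dataset.
numeric = ["age", "avg_glucose_level", "bmi"]
categorical = ["gender", "smoking_status", "hypertension"]

preprocess = ColumnTransformer([
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), numeric),
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("onehot", OneHotEncoder(handle_unknown="ignore")),
    ]), categorical),
])

# Swap in RandomForestClassifier here to compare against the linear model.
model = Pipeline([
    ("prep", preprocess),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
```

Wrapping imputation, scaling, and encoding inside the pipeline keeps preprocessing inside cross-validation folds, which avoids leaking test-set statistics into training.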
