TERMINAL.001
USER.SYSTEM
MULTI
DISCIPLINARY
DEVELOPER
$ Creative thinking and problem solving are where
$ my mind wanders, using my knowledge of and passion
$ for development as my medium.
$
01 // 04
TERMINAL.002
ABOUT.SYSTEM
user.terminal ~ bash
$
Developer at Halic University, working mostly on Machine Learning, Front-End,
PHP, and sometimes bash scripting. Loves going to concerts, playing video
games, and listening to music on Spotify. Follow on GitHub @muhkartal or
read the blog at kartal.dev.
$
drwxr-xr-x  kartal  4.0K  Apr 12 2024  fr-framework
drwxr-xr-x  kartal  4.0K  Mar 05 2024  neuromed
drwxr-xr-x  kartal  4.0K  Apr 28 2024  xai-dashboard
-rw-r--r--  kartal  1.2K  Apr 01 2024  README.md
drwxr-xr-x  kartal  4.0K  Feb 17 2024  personal-site
02 // 04
TERMINAL.003
PROJECTS.SYSTEM
PROJECTS
$ Explore my recent projects and experiments
FR Framework
A modular and extensible face recognition framework featuring real-time detection, facial landmark analysis, and recognition with API support.
face_recognition.py
from fr_framework import FaceRecognizer
import cv2

# Initialize the framework
recognizer = FaceRecognizer()

# Load and process image
image = cv2.imread('photo.jpg')
faces = recognizer.detect_faces(image)

# Extract features and recognize
for face in faces:
    landmarks = recognizer.get_landmarks(face)
    identity = recognizer.recognize(face)
    confidence = recognizer.get_confidence()

    print(f"Identity: {identity}")
    print(f"Confidence: {confidence:.2f}")
    print(f"Landmarks: {len(landmarks)} points")
API Request
curl -X POST http://localhost:5000/api/recognize \
-H "Content-Type: multipart/form-data" \
-F "image=@photo.jpg" \
-F "threshold=0.8"
# Response
{
  "status": "success",
  "faces_detected": 2,
  "results": [
    {
      "identity": "john_doe",
      "confidence": 0.92,
      "bbox": [150, 100, 200, 180],
      "landmarks": {...}
    }
  ]
}
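For reference, a minimal sketch of what the Flask side of this request could look like. The route, form fields, and threshold filter mirror the curl call above; the FaceRecognizer calls reuse the snippet from face_recognition.py, and everything else (image decoding, response shape, port) is an assumption rather than the framework's actual server code.
api_server.py (sketch)
from flask import Flask, request, jsonify
import cv2
import numpy as np
from fr_framework import FaceRecognizer

app = Flask(__name__)
recognizer = FaceRecognizer()

@app.route('/api/recognize', methods=['POST'])
def recognize():
    # Decode the uploaded image from the multipart form field "image"
    file_bytes = np.frombuffer(request.files['image'].read(), np.uint8)
    image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
    threshold = float(request.form.get('threshold', 0.8))

    # Detect, recognize, and keep matches above the requested threshold
    results = []
    for face in recognizer.detect_faces(image):
        identity = recognizer.recognize(face)
        confidence = recognizer.get_confidence()
        if confidence >= threshold:
            results.append({'identity': identity,
                            'confidence': round(confidence, 2)})

    return jsonify({'status': 'success',
                    'faces_detected': len(results),
                    'results': results})

if __name__ == '__main__':
    app.run(port=5000)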
Recognition Output
$ python face_recognition.py
✓ Model loaded successfully
Processing image: photo.jpg
Faces detected: 2
Identity: john_doe
Confidence: 0.92
Landmarks: 68 points
Processing time: 0.15s
Python
OpenCV
TensorFlow
Flask
Neuromed
MedExplain AI Pro: a health intelligence platform that analyzes symptoms, assesses conditions, and delivers actionable healthcare insights.
symptom_analyzer.py
from neuromed import HealthAnalyzer
import streamlit as st

# Initialize AI health analyzer
analyzer = HealthAnalyzer()

# Patient input
symptoms = st.text_area("Describe your symptoms:")
age = st.number_input("Age:", min_value=1, max_value=120)
gender = st.selectbox("Gender:", ["Male", "Female", "Other"])

if st.button("Analyze Symptoms"):
    # Process with NLP
    processed_symptoms = analyzer.process_text(symptoms)

    # Generate analysis
    analysis = analyzer.analyze_health({
        'symptoms': processed_symptoms,
        'age': age,
        'gender': gender
    })

    # Display results
    st.json(analysis)
model_architecture.py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

class NeuromedModel:
    def __init__(self):
        # Load pre-trained medical BERT (TensorFlow weights converted from PyTorch)
        self.tokenizer = AutoTokenizer.from_pretrained(
            'emilyalsentzer/Bio_ClinicalBERT'
        )
        self.bert_model = TFAutoModel.from_pretrained(
            'emilyalsentzer/Bio_ClinicalBERT', from_pt=True
        )

        # Custom classification layers
        self.classifier = tf.keras.Sequential([
            tf.keras.layers.Dense(512, activation='relu'),
            tf.keras.layers.Dropout(0.3),
            tf.keras.layers.Dense(256, activation='relu'),
            tf.keras.layers.Dense(100, activation='softmax')  # 100 conditions
        ])

    def predict_condition(self, symptoms_text):
        # Tokenize and encode
        inputs = self.tokenizer(symptoms_text, return_tensors='tf')
        embeddings = self.bert_model(**inputs).last_hidden_state

        # Mean-pool token embeddings, then classify
        predictions = self.classifier(tf.reduce_mean(embeddings, axis=1))
        return predictions
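A quick, hypothetical usage sketch: the symptom text is illustrative, and mapping the top softmax index back to a condition name would need the project's own label list, which is not shown here.
# Hypothetical usage of NeuromedModel (condition-name mapping omitted)
model = NeuromedModel()
probs = model.predict_condition("persistent cough and mild fever for three days")
top_idx = int(tf.argmax(probs, axis=-1)[0])
print(f"Top condition index: {top_idx}, confidence: {float(tf.reduce_max(probs)):.2f}")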
Health Analysis Results
Primary Condition: Common Cold (92% confidence)
Risk Level: Low
Recommendations:
• Rest and hydration
• Monitor temperature
• Consult doctor if symptoms worsen
Python
TensorFlow
Streamlit
NLP
XAI Dashboard
Interactive AI dashboard for machine learning model analysis and explainability, supporting model training, dataset exploration, and feature importance analysis.
dashboard.py
import streamlit as st
import pandas as pd
import joblib
import plotly.express as px
from xai_dashboard import ModelExplainer

# Load model and data
@st.cache_data
def load_model_data():
    model = joblib.load('model.pkl')
    data = pd.read_csv('dataset.csv')
    explainer = ModelExplainer(model, data)
    return model, data, explainer

model, data, explainer = load_model_data()

# Dashboard layout
st.title("🤖 XAI Dashboard")
st.sidebar.header("Model Configuration")

# Model selection
model_type = st.sidebar.selectbox(
    "Select Model:",
    ["Random Forest", "XGBoost", "Neural Network"]
)

# Feature importance
with st.container():
    st.subheader("Feature Importance")
    importance_data = explainer.get_feature_importance()
    fig = px.bar(
        importance_data,
        x='importance',
        y='feature',
        title="Top 10 Most Important Features"
    )
    st.plotly_chart(fig, use_container_width=True)
model_explainer.py
import shap
import lime.lime_tabular
from sklearn.inspection import permutation_importance

class ModelExplainer:
    def __init__(self, model, X_train, y_train, feature_names):
        self.model = model
        self.X_train = X_train
        self.y_train = y_train
        self.feature_names = feature_names

        # Initialize explainers
        self.shap_explainer = shap.TreeExplainer(model)
        self.lime_explainer = lime.lime_tabular.LimeTabularExplainer(
            X_train.values,
            feature_names=feature_names,
            mode='classification'
        )

    def explain_prediction(self, instance):
        """Generate multiple explanations for a single prediction"""
        # SHAP explanation
        shap_values = self.shap_explainer.shap_values(instance)

        # LIME explanation
        lime_explanation = self.lime_explainer.explain_instance(
            instance.flatten(),
            self.model.predict_proba
        )

        # Permutation importance
        perm_importance = permutation_importance(
            self.model, self.X_train, self.y_train
        )

        return {
            'shap': shap_values,
            'lime': lime_explanation,
            'permutation': perm_importance
        }
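A short, hypothetical usage sketch: the iris dataset and RandomForestClassifier are stand-ins (chosen because shap.TreeExplainer expects a tree-based model), and the single-row slice mirrors the shapes explain_prediction works with.
# Hypothetical usage of ModelExplainer with stand-in data and model
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris(as_frame=True)
X_train, y_train = iris.data, iris.target

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
explainer = ModelExplainer(model, X_train, y_train, list(X_train.columns))

# Explain the first training row
explanations = explainer.explain_prediction(X_train.iloc[[0]].values)
print(explanations['lime'].as_list())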
Model Explainability Dashboard
Feature Importance
Model Performance
Accuracy: 94.2%
Precision: 91.8%
Recall: 93.5%
Python
React
Scikit-learn
D3.js
03 // 04
TERMINAL.004
CONTACT.SYSTEM
CONTACT
$ Connect with me through various channels
04 // 04