# ErdalToprak.com > This website is Erdal Toprak's personal website. It is built with Astro and deployed on Cloudflare Pages. ## Pages - [Home](https://erdaltoprak.com) - [Blog](https://erdaltoprak.com/blog) - [Projects](https://erdaltoprak.com/projects) - [About](https://erdaltoprak.com/about) # Docs - [Github](https://github.com/erdaltoprak/erdaltoprak.com) : Source code of this website. # Full Content ## Page: projects.astro ```astro --- import Layout from '../layouts/Layout.astro'; import ProjectCard from '../components/ProjectCard.astro'; import collections from '../assets/hf_collections.json'; import projects from '../assets/github.json' assert { type: "json" }; import rawModels from '../assets/hf_models.json'; import rawDatasets from '../assets/hf_datasets.json'; interface HFModel { id: string; author: string; title: string; description: string; url: string; lastModified: string; model_type: string; base_model: string[]; } interface HFDataset { id: string; author: string; title: string; url: string; lastModified: string; } const models = rawModels as HFModel[]; const datasets = rawDatasets as HFDataset[]; // Group projects by author const githubByAuthor = projects.reduce((acc, project) => { const author = project.author || 'Unknown'; if (!acc[author]) acc[author] = []; acc[author].push(project); return acc; }, {} as Record); const modelsByAuthor = models.reduce((acc, model) => { const author = model.author || 'Unknown'; if (!acc[author]) acc[author] = []; acc[author].push(model); return acc; }, {} as Record); const datasetsByAuthor = datasets.reduce((acc, dataset) => { const author = dataset.author || 'Unknown'; if (!acc[author]) acc[author] = []; acc[author].push(dataset); return acc; }, {} as Record); // Sort authors alphabetically and ensure consistent order const sortedGithubAuthors = Object.entries(githubByAuthor) .sort(([a], [b]) => a.localeCompare(b)); const sortedModelAuthors = Object.entries(modelsByAuthor) .sort(([a], [b]) => a.localeCompare(b)); const sortedDatasetAuthors = Object.entries(datasetsByAuthor) .sort(([a], [b]) => a.localeCompare(b)); ---

Projects

{/* GitHub Projects by Author */} {sortedGithubAuthors.map(([author, authorProjects]) => ( authorProjects.length > 0 && (

{author}'s github projects

    {authorProjects.map(project => ( ))}
) ))} {/* Hugging Face Models by Author */} {sortedModelAuthors.map(([author, authorModels]) => ( authorModels.length > 0 && (

{author}'s huggingface models

    {authorModels.map(model => ( ))}
) ))} {/* Hugging Face Datasets by Author */} {sortedDatasetAuthors.map(([author, authorDatasets]) => ( authorDatasets.length > 0 && (

{author}'s huggingface datasets

    {authorDatasets.map(dataset => ( ))}
) ))}
``` ## Page: models.astro ```astro --- import Layout from '../layouts/Layout.astro'; import ModelCard from '../components/ModelCard.astro'; import models from '../assets/hf_models.json'; import type { HFModel } from '../types'; // Type assertion and validation const validModels = (Array.isArray(models) ? models : []) as HFModel[]; ---

Models

{validModels.length > 0 ? ( validModels.map((model: HFModel) => ( )) ) : (

No models available.

)}
``` ## Page: index.astro ```astro --- import Layout from '../layouts/Layout.astro'; import { Image } from 'astro:assets'; import memoji from '../assets/memoji.png'; import { getCollection } from 'astro:content'; import ProjectCard from '../components/ProjectCard.astro'; import { formatDate } from '../utils'; import githubProjects from '../assets/github.json'; import hfModels from '../assets/hf_models.json'; import hfDatasets from '../assets/hf_datasets.json'; import type { HFModel, HFDataset } from '../types'; // Type assertions and validation const validModels = (Array.isArray(hfModels) ? hfModels : []) as HFModel[]; const validDatasets = (Array.isArray(hfDatasets) ? hfDatasets : []) as HFDataset[]; // Get latest blog posts const posts = await getCollection('blog'); const latestPosts = posts .sort((a, b) => b.data.pubDate.valueOf() - a.data.pubDate.valueOf()) .slice(0, 4); // Get latest items const latestModels = validModels .slice(0, 3) .map((model: HFModel) => ({ title: model.title || '', description: model.description || '', url: model.url || '', lastModified: model.lastModified || '', author: model.author || '', base_model: model.base_model || [] })); const latestDatasets = validDatasets .slice(0, 3) .map((dataset: HFDataset) => ({ title: dataset.title || '', url: dataset.url || '', lastModified: dataset.lastModified || '', author: dataset.author || '' })); // Combine all projects from different sources // @ts-ignore const allProjects = [ ...githubProjects.map(project => ({ ...project, source: 'github' })), ...hfModels.map(model => ({ // @ts-ignore title: model.title, // @ts-ignore description: model.description, // @ts-ignore url: model.url, // @ts-ignore lastModified: model.lastModified, // @ts-ignore author: model.author, // @ts-ignore base_model: model.base_model, source: 'huggingface-model' })), ...hfDatasets.map(dataset => ({ // @ts-ignore title: dataset.title, // @ts-ignore url: dataset.url, // @ts-ignore lastModified: dataset.lastModified, // @ts-ignore author: dataset.author, source: 'huggingface-dataset' })) ] .filter(project => project.title && !project.title.includes('.github.io')) .sort((a, b) => new Date(b.lastModified).getTime() - new Date(a.lastModified).getTime()) .slice(0, 3); // More detailed logging // console.log('Combined projects:', allProjects); // console.log('Projects length:', allProjects.length); ---

Hi 👋🏻, I'm Erdal!

I'm a software engineer passionate about AI, decentralization, privacy, and open source. On this site, I share my thoughts, projects, and experiments.

Latest Posts

View all posts →
{allProjects.length > 0 && (

Latest Projects

{allProjects.map(project => ( ))}
View all projects →
)}
``` ## Page: datasets.astro ```astro --- import Layout from '../layouts/Layout.astro'; import DatasetCard from '../components/DatasetCard.astro'; import datasets from '../assets/hf_datasets.json'; import type { HFDataset } from '../types'; // Type assertion and validation const validDatasets = (Array.isArray(datasets) ? datasets : []) as HFDataset[]; ---

Datasets

{validDatasets.length > 0 ? ( validDatasets.map((dataset: HFDataset) => ( )) ) : (

No datasets available.

)}
``` ## Page: about.astro ```astro --- import Layout from '../layouts/Layout.astro'; import { Image } from 'astro:assets'; import { Icon } from 'astro-icon/components' import profile from '../assets/profile.jpg'; import socialLinks from '../assets/social.json'; // Add these constants (you can also move them to src/consts.ts) const APPLE_MAPS_URL = "maps://?address=Nice,France"; const APPLE_MAPS_WEB_URL = "https://beta.maps.apple.com/?address=Nice%2CFrance&t=h"; // You can move these to a separate data file if preferred const publications = [ { title: "Study on the contribution of federated learning to autonomous driving", date: "2022", description: "An internship report on the contribution of federated learning to autonomous driving done at the University of Nice Sophia Antipolis and the I3S / CNRS laboratory.", url: "/about", type: "paper" }, // Add more publications ]; const podcastAppearance = [ { title: "Témoignage : comment remplir ses anneaux d'Apple Watch pendant 365 jours", show: "MacGeneration / WatchGeneration", type: "article", date: "2024", links: [ { platform: "Web", url: "https://www.watchgeneration.fr/sport/2024/02/temoignage-comment-remplir-ses-anneaux-dapple-watch-pendant-365-jours-16301", icon: "mdi:web" }, ] }, { title: "Erdal: The Case For Micro Apps", show: "Nostrovia", type: "podcast", date: "2023", links: [ { platform: "Apple Podcasts", url: "https://podcasts.apple.com/fr/podcast/nostrovia-the-original-nostr-podcast/id1678531266?i=1000605372568", icon: "mdi:apple" }, { platform: "Spotify", url: "https://open.spotify.com/episode/7bUnsCqJgzjezwoIwdRee2", icon: "mdi:spotify" }, { platform: "Web", url: "https://creators.spotify.com/pod/show/nostrovia/episodes/Erdal-The-Case-For-Micro-Apps-e1vi0af", icon: "mdi:web" }, ] } ]; --- ``` ## Page: blog/index.astro ```astro --- import Layout from "../../layouts/Layout.astro"; import { getCollection } from 'astro:content'; import type { CollectionEntry } from 'astro:content'; interface PostsByYear { [key: number]: CollectionEntry<'blog'>[]; } // Get all unique post types const posts = await getCollection('blog'); const postTypes = ['all', ...new Set(posts.map(post => post.data.type).filter(Boolean))]; const postsByYear = posts.reduce((acc, post) => { const year = new Date(post.data.pubDate).getFullYear(); if (!acc[year]) acc[year] = []; acc[year].push(post); return acc; }, {}); // Sort years in descending order const sortedYears = Object.keys(postsByYear).map(Number).sort((a, b) => b - a); // Function to format date as "25 Jan" const formatDate = (date: Date): string => { return date.toLocaleDateString('en-GB', { day: '2-digit', month: 'short' }); }; ---
{postTypes.map(type => { const isActive = type === 'all'; const href = type === 'all' ? '/blog' : `/blog/${type}`; return ( {typeof type === 'string' ? type.charAt(0).toUpperCase() + type.slice(1) : type} ); })}
{sortedYears.map(year => (

{year}

))}
``` ## Page: blog/[type].astro ```astro --- import Layout from "../../layouts/Layout.astro"; import { getCollection } from 'astro:content'; import type { CollectionEntry } from 'astro:content'; interface PostsByYear { [key: number]: CollectionEntry<'blog'>[]; } export function getStaticPaths() { return [ { params: { type: 'articles' } }, { params: { type: 'notes' } }, ]; } const { type } = Astro.params; // Get all unique post types const posts = await getCollection('blog'); const postTypes = ['all', ...new Set(posts.map(post => post.data.type).filter(Boolean))]; // Filter posts by type and group by year const filteredPosts = posts.filter(post => post.data.type === type); const postsByYear = filteredPosts.reduce((acc, post) => { const year = new Date(post.data.pubDate).getFullYear(); if (!acc[year]) acc[year] = []; acc[year].push(post); return acc; }, {}); // Sort years in descending order const sortedYears = Object.keys(postsByYear).map(Number).sort((a, b) => b - a); // Function to format date as "25 Jan" const formatDate = (date: Date): string => { return date.toLocaleDateString('en-GB', { day: '2-digit', month: 'short' }); }; ---
{postTypes.map(postType => { const isActive = type === postType; const href = postType === 'all' ? '/blog' : `/blog/${postType}`; return ( {typeof postType === 'string' ? postType.charAt(0).toUpperCase() + postType.slice(1) : postType} ); })}
{/* Show message if no posts match the filter */} {filteredPosts.length === 0 ? (

No posts found for this filter.

) : ( sortedYears.map(year => (

{year}

)) )}
``` ## Page: blog/[...slug].astro ```astro --- import { type CollectionEntry, getCollection } from 'astro:content'; import BlogPost from '../../layouts/LayoutBlog.astro'; import { Image } from 'astro:assets'; export async function getStaticPaths() { const posts = await getCollection('blog'); return posts.map((post) => ({ params: { slug: post.slug }, props: post, })); } const post = Astro.props; const { Content } = await post.render(); // Get the processed image URL from the heroImage const heroImage = post.data.heroImage; ---

{post.data.title}

``` ## Blog Post: website-redesign-v2.md Frontmatter: ```yaml --- title: Website redesign v2 pubDate: '2024-12-27' description: 'A redesign of my website with static search, themes, and more' author: Erdal Toprak heroImage: ../../assets/blog/12/banner.png type: notes id: 12 --- ``` Content: ![Banner](../../assets/blog/12/banner.png) In 2021, I started this website with [Astro](https://astro.build/). It's an all-in-one web framework that ships with the less JS, is UI agnostic, and has a great developer experience. Since the first version, my goal has been to make this website search-friendly, like a small wiki for the things I've learned and the things I'm passionate about. ## Release Notes #### Homepage I redesigned the homepage to be more modern and complete with the latest blog posts and projects. The layout maximizes the space for the content and the images. | ![Old Homepage](../../assets/blog/12/homepage-old.png) | |:--:| | *Old Homepage* | | ![New Homepage](../../assets/blog/12/homepage-new.png) | |:--:| | *New Homepage* | #### Blog I've been thinking for a long time about how to separate the blog posts depending on the type. For example, I have some posts that are more like notes, meaning they are more like thoughts, ideas, and reflections. On the other hand, I have some posts that are more like tutorials, meaning they are more like step-by-step guides. On this version, I've added the `type` field to the frontmatter of the blog posts. This reflects on the blog page where you can now filter the posts by type. | ![Old Blog](../../assets/blog/12/blog-old.png) | |:--:| | *Old Blog* | | ![New Blog](../../assets/blog/12/blog-new.png) | |:--:| | *New Blog* | In the blog layout, I added the [Expressive Code](https://expressive-code.com) plugin to enhance the code blocks with syntax highlighting (and a lot more). | ![Old Blog Layout](../../assets/blog/12/blog-layout-old.png) | |:--:| | *Old Blog Layout* | | ![New Blog Layout](../../assets/blog/12/blog-layout-new.png) | |:--:| | *New Blog Layout* | #### Theme The default theme is the system theme. While coding for dark mode was straightforward, getting the light theme to look good and readable for longer content was challenging. However, I think I've found a good balance | ![Dark Theme](../../assets/blog/12/homepage-new.png) | ![Light Theme](../../assets/blog/12/homepage-new-light.png) | |:--:|:--:| | *Dark Theme* | *Light Theme* | #### Projects One of the pages with the most improvements is the projects page. On build, I've added a script to fetch the projects from my GitHub repositories and display them on the page. 
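Because the site stays static, that fetching happens before `astro build` rather than at runtime. A minimal sketch of how such a prebuild step can be wired up (the script paths here are hypothetical, not the repository's actual file names):

```shell
# Hypothetical prebuild step: refresh the JSON assets, then build the site
node scripts/fetch-github.js        # assumed path; writes src/assets/github.json
node scripts/fetch-huggingface.js   # assumed path; writes the hf_*.json assets
npx astro build
```

The GitHub part of that fetch is a plain Node `https` request: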
```js const fetchUserRepos = (username) => { const url = `https://api.github.com/users/${username}/repos?per_page=500`; return new Promise((resolve, reject) => { https.get(url, { headers: { 'User-Agent': 'node.js' } }, (res) => { let data = ''; res.on('data', (chunk) => { data += chunk; }); res.on('end', () => { try { const repos = JSON.parse(data) .map(repo => { const repoData = { title: repo.name, description: repo.description || '', url: repo.html_url, demo: repo.homepage || '', post: '', lastModified: repo.updated_at, author: repo.owner.login }; if (repo.topics && repo.topics.length > 0) { repoData.tags = repo.topics; } if (repo.license && repo.license.name) { repoData.license = repo.license.name; } return repoData; }); resolve(repos); } catch (error) { console.error(`Error parsing repository data for ${username}:`, error); reject(error); } }); }).on('error', (err) => { console.error(`Error fetching repository data for ${username}:`, err); reject(err); }); }); }; ``` I've also made the same scripts for HuggingFace collections, datasets and models. This allows me to keep the website static and mostly up to date while adding the latest projects automatically. | ![Old Projects](../../assets/blog/12/projects-old.png) | ![New Projects](../../assets/blog/12/projects-new.png) | |:--:|:--:| | *Old Projects* | *New Projects* | #### About The about page is now more than a simple presentation of myself. I've added publications and appearances. There is also an extensive list of social links where you can find me. The best thing about this page is a little code trick: when visitors click on the city name, it checks their User-Agent and redirects them to either the Apple Maps app or website. | ![Old About](../../assets/blog/12/about-old.png) | ![New About](../../assets/blog/12/about-new.png) | |:--:|:--:| | *Old About* | *New About* | #### Search While browsing [Astro Integrations](https://astro.build/integrations/), I stumbled upon [astro-collection-search](https://github.com/trebco/astro-collection-search), and it was exactly what I was looking for. This tool allows you to search through the [Astro Content Collections](https://docs.astro.build/en/guides/content-collections/). The repository includes examples, and the integration is very easy to set up. I've always loved the `cmd+k` shortcut to search through content on wikis like the [Tailwind CSS documentation](https://tailwindcss.com/docs/installation). This is the first time I've implemented it on my website, and I am very happy with the results. There are many small UX improvements that make the search better. For example, the search input immediately gains focus, arrow keys can be used to navigate through the results, and the first result can be opened by pressing the enter key. | ![Search Blank](../../assets/blog/12/search-blank.png) | |:--:| | *Search Initial State* | The search textfield looks at blog post titles, content and sorts by relevance. | ![Search Results](../../assets/blog/12/search-partial.png) | |:--:| | *Search Results* | The last improvement in the search bar is the share button that appears on the initial state before the user starts typing. This allows the user to share the current search results with a link. This clever little feature is thanks to [`share()` method of the Navigator interface](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/share). 
Here is the code for the search component: ```js // Share functionality async function sharePage() { const shareData = { title: document.querySelector('meta[property="og:title"]')?.getAttribute('content') || document.title, url: window.location.href, }; console.log(shareData); try { // First check if Web Share API is supported if (navigator.share) { await navigator.share(shareData); return; } // Fallback to clipboard API if (navigator.clipboard && navigator.clipboard.writeText) { await navigator.clipboard.writeText(window.location.href); showNotification('URL copied to clipboard!'); return; } // Last resort fallback using execCommand const textArea = document.createElement('textarea'); textArea.value = window.location.href; textArea.style.position = 'fixed'; textArea.style.left = '-999999px'; textArea.style.top = '-999999px'; document.body.appendChild(textArea); textArea.focus(); textArea.select(); try { document.execCommand('copy'); showNotification('URL copied to clipboard!'); } catch (err) { console.error('Fallback clipboard copy failed:', err); showNotification('Failed to copy URL. Please copy it manually.'); } finally { textArea.remove(); } } catch (err) { console.error('Error sharing:', err); showNotification('Failed to share page'); } } ``` And this is how it looks like in Safari | ![Search](../../assets/blog/12/search-share.png) | |:--:| | *Search* | ## Conclusion This is the first redesign of my website since 2021. I'm very happy with the results, the search is a great addition mainly for my own use to reference things quickly. If you want to see the code and try it out, you can find it on [GitHub](https://github.com/erdaltoprak/erdaltoprak.com). ## Blog Post: using-venv-pyvenv-autoenv-on-macOS.md Frontmatter: ```yaml --- title: 'Using venv, pyvenv, autoenv on macOS' pubDate: '2023-11-07' description: 'Using venv, pyvenv, autoenv on macOS' author: Erdal Toprak heroImage: ../../assets/blog/8/banner.jpg type: articles id: 8 --- ``` Content: ![Banner](../../assets/blog/8/banner.jpg) At some point, you might work on a Python project that requires specific dependencies, such as a machine learning project with an exact PyTorch version. It becomes a necessity to structure your workflow in order to avoid conflicts and iterate quickly across projects. In this post I will explain the main tools that I use on macOS and show a neat trick in order to switch virtual environments automatically. ## Why Use Virtual Environments? The ability to replicate environments not only makes onboarding easier but also minimizes the « works on my machine » issue. To accomplish our goal, we will need three tools: pyenv, venv, and autoenv. #### Pyenv [Pyenv](https://github.com/pyenv/pyenv) is an incredible tool that allows you to switch between multiple versions of Python. You can even search and install Python versions and set local and global versions. #### Venv [Venv](https://docs.python.org/3/library/venv.html) is a built-in and simple method for creating isolated Python environments. While pyenv (through pyenv-virtualenv) could be used for isolating projects you should use venv. #### Autoenv [Autoenv](https://github.com/hyperupcall/autoenv) is magical tool that just makes using virtual environments seamless and uses .env and .env.leave files to activate and deactivate environments. ## Setting up the tools The tools require homebrew or another package manager and the ability to modify your shell. 
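A quick prerequisite check before going further (a small sketch, assuming Homebrew and zsh, since the snippets below append to `~/.zshrc`):

```shell
# Confirm Homebrew is present and note the active shell
command -v brew || echo "Homebrew is missing - install it from https://brew.sh"
echo "$SHELL"   # should end in /zsh for the .zshrc snippets below to apply
```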
We first need to install pyenv: ```shell brew install pyenv ``` Then, we need to append the following lines to our .zshrc file: ```shell if command -v pyenv 1>/dev/null 2>&1; then eval "$(pyenv init -)"; fi ``` You can refer to the documentation but the minimal commands are: ```shell pyenv versions pyenv install your_python_version pyenv global your_python_version pyenv local your_python_version ``` Finally we need autoenv: ```shell brew install autoenv ``` Then executing the following in your zsh shell: ```shell printf '%s\n' "source $(brew --prefix autoenv)/activate.sh" >> "${ZDOTDIR:-$HOME}/.zprofile" ``` Before using the tool you should read the documentation and activate the « AUTOENV_ENABLE_LEAVE » option by setting it to any non empty string. ## Practical workflow Create or clone your project: ```shell mkdir exllama && cd exllama git clone https://github.com/turboderp/exllama ``` Set with pyenv the local python version needed: ```shell pyenv local 3.10.10 ``` Create the virtual environment: ```shell python3 -m venv venv ``` Add the .env and .env.leave and approve the autoenv changes: ```shell # .env file for autoenv # It looks quite cryptic but it's to preserve the virtual environment state across sub folders venv_dir="venv" currentvenv="" # Function to traverse up the directory structure to find the parent directory containing venv_folder get_project_root() { local current_dir="$PWD" while [[ "$current_dir" != "" && ! -d "$current_dir/$venv_dir" ]]; do current_dir=${current_dir%/*} done echo "$current_dir" } root_dir=$(get_project_root) if [[ -z "$root_dir" || ! -d "$root_dir/$venv_dir" ]]; then echo "Unable to find the virtual environment folder." return fi if [[ $VIRTUAL_ENV != "" ]]; then # Strip out the path and just leave the env name currentvenv="${VIRTUAL_ENV##*/}" fi if [[ "$currentvenv" != "$venv_dir" ]]; then python_version=$(python --version 2>&1) echo "Switching to environment: $venv_dir | $python_version" # Source the activation script source "$root_dir/$venv_dir/bin/activate" fi ``` ```shell # .env.leave for autoenv deactivate ``` ![autoenv](../../assets/blog/8/autoenv.jpg) ## Conclusion You are now ready to start your development with a clean and isolated environment! If you found this guide useful you can also check the previous ones about [Setting up macOS for development](https://erdaltoprak.com/blog/setting-up-macos-for-development/) and [AI Homelab: A guide into hardware to software considerations](https://erdaltoprak.com/blog/ai-homelab-a-guide-into-hardware-to-software-considerations/). ## Blog Post: setting-up-macos-for-development.md Frontmatter: ```yaml --- title: Setting up macOS for development pubDate: '2021-10-25' description: Setting up macOS for development author: Erdal Toprak heroImage: ../../assets/blog/4/banner.jpeg type: articles id: 4 --- ``` Content: ![Banner](../../assets/blog/4/banner.jpeg) > Update : As of the 20/04/2022, I'm no longer installing most my development environment in the same way, I've made[ this post](https://erdaltoprak.com/blog/abstracting-local-development-environments-through-containers) explaining my transition to development containers. With each release of macOS, I clean install everything on my MacBook just to be extra safe and avoid long debugging hours if there is an incompatibility. Thus this guide is about setting up your machine quickly and in a predictable way. I'm primarily doing ML but I also set up various environnements, so feel free to be selective while reading this guide. 
I removed a lot of very specific things to make a good balance between development and general macOS usage. ### Initial formatting steps **When everything you care about is backed up** you can proceed with pressing [CMD⌘+R on startup](https://support.apple.com/en-us/HT208496). Then go to Disk Utility, format your drive with APFS and install macOS. Complete the initial set up with your Apple ID and choose your privacy preferences, you should now be on the desktop. ### Homebrew, Zsh & Other Mac settings [Homebrew](https://brew.sh) is the most popular macOS package manager, we will use it to install all our apps ([except mas ones because it doesn't work anymore](https://github.com/mas-cli/mas/issues/164)) In your terminal let's copy & paste to install Homebrew: ```shell /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" ``` Then let's make sure everything is up to date: ```shell brew update && brew upgrade ```` You can now install your apps, or search for it on the [Homebrew website](https://brew.sh): ```shell # Note: you can install multiple apps in just one line, but this is a better visualization # Below the "--cask" refers to graphical applications instead of formulae. # Browsers brew install --cask firefox brew install --cask firefox-developer-edition brew install --cask google-chrome brew install --cask homebrew/cask-versions/google-chrome-dev # Media players brew install --cask iina brew install --cask vlc # File downloads and disk analyze brew install --cask transmission brew install --cask grandperspective # Media transcode brew install --cask handbrake # Flash images (to USB for example) brew install --cask balenaetcher # Development brew install --cask visual-studio-code brew install --cask docker brew install --cask local brew install --cask cyberduck brew install --cask tower # App cleaner brew install --cask appcleaner # Remote communication brew install --cask zoom brew install --cask discord # Vpn brew install --cask private-internet-access # Keyboard based window management brew install --cask rectangle # Markdown writer brew install --cask obsidian ``` Here are some formulae, make sure to understand each software that you install before trusting a random internet guide: ```shell # Logitech Options software brew install homebrew/cask-drivers/logitech-options # Development brew install docker-compose brew install node brew install htop brew install git brew install tree # Python brew install pyenv # Shell brew install romkatv/powerlevel10k/powerlevel10k brew install zsh-autosuggestions brew install zsh-syntax-highlighting brew install zsh-history-substring-search ``` We can now configure Python: ```shell pyenv install 3.9.7 pyenv global 3.9.7 echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n eval "$(pyenv init -)"\nfi' >> ~/.zshrc ```` To finish the powerlevel10k and zsh setup we need the following: ```shell # Plugins echo "source $(brew --prefix)/opt/powerlevel10k/powerlevel10k.zsh-theme" >>~/.zshrc echo "source /usr/local/share/zsh-autosuggestions/zsh-autosuggestions.zsh" >>~/.zshrc echo "source /usr/local/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh" >>~/.zshrc echo "source /usr/local/share/zsh-history-substring-search/zsh-history-substring-search.zsh" >>~/.zshrc # Zsh tweaks echo -e "autoload -Uz compinit" >>~/.zshrc echo -e "compinit" >>~/.zshrc echo -e "zstyle ':completion:*' menu select" >>~/.zshrc # Key bindings for history searching, the order is important echo -e "bindkey '^[[A' 
history-substring-search-up" >>~/.zshrc echo -e "bindkey '^[[B' history-substring-search-down" >>~/.zshrc # Note: lines below are my personal aliases, this might disturb your workflow echo -e "alias c='clear'" >>~/.zshrc echo -e "alias rmm='rm -rf'" >>~/.zshrc echo -e "alias lss='ls -lah'" >>~/.zshrc echo -e "alias edit='code ~/.zshrc'" >>~/.zshrc echo -e "alias reload='source ~/.zshrc'" >>~/.zshrc ```` We also need to configure git basics properly : ```shell git config --global user.email "YOUR_EMAIL" git config --global user.name "YOUR_NAME" ``` macOS is better with some tweaks: ```shell # Note: There are a lot of settings that you could change, this is just a few of them that I use # Always show file extensions defaults write NSGlobalDomain AppleShowAllExtensions -bool true # Show status bar in Finder defaults write com.apple.finder ShowStatusBar -bool true # Allow text selection in Quick Look defaults write com.apple.finder QLEnableTextSelection -bool true # Disable TimeMachine prompt defaults write com.apple.TimeMachine DoNotOfferNewDisksForBackup -bool true # This is needed to apply our changes killAll Finder ``` Finally before closing the terminal I setup [powerlevel10k](https://github.com/romkatv/powerlevel10k): ```shell # Note: This has already been installed in the fomulae section above, this is just the install source ~/.zshrc ``` ### Everthing else Once this is done I usually login into my password manager, retreive software licences, ssh keys and proceed to login to some applications like Google Chrome, Discord, etc. #### Here is everything I changed in the System Preferences app * General > Enable Dark Mode * Desktop > Live wallpaper selection * Desktop > Screensaver > Hot Corners > Bottom Left > CMD⌘ + Display sleep * Dock > Enable Automatically hide * Siri > Disable show Siri in menu bar * Notifications > Disable everything or remove sounds * Screen Time > Enable & share across devices * Security > General > Require password > Immediately * Trackpad > More Gestures > Enable everything * Sharing > Setup the computer name * iCloud > iCloud Drive > Enable Desktop & Document * Keyboard > Text > Disable spelling and capitalization #### Some Mac App Store apps that I use * 1Password (Password Manager) * Xcode (Code apps) * Amphetamine (Keep Mac awake) * Adguard (Safari ad disable) * The Unarchiver (Almost unrar for Mac) * Parcel (Track packages) ### Conclusion This was a quick look at how I install macOS, I hope this helped you in your next fresh install. If you enjoyed this guide you can also check the previous ones about [iCloud custom domains](https://erdaltoprak.com/blog/icloud-custom-domain-guide) or [Cloudflare argo & access on a RaspberryPi](https://erdaltoprak.com/blog/setting-up-cloudflare-argo-and-access-on-a-raspberry-pi). ## Blog Post: setting-up-cloudflare-argo-access-on-a-raspberry-pi.md Frontmatter: ```yaml --- title: Setting up Cloudflare Argo & Access on a Raspberry Pi pubDate: '2021-09-29' description: Setting up Cloudflare Argo & Access on a Raspberry Pi author: Erdal Toprak heroImage: ../../assets/blog/3/rpi-banner.jpeg type: articles id: 3 --- ``` Content: ![Banner](../../assets/blog/3/rpi-banner.jpeg) A few nights ago I was casually browsing on [/r/SelfHosted](https://www.reddit.com/r/selfhosted/) when I came across a post mentioning how insecure some of the home servers are regarding to their WAN access. The obvious answer is Cloudflare Argo & Cloudflare Access. 
Before explaining everything, let's clarify what this guide is about: If you are looking to access your homeserver from outside, for example, your Raspberry Pi, in a secure way without exposing ports on your own, this guide is for you. Before you begin you will need a few things: * A Cloudflare account * A registered domain name with access to the DNS panel (ideally through Cloudflare but at the very least point the nameservers to them) * A Raspberry Pi or a server with your favorite Linux distribution * Some basic knowledge of Docker, shell commands and networking ### What's Cloudflare Argo & Access ? Cloudflare Argo tunnels allows you to create an encrypted tunnel between your homeserver and the Cloudflare servers. This is done seamlessly, in a few lines of shell commands. As you can see from the image below ([from the Cloudflare Blog](https://www.cloudflare.com/products/tunnel)), you should consider the tunnel like a third party making sure you get the fastest access with the least risks of exposing your services. ![Argo Tunnel](../../assets/blog/3/argo-tunnel-diagram.jpg) On the technical side you get a few features as a bonus like TLS certificates, DDOS protection and smart routing. Another Cloudflare service is Access, which is part of Cloudflare Teams and it allows you to use their zero-trust infrastructure to access your services securely. What we are most interested in are the access policies and the application dashboard. ![Cloudflare Access](../../assets/blog/3/team-access-diagram.jpg) If you are interested in more technical information you should consider reading their [developer documentation](https://developers.cloudflare.com/cloudflare-one/). ### Practical deployment To illustrate how easy and magical it is, I will deploy from start to finish three docker containers (portainer, gluetun & librespeed) on a Raspberry Pi. Get your [Raspberry Pi OS Lite image](https://www.raspberrypi.org/software/operating-systems/#raspberry-pi-os-32-bit) and use [balenaEtcher](https://www.balena.io/etcher/) to write it down on your SD card. You can add an "ssh" file without any extensions to make your Raspberry Pi headless and accessible from your computer or just plug-it in. Let's get some updates: ```shell sudo apt update sudo apt upgrade ``` We can now install Docker: ```shell curl -sSL https://get.docker.com | sh ``` Add permissions to the current user: ```shell sudo usermod -aG docker ${USER} ``` Let's also install docker-compose: ```shell sudo apt-get install libffi-dev libssl-dev sudo apt install python3-dev sudo apt-get install -y python3 python3-pip sudo pip3 install docker-compose ``` You can enable the docker service: ```shell sudo systemctl enable docker ``` Let's deploy our docker containers, but before that a bit of explanation about the containers we are going to use: * [Portainer](https://docs.portainer.io) is a GUI for docker. * [Gluetun](https://github.com/qdm12/gluetun) is a super awesome, vpn docker container, that allows you to route any other service through that container for additional privacy. * [Librespeed](https://hub.docker.com/r/linuxserver/librespeed) is just a lightweight speedtest implementation and will serve as an exemple of network routing. All these containers are just here to illustrate this practical example and are not necessary for the Cloudflare side of things. 
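Before deploying them, a small sanity check that Docker works for your user is worthwhile (you may need to log out and back in for the group change to take effect):

```shell
# The daemon should be reachable without sudo, and compose should be on the PATH
docker run --rm hello-world
docker-compose --version
```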
Let's start with Portainer: ```shell docker run -d -p 8000:8000 -p 9443:9443 --name portainer \ --restart=always \ -v /var/run/docker.sock:/var/run/docker.sock \ -v portainer_data:/data \ portainer/portainer-ce:latest ``` You can go to http://[your-machine-ip]:9443 and finish the Portainer setup on your own (and follow the [official guide](https://docs.portainer.io/v/ce-2.9/start/install/server/docker/linux) if needed) If you did everything right, your Portainer dashboard should look like this (without the two other containers at this moment): ![Portainer](../../assets/blog/3/portainer-working-state.jpg) Now we can docker compose gluetun and librespeed in one file, please note that I'm using PIA vpn but you can use something else and even skip if needed. This is just an example of how to route a container through another one: ```shell mkdir gluetunAndLibrespeed cd gluetunAndLibrespeed touch docker-compose.yml nano docker-compose.yml ``` And paste the following lines: [The documentation of gluetun is here](https://github.com/qdm12/gluetun/wiki) if you need help for your vpn. ```yaml --- version: "2.1" services: gluetun: image: qmcgaw/gluetun container_name: gluetun cap_add: - NET_ADMIN volumes: - /home/pi/gluetunAndLibrespeed:/gluetun environment: - VPNSP=private internet access - OPENVPN_USER=[YOUR_USERNAME] - OPENVPN_PASSWORD=[YOUR_PASSWORD] - REGION=[YOUR_REGION] ports: - 7777:80 restart: unless-stopped librespeed: image: ghcr.io/linuxserver/librespeed container_name: librespeed environment: - PUID=1000 - PGID=1000 - TZ=Europe/Paris - PASSWORD=PASSWORD volumes: - /home/pi/gluetunAndLibrespeed:/config network_mode: "service:gluetun" depends_on: - gluetun restart: unless-stopped ``` To get this started, make sure to still be in the folder: ```shell docker-compose up -d ``` Finally the Cloudflare part! Let's setup Cloudflare teams to configure our access rules and our dashboard Go to the [Teams area](https://dash.teams.cloudflare.com/), you should have a configuration page with a teams name selection. I suggest you spend some time on the Teams dashboard to configure a default policy for your apps (I only use the one-time pin), once your understand the basics (policies, dashboard, etc..) let's add our first self-hosted application. ![Cloudflare Self-Hosted Selection](../../assets/blog/3/cloudflare-teams-apps.png) ![Cloudflare Self-Hosted Application setup](../../assets/blog/3/cloudflare-application-setup.png) Once you are ready to add your first application, just give it a name, then a subdomain (like librespeed.[YOUR_DOMAIN].tld), choose your domain in the list, then click next to add the policy configuration that you feel comfortable with and you're pretty much done for the web configuration. We can use the tunnel as a service, docker container or standalone like we are doing right now. I'm following (and you should too) the [great documentation provided by Cloudflare](https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/run-tunnel/run-as-service): ```shell cd wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-arm.tgz tar -xvzf cloudflared-stable-linux-arm.tgz sudo cp ./cloudflared /usr/local/bin sudo chmod +x /usr/local/bin/cloudflared cloudflared -v ``` Let's authenticate: ```shell cloudflared tunnel login ``` Once this is done, you should have choosen a hostname (like "pi") and we will use that for the creation of our tunnels. 
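A minimal way to confirm the login worked, assuming the default `~/.cloudflared` directory:

```shell
# The browser-based login drops an origin certificate here
ls ~/.cloudflared
# cert.pem  (expected; tunnel credential JSON files appear later, after tunnel creation)
```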
If I want to expose my librespeed container, I will create the tunnel: ```shell cloudflared tunnel create pi librespeed.[YOUR_DOMAIN].tld ``` Finally you modify the configuration file the .cloudflared directory and it should look like this: ```markdown # url: http://localhost:9000 tunnel: XXXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX credentials-file: /home/pi/.cloudflared/XXXXXXXXXXXXXXXXXXX.json ingress: - hostname: librespeed.[YOUR_DOMAIN].tld service: http://localhost:7777 - service: http_status:404 ``` Congratulations, go to [YOUR_NAME].cloudflareaccess.com and that's it, I will include a few screenshots of how it looks like in the browser. ![Teams url access](../../assets/blog/3/teams-url-access.png) ![Teams pin code](../../assets/blog/3/teams-pin-code.png) ![Teams dashboard](../../assets/blog/3/teams-dashboard.png) ![Teams librespeed app](../../assets/blog/3/teams-librespeed.png) If you enjoyed this guide you can also check the previous one about [iCloud custom domains](https://erdaltoprak.com/blog/icloud-custom-domain-guide). ## Blog Post: setting-up-a-local-reverse-proxy-on-proxmox-with-traefik-and-cloudflare.md Frontmatter: ```yaml --- title: Setting up a local reverse proxy on Proxmox with Traefik and Cloudflare pubDate: '2024-05-08' description: >- Setting up a local reverse proxy on your homelab with Traefik v3 and Cloudflare author: Erdal Toprak heroImage: ../../assets/blog/11/banner.jpg type: articles id: 11 --- ``` Content: ![Banner](../../assets/blog/11/banner.jpg) After setting up my AI homelab and various other services in a [previous blog post](https://erdaltoprak.com/blog/ai-homelab-a-guide-into-hardware-to-software-considerations/), my friend [Nader](https://naderchatti.com) and I experimented with how to access some of these services using a domain instead of the IP address without exposing our home IP or opening ports. In this blog post, I will guide you through setting up a local reverse proxy on Proxmox with Traefik v3 and Cloudflare. This setup will allow you to access your services with a domain name and also secure them with SSL certificates. ### What is a reverse proxy? A reverse proxy is a server that sits between clients and servers. It forwards client requests to the appropriate backend server and then returns the server's response to the client. This allows you to host multiple services on a single server and route traffic based on the domain name. ![Reverse Proxy](../../assets/blog/11/reverse-proxy.png) In this simplified diagram, the user wants to access "MyService.MyDomain.tld", the requests goes through the DNS resolver, which gets a local IP address from Cloudflare and then the reverse proxy forwards the request to the correct service. ### My Proxmox setup On my Proxmox setup I have a few LXC containers running various services. I have a dedicated LXC container for Traefik v3, which is an underprivileged Alpine Linux container with Docker installed. This setup is, in my opinion, the most stable way to run Traefik on Proxmox. If you want to replicate the setup I used the [ttek](https://tteck.github.io/Proxmox/) script to create the LXC container. You should accept the docker compose installation and the script will install Docker and Docker Compose for you. ```shell bash -c "$(wget -qO - https://github.com/tteck/Proxmox/raw/main/ct/alpine-docker.sh)" ``` ### Setting up Traefik on Docker From this point on the setup is heavily inspired by the excellent [video tutorial](https://www.youtube.com/watch?v=liV3c9m_OX8) of Techno Tim. 
Inside the Traefik LXC container, create a folder for the Traefik configuration ```shell mkdir traefik cd traefik touch docker-compose.yml ``` ```yaml services: traefik: image: traefik:latest # Use the latest Traefik image container_name: traefik # Name of the container restart: unless-stopped # Ensures the container restarts if it stops unexpectedly security_opt: - no-new-privileges:true # Prevents the container from gaining additional privileges networks: proxy: # Connects to the predefined external network named 'proxy' ports: - 80:80 # HTTP port - 443:443 # HTTPS port - 8080:8080 # Traefik dashboard port environment: - CF_API_EMAIL=YOUR_CLOUDFLARE_ACCOUNT_EMAIL # Cloudflare account email for API access - CF_DNS_API_TOKEN=YOUR_CLOUDFLARE_API_TOKEN_HERE # Cloudflare DNS API token volumes: - /etc/localtime:/etc/localtime:ro # Sync time with the host - /var/run/docker.sock:/var/run/docker.sock:ro # Allows Traefik to interact with Docker - /root/traefik/data/traefik.yml:/traefik.yml:ro # Traefik configuration file - /root/traefik/data/acme.json:/acme.json # SSL certificate file - /root/traefik/data/config.yml:/config.yml:ro # Additional configuration file - /root/traefik/data/logs:/var/log/traefik # Log directory labels: - "traefik.enable=true" # Enable Traefik on this service - "traefik.http.routers.traefik.entrypoints=http" # Define HTTP entrypoint - "traefik.http.routers.traefik.rule=Host(`traefik.MyDomain.TLD`)" # Host rule for routing - "traefik.http.middlewares.traefik-auth.basicauth.users=traefik:$2y$$05$$fkKKsDM0LEQAG6nPuk7dxeJElSkGJxCeuCsZgwoQWqPzyZdRkfYeK" # Basic auth for security traefik for username/pass - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" # Redirect HTTP to HTTPS - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https" # Set forwarded headers for SSL - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" # Apply HTTPS redirect middleware - "traefik.http.routers.traefik-secure.entrypoints=https" # Secure entrypoint for HTTPS - "traefik.http.routers.traefik-secure.rule=Host(`traefik.MyDomain.TLD`)" # Host rule for secure routing - "traefik.http.routers.traefik-secure.middlewares=traefik-auth" # Apply authentication middleware - "traefik.http.routers.traefik-secure.tls=true" # Enable TLS for secure connection - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare" # Use Cloudflare for SSL certificate resolution - "traefik.http.routers.traefik-secure.tls.domains[0].main=MyDomain.TLD" # Main domain for SSL certificate - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.MyDomain.TLD" # SANs for SSL certificate - "traefik.http.routers.traefik-secure.service=api@internal" # Internal service for Traefik API networks: proxy: name: proxy # Specifies the external network to connect to external: true # Indicates that the network is external ``` ### Configuring Traefik v3 with our services Get your Cloudflare DNS API key with a restricted scope for the zone you want to use. 
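Before wiring the token into the compose file, you can sanity-check it against the Cloudflare API (a sketch; `CF_DNS_API_TOKEN` is assumed to hold the token you just created):

```shell
# A valid, scoped token returns "success": true with a status of "active"
curl -s -H "Authorization: Bearer $CF_DNS_API_TOKEN" \
  "https://api.cloudflare.com/client/v4/user/tokens/verify"
```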
Add a `docker-compose.yml` file with the following content: In the `data` folder we need 3 files: ```shell mkdir data cd data ```` Add a `traefik.yml` file with the following content: ```yaml api: dashboard: true debug: true entryPoints: http: address: ":80" http: redirections: entryPoint: to: https scheme: https https: address: ":443" traefik: address: ":8080" serversTransport: insecureSkipVerify: true providers: docker: endpoint: "unix:///var/run/docker.sock" exposedByDefault: false file: filename: config.yml certificatesResolvers: cloudflare: acme: email: YOUR_CLOUDFLARE_ACCOUNT_EMAIL storage: acme.json dnsChallenge: provider: cloudflare # uncomment this if you have issues pulling certificates through cloudflare, By setting this flag to true disables the need to wait for the #disablePropagationCheck: true resolvers: - "1.1.1.1:53" - "1.0.0.1:53" ``` Add a `config.yml` file with the following content: ```yaml http: #region routers routers: dashboard: entryPoints: - "https" rule: "Host(`MyDomain.TLD`)" middlewares: - default-headers - https-redirectscheme tls: {} service: dashboard #endregion #region services services: dashboard: loadBalancer: servers: - url: "http://192.168.1.200:8080" passHostHeader: true #endregion #region middlewares middlewares: https-redirectscheme: redirectScheme: scheme: https permanent: true default-headers: headers: frameDeny: true browserXssFilter: true contentTypeNosniff: true forceSTSHeader: true stsIncludeSubdomains: true stsPreload: true stsSeconds: 15552000 customFrameOptionsValue: SAMEORIGIN customRequestHeaders: X-Forwarded-Proto: https default-whitelist: ipWhiteList: sourceRange: - "10.0.0.0/8" - "192.168.0.0/16" - "172.16.0.0/12" secured: chain: middlewares: - default-whitelist - default-headers #endregion ``` Add a blank `acme.json` file: ```shell touch acme.json chmod 600 acme.json ``` At start this file will be empty, but Traefik will populate it with the SSL certificates it gets from certbot and the DNS verification through Cloudflare. ### Setting up your domain with Cloudflare Before running Traefik, it's essential to configure your domain's DNS settings on Cloudflare to ensure that your services are accessible via your domain name and secured with SSL. Here are the steps to set up the necessary DNS records: 1. **Log in to your Cloudflare account** and select the domain you want to configure. 2. **Navigate to the DNS section** of your Cloudflare dashboard. 3. **Add the following DNS records**: - **A Record**: This should point to the local IP address of your Traefik container. Set the name to `@` to represent your root domain (e.g., `MyDomain.TLD`). - **CNAME Record**: Create a CNAME record for each subdomain that points to your root domain. For example, if you have a service accessible at `MyService.MyDomain.TLD`, create a CNAME record with the name `MyService` and the value `MyDomain.TLD`. 4. **Ensure Proxy Status**: Set the proxy status to 'DNS Only' for these records. 5. **SSL/TLS Configuration**: - Go to the SSL/TLS section of your Cloudflare dashboard. - Ensure that the SSL/TLS encryption mode is set to 'Full (strict)'. This ensures that the connection between Cloudflare and your server is secure. 
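Before starting Traefik, a quick check that the records resolve as intended (substitute your own names; since the records are DNS-only, the A record should return the Traefik host's local IP):

```shell
# The subdomain resolves through its CNAME to the same local address
dig +short MyDomain.TLD
dig +short MyService.MyDomain.TLD
```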
### Starting Traefik In your root folder `/traefik, run the following command to start the Traefik container: ```shell docker network create proxy docker compose up -d --force-recreate ``` At this point you should be able to access the Traefik dashboard at `https://traefik.MyDomain.TLD` with the username/password you set in the `docker-compose.yml` file. ![Traefik Dashboard](../../assets/blog/11/dashboard.png) ### Conclusion Setting up a local reverse proxy using Proxmox, Traefik, and Cloudflare enhances the security and accessibility of your services. By following the steps outlined in this guide, you can achieve a robust setup that protects your services with SSL certificates and makes them accessible via domain names instead of IP addresses. If you found this guide useful you can also check the previous ones about [Using venv, pyvenv-autoenv, and macOS](https://erdaltoprak.com/blog/using-venv-pyvenv-autoenv-on-macOS/) and [Abstracting local development environments through containers](https://erdaltoprak.com/blog/abstracting-local-development-environments-through-containers/). ## Blog Post: icloud-custom-domain-guide.md Frontmatter: ```yaml --- title: iCloud custom domain guide pubDate: '2021-09-09' description: iCloud custom domain guide author: Erdal Toprak heroImage: ../../assets/blog/2/banner.png type: articles id: 2 --- ``` Content: ![Banner](../../assets/blog/2/banner.png) In this guide I will show the few steps that are needed in order to get your domain up and running. This feature might be useful if you only handle a personal or a small business domain and could benefit from more centralisation with your existing iCloud Mail (and the web access that comes with it), you also get spam filtering, push notifications on iOS and maildrop for large attachments. Before you begin you will need a few things: * A registered domain name and access to the DNS panel * An iCloud account with an active subscription (then go to [iCloud Beta website](https://beta.icloud.com)) ![iCloud main page](../../assets/blog/2/icloud-1.png) Once on the webpage you can choose to share that domain with your family (as part of the [family sharing program](https://www.apple.com/family-sharing/)) or to only you. ![iCloud Guide](../../assets/blog/2/icloud-2.png) Once you enter your domain you can immediately add the existing addresses to keep continuity with your current ones (and also add more later). Once this is done you will need to add some records on your DNS panel. (Note: Do not remove any DNS records before this step as you need to click on the verification link to each of your email accounts) ![iCloud Guide](../../assets/blog/2/icloud-3.png) On the third step you will have [**unique instructions**](https://support.apple.com/en-us/HT212524), so please use the image as a visual guide for how things should look like. Please note that for Cloudflare users that I advise to not proxy the given CNAME. Congratulations, once you give the DNS some time to propagate you are all done! You can now set your default address, add addresses to iMessage or Facetime and send & receive emails from iCloud. ## Blog Post: hello-world.md Frontmatter: ```yaml --- title: Hello World pubDate: '2021-01-01' description: First blog post author: Erdal Toprak heroImage: ../../assets/blog/1/banner.jpg type: articles id: 1 --- ``` Content: ![Banner](../../assets/blog/1/banner.jpg) < Hello World! 
> ## Blog Post: graduating-with-ai-ml-degree-a-chatgpt-story.md Frontmatter: ```yaml --- title: Graduating with an AI/ML degree - a ChatGPT story pubDate: '2022-12-12' description: A ChatGPT written story about my graduation author: Erdal Toprak heroImage: ../../assets/blog/7/banner.jpg type: notes id: 7 --- ``` Content: ![Banner](../../assets/blog/7/banner.jpg) In early October, I finally graduated with an AI/ML degree from the University of [Nice Côte d'Azur](https://univ-cotedazur.fr/). Since all the hype is around language models, I thought it would be fun to write a story about my graduation blog post using [ChatGPT](https://openai.com/blog/chatgpt/). Graduating from university with a degree in AI/ML is a significant milestone that reflects both hard work and dedication. Pursuing a degree in AI/Data requires a unique blend of analytical skills, technical abilities, and creativity. Despite the challenges, however, the journey is both gratifying and exciting, with interesting projects and the opportunity to be surrounded by inspiring researchers and teachers. The degree involves a wide variety of projects, ranging from data analysis to machine learning and computer vision. Students often work on real-world projects to develop solutions to complex problems that organizations face today. Such projects may include identifying patterns in customer behavior, creating recommendation systems, or developing image recognition algorithms. Students learn to work with big data, write efficient code, and implement advanced algorithms to create solutions that are innovative, efficient, and scalable. Being surrounded by inspiring and knowledgeable researchers and teachers is one of the most significant benefits of pursuing an AI/ML degree. They challenge students to think outside the box and provide a unique perspective on the subject matter. Students learn from the best in the field and gain exposure to the latest developments in artificial intelligence. They have access to resources, tools, and industry connections that they can leverage to pursue their career goals. As an AI language model, I may not have personal experience with the challenges and gratification of graduating with an AI/ML degree, but I do understand the technicalities and possibilities of the field. So, if you're reading this and wondering how an AI could have written this post, well, that's the magic of technology! Congratulations on your graduation, and may your future endeavors be as exciting and rewarding as the journey that led you here. ## Blog Post: apple-music-is-the-last-library-focused-music-service.md Frontmatter: ```yaml --- title: Apple Music is the last library focused music service pubDate: '2021-11-30' description: Apple Music is the last library focused music service author: Erdal Toprak heroImage: ../../assets/blog/6/banner.jpeg type: notes id: 6 --- ``` Content: ![Banner](../../assets/blog/6/banner.jpeg) Back in the days my parents used to listen to cassette tapes and they had this huge custom cassette shelf that my father built. The organization was simple but It still made sense since each cassette was either a compilation of several artists or an album. You could easily find your way through with the labels and the only issue was the integrity of the tapes after several years or the physical space that you needed in the car for a week-end trip. Fast forward to the Youtube era and things were starting to be more chaotic with hundreds of unorganized music files stored in CDs being the norm. 
This was the period where I also got my first MacBook and discovered iTunes. You could buy songs, import existing ones and customize, albeit not very ergonomically, your entire library. Nowadays we are all familiar with the various music services, the catalogue is almost the same, your choice is based essentially on additional features like the ecosystem, lyrics or family pricing. Those services allow for the most part a centralization of our music needs, you can listen to radio stations, share playlists and find the publicly shared ones. The current trend of consuming music is very similar, in my opinion, to the trend in computer file management as described in this very fascinating [article by The Verge](https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z). This article explains how students do not follow the same organization paradigm based on folders and local file management. This could be, in part, attributed to the new ways young students learn, which is on online first operating systems or tablets, where by default, the local system is hidden and also where everything is done through applications. I personally foresee a future trend where everything is abstracted and algorithmic based. If you look closely at the services that we may use everyday it's clear that this is already happening at a large scale and that we are pushed on either suggestions based on activity or on abstracted search results. Let's get back to iTunes or rather Apple Music. With the launch of their music streaming service, Apple tried to include their previous customer with features like iTunes Match and the ability to still organize your library even if you only use their streaming catalogue. Here are a few features that are available on Apple Music (own added songs and cloud based catalogue) but not on the other big music services: * Custom Music Artworks * Custom Artists / Producer / Lyrics description * Custom rules to ignore songs on random selection * Custom rules to select equalizer per song * Folder based navigation * Smart Folder based playlists * Uploading your music to the cloud and streaming them as any other song This set of features aren't groundbreaking by any previous standards, yet in 2021, it's the only mainstream music streaming service that allows that. Sure you can still buy songs and organize everything yourself, through [Plex Music](https://www.plex.tv/your-media/music/) for example, but the idea that your own added and cloud based songs could co-exist and benefit from a voice based assistant like Siri or search with Spotlight is quite the commitment from Apple. I'll conclude by encouraging everyone to always consider the local first, self managed solutions, this could range from backing up a file on your computer, accessing your connected light switch to organizing your music library. Thank you for reading my thoughts on this subject, this is a departure from my previous, more technical posts, that I would like to write at times. You can also read my previous posts on [Cloudflare Argo and Access on a Raspberry Pi](https://erdaltoprak.com/blog/setting-up-cloudflare-argo-access-on-a-raspberry-pi) and [Setting up macOS for development](https://erdaltoprak.com/blog/setting-up-macos-for-development). 
## Blog Post: ai-homelab-a-guide-into-hardware-to-software-considerations.md Frontmatter: ```yaml --- title: AI Homelab - A guide into hardware to software considerations pubDate: '2023-09-04' description: AI Homelab - A guide into hardware to software considerations author: Erdal Toprak heroImage: ../../assets/blog/9/banner.png type: articles id: 9 --- ``` Content: ![Banner](../../assets/blog/9/banner.png) The AI landscape has expanded significantly and become increasingly fragmented. Nowadays, each commercial project has its own website, app, and Discord bot. We've reached a point where testing on one's own hardware often simplifies the process. Meanwhile, relying on "the cloud" can be cumbersome and costly, especially if you plan to run or train models regularly. In this post I will explain my hardware choices and software considerations, and give some general recommendations that allowed me not only to experiment with AI but also to make the most of my homelab. ## Hardware choices ![RTX4090](../../assets/blog/9/desktop.jpeg) In 2023, hardware prices remain steep. While there are deals to be found in the second-hand or refurbished markets, I'd advise caution. For instance, I'd be hesitant to purchase components like power supplies or RAM from these markets. Given that I live in France/Europe, the cost difference between new and used components for these specific items is minimal, making the savings hardly justifiable. Nevertheless, you should explore your local market for components such as cases and motherboards (especially if they've been decommissioned or replaced by a company). Also consider CPUs: these components aren't typically in shortage, and since they don't move around a lot, the likelihood of receiving a damaged one is slim. #### Selecting the right hardware What constitutes the "right" hardware can vary considerably between individuals. However, when it comes to AI workloads, especially training, there are several key factors to take into account: ##### Case In order to fit all the hardware that you need, you will want at least an ATX-compatible case, or go for a 4U server if you already have some server-grade equipment. For my homelab I went with a [Define R5](https://www.fractal-design.com/products/cases/define/define-r5/black/) in order to fit not only the GPU but also many hard drives for the NAS. ##### CPU For the CPU the main issue is PCIe lanes: you don't need a lot of cores for ML workloads, but you definitely need the lanes. Between the ones allocated to NVMe drives, network cards and GPU(s), it becomes very difficult to find consumer-grade motherboards and CPUs that can handle everything. If you're setting up from scratch, a second-gen Threadripper is a noteworthy option. I went with an [AMD Ryzen 9 5950X](https://en.wikichip.org/wiki/amd/ryzen_9/5950x). ##### Motherboard Your CPU choice inherently influences your motherboard options. This is why I previously highlighted the second-gen Threadripper for those seeking maximum PCIe lanes. It's essential to recognize that 20 lanes can be limiting, and you don't always have the flexibility to allocate these lanes as desired. | ![PCIe Lanes](../../assets/blog/9/pcie.png) | |:--:| | *[PCI Express link performance](https://en.wikipedia.org/wiki/PCI_Express#Comparison_table)* | The table above shows the difference in throughput on consumer-grade hardware. In my case I'm either using Gen4x16 for the first slot where my GPU is located, or double Gen4x8 in the case of two GPUs.
If you're only looking at throughput, Gen4x8 is the "same" as Gen3x16, which many people still use for gaming purposes, but you might run into some bottlenecks if you need heavy parallelization. Depending on your motherboard you might also have some features like a 10G NIC, Thunderbolt and better software support (more on that later in the software part). I went with an ASRock X570 Taichi, which has quite robust support for virtualization and a decent balance in how the PCIe lanes are allocated. ##### GPU The first step in GPU selection should be to read [this excellent article by Tim Dettmers](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/) | ![GPU Recommendations](../../assets/blog/9/gpu-recommendations.png) | |:--:| | *[GPU Recommendation Chart](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/#GPU_Recommendations)* | The hardest part of GPU selection is actually finding one at MSRP, let alone two or more, unless you're an organization or a startup affiliated with the [Nvidia Inception Program](https://www.nvidia.com/en-gb/startups/). As a visual note, I would like to remind you that the RTX 4090 is massive and that the [12VHPWR connector should not be bent at extreme angles](https://www.youtube.com/watch?v=ig2px7ofKhQ). | ![RTX 4090 Size](../../assets/blog/9/gpu-size.jpg) | |:--:| | *First test fit of the GPU* | Up to this point I've only mentioned NVIDIA; that's because ROCm and oneAPI have a very long way to go, and people like [George Hotz are trying to get AMD on MLPerf](https://geohot.github.io/blog/jekyll/update/2023/06/07/a-dive-into-amds-drivers.html). While it's promising for consumers to witness such competition, those engaged in serious work should prioritize the most stable hardware available, unless, of course, your hobby/work revolves around tinkering with drivers and software integration. ##### Other components The other components should also be carefully selected; here is a non-exhaustive list: - **RAM**: AMD allows for ECC RAM, which is great for an always-on server; check your motherboard QVL and forums for more information - **Fans**: High-quality fans can significantly reduce noise while delivering enhanced static pressure - **Thermal Paste**: The right thermal paste can help lower your system's temperature by a few crucial degrees - **M.2 NVMe**: I allocated the first drive to Proxmox (covering the main OS, backups, and other VMs) and dedicated a larger, second drive exclusively to the AI VM. It's worth noting that adding multiple M.2 drives might disable or reduce the speed of some PCIe slots - **Hard Drives**: As I also utilize the server for NAS purposes, I incorporated Seagate Exos drives. They're known for reliability, though they can be a tad noisy during intense write operations ##### Noise, Heat & Power consumption Noise tolerance varies among individuals, but I think many would agree that hearing fans operate at full throttle during an ML workload is far from pleasant. If possible, relocate your server to a separate space, such as a well-ventilated closet or utility room. For power consumption, I have an [Eve Energy Smart Plug](https://www.evehome.com/en/eve-energy) and, given the current electricity rates, my system averages around 1 kWh per day at idle loads, which works out to an average draw of roughly 40 W. ## Software considerations When it comes to software, individual requirements and preferences vary widely. However, if you're running a multipurpose server, it's highly beneficial to opt for a modular ecosystem.
This should allow you the flexibility to easily launch, modify, and back up your services and applications. In my case I chose a type 1 hypervisor called Proxmox. #### T1 Hypervisors A type 1 hypervisor, also called a bare-metal hypervisor, runs directly on the host machine; this approach results in improved performance and security, given that your services remain isolated within individual VMs. ![Proxmox architecture](../../assets/blog/9/proxmox_architecture.png) Proxmox Virtual Environment (PVE) is an [open source](https://git.proxmox.com) server management platform that is quite popular in the homelab community. Its robust user base provides extensive documentation, and the standard version of Proxmox offers all the essential features required for remote VM management. #### GPU Passthrough While I utilize Proxmox for passthrough of various devices, it's essential to keep a few things in mind before diving into the setup: - Set up the BIOS! As I mentioned on the hardware side, not every motherboard is suitable for a Proxmox passthrough setup: you need good [IOMMU groups](https://en.wikipedia.org/wiki/Input–output_memory_management_unit), and that varies even between two motherboards from the same brand. As a general rule you should get the highest-tier chipset available (like an X570 or X670e) or check out forums on this topic to find a decent motherboard. Here is the list of options I have enabled or disabled: ```md Enabled: IOMMU, DMAr Support, Above 4G Decoding, Re-Size BAR Disabled: CSM, Fast Boot, DMA Protection, PCIe ARI Support, AER CAP, Secure Boot, SR-IOV, Deep Sleep ``` - Get the latest [Proxmox VE 8 ISO](https://www.proxmox.com/en/downloads/proxmox-virtual-environment) - Read the [documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html), especially the ["Prepare Installation Media" section](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#installation_prepare_media). I had several installations fail post-install because the USB key was corrupted or because I used [Etcher](https://etcher.balena.io) instead of 'dd', so don't hesitate to try other USB keys! - Update Proxmox, use [clean scripts](https://tteck.github.io/Proxmox/) if needed - Check PCIe devices ```bash lspci -vv ``` - Check if IOMMU is activated ```bash dmesg | grep -e DMAR -e IOMMU -e AMD-Vi ``` - Check IOMMU groups > If your GPU is in its own group (with its audio device) you're all set to continue ```bash find /sys/kernel/iommu_groups/ -type l ``` - Read the documentation on [GPU passthrough](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough). > There are not many steps left on the host: adding the vfio modules to /etc/modules, updating the initramfs, and blacklisting nouveau/nvidia on the host; a rough sketch of these commands is shown below
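To make that note concrete, here is a minimal sketch of those host-side commands. It is an illustration rather than the exact steps from the post: the module list is the usual one for recent Proxmox kernels and the blacklist file name is arbitrary, so adapt it to your setup.

```bash
# On the Proxmox host: load the VFIO modules at boot
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

# Keep the host kernel from claiming the GPU
cat > /etc/modprobe.d/blacklist-gpu.conf <<'EOF'
blacklist nouveau
blacklist nvidia
EOF

# Rebuild the initramfs and reboot for the changes to take effect
update-initramfs -u -k all
reboot
```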
- Get Ubuntu Server and add it to the template folder ```bash cd /var/lib/vz/template/iso wget https://releases.ubuntu.com/22.04.3/ubuntu-22.04.3-live-server-amd64.iso ``` - Create your first VM with the following main settings > Note: do not pass any GPU at first launch and configure the VM properly (also make sure to disable secure boot) ```bash balloon: 0 bios: ovmf boot: order=scsi0;ide2;net0 cores: 12 cpu: host efidisk0: ssd:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M hostpci0: xx:xx:xx,pcie=1,rombar=0,x-vga=1 ide2: none,media=cdrom machine: q35 memory: 81920 meta: creation-qemu=8.0.2,ctime=xxxx name: ai net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr0,firewall=1 numa: 0 ostype: l26 scsi0: ssd:vm-100-disk-1,backup=0,discard=on,iothread=1,size=3700G,ssd=1 scsihw: virtio-scsi-single smbios1: uuid=xxxx-xxxx-xxxx sockets: 1 vmgenid: xxxx-xxxx-xxxx ``` #### Drivers: CUDA, cuDNN The most compatible CUDA version right now is 11.8, since TensorFlow and most of the projects out there have not been written with CUDA 12+ in mind. Here are the two links that you need to get started: - [CUDA Toolkit 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=deb_network) - [cuDNN for CUDA 11.8](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html) If you need even more isolation, with multiple CUDA versions for your projects, you should check out [NVIDIA NGC](https://catalog.ngc.nvidia.com/?filters=&orderBy=weightPopularDESC&query=), a repository of containerized applications for multiple use cases like [deep learning](https://catalog.ngc.nvidia.com/orgs/nvidia/collections/nvidia_dlfw).
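Once the driver and toolkit are installed inside the VM, a quick sanity check saves time before setting up any framework. This is a minimal sketch and assumes the packages above put the toolkit under /usr/local/cuda-11.8 (the usual default); adjust the path if your install differs.

```bash
# Is the passed-through GPU and its driver visible inside the VM?
nvidia-smi

# Is the CUDA 11.8 compiler reachable? (default install prefix assumed)
export PATH=/usr/local/cuda-11.8/bin:$PATH
nvcc --version
```

If both commands report sensibly, the passthrough and driver stack are in good shape.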
#### Remote access While you could definitely deploy a [Cloudflare tunnel like I showed in a previous post](https://erdaltoprak.com/blog/setting-up-cloudflare-argo-access-on-a-raspberry-pi/), I decided to use [Tailscale](https://tailscale.com) with the ["Allow local network access"](https://tailscale.com/kb/1103/exit-nodes/#:~:text=Open%20the%20Tailscale%20menu%20and,select%20Allow%20local%20network%20access.) option in order to use the ```192.168.X.X``` URL across all my devices without bothering to put more network infrastructure behind anything! ## Conclusion Throughout my AI journey, I noticed I was frequently toggling between apps/projects rather than genuinely engaging with them. It felt as though I wasn't experimenting on my own terms. This realization drove me to create my own AI Homelab, which also doubles as a NAS. The initial decisions surrounding the right hardware and maximizing software utility can be intricate, influenced by factors like budget, personal preferences, and tolerance for noise and heat. Nonetheless, there's a unique satisfaction in experimenting on a machine that truly belongs to you. If you found this guide useful you can also check the previous ones about [Setting up macOS for development](https://erdaltoprak.com/blog/setting-up-macos-for-development/) and [Abstracting local development environments through containers](https://erdaltoprak.com/blog/abstracting-local-development-environments-through-containers/). ## Blog Post: abstracting-local-development-environments-through-containers.md Frontmatter: ```yaml --- title: Abstracting local development environments through containers pubDate: '2022-04-20' description: Abstracting local development environments through containers author: Erdal Toprak heroImage: ../../assets/blog/5/banner.jpeg type: articles id: 5 --- ``` Content: ![Banner](../../assets/blog/5/banner.jpeg) Every time I set up my devices, I like to take the time to use the sensible defaults of the operating system and then evaluate my personal customization needs. On Mac devices, the default apps are quite good in terms of long-term support and integration across devices, so installing a complete development environment often causes some disruption in the everyday workflow experience. If you have ever tried to install a development environment on your computer, especially for Python, you must be familiar with this popular [xkcd](https://xkcd.com/1987/). ![xkcd python](../../assets/blog/5/xkcd.png) My goal going forward is to abstract my local development environment so I can be more flexible and avoid unnecessary risks by separating professional and personal usage, and this can be done through containers. ## Dev containers In [VS Code](https://code.visualstudio.com), the addition of development containers allows developers to use a reproducible and isolated environment while maintaining the flexibility of local files. ![Dev Containers](../../assets/blog/5/dev-containers.png) The above diagram from the Microsoft [documentation](https://code.visualstudio.com/docs/remote/containers) shows the development container architecture. It is quite easy to try for yourself: just install Docker and the ["Remote - Containers" extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) and you're all set. ![VSCode Extension](../../assets/blog/5/vscode-extension.png) In a new project, you can open the command palette, and you have the option to create a container according to the development environment of your choice. ![Container Creation](../../assets/blog/5/container-creation.png) In your project location you now have a ".devcontainer" folder containing the "Dockerfile" and the "devcontainer.json" configuration file that you can modify [according to your project](https://code.visualstudio.com/docs/remote/devcontainerjson-reference); a minimal example is sketched just below.
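As an illustration only (the image tag and post-create command here are hypothetical placeholders, not taken from the post), a bare-bones Python configuration can be bootstrapped from the shell like this:

```bash
# Create a minimal .devcontainer configuration in the current project
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "python-sandbox",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "postCreateCommand": "pip install -r requirements.txt"
}
EOF
```

Reopening the folder in VS Code should then offer to build and attach to that container.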
## Github Codespaces The motivation for this blog post and quick introduction to development containers is to take a step back and appreciate the possibilities offered by this solution. Combining development containers with [Github Codespaces](https://github.com/features/codespaces), you can not only abstract the need for a local development environment, but also (almost) abstract the need for a fully fledged operating system and use something like an iPad on the go and with your external monitors at your desk. ## Conclusion Personally, this kind of development setup with containers just makes sense in order to avoid, as much as possible, unnecessary conflicts with the operating system that I'm working with. I hope that this guide might help your workflow; you might also be interested in my previous post about [Setting up macOS for development](https://erdaltoprak.com/blog/setting-up-macos-for-development). ## Blog Post: 365-days-3-rings-and-a-journey-with-the-apple-watch.md Frontmatter: ```yaml --- title: '365 days, 3 rings and a journey with the Apple Watch' pubDate: '2024-01-01' description: A 365 days journey with the Apple Watch author: Erdal Toprak heroImage: ../../assets/blog/10/banner.png type: notes id: 10 --- ``` Content: ![Banner](../../assets/blog/10/banner.png) In May 2022, a vision took shape in my mind: to simplify my daily trips, be it a casual walk or a gym workout, armed with nothing more than my keys, AirPods, and an Apple Watch. This wasn't just a thought; it was a deliberate lifestyle choice. So, after meticulous planning, I embraced this vision by equipping myself with a stainless steel cellular Apple Watch Series 7. ## Preamble The first days with the watch felt great; you have to customize the watch experience in order to get the most out of it. You can create multiple watch faces for different occasions; for example, I have a modular watch face for everyday use and a sports one for workouts. You also have to customize the notifications and the focus mode in order to avoid unnecessary distractions. | ![Apple Watch Rings](../../assets/blog/10/apple-activity-rings.png) | |:--:| | *[Apple Watch rings documentation](https://support.apple.com/en-ca/guide/watch/apd3bf6d85a6/watchos)* | On the Apple Watch, daily activity is measured with three rings. The red ring tracks active calories burned, the green one monitors exercise minutes, and the blue ring indicates how frequently you've moved for at least a minute per hour. A few days prior to January, I decided to set a goal for myself: to close all three rings every day for a year. I knew it would be a challenge, and I tried completing a week just to see what it involved, but I was determined to see it through. ## The Journey Begins In the first weeks I just followed a simple routine of walking every afternoon, mechanically, just to close the rings. I always loved walking but I had to go the extra mile to close the rings. A few weeks later, I started to feel the benefits of walking. I was more energetic, I was sleeping better and I was more focused. This isn't just a subjective observation on my end; it is clearly reflected in the watch's statistics for my resting heart rate and sleep cycles. I was walking more than ever but I was feeling less tired. The days where I was active enough to close the rings without walking felt like I didn't do anything at all. ## The Highs and Lows The Apple Watch is a great companion, and even though the standing notifications can be annoying at times, they are a great reminder of our predominantly desk-bound lifestyle. The watch is also the best alarm clock I have ever used. The haptic feedback is a gentle tap on your wrist and it is a great way to wake up in combination with a home automation routine that turns on the lights at a thousand kelvin. The best experience was walking on a sunny day in the south of France with nothing more than my keys, AirPods and my watch. I was able to listen to music, track my workout and pay for my croissant with just my watch. It was a great feeling to be able to do all of that without being weighed down by my phone. However, the watch is not perfect. The battery, while capable of fast charging, doesn't last more than a day. At some point you have to get used to carrying an extra cable in the car, for longer trips, just in case.
The watch also has badges and challenges, but sometimes the monthly challenge can become out of reach if your previous month was super active, as there are no rest days. ## The gamification of fitness While the badge challenges can get out of hand, the gamification of fitness is a great way to keep you motivated. The badges are incremental steps that you can look forward to and the monthly challenges are a great way to keep you on your toes. The watch also has a social aspect to it. You can share your activity with your friends and compete with them. I have to admit that I was a bit skeptical about this feature, but I have a few close friends that I compete with and it is a great way to keep each other motivated. ## Data Analysis Diving into the data, I analyzed my 2023 workout logs extracted from the Health app. Through some XML parsing and data manipulation, I created a visual representation of my workout patterns (a tiny command-line sketch of this kind of tally is included at the end of this post). Unsurprisingly, walking dominated as my primary activity throughout the year. ![Workouts](../../assets/blog/10/workouts.jpg) ## Conclusion Completing the three rings for a year has been a great challenge and a fantastic exercise in discipline. If you're considering the watch, I highly recommend it based on my experience. It is a great companion and a great way to keep you motivated. I am looking forward to the next year and I am planning to keep the streak going for as long as it continues to bring value to my daily routine. Thank you for reading my thoughts on this year-long experience. You can also read my previous posts on [Apple Music is the last library focused music service](https://erdaltoprak.com/blog/apple-music-is-the-last-library-focused-music-service/) and [AI Homelab: A guide into hardware to software considerations](https://erdaltoprak.com/blog/ai-homelab-a-guide-into-hardware-to-software-considerations/).
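For reference, the simplest version of that kind of tally can be done straight from the command line. This is a rough sketch, not the exact processing used for the chart above; it assumes you have unzipped the Health app export and that its export.xml is in the current directory.

```bash
# Count workouts by activity type in an Apple Health export (quick and dirty)
grep -o 'workoutActivityType="[^"]*"' export.xml | sort | uniq -c | sort -rn
```

A proper analysis would parse the XML and filter by date, but even this one-liner gives a quick picture of which activities dominate.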