Inspired by https://github.com/mathiasbynens/dotfiles.
A simple framework to automate and document the setup of my development machine as far as is reasonable.
Full automation doesn’t make much sense: it is a lot of work, needs to be adjusted every now and then, and won’t be used more often than every few years. The invested time just wouldn’t pay off.
But at least this framework provides some tools to recreate a development setup on a new machine without too much effort.
Currently this repo is optimized for Macs, but it can also be adjusted for use with different Linux distributions. Even a parallel setup with synchronization between a Linux and a macOS system proved maintainable in the past.
- The dotfiles are stored in git and can be checked out to any folder on the target workstation.
- The repo is copied into place via `bootstrap.sh` (rsync) instead of managing per-file symlinks.
- Complex third-party configuration setups are managed via git submodules.
On macOS we use Homebrew and install:
- awk
- aws-console
- aws-google-auth
- aws-iam-authenticator
- awscli
- bash
- bash-completion
- bash-language-server
- cmake
- coreutils
- emacs-plus@29
- gettext
- git
- git-lfs
- gnu-getopt
- gnupg
- graphviz
- helm
- kubectx
- kubernetes-cli
- make
- markdown
- node
- nvm
- pandoc
- plantuml
- rbenv
- scalastyle
- sops
- terraform-ls
- the_silver_searcher
- wget
- yaml-language-server
If possible, a `brew list` on the old machine should be used to get a list of all installed packages.
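Homebrew can generate this list itself: `brew leaves` prints only the top-level formulae (skipping dependencies), and `brew bundle dump` writes a `Brewfile` that `brew bundle` can replay on the new machine. A `Brewfile` for the list above would start roughly like this (a hand-written sketch, not dumped from a real machine; the `emacs-plus` tap name is an assumption):

```ruby
# Brewfile (sketch) — replay on the new machine with `brew bundle`
tap "d12frosted/emacs-plus"   # assumption: tap providing emacs-plus@29
brew "bash"
brew "bash-completion"
brew "coreutils"
brew "emacs-plus@29"
brew "git"
brew "git-lfs"
```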
Run `./update.sh` from time to time.
`bootstrap.sh` copies the tracked configuration into `$HOME` using rsync.

Dry run first (recommended):

```sh
./bootstrap.sh -n
```

Apply the changes once the output looks right:

```sh
./bootstrap.sh
```
- byobu
- thefuck
- fasd
- jq
- pass
- watch
- Ollama (local LLMs for code/chat)
Run powerful local models for coding and chat.
Quick start:

```sh
bin/ollama-install.sh --yes --start
bin/ollama-coder.sh --repl                    # interactive coder chat (qwen2.5-coder:7b)
bin/ollama-coder.sh "Write a Bash script..."
```
Tips:
- Use `-m qwen2.5-coder:14b` for higher quality if you have the RAM/VRAM.
- General models (non-coding): `llama3.1:8b`, `qwen2.5:7b`.
- Server host: set `OLLAMA_HOST` or pass `--host` to the scripts.
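One way such a wrapper script might resolve the server address, sketched in pure bash (the real `bin/ollama-*` scripts may parse their flags differently; `resolve_host` and its precedence order are assumptions, though `127.0.0.1:11434` is Ollama's documented default):

```shell
#!/usr/bin/env bash
# Hypothetical host resolution for the ollama-* wrappers:
# an explicit --host flag wins, then $OLLAMA_HOST, then Ollama's default.
resolve_host() {
  local flag_host=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --host) flag_host="$2"; shift 2 ;;
      *) shift ;;
    esac
  done
  printf '%s\n' "${flag_host:-${OLLAMA_HOST:-127.0.0.1:11434}}"
}

unset OLLAMA_HOST
resolve_host                            # falls back to the default
OLLAMA_HOST=gpu-box:11434 resolve_host  # environment overrides the default
resolve_host --host 10.0.0.5:11434      # flag overrides everything
```

Keeping the flag-over-environment-over-default order means a one-off `--host` never requires touching the shell environment.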