Automated git commit messages from local LLM
This is something I started using for my private projects in order to guarantee good commit messages (where “good” is, of course, subjective and debatable; the alternative is that I just write “fix” or “feat” without any further context).
We’re creating these messages with a local LLM so that we don’t have to send our code to a remote server. I’ve tried out multiple small LLMs to see which one generates solid messages reasonably fast.
The models I’ve tried out are:
- qwen2.5-coder-1.5b-instruct-128k
- qwen2.5-coder-3b-instruct-128k
- openai gpt-4o-mini
- mistral-nemo-instruct-2407
- llama-3.2-8x3b-moe-dark-champion-instruct-uncensored-abliterated-18.4b
- Phi4
- Mistral-small-24b-instruct-2501
Haiku and GPT-4o-mini were included to provide a good comparison point for the quality of the local LLMs.
The Setup
Let’s begin with the actual script. I’ve decided to name it git-haiku. Create a git-haiku file in a directory that is available from your $PATH.
#!/bin/bash
# Call Python script and pass all arguments
exec python3 ~/.local/share/git-haiku.py "$@"
Make sure the script is executable:
chmod +x git-haiku
This script will be called from git, and will then execute a python script that contains the actual logic.
Because the file is named git-haiku, git picks it up as a custom subcommand (any executable named git-<something> on your $PATH can be invoked as git <something>). You can now do:
git haiku
This will, of course, fail because the git-haiku.py Python script doesn’t exist yet. Next, we implement it; save it as ~/.local/share/git-haiku.py, the path the wrapper expects.
The Script
import subprocess
import os
import sys
import json
import requests


def get_staged_changes():
    """Return the staged diff (with extra context lines) as a string."""
    try:
        result = subprocess.run(['git', 'diff', '--cached', '-U10'], capture_output=True, text=True)
        if result.returncode != 0:
            raise Exception("Failed to get staged changes")
        return result.stdout
    except Exception as e:
        print(f"Error: {e}")
        return ""


def send_to_llm(prompt):
    """Send the prompt to the local OpenAI-compatible server and return its reply."""
    url = "http://localhost:1234/v1/chat/completions"
    headers = {
        "Content-Type": "application/json"
    }
    payload = {
        "model": "mistral-small-24b-instruct-2501",
        "messages": [
            { "role": "system", "content": "You are an expert programmer. You have an uncanny ability for expressing yourself in terse English." },
            { "role": "user", "content": prompt }
        ],
        "temperature": 0.7,
        "max_tokens": -1,
        "stream": False
    }
    try:
        response = requests.post(url, headers=headers, json=payload)
        response.raise_for_status()
        return response.json().get('choices')[0].get('message').get('content')
    except Exception as e:
        print(f"Error sending request: {e}")
        return ""


def commit_changes(commit_message):
    """Commit the staged changes with the generated message."""
    try:
        result = subprocess.run(['git', 'commit', '-m', commit_message], capture_output=True, text=True)
        if result.returncode != 0:
            raise Exception("Failed to commit changes")
        print(f"Changes committed with message: {commit_message}")
    except Exception as e:
        print(f"Error: {e}")


def main():
    # Get staged changes
    staged_changes = get_staged_changes()
    if not staged_changes:
        print("No staged changes to commit.")
        return

    # Combine the staged changes with the instruction prompt
    prompt = f"{staged_changes}\n\nPlease write a SHORT commit message for the above changes. JUST THE COMMIT MESSAGE, NOTHING ELSE."

    response = send_to_llm(prompt)
    if not response:
        print("Failed to get a response from the LLM.")
        return

    # Collapse the reply into a single line
    response = response.strip().replace("\n", " ")

    # Commit the changes with the response as the commit message
    commit_changes(response)


if __name__ == "__main__":
    main()
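With both files in place, a typical run looks something like this (the generated message will, of course, vary by model and diff):
git add .
git haiku
# Changes committed with message: Add questionnaire check to LibraryScreen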
Feel free to adapt the system prompt and the user prompt to your liking. I do have a variant that writes all commit messages as Japanese haikus.
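Such a variant only needs a different system prompt; something along these lines works (the exact wording is up to you):
# Illustrative alternative system prompt for a haiku variant — swap it into the
# "system" message content in send_to_llm.
HAIKU_SYSTEM_PROMPT = (
    "You are an expert programmer and poet. "
    "Summarize the staged changes as a single haiku in Japanese. "
    "Respond with the haiku only, nothing else."
)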
To run the local models, you can use Ollama or LMStudio. I’m using LMStudio because I like having a GUI for quick experimentation.
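The script only talks to an OpenAI-compatible endpoint, so pointing it at Ollama instead should just mean changing the URL and model name in send_to_llm, roughly like this (the model tag is only an example; use whatever you have pulled):
    # Ollama's OpenAI-compatible endpoint listens on port 11434 by default.
    url = "http://localhost:11434/v1/chat/completions"
    payload = {
        "model": "qwen2.5-coder:3b",  # example tag, e.g. after `ollama pull qwen2.5-coder:3b`
        "messages": [
            { "role": "system", "content": "You are an expert programmer. You have an uncanny ability for expressing yourself in terse English." },
            { "role": "user", "content": prompt }
        ],
        "temperature": 0.7,
        "stream": False
    }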
Bonus: Lazygit integration
I’m using lazygit to interact with git. I’ve added a custom keybinding that calls git haiku when I press H.
On macOS, the config file is located at ~/Library/Application\ Support/lazygit/config.yml.
customCommands:
  - key: "H"
    context: "files"
    command: "git haiku"
    subprocess: true
This allows me to make a commit with a generated message just by pressing H.
Bonus: Comparing LLMs
Here are some commit messages generated by each LLM based on the following changes:
diff --git a/screens/LibraryScreen.tsx b/screens/LibraryScreen.tsx
index 00fd8cb..c17cfef 100644
--- a/screens/LibraryScreen.tsx
+++ b/screens/LibraryScreen.tsx
@@ -5,11 +5,21 @@ import { VStack } from '@/components/ui/vstack';
 import PageContainer from '@/components/ui/sorcekit/pagecontainer';
 import { useTranslation } from '@/translations/useTranslation';
 import { InfoPopup, InfoPopupKind } from '@/components/ui/molecules/InfoPopup';
+import { UserQuestionaire } from '@/components/questionaire';
 
 export const LibraryScreen: React.FC = () => {
   const repository = useRepository();
   const { t } = useTranslation();
   const firstCoach = repository.user.coaches[0];
+  const hasFilledOutQuestionnaire = repository.user.hasFilledOutQuestionnaire;
+
+  if (!hasFilledOutQuestionnaire) {
+    return (
+      <PageContainer>
+        <UserQuestionaire />
+      </PageContainer>
+    );
+  }
+
   if (!firstCoach) {
     return (
Mistral-small-24b-instruct-2501 [11 tok/s]
- Add questionnaire check and conditional rendering
- Add questionnaire check for LibraryScreen
- Add User Questionnaire and conditional rendering
Microsoft/phi-4 [16 tok/s]
- Add questionnaire check to LibraryScreen, redirecting users until completed.
- Implement questionnaire check before accessing LibraryScreen
- Add conditional rendering for user questionnaire in LibraryScreen
llama-3.2-8x3b-moe-dark-champion-instruct-uncensored-abliterated-18.4b [75 tok/s]
- Added User Questionnaire component and questionnaire logic for new users.
- Added questionnaire to library screen for new users.
- Added questionnaire component for users who haven’t filled it out yet.
mistral-nemo-instruct-2407 [60 tok/s]
- Add user questionnaire check and redirect on LibraryScreen
- Add user questionnaire and conditional rendering
- Add user questionnaire check and redirect in LibraryScreen
qwen2.5-coder-3b-instruct-mlx [100 tok/s]
- Add questionnaire check to LibraryScreen for new users.
- Fixes: Display questionnaire only if not filled out and first coach is not available
- Add UserQuestionaire component for first-time users and redirect them if they haven’t filled out the questionnaire yet.
GPT-4o-mini
- Add questionnaire check and display before library content
- Add questionnaire prompt if not filled out in LibraryScreen
Locally, I decided to use the llama-3.2-8x3b MoE because my MacBook has enough RAM to load a slightly bigger model, with the upside that it handles a wider range of programming languages. Qwen2.5-coder-3b-instruct-mlx is the best option for machines with less RAM.
Context Length
However, most of the above models are limited to a 32k (or smaller) context window. If you regularly make large changes in your codebase, you need a fast model with a large context window.
The best solution I found for this is the Qwen 2.5 Coder Unsloth 128k set of models, which range from 1.5b all the way up to 14b. In this example, I tried the 1.5b and 3b with a fairly large change (~38k tokens).
qwen2.5-coder-3b-instruct-128k
- Response: Add React integration and update table styles to match light theme.
- Prompt Processing Time: 53 sec
- Tokens/sec: 45
qwen2.5-coder-1.5b-instruct-128k
- Response: Add support for @mantine/core and mantine-react-table in Astro.config.mjs. Added @tabler/icons/@mantine/utils in Page.astro. Added @floating-ui/react@0.19.2 in BlogPost.astro.
- Prompt Processing Time: 28 sec
- Tokens/sec: 66
I find the quality of the 1.5b model significantly worse than the 3b model.
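If you want git-haiku.py to warn you before a diff blows past a 32k window, a rough character-based estimate is usually close enough. A minimal sketch, assuming roughly four characters per token (a rule of thumb, not the model's real tokenizer):
def estimate_tokens(text: str) -> int:
    # ~4 characters per token is a common rule of thumb for code and English text.
    return len(text) // 4

staged_changes = get_staged_changes()
if estimate_tokens(staged_changes) > 32_000:
    print("Warning: staged diff likely exceeds a 32k context window; consider a 128k model.")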