keydb.cfg for MakeMKV (Apr 2026)

This configuration assumes you are using KeyDB as a job queue, metadata cache, or progress tracker for a MakeMKV automation script.

```
# ============================================
# KeyDB Configuration for MakeMKV Automation
# ============================================
# Purpose: high-performance job queue for disc ripping
# Tuned for: many parallel ripping tasks, large metadata

# --- NETWORK & PORT ---
port 6379
tcp-backlog 511
timeout 300
tcp-keepalive 300

# --- MEMORY MANAGEMENT (optimized for large file lists) ---
maxmemory 8gb
maxmemory-policy allkeys-lru
maxmemory-samples 10

# --- SNAPSHOTTING (disable for pure queue mode) ---
save ""              # Disable RDB snapshots to reduce I/O
appendonly no        # Disable AOF (queue can rebuild from source)

# --- THREADING (KeyDB specific) ---
server-threads 4     # Match CPU cores for parallel ripping queues
server-thread-affinity false
io-threads 4
io-threads-do-reads yes

# --- REPLICATION (optional: for backup of job status) ---
replica-serve-stale-data yes
replica-read-only yes

# --- SECURITY & COMMANDS ---
requirepass MakemkvR0cks!   # CHANGE THIS
rename-command FLUSHALL ""
rename-command FLUSHDB ""
rename-command CONFIG "Makemkv_CONFIG_ADMIN"

# --- SLOW LOG & MONITORING ---
slowlog-log-slower-than 10000   # 10 ms, good for queue operations
slowlog-max-len 128
latency-monitor-threshold 100

# --- ADVANCED QUEUE SETTINGS ---
# Prevent head-of-line blocking for large MKV jobs
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
```

Suggested key structure:

```
makemkv:queue:waiting      -> List of pending disc paths
makemkv:queue:processing   -> Hash of active jobs (pid -> disc)
makemkv:status:{job_id}    -> Hash with progress, ETA, title
makemkv:completed          -> Sorted Set (timestamp -> output file)
makemkv:failure            -> List of failed discs + reason
```

Bonus: a Lua script for an atomic job claim (atomic pop + register). Save it as claim_job.lua and load it into KeyDB:

```sh
# Load the script (SCRIPT LOAD returns its SHA1 for use with EVALSHA)
keydb-cli --pass MakemkvR0cks! SCRIPT LOAD "$(cat claim_job.lua)"

# Push a disc to the queue
keydb-cli --pass MakemkvR0cks! LPUSH makemkv:queue:waiting "/dev/sr0"

# Worker loop (simplified)
while true; do
  JOB=$(keydb-cli --pass MakemkvR0cks! EVALSHA <hash> 2 \
    makemkv:queue:waiting makemkv:queue:processing \
    "worker-$$" "/dev/sr0")
  if [ "$JOB" ]; then
    makemkvcon mkv disc:0 all /output --progress=-same
    keydb-cli --pass MakemkvR0cks! HDEL makemkv:queue:processing "worker-$$"
  fi
  sleep 2
done
```
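Note that `<hash>` above stands for the SHA1 digest that SCRIPT LOAD returns. Redis and KeyDB identify a loaded script by the SHA1 of its source bytes, so a worker can also pin the hash locally with the standard library (a stdlib sketch; the digest only matches if the file bytes are identical to what was loaded):

```python
# Compute the EVALSHA hash of a Lua script locally.
# Redis/KeyDB identify loaded scripts by the SHA1 of the script body.
import hashlib

def script_sha(lua_source: str) -> str:
    """Return the 40-char hex SHA1 that SCRIPT LOAD would print."""
    return hashlib.sha1(lua_source.encode("utf-8")).hexdigest()
```

This avoids parsing the SCRIPT LOAD output at deploy time, at the cost of having to keep the local file byte-for-byte in sync with what was loaded.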

The script itself, claim_job.lua:

```lua
-- Atomic claim from waiting queue to processing
-- KEYS[1] = waiting list
-- KEYS[2] = processing hash
-- ARGV[1] = worker_id (e.g., PID or hostname)
-- ARGV[2] = disc_path
-- Returns: claimed job info or nil
local job = redis.call('LPOP', KEYS[1])
if job then
  redis.call('HSET', KEYS[2], ARGV[1], job)
  return job
end
return nil
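For readers less comfortable with Lua, the same claim logic can be sketched as an in-memory Python analogue (illustration only; it deliberately lacks the server-side atomicity that makes the Lua version safe across concurrent workers):

```python
from collections import deque

def claim_job(waiting, processing, worker_id):
    """In-memory mirror of claim_job.lua: pop the next job from the
    waiting list (LPOP) and register it under the worker (HSET)."""
    if not waiting:              # LPOP on an empty list returns nil
        return None
    job = waiting.popleft()      # LPOP KEYS[1]
    processing[worker_id] = job  # HSET KEYS[2] ARGV[1] job
    return job
```

Because KeyDB runs the Lua script as a single atomic unit, no two workers can claim the same disc; the Python version above would need a lock to give the same guarantee.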

This setup gives you a production-grade, multithreaded job queue for MakeMKV automation. Adjust thread counts and memory limits based on your actual hardware.
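If the automation grows beyond a couple of scripts, it helps to centralize the key layout so every worker spells the keys the same way. A minimal sketch (the constants simply mirror the schema suggested earlier; nothing here is part of MakeMKV or KeyDB itself):

```python
# Key-naming helpers mirroring the suggested makemkv:* layout.
WAITING = "makemkv:queue:waiting"        # List of pending disc paths
PROCESSING = "makemkv:queue:processing"  # Hash: worker_id -> disc
COMPLETED = "makemkv:completed"          # Sorted set: timestamp -> output file
FAILURE = "makemkv:failure"              # List of failed discs + reason

def status_key(job_id):
    """Per-job hash holding progress, ETA and title."""
    return f"makemkv:status:{job_id}"
```

Keeping the names in one module also makes it trivial to rename the prefix later without hunting through shell scripts.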



Sergii Demianchuk Blog
