Description:
Topics covered in this episode:
Watch on YouTube
About the show
Sponsored by us! Support our work through:
Michael #1: Command Book App
- New app from Michael
- Command Book App is a native macOS app for developers, data scientists, AI enthusiasts and more.
- This is a tool I've been using lately to help build Talk Python, Python Bytes, Talk Python Training, and many more applications.
- It's a bit like advanced terminal commands or complex shell aliases, but hosted outside your terminal. This leaves the terminal free for interactive commands, exploration, and short actions.
- Command Book manages commands like "tail this log while I'm developing the app", "Run the dev web server with true auto-reload", and even "Run MongoDB in Docker with exactly the settings I need"
- I'd love it if you gave it a look, shared it with your team, and sent me feedback.
- Has a free version and a paid version.
- Built with Swift and SwiftUI
- Check it out at https://commandbookapp.com
Brian #2: uvx.sh: Install Python tools without uv or Python
Michael #3: Ending 15 years of subprocess polling
- by Giampaolo Rodola
- The standard library's subprocess module has relied on a busy-loop polling approach since the timeout parameter was added to Popen.wait() in Python 3.3, around 15 years ago
- The problem with busy-polling
- CPU wake-ups: even with exponential backoff (starting at 0.1ms, capping at 40ms), the system constantly wakes up to check process status, wasting CPU cycles and draining batteries.
- Latency: there's always a gap between when a process actually terminates and when you detect it.
- Scalability: monitoring many processes simultaneously magnifies all of the above.
- Plus L1/L2 CPU cache invalidations
- It’s interesting to note that waiting via poll() (or kqueue()) puts the process into the exact same sleeping state as a plain time.sleep() call. From the kernel's perspective, both are interruptible sleeps (see the sketch after this list).
- Here is the merged PR for this change.
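To make the difference concrete, here's a minimal sketch (not the actual CPython patch) contrasting the old busy-poll wait with an event-driven wait. It assumes Linux 5.3+ and Python 3.9+ for os.pidfd_open, and the backoff constants are only illustrative of the 0.1 ms to 40 ms range mentioned above.

```python
import os
import select
import subprocess
import time

# Old approach: busy-poll with exponential backoff, roughly what
# Popen.wait(timeout=...) did. Each iteration is a CPU wake-up.
proc = subprocess.Popen(["sleep", "2"])
delay = 0.0001  # start at 0.1 ms (illustrative)
while proc.poll() is None:
    time.sleep(delay)
    delay = min(delay * 2, 0.04)  # cap at 40 ms (illustrative)

# Event-driven approach (Linux 5.3+, Python 3.9+): a pidfd becomes
# readable when the child exits, so the kernel wakes us exactly once.
proc2 = subprocess.Popen(["sleep", "2"])
pidfd = os.pidfd_open(proc2.pid)
try:
    select.select([pidfd], [], [])  # interruptible sleep until exit
finally:
    os.close(pidfd)

print("exit codes:", proc.wait(), proc2.wait())
```

Both versions leave the waiting process in an interruptible sleep, but the pidfd version eliminates the periodic wake-ups and the detection latency described above.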
Brian #4: monty: A minimal, secure Python interpreter written in Rust for use by AI
- Samuel Colvin and others at Pydantic
- Still experimental
- “Monty avoids the cost, latency, complexity and general faff of using a full container based sandbox for running LLM generated code.”
- “Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single digit microseconds not hundreds of milliseconds.”
Extras
Brian:
Michael:
Joke: Silence, current side project!