I had the privilege to speak at PyCon this year in Portland, Oregon. This was my first time speaking at PyCon, and I was very excited to talk on a topic I truly love: graceful degradation. The talk focuses on techniques you can use to gracefully degrade and mitigate the impact of failures when the dependencies within your system begin to fail.
As the sole maintainer of the python-traceview library, I’ve been following a simple deploy process I cooked up for getting new releases of the library on PyPI (the Python Package Index). Now that I’ve been maintaining the project for almost 2 years, the “excitement” of doing a manual release has come and gone. So naturally I began to ask myself: How can I automate releasing to PyPI?
If you’ve ever dreaded merging a pull request simply because you don’t want to go through the hassle of doing a manual release, then you should check out the article to learn how to automate your release.
If you’ve ever profiled code in Python, you’ve probably used the cProfile module. While the cProfile module is quite powerful, I find it involves a lot of boilerplate code to get it set up and configured before you can get useful information out of it. Being a fan of the KISS principle, I want an easy and unobtrusive way to profile my code. Thus, I find myself using the line_profiler module due to its simplicity and superior output format.
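As a minimal sketch of just how unobtrusive it is (the function here is a toy example): decorate whatever you want profiled with @profile and let kernprof do the rest.

```python
# example.py: a toy function to profile line by line
@profile  # injected by kernprof at runtime; no import needed
def compute():
    total = 0
    for i in range(100000):
        total += i * i
    return total

if __name__ == "__main__":
    compute()
```

Then `kernprof -l -v example.py` prints hit counts and timings for every line of the function, with zero setup code.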
I’m a big fan of tools that are simple to use, yet powerful in nature. Being both efficient and productive are important qualities in my daily workflow, so check out the article to learn more!
So you just thought of the next killer Hubot script and you want to share it with the world? Well if you haven’t been paying attention, you should know that the hubot-scripts community has changed their contributing policy:
It’s now preferred that if you are able to, you should release your script as part of a npm package built for Hubot.
And it’s a good change, because this now allows scripts to be distributed and versioned as individual NPM packages. This also means that individual scripts can easily declare dependencies, which is a big improvement.
If you’re familiar with the process for creating an NPM package, then you’re off to a good start. We’re going to be using yeoman and the generator-hubot-script to generate all the boilerplate necessary for quickly creating an NPM package for our Hubot script.
NOTE: The generated boilerplate is based on the hubot-example repository.
Now let’s go ahead and install yeoman and the generator-hubot-script:
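```bash
# install the yeoman CLI and the Hubot script generator globally
# (assumes node and npm are already installed)
npm install -g yo generator-hubot-script
```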
Next, let’s go ahead and create a directory for our script. For the sake of this example, we’re going to assume we created a script called foobar.coffee, so we want to appropriately namespace our package as hubot-foobar:
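```bash
# scaffold the package (directory and generator names assumed from the example above)
mkdir hubot-foobar
cd hubot-foobar
yo hubot-script
```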
Follow the prompts and wait until the NPM install completes. Now let’s initialize the directory as a Git repository and commit our initial files:
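```bash
git init
git add .
git commit -m "Initial commit"  # commit message is illustrative
```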
Cool, now that all the boilerplate is in place, it’s out with the old and in with the new:
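```bash
# swap the generated example script for our own (exact paths may vary)
cp ../foobar.coffee src/hubot-foobar.coffee
git add src/
git commit -m "Add the foobar script"
```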
Unit testing is cool, so let’s go ahead and set that up:
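```bash
# install dev dependencies and run the generated test suite
npm install
npm test
```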
Once you have some passing test cases, let’s go ahead and commit these changes:
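```bash
git add test/
git commit -m "Add unit tests"  # message is illustrative
```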
Next, you want to go ahead and update the package.json file to include all the relevant information for your script. Make sure the following fields are all satisfactory:
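For example, fields along these lines deserve a second look (every value here is a placeholder):

```json
{
  "name": "hubot-foobar",
  "description": "A Hubot script that does foobar things",
  "version": "0.1.0",
  "author": "Your Name <you@example.com>",
  "license": "MIT",
  "keywords": ["hubot", "hubot-scripts", "foobar"],
  "repository": {
    "type": "git",
    "url": "https://github.com/you/hubot-foobar.git"
  }
}
```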
More details on this file can be found here. Once you’re done, go ahead and commit your changes:
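```bash
git add package.json
git commit -m "Update package.json metadata"  # message is illustrative
```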
Lastly, go ahead and review the README.md file. Feel free to make any changes, and add any missing documentation you think is necessary. Make sure to commit any changes you make:
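```bash
git add README.md
git commit -m "Update README"  # message is illustrative
```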
Congratulations, you’ve packaged your Hubot script and are ready to share it with the world! So go ahead and push this repository to Github or publish the package on NPM.
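If you go the NPM route, publishing is a single command (assuming you’ve already registered with npm adduser):

```bash
npm publish
```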
After recently attending Velocity in NYC, I started to think more about why performance always seems to be an afterthought with developers. As I pondered this thought, I kept coming back to the following question:
How hard is it to get a perfect PageSpeed Insights score?
If you’d like to know the answer, then head on over to the Appneta Blog and check out the new blog post I wrote where I explore this topic in more detail.
- 30-day summary view of a Github repository
- Filter Github milestone issues by label
- Several milestone view enhancements
For more details, head on over and check it out.
I recently wrote a blog post on the Appneta Blog about client side latency for users on the web.
So while the effects of poor performance are obvious, it makes one wonder about the relationship between client latency and the “perception of speed”. After all, the user can trigger many state change events (page load, submitting a form, interacting with a visualization, etc.), and all of these events have an associated latency to the client. However, are certain types of latency more noticeable to the user than others?
This is an area that web developers need to think about closely, and it’s important because it could have an impact on your user engagement and user experience.
Heroku is quite the interesting topic these days. While there are many strong opinions about the platform itself, I feel the one thing they get right is deployment.
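You commit your changes, then push to deploy:

```bash
git push heroku master
```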
That’s it. Dead simple.
Now there really isn’t any magic behind this, as Heroku harnesses the power of Git hooks. To mimic this behavior, you can use a post-receive hook, which is just a bash script that runs after the entire process is completed and can be used to update other services or notify users. The beauty is that this functionality can easily be replicated for deploying any of your sites.
For this example, let’s keep it simple and deploy a static blog to a single web server.
First order of business is to make sure you can easily SSH onto your server. So if you haven’t already, make sure your SSH public key is copied to your server:
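```bash
# copy your public key to the server (hostname is illustrative)
ssh-copy-id user@myserver.com
```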
Now, on your server, we want to create a bare Git repository. This will be the destination repository for when you perform a git push for deployment:
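```bash
# on the server; repository name is illustrative
ssh user@myserver.com
mkdir myblog.git
cd myblog.git
git init --bare
```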
Next thing to do is setup our simple post-receive script. Let’s assume for the sake of this example that our blog is served on our web server from the following directory: /www/myblog.com. First create the hook in your git repository:
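```bash
# inside the bare repository on the server
touch hooks/post-receive
chmod +x hooks/post-receive
```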
Then update the post-receive hook:
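```bash
#!/bin/sh
# check out the pushed code into the web root on every push
GIT_WORK_TREE=/www/myblog.com git checkout -f
```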
Ok, so here is where the magic happens. The GIT_WORK_TREE environment variable is used as a destination for your checked out source code, with any changes you might have made. Thus, we are performing a force checkout to the working tree directory (aka deployment)!
NOTE: One side benefit of the working tree approach is that you do not accidentally serve your .git directory!
Now that we have a new remote repository, let’s go ahead and add it to our local git repository:
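```bash
# remote name and URL are illustrative
git remote add deploy user@myserver.com:myblog.git
```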
The last thing left now is to deploy:
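```bash
# pushing master triggers the post-receive hook on the server
git push deploy master
```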
Now this is a very simple example of a deployment script. Feel free to get as crazy as you want: send emails, message chat rooms, call fabric tasks, etc. The sky is the limit, so get as creative as you’d like!
Feel free to fork my Gist to get yourself started!
Although there are many unique characteristics for each software development methodology, one thing is consistent: the goal of progress. Everyone wants to know how a particular project or task is coming along and when it’s going to be complete. So even if you don’t believe in due dates and delivering to a schedule, it’s still useful to know your current progress and be able to communicate it to others.
To aid us on our never-ending quest to know just how awesome we really are, we’ve developed Burndown, an open source tool to assist in tracking progress of a Github milestone!
I like to listen to records. I also like to listen to my records when I code. So recently, I said to myself,
“How can I easily pipe audio between rooms in my house using my network and open source software?”
The answer: FFmpeg!
For the unfamiliar, FFmpeg is a complete, cross-platform solution to record, convert and stream audio and video. If you have a task that falls into any of those categories, FFmpeg can do it like no other!
So without derailing this post and rambling on about my audio setup at home, all you have to know is I have a line level audio source plugged into the sound card’s Line In on one computer and I want to pipe that audio to any other computer in my house. So conceptually:
[Line Level Audio Source]---->[Computer A]====(Home Network)====>[Computer B]
Still with me? Ok good! The first thing I want to do is figure out what audio inputs FFmpeg can see:
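```bash
# one way to enumerate capture devices (assuming Linux/ALSA);
# on macOS, something like: ffmpeg -f avfoundation -list_devices true -i ""
arecord -l
```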
And bingo! One of those entries is the sound card’s Line In, so keep note of its device name!
For this example, let’s assume Computer B’s IP address is 192.168.1.17, which is our streaming destination. So hit play on your audio source and fire off the following to begin streaming:
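```bash
# on Computer A: capture the Line In and stream it to Computer B
# (ALSA device, codec, and port are assumptions; adjust for your setup)
ffmpeg -f alsa -i hw:0 -acodec libmp3lame -ab 192k -f mpegts udp://192.168.1.17:1234
```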
Now, just set Computer B to listen for audio:
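```bash
# on Computer B: receive on the same port and play whatever arrives
# (ffplay ships with FFmpeg)
ffplay udp://192.168.1.17:1234
```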
BOOM! Point to point audio streaming using the power of FFmpeg in your own home!