I had the privilege of speaking at PyCon this year in Portland, Oregon. This
was my first time speaking at PyCon, and I was very excited to talk on a topic I
truly love: graceful degradation. The talk focuses on techniques you can use to
gracefully degrade and mitigate the impact of failures when the dependencies
within your system begin to fail.
You can find my slides here and the talk details here. As always, feel
free to contact me if you have any questions.
I wrote an article on the Appneta Blog where I discussed automating your
PyPI deployment with Travis CI:
As the sole maintainer of the python-traceview library, I’ve been following a
simple deploy process I cooked up for getting new releases of the library on
PyPI (the Python Package Index). Now that I’ve been maintaining the project for
almost 2 years, the “excitement” of doing a manual release has come and gone.
So naturally I began to ask myself: How can I automate releasing to PyPI?
If you’ve ever dreaded merging a pull request simply because you don’t want to
go through the hassle of doing a manual release, then you should check out
the article to learn how to automate your release.
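For context, here is roughly what the manual release being automated looks like (a sketch, assuming a setuptools-based project and the twine upload tool):
$ python setup.py sdist
$ twine upload dist/*
With Travis CI handling this, the same steps can run automatically whenever you cut a new release.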
I wrote an article on the Appneta Blog where I discussed line profiling
in Python:
If you’ve ever profiled code in Python, you’ve probably used the
cProfile module. While the cProfile module is quite powerful, I find it
involves a lot of boilerplate code to get it set up and configured before
you can get useful information out of it. Being a fan of the KISS
principle, I want an easy and unobtrusive way to profile my code. Thus, I find
myself using the line_profiler module due to its simplicity and superior
output format.
I’m a big fan of tools that are simple to use, yet powerful in nature.
Efficiency and productivity are important qualities in my daily workflow, so
check out the article to learn more!
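To give a flavor of just how unobtrusive line_profiler is, here’s a minimal sketch (the function is made up for illustration; kernprof injects the profile decorator at runtime, so no import is needed):
# sum_squares.py: a hypothetical hotspot we want per-line timings for
@profile
def sum_squares(n):
    total = 0
    for i in range(n):  # line_profiler reports hits and time for each line in this loop
        total += i * i
    return total

if __name__ == '__main__':
    sum_squares(100000)
Running kernprof -l -v sum_squares.py prints a per-line timing table straight to your terminal.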
So you just thought of the next killer Hubot script and you want to share
it with the world? Well if you haven’t been paying attention, you should know
that the hubot-scripts community has changed their contributing policy:
It’s now preferred that if you are able to, you should release your script as
part of a npm package built for Hubot.
And it’s a good change, because scripts can now be distributed and
versioned as individual npm packages. It also means that individual scripts
can easily declare dependencies, which is a big improvement.
If you’re familiar with the process for creating an npm package, then
you’re off to a good start. We’re going to use yeoman and the
generator-hubot-script to generate all the boilerplate necessary for
quickly creating an npm package for our Hubot script.
NOTE: The generated boilerplate is based on the hubot-example repository.
Now let’s go ahead and install yeoman and the generator-hubot-script using
npm:
$ npm install -g yo generator-hubot-script
Next, let’s create a directory for our script. For the sake of
this example, we’re going to assume we created a script called
foobar.coffee, so we want to appropriately namespace our package with
the name hubot-foobar:
$ mkdir hubot-foobar
$ cd hubot-foobar
$ yo hubot-script:foobar
Follow the prompts and wait until the npm install completes. Now let’s
initialize the directory as a Git repository and commit our initial files:
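A typical sequence looks like this (the commit message is just illustrative):
$ git init
$ git add .
$ git commit -m "Initial commit"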
Next, you want to go ahead and update the package.json file to include
all the relevant information for your script. Make sure the following fields
are all satisfactory:
name
description
version
author
license
keywords
repository
bugs
dependencies
More details on this file can be found here. Once you’re done, go ahead
and commit your changes:
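Something along these lines will do (again, the commit message is illustrative):
$ git add package.json
$ git commit -m "Update package metadata"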
Lastly, go ahead and review the README.md file. Feel free to make any
changes, and add any missing documentation you think is necessary. Make sure
to commit any changes you make.
Congratulations, you’ve packaged your Hubot script and are ready to share it
with the world! So go ahead and push this repository to GitHub or
publish the package on npm.
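If you take the npm route, publishing boils down to a single command once you’ve registered an account (via npm adduser):
$ npm publish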
After recently attending Velocity in NYC, I started to think more about why
performance always seems to be an afterthought with developers. As I pondered
this thought, I kept coming back to the following question:
How hard is it to get a perfect PageSpeed Insights score?
If you’d like to know the answer, then head on over to the Appneta Blog and
check out the new blog post I wrote, where I explore this topic in more detail.
I recently wrote a blog post on the Appneta Blog about client side latency
for users on the web.
So while the effects of poor performance are obvious, it makes one wonder about the relationship between client latency and the “perception of speed”. After all, the user can trigger many state change events (page load, submitting a form, interacting with a visualization, etc.), and all of these events have an associated latency to the client. However, are certain types of latency more noticeable to the user than others?
This is an area that web developers need to think about closely, and it’s
important because it could have an impact on your user engagement and user
experience.
Heroku is quite the interesting topic these days. While there are many strong opinions about the platform itself, I feel the one thing they get right is deployment.
$ git push heroku master
That’s it. Dead simple.
Now there really isn’t any magic behind this, as Heroku harnesses the power of Git hooks. To mimic this behavior, you can use a post-receive hook: a shell script that runs after a push has been received, and which can be used to update other services or notify users. The beauty is that this functionality can easily be replicated for deploying any of your sites.
For this example, let’s keep it simple and deploy a static blog to a single web server.
First order of business is to make sure you can easily SSH onto your server. So if you haven’t already, make sure your SSH public key is copied to your server:
[dan@local]$ ssh-copy-id username@server.org
Now, on your server, we want to create a bare Git repository. This will be the destination repository when you perform a git push to deploy:
[dan@server]$ cd ~/git
[dan@server]$ mkdir myblog.git
[dan@server]$ cd myblog.git
[dan@server]$ git init --bare
Next thing to do is set up our simple post-receive script. Let’s assume for the sake of this example that our blog is served on our web server from the following directory: /www/myblog.com. First, create the hook at hooks/post-receive inside your bare repository:
#!/bin/sh
echo 'Deploying to Production...'
GIT_WORK_TREE=/www/myblog.com git checkout -f
echo 'Success!'
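One easy thing to forget: the hook must be executable, or Git will silently skip it:
[dan@server]$ chmod +x hooks/post-receive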
Ok, so here is where the magic happens. The GIT_WORK_TREE environment variable tells Git where to place the checked-out source code. Thus, we are performing a force checkout of the latest code into the working tree directory (aka deployment)!
NOTE: One side benefit of the working tree approach is that you do not accidentally serve your .git directory!
Now that we have a new remote repository, let’s go ahead and add it to our local git repository:
[dan@local]$ git remote add production ssh://username@remote-server.org/~/git/myblog.git
The last thing left now is to deploy:
[dan@local]$ git push production master
Counting objects: 9, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 455 bytes, done.
Total 5 (delta 3), reused 0 (delta 0)
remote: Deploying to Production...
remote: Success!
To ssh://username@remote-server.org/~/git/myblog.git
52056c4..1b5c293 master -> master
Now this is a very simple example of a deployment script. Feel free to get as crazy as you want…send emails, message chat rooms, call Fabric tasks, etc! The sky is the limit.
Feel free to fork my Gist to get yourself started!
I wrote an article on the Appneta Blog about my new open source project
called Burndown.
Although there are many unique characteristics of each software development methodology, one thing is consistent: the goal of tracking progress. Everyone wants to know how a particular project or task is coming along and when it’s going to be complete. So even if you don’t believe in due dates and delivering to a schedule, it’s still useful to know, and to be able to communicate, your current progress to others.
To aid us on our never-ending quest to know just how awesome we really are, we’ve developed Burndown, an open source tool to assist in tracking the progress of a GitHub milestone!
For the unfamiliar, FFmpeg is a complete, cross-platform solution to record, convert and stream audio and video. If you have a task that falls into any of those categories, FFmpeg can do it like no other!
So without derailing this post and rambling on about my audio setup at home, all you have to know is that I have a line-level audio source plugged into the sound card’s Line In on one computer (call it Computer A) and I want to pipe that audio to any other computer in my house. So conceptually:
Audio Source -> Computer A (Line In) -> Network -> Computer B (Speakers)
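To find the exact name FFmpeg expects for the capture device, we can enumerate the DirectShow devices (the dshow output below tells us this is a Windows box; this is FFmpeg’s standard device-listing invocation):
$ ffmpeg -list_devices true -f dshow -i dummy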
[dshow @ 01ffc400] "Line In (High Definition Audio "
And bingo! The second to last line is just what we want, so keep note of this!
For this example, let’s assume Computer B’s IP address is 192.168.1.17, which is our streaming destination. So hit play on your audio source and fire off the following to begin streaming:
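Here’s a minimal sketch of the kind of command that fits here, assuming we encode with MP3 and ship an MPEG-TS stream over UDP (the bitrate and container are illustrative choices, and the device name is the truncated one from the listing above; you may need the full name on your system):
$ ffmpeg -f dshow -i audio="Line In (High Definition Audio " -acodec libmp3lame -ab 192k -f mpegts udp://192.168.1.17:1234
On Computer B, something like ffplay udp://192.168.1.17:1234 should then pick up and play the stream.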