Operator Error

adventures in software development and continuous learning

PyCon 2016 Talk: Remote Calls != Local Calls

I had the privilege of speaking at PyCon this year in Portland, Oregon. This was my first time speaking at PyCon, and I was very excited to talk on a topic I truly love: graceful degradation. The talk focuses on techniques you can use to gracefully degrade and mitigate the impact of failures when the dependencies within your system begin to fail.

You can find my slides here and the talk details here. As always, feel free to contact me if you have any questions.
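To make the idea concrete, here is a minimal sketch (not taken from the talk) of one such technique: wrapping a remote call with a short timeout and falling back to a safe default when the dependency fails. The URL and the fallback value are hypothetical.

```python
import urllib.request

# Hypothetical fallback used when the remote dependency is unavailable.
DEFAULT_RECOMMENDATIONS = ["top-sellers"]

def get_recommendations(url, timeout=0.5):
    """Fetch recommendations remotely, degrading to a default on failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode().splitlines()
    except OSError:
        # Timeouts and connection errors both surface as OSError subclasses;
        # degrade gracefully instead of propagating the failure to the caller.
        return DEFAULT_RECOMMENDATIONS

# An unreachable port exercises the fallback path.
print(get_recommendations("http://localhost:9/recommendations"))
```

The point is that the caller always gets a usable answer; whether the degraded default is acceptable is a product decision, not a technical one.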

PyPI Deployment With Travis CI

I wrote an article on the Appneta Blog where I discussed automating your PyPI deployment with Travis CI:

As the sole maintainer of the python-traceview library, I’ve been following a simple deploy process I cooked up for getting new releases of the library on PyPI (the Python Package Index). Now that I’ve been maintaining the project for almost 2 years, the “excitement” of doing a manual release has come and gone. So naturally I began to ask myself: How can I automate releasing to PyPI?

If you’ve ever dreaded merging a pull request simply because you don’t want to go through the hassle of doing a manual release, then you should check out the article to learn how to automate your release.
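The gist of the approach can be sketched in a .travis.yml like the following; the package details and encrypted password are placeholders, and the exact fields may differ from the setup described in the article:

```yaml
language: python
python:
  - "2.7"
install:
  - pip install -r requirements.txt
script:
  - py.test
# Travis CI's built-in PyPI deploy provider pushes a release
# whenever a tagged build passes.
deploy:
  provider: pypi
  user: example-user
  password:
    secure: "<output of `travis encrypt ...`>"
  on:
    tags: true
```

With something like this in place, cutting a release is just tagging a commit and pushing the tag.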

Hold the Line: Line Profiling in Python

I wrote an article on the Appneta Blog where I discussed line profiling in Python:

If you’ve ever profiled code in Python, you’ve probably used the cProfile module. While the cProfile module is quite powerful, I find it involves a lot of boilerplate code to get it set up and configured before you can get useful information out of it. Being a fan of the KISS principle, I want an easy and unobtrusive way to profile my code. Thus, I find myself using the line_profiler module due to its simplicity and superior output format.

I’m a big fan of tools that are simple to use, yet powerful in nature. Being both efficient and productive are important qualities in my daily workflow, so check out the article to learn more!
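As an illustration (a hypothetical script, not one from the article), line_profiler’s workflow is just a decorator plus the kernprof command; the no-op fallback below lets the same file run with or without the profiler:

```python
# kernprof injects a `profile` decorator at runtime; fall back to a
# no-op so the script also runs standalone without line_profiler.
try:
    profile
except NameError:
    def profile(func):
        return func

@profile
def slow_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    print(slow_sum(1000))

# Line-by-line timings:
#   $ kernprof -l -v slow_sum.py
```

No setup code, no stats objects to wrangle; decorate the function you care about and run it under kernprof.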

Create an NPM Package for Your Hubot Script

So you just thought of the next killer Hubot script and you want to share it with the world? Well, if you haven’t been paying attention, you should know that the hubot-scripts community has changed its contributing policy:

It’s now preferred that if you are able to, you should release your script as part of a npm package built for Hubot.

And it’s a good change, because this now allows scripts to be distributed and versioned as individual NPM packages. This also means that individual scripts can easily declare dependencies, which is a big improvement.

If you’re familiar with the process for creating an NPM package, then you’re off to a good start. We’re going to use yeoman and the generator-hubot-script to generate all the boilerplate necessary for quickly creating an NPM package for our Hubot script.

NOTE: The generated boilerplate is based on the hubot-example repository.

Now let’s go ahead and install yeoman and the generator-hubot-script using NPM:

$ npm install -g yo generator-hubot-script

Next, let’s create a directory for our script. For the sake of this example, we’re going to assume we created a script called foobar.coffee, so we want to appropriately namespace our package with the name hubot-foobar:

$ mkdir hubot-foobar
$ cd hubot-foobar
$ yo hubot-script:foobar

Follow the prompts and wait until the NPM install completes. Now let’s initialize the directory as a Git repository and commit our initial files:

$ git init
$ git add .
$ git commit -m "Initial commit"
[master (root-commit) 5871342] Initial commit
 10 files changed, 175 insertions(+)
 create mode 100644 .gitignore
 create mode 100644 .travis.yml
 create mode 100644 Gruntfile.js
 create mode 100644 README.md
 create mode 100644 index.coffee
 create mode 100644 package.json
 create mode 100755 script/bootstrap
 create mode 100755 script/test
 create mode 100644 src/hello-world.coffee
 create mode 100644 test/hello-world-test.coffee

Cool, now that all the boilerplate is in place, it’s out with the old and in with the new:

$ cp ~/some/location/foobar.coffee src/
$ git add src/foobar.coffee
$ git rm src/hello-world.coffee
$ git commit -m "Add foobar script"
[master 4f4c612] Add foobar script
 1 file changed, 22 deletions(-)
 create mode 100644 src/foobar.coffee
 delete mode 100644 src/hello-world.coffee

Unit testing is cool, so let’s go ahead and set that up:

$ git mv test/hello-world-test.coffee test/foobar-test.coffee
$ vim test/foobar-test.coffee # update file to test your script
$ grunt test

Once you have some passing test cases, let’s go ahead and commit these changes:

$ git add test/foobar-test.coffee
$ git commit -m "Update test cases"
[master 35d124a] Update test cases
 1 file changed, 2 insertions(+), 2 deletions(-)

Next, you want to go ahead and update the package.json file to include all the relevant information for your script. Make sure the following fields are all satisfactory:

  • name
  • description
  • version
  • author
  • license
  • keywords
  • repository
  • bugs
  • dependencies
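For reference, a filled-in package.json might look something like this (all of the values here are hypothetical):

```json
{
  "name": "hubot-foobar",
  "description": "A Hubot script that does foobar things",
  "version": "0.1.0",
  "author": "Your Name <you@example.com>",
  "license": "MIT",
  "keywords": ["hubot", "hubot-scripts", "foobar"],
  "repository": {
    "type": "git",
    "url": "https://github.com/you/hubot-foobar.git"
  },
  "bugs": {
    "url": "https://github.com/you/hubot-foobar/issues"
  },
  "dependencies": {}
}
```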

More details on this file can be found here. Once you’re done, go ahead and commit your changes:

$ git add package.json
$ git commit -m "Update package.json"
[master 78c3233] Update package.json
 1 file changed, 1 insertion(+), 1 deletion(-)

Lastly, go ahead and review the README.md file. Feel free to make any changes, and add any missing documentation you think is necessary. Make sure to commit any changes you make.

$ git add README.md
$ git commit -m "Update README"
[master f94cd40] Update README
 1 file changed, 1 insertion(+), 1 deletion(-)

Congratulations, you’ve packaged your Hubot script and are ready to share it with the world! So go ahead and push this repository to GitHub or publish the package on NPM.

The Right Stuff: Breaking the PageSpeed Barrier With Bootstrap

After recently attending Velocity in NYC, I started to think more about why performance always seems to be an afterthought with developers. As I pondered this thought, I kept coming back to the following question:

How hard is it to get a perfect PageSpeed Insights score?

If you’d like to know the answer, then head on over to the Appneta Blog and check out the new blog post I wrote where I explore this topic in more detail.

New Features in Burndown


I wrote an article on the Appneta Blog about some of the new features that have recently been added to Burndown! Here’s the lowdown:

  • 30 day summary view of a GitHub repository
  • Filter GitHub milestone issues by label
  • Several milestone view enhancements

For more details, head on over and check it out.

An Introduction to Client Latency

I recently wrote a blog post on the Appneta Blog about client side latency for users on the web.

So while the effects of poor performance are obvious, it makes one wonder about the relationship between client latency and the “perception of speed”. After all, the user can trigger many state change events (loading a page, submitting a form, interacting with a visualization, etc.), and all of these events have an associated latency to the client. However, are certain types of latency more noticeable to the user than others?

This is an area that web developers need to think about closely, and it’s important because it could have an impact on your user engagement and user experience.

Heroku Style Deployment Anywhere

Heroku is quite the interesting topic these days. While there are many strong opinions about the platform itself, I feel the one thing they get right is deployment.

$ git push heroku master

That’s it. Dead simple.

Now there really isn’t any magic behind this, as Heroku harnesses the power of Git hooks. To mimic this behavior, you can use a post-receive hook, which is just a shell script that runs after a push has been received and can be used to update other services or notify users. The beauty is that this functionality can easily be replicated for deploying any of your sites.

For this example, let’s keep it simple and deploy a static blog to a single web server.

First order of business is to make sure you can easily SSH onto your server. So if you haven’t already, make sure your SSH public key is copied to your server:

[dan@local] $ ssh-copy-id username@server.org

Now, on your server, we want to create a bare Git repository. This will be the destination repository when you git push to deploy:

[dan@server] $ cd ~/git
[dan@server] $ mkdir myblog.git
[dan@server] $ cd myblog.git
[dan@server] $ git init --bare

Next thing to do is setup our simple post-receive script. Let’s assume for the sake of this example that our blog is served on our web server from the following directory: /www/myblog.com. First create the hook in your git repository:

[dan@server] $ touch ~/git/myblog.git/hooks/post-receive
[dan@server] $ chmod +x ~/git/myblog.git/hooks/post-receive

Then update the post-receive hook:

#!/bin/sh
echo 'Deploying to Production...'
GIT_WORK_TREE=/www/myblog.com git checkout -f
echo 'Success!'

Ok, so here is where the magic happens. The GIT_WORK_TREE environment variable is used as a destination for your checked out source code, with any changes you might have made. Thus, we are performing a force checkout to the working tree directory (aka deployment)!

NOTE: One side benefit of the working tree approach is that you do not accidentally serve your .git directory!

Now that we have a new remote repository, let’s go ahead and add it to our local git repository:

[dan@local] $ git remote add production ssh://username@remote-server.org/~/git/myblog.git

The last thing left now is to deploy:

[dan@local] $ git push production master
Counting objects: 9, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 455 bytes, done.
Total 5 (delta 3), reused 0 (delta 0)
remote: Deploying to Production...
remote: Success!
To ssh://username@remote-server.org/~/git/myblog.git
   52056c4..1b5c293  master -> master

Now this is a very simple example of a deployment script. Feel free to get as crazy as you want: send emails, message chat rooms, kick off Fabric tasks, etc. The sky is the limit, so get as creative as you’d like.

Feel free to fork my Gist to get yourself started!

Burndown Before You Burn Out

I wrote an article on the Appneta Blog about my new open source project called Burndown.

Although there are many unique characteristics for each software development methodology, one thing is consistent: the goal of progress. Everyone wants to know how a particular project or task is coming along and when it’s going to be complete. So even if you don’t believe in due dates and delivering to a schedule, it’s still useful to know and to be able to communicate your current progress to others.

To aid us on our never-ending quest to know just how awesome we really are, we’ve developed Burndown, an open source tool to assist in tracking the progress of a GitHub milestone!

Using FFmpeg to Stream Audio in Your Home

I like to listen to records. I also like to listen to my records when I code. So recently, I said to myself,

“How can I easily pipe audio between rooms in my house using my network and open source software?”

The answer: FFmpeg!

For the unfamiliar, FFmpeg is a complete, cross-platform solution to record, convert and stream audio and video. If you have a task that falls into any of those categories, FFmpeg can do it like no other!

So without derailing this post and rambling on about my audio setup at home, all you need to know is that I have a line level audio source plugged into the sound card’s Line In on one computer, and I want to pipe that audio to any other computer in my house. So conceptually:

[Line Level Audio Source]---->[Computer A]====(Home Network)====>[Computer B]

Still with me? Ok good! The first thing I want to do is figure out what audio inputs FFmpeg can see:

Computer A:

ffmpeg.exe -list_devices true -f dshow -i dummy

Output:

D:\apps>ffmpeg.exe -list_devices true -f dshow -i dummy
ffmpeg version N-44818-g13f0cd6 Copyright (c) 2000-2012 the FFmpeg developers
  built on Sep 27 2012 19:30:20 with gcc 4.7.1 (GCC)
  configuration: --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
  libavutil      51. 73.101 / 51. 73.101
  libavcodec     54. 59.100 / 54. 59.100
  libavformat    54. 29.104 / 54. 29.104
  libavdevice    54.  2.101 / 54.  2.101
  libavfilter     3. 17.100 /  3. 17.100
  libswscale      2.  1.101 /  2.  1.101
  libswresample   0. 15.100 /  0. 15.100
  libpostproc    52.  0.100 / 52.  0.100
[dshow @ 01ffc400] DirectShow video devices
[dshow @ 01ffc400]  "Rocketfish HD Webcam"
[dshow @ 01ffc400] DirectShow audio devices
[dshow @ 01ffc400]  "Microphone (Rocketfish HD Webca"
[dshow @ 01ffc400]  "Digital Audio (S/PDIF) (High De"
[dshow @ 01ffc400]  "Line In (High Definition Audio "
dummy: Immediate exit requested

And bingo! The second-to-last line of the output is just what we want, so keep note of it:

[dshow @ 01ffc400]  "Line In (High Definition Audio "

For this example, let’s assume Computer B’s IP address is 192.168.1.17, which is our streaming destination. So hit play on your audio source and fire off the following to begin streaming:

Computer A:

ffmpeg -f dshow -i audio="Line In (High Definition Audio " -acodec libmp3lame -ab 320000 -f rtp rtp://192.168.1.17:1234

Now, set Computer B to listen for the audio:

Computer B:

ffplay rtp://192.168.1.17:1234

BOOM! Point to point audio streaming using the power of FFmpeg in your own home!
