Tuesday, September 9, 2014

Angular constants vs services

A few months ago I started using a pattern of wrapping library code in angular constants to make it injectable.  I'd handle something like d3 with:

app.constant('d3', window.d3);

This was easy, but I always felt a little weird doing it because these library objects are not really constants.  Depending on the library, they can be either factory functions or large, complex objects with a huge amount of internal state.  Definitely not constant.

A co-worker was wrapping some simple code to inject into a route configuration block.  The code was just a simple function for generating a template string from a string passed to it.  This sounds like a great place to use a factory, but the docs say that
Only providers and constants can be injected into configuration blocks.
So this made me think again about constants.  Why could they be injected early?

The docs have a little to say about this.  See "the guide" documentation on modules:

Configuration blocks - get executed during the provider registrations and configuration phase. Only providers and constants can be injected into configuration blocks. This is to prevent accidental instantiation of services before they have been fully configured. 
Run blocks - get executed after the injector is created and are used to kickstart the application. Only instances and constants can be injected into run blocks. This is to prevent further system configuration during application run time.

The module type documentation also hints at the difference:

constant(name, object);

Because the constant are fixed, they get applied before other provide methods. See $provide.constant().
But this talk of "fixed" and "system configuration" is a little misleading.  Looking at the code for the injector, it becomes clear that what they really mean is that constants don't involve any async configuration.

When a constant is set up, the constant function simply checks the name for validity and shoves the object into a couple of caches.

However, when a service is instantiated, setup starts with a call to the service function, which calls factory, which calls provider.  Because services call provider (by way of factory) with an array of arguments, they are set up with a call to providerInjector.instantiate.  That method calls the getService method of createInternalInjector, and this function is where the async handling magic happens.  Because the call to a service's factory constructor can be async, getService sets a marker at the service's assigned position in the cache, which prevents the service from being instantiated multiple times when control passes back to the main process that is injecting other modules.
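To make the contrast concrete, here's a toy sketch (my own simplification, not Angular's actual source) of an injector where constants go straight into the instance cache while services are built lazily, guarded by a marker against double instantiation:

```javascript
var INSTANTIATING = {};

function createInjector() {
  var cache = {};      // finished instances; constants land here immediately
  var factories = {};  // recipes for services, run only on first request

  return {
    constant: function (name, value) {
      // no recipe and no lazy step, so it's usable even at config time
      cache[name] = value;
    },
    factory: function (name, factoryFn) {
      factories[name] = factoryFn;  // nothing executes yet
    },
    get: function (name) {
      if (cache.hasOwnProperty(name)) {
        if (cache[name] === INSTANTIATING) {
          throw new Error('Circular dependency found: ' + name);
        }
        return cache[name];
      }
      // the marker prevents a second instantiation if control comes back
      // here while this service is still being constructed
      cache[name] = INSTANTIATING;
      try {
        cache[name] = factories[name](this);
        return cache[name];
      } catch (e) {
        delete cache[name];
        throw e;
      }
    }
  };
}
```

Calling get twice returns the same instance and runs the factory only once, which is essentially what the real getService guarantees.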

Check it out.  It's pretty neat.

After seeing all the hoops that angular jumps through to make services work (and how simple constants are by comparison), the difference becomes clear.  If you use services, angular handles async for you.  If you're using constants, you're on your own.

Monday, June 30, 2014

Opening and closing ports on iptables

It turns out this is pretty easy thanks to the insert and append commands.

iptables works by setting up chains of filters for certain types of requests.  To see all your chains, type:

# This gives you a verbose list of the rules, with numerical displays for port numbers
iptables -L -v -n

You may find that some chains feed into other chains.  Understanding how iptables passes packets through these chains is the first step toward getting your rules in the right place.  There are 3 predefined chains (INPUT, FORWARD, and OUTPUT).  These are the starting points for processing any network traffic handled by iptables.

iptables works by running rules against packets in order until it finds one that matches.  When it finds a rule that matches, it applies the relevant action to the packet, which could be accepting, dropping, rejecting, forwarding, or any of a number of other actions.

Rules are processed in order within their chains, so order matters.  Often a chain will end with a line that looks like this:

REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

This rule rejects all traffic on all ports.  This is a common way to handle whitelisting only approved activities and rejecting everything else.  For your rule to take effect, it has to come before this rule in the chain.
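To find the right position, it helps to list the chain with rule numbers first (the chain name here matches the example further down):

```shell
# Show each rule with its position number so you know where to insert
iptables -L RH-Firewall-1-INPUT -n -v --line-numbers
```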

Once you understand this, the value of the insert command makes more sense.  You need to get your rule into the appropriate place in the chain.  An example of opening port 8080 is below.  In this example I'm adding the rule to a specific chain (RH-Firewall-1-INPUT) that is handling all packets routed through the default INPUT and FORWARD chains.

# The rule closing all ports was previously in the 16th spot in the chain
# This new rule opens port 8080 by putting a rule right before that "catch all" exclusion rule
iptables -I RH-Firewall-1-INPUT 16 -m state --state NEW -p tcp --dport 8080 -j ACCEPT

If you happen to make a mistake, you can easily delete a rule at a specific point in your chain with the following.

# Deletes rule 12 in chain RH-Firewall-1-INPUT
iptables -D RH-Firewall-1-INPUT 12

It's usually good to add a line blocking all unspecified traffic at the end of your config file.

# Reject all traffic not explicitly allowed in previous rules
iptables -A RH-Firewall-1-INPUT -p all -j REJECT

There is a lot more you can do with iptables, but hopefully this was a helpful starting point.

BONUS

If you're working in a VM in VirtualBox, you can edit the port forwarding rules and they will take effect without having to reboot the VM.

DOUBLE BONUS

When trying to make sure a network service is working, here are a few good steps that I found to minimize frustration:
  • Turn off selinux (sudo setenforce 0)
  • Turn off iptables (service iptables stop)
  • Use nmap to scan open ports (nmap -sS -O 127.0.0.1)
  • Use curl to make sure you can access the service locally (applies to HTTP services only)
Once you can get to the service from the inside, gradually turn each service back on until something breaks.  Then fix it.



Wednesday, June 25, 2014

Hosting multiple versions of a django site on subdomains with apache/modwsgi

I've run into a couple scenarios recently where customers want to have access to multiple versions of a site at the same time.

Why multiple versions

The first scenario involved an analysis application where a bit of simulation code changed.  In that case we were fairly sure that the customer would want to use the updated model, but we wanted to provide access to both versions so they could do some comparisons.

The second scenario involved an application that pulled data from a remote database, cleaned it up, and provided an interface for browsing the data.  The format of the data in the remote database changed, but the customer wanted to be able to still connect and update from tables containing data in the old format as well as the new format.

For both of these situations, it would have been possible to edit the user interface and the backend code to allow access to both versions of the application at the same time, but this would have made for more confusing interfaces and a much more complex codebase.

Why subdomains

There are 2 main ways to serve 2 versions at the same time: 1) using different ports, or 2) using subdomains.  Each method has its upsides and downsides.

If you serve off multiple ports, you first have to open another port in your firewall.  For many applications (esp. those sitting in a customer testbed) this isn't a big deal.  In my group's situation, we deploy into some pretty tightly monitored environments, and minimizing the number of open ports makes certification and approval a simpler process.  Also, serving off of multiple ports just makes for less pretty urls.  "simv2test.myapplication.com" is cleaner and more self-documenting than "myapplication.com:8080".

If you choose to work with subdomains, you'll need a wildcard DNS record to be able to grab all the traffic to your site.  Some hosts and DNS services provide this automatically, and some make you pay more for that service.  Also, if you're serving over SSL, you'll need a wildcard SSL certificate.  This may also cost a bit more than a normal single domain certificate.

After considering both options, we decided subdomains made more sense in each of the scenarios described above.

How

First consider your data.  In both of the situations described above we were serving up 2 different versions of the code and the data.  We handled this by copying our database into a new database, and pointing the new fork of the application at this new database.

In mysql the command was as simple as this (after creating the "appname_simv2test" database):

sudo mysqldump appname | sudo mysql appname_simv2test

Mongo has a command specifically for this, which can be invoked from the mongo shell.
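At the time, that command was db.copyDatabase; as a one-liner from the system shell it would look something like this (database names follow the mysql example above):

```shell
# Copies the "appname" database into a new "appname_simv2test" database
mongo --eval 'db.copyDatabase("appname", "appname_simv2test")'
```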

Next, set up the application code.  The code for one of the projects was being served from "/var/www/deploy/appname/", so I copied the new version of the code to "/var/www/deploy/appname_simv2test".  Make sure to make the necessary permission changes to the files and directories.  I found that writing a fabric task to deploy each version of the application made this much easier.

Finally, setup your apache configuration to serve up each version of the application at the appropriate subdomains.  Something like the following should work ok.
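The embedded config didn't survive on this page, but the idea is two name-based virtual hosts, one per version of the app (the domain, paths, and wsgi file locations below are assumptions, not the exact original):

```apache
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName myapplication.com
    WSGIScriptAlias / /var/www/deploy/appname/wsgi.py
    Alias /static_assets/ /var/www/deploy/appname/static_assets/
</VirtualHost>

<VirtualHost *:80>
    ServerName simv2test.myapplication.com
    WSGIScriptAlias / /var/www/deploy/appname_simv2test/wsgi.py
    Alias /static_assets/ /var/www/deploy/appname_simv2test/static_assets/
</VirtualHost>
```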



You probably don't want to do this with too many subdomains on one server, because each subdomain basically doubles the amount of resources running on your machines (2x the number of application threads, 2x the number of database tables).

But for a simple temporary solution, that should do it.

BONUS

Here's a version that deploys over ports instead of subdomains.  One of the ports is served over https (port 80 redirected to 443), and the other is just over http on port 8080.
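That config is also missing from this page, but a sketch of the idea is below (the certificate paths and wsgi file locations are assumptions):

```apache
Listen 8080

# Version 1: port 80 redirects everything to SSL on 443
<VirtualHost *:80>
    ServerName myapplication.com
    Redirect permanent / https://myapplication.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName myapplication.com
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/myapplication.crt
    SSLCertificateKeyFile /etc/pki/tls/private/myapplication.key
    WSGIScriptAlias / /var/www/deploy/appname/wsgi.py
</VirtualHost>

# Version 2: plain http on port 8080
<VirtualHost *:8080>
    ServerName myapplication.com
    WSGIScriptAlias / /var/www/deploy/appname_simv2test/wsgi.py
</VirtualHost>
```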


Monday, June 23, 2014

Getting django LiveServerTestCase working with selenium's remote webdriver on VirtualBox

Our group does our development on linux VMs, usually running on a Windows host.  We want our developers to be able to write selenium system tests to wrap some of our existing functionality before we start diving into some deep refactoring.

Most of the LiveServerTestCase documentation I have seen covers the case of django running locally and talking to selenium directly.  Getting an instance of django running in a VM working with selenium running on the VM host required a few adjustments.

Start with a modern Django

LiveServerTestCase was introduced in Django 1.4.  We were on Django 1.3.  I tried using django-selenium, but had significant problems with its built-in test server implementation not starting, not stopping, or crashing in strange ways.

I ended up upgrading our project from django 1.3 to 1.6.5.  For our large project this just took ~2 hours of fiddling.

Open/forward required ports

It's probably best to just turn off the iptables service when setting things up.  If you have selinux running, set it in permissive mode.

Add a line in your settings.py file to configure the port to use for the testserver used by LiveServerTestCase.

os.environ['DJANGO_LIVE_TEST_SERVER_ADDRESS'] = '0.0.0.0:8008'

It's important that you use '0.0.0.0' and not 'localhost' so that the port forwarding on VirtualBox works.  I'm using 8008 because it is one of the auxiliary http ports recognized by the selinux default configuration.

Then edit the settings of the VM in VirtualBox to forward port 8008 to some unused port on your local machine.  We're forwarding port 80 on the VM to 8888, so I forwarded this test port to 8889.
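If you prefer the command line over the VirtualBox GUI, the same forwarding rule can be added with VBoxManage (the VM name here is a placeholder; for a VM that's already running, use controlvm instead of modifyvm):

```shell
# Forward host port 8889 to guest port 8008 on NAT adapter 1 of a powered-off VM
VBoxManage modifyvm "MyDevVM" --natpf1 "livetest,tcp,,8889,,8008"
```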

Serve up static files

We have apache serving static files at /static_assets/.

The test server is a python server, so we had to configure it to find and serve these static files.  In a test-specific settings file, I added:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'fact_rdb',
    }
}

if settings.DATABASES['default']['ENGINE'] == 'django.db.backends.sqlite3':
    import os
    from django.conf.urls.static import static
    STATIC_ASSETS_URL = "/static_assets/"
    STATIC_ASSETS_ROOT = os.path.join(settings.PROJECT_PATH, 'static_assets')
    LOGIN_REQUIRED_URLS_EXCEPTIONS = tuple(list(LOGIN_REQUIRED_URLS_EXCEPTIONS) + [r'^/static_assets/.*$'])
    urlpatterns += patterns('',
        url(r'^static_assets/.*$', 'myapp.views.serve_static_assets', name="static_assets_for_testing"),
    )

The last line is the important one.  It routes matching requests to a view described below.

The lines where I am editing "LOGIN_REQUIRED_URLS_EXCEPTIONS" exist because we are using a middleware to restrict access to certain urls.  You can see what that middleware looks like here.  Remove that line if you're not using that middleware.

I'm also configuring the tests to use an in-memory sqlite database.  I recommend that you use this if possible.  If you are using custom sql in your code, this may not be possible, but if you're just using the ORM it should work fine.  Running with an in-memory database (in my experience) takes a couple of seconds off the execution time of every test in your test suite.

The view that ties into the url in the code above and serves up the static assets is as follows:

import os
from mimetypes import guess_type
from django.http import HttpResponse
from django.core.servers.basehttp import FileWrapper

def serve_static_assets(request):
    # Take a url like /static_assets/path/to/file.js and create a path
    filename = "/path/to/static/dir" + request.path_info
    static_file = FileWrapper(open(filename, 'rb'))
    mimetype = guess_type(request.path_info, False)[0] or 'binary/octet-stream'
    response = HttpResponse(static_file, mimetype=mimetype)
    response['Content-Length'] = os.path.getsize(filename)
    return response

WARNING: This is not a secure way to serve static files.  Please do not use this for anything but testing.


Add setUpClass and tearDownClass methods to your test classes

The following sets up a web driver (available at "self.driver" in your test functions) connected to the selenium server running on the host machine.

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from django.test import LiveServerTestCase

SELENIUM_HOST = '10.0.2.2'
SELENIUM_PORT = 4444

class MyTest(LiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Remote(
            command_executor='http://%s:%s/wd/hub' % (SELENIUM_HOST, SELENIUM_PORT),
            desired_capabilities=DesiredCapabilities.CHROME)
        super(MyTest, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
        super(MyTest, cls).tearDownClass()

On VirtualBox, the IP of the host machine as seen from the guest is usually 10.0.2.2, which is why SELENIUM_HOST is set to that address.

Running your tests

You'll need to first start a selenium server on the host machine.  If the selenium server isn't running, there will be nothing for your test runner to talk to.

To do this you'll need java and the selenium server jar installed.  Also, add the executables for any desired driver plugins (e.g. the chrome driver) to your path.  Run the server with something like:

"C:\Program Files (x86)\Java\jre7\bin\java.exe" -jar selenium-server-standalone-2.33.0.jar

On the VM, run the command to test your application.  Something like:

python manage.py test --settings=custom_settings_file module.class.test_function

I hope that helped!


Wednesday, June 18, 2014

Decompiling python bytecode with pycdc

We somehow lost the correct working version of a python file for a project, but one of our servers still had the pyc file (which was working fine in production).  To fix this, I went hunting for a good way to get our source code back.

From what I found, pycdc seems to be the best option currently, though unpyc and uncompyle2 also exist.  When I tried unpyc it threw an error for me, and uncompyle2 only works with Python 2.7.

Here are the steps to set up pycdc.  These instructions are for CentOS 5.3, so they may need to be tweaked for your system.

Install CMake


wget http://www.cmake.org/files/v2.8/cmake-2.8.12.2.tar.gz
tar xzvf cmake-2.8.12.2.tar.gz
cd cmake-2.8.12.2
./bootstrap
make
make install

Download and compile pycdc

git clone https://github.com/zrax/pycdc.git
cd pycdc
/usr/local/bin/cmake .
make

Using pycdc to decompile

The program outputs to stdout, so redirect to a file.

./pycdc/pycdc filename.pyc > filename.py

That's it.

Saturday, May 31, 2014

Fixing python terminal

Does your terminal make funny characters when you press the arrow keys instead of retrieving the last command?

You can fix that with a couple of quick installs.

pip install ipython
# may require you to 'yum install ncurses ncurses-devel' or similar, depending on your os
pip install readline

Now access the shell by substituting the command 'python' with 'ipython'.  It should behave much more nicely.

IPython can do a lot more for you than give you a pretty shell.  Check out its capabilities at the project website.

Installing mongodb on an old computer

I have a little old 32 bit Compaq machine that I'm setting up as a playground server for projects.  It has a lot of disk space (~500 GB), a decent CPU (AMD Sempron, 2 GHz), and a nice amount of ram (~2 GB).  But it's old.  And it's 32 bit.  Still, I wanted to get mongo running on it.

When I tried to install mongo using the instructions for mongodb (yes, I made the change for the 32 bit version), I ran into the following error:

[tcvh@localhost ~]$ sudo service mongod start
Starting mongod: bash: line 1: 22495 Illegal instruction     /usr/bin/mongod -f /etc/mongod.conf > /dev/null 2>&1
                                                           [FAILED]


I looked into this a little, and it seemed the easiest solution was to roll back the version.  At first I tried just following their instructions for specifying a different version number, but when I went to install, yum couldn't find anything.

So instead I crawled around the directory listing for the RPMs and saw that they had recently changed the naming convention.  I tried a few different things, but eventually found that the following works for installing version 2.0.4.

The contents of "/etc/yum.repos.d/mongodb.repo":

[mongo-10gen]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/i686/
gpgcheck=0
enabled=1


The installation commands:

sudo yum install mongo-10gen-2.0.4
sudo yum install mongo-10gen-server-2.0.4

After that you can see that mongod is installed in "/usr/bin":

updatedb  # you just added stuff, so update the search index
locate mongo  # hey, look at that, it's there

You can start up the server and get into the shell with the following:

sudo service mongod start  # kick off the server
mongo  # get into the shell

You may also want to set up the service to run when the machine boots:
sudo chkconfig mongod on

That's it.

Monday, April 21, 2014

Lean UX Manifesto, pdf version

Lean UX is a collection of practices for approaching the design of products and interfaces that draws from lean manufacturing, agile development, and the lean startup movement, with a focus on core business principles.

This article explains the core points of the movement and distills them into 6 points in manifesto format.  It reads like the agile manifesto, talking about preferences, and I think these documents serve nicely as guides rather than checklists.

I wanted something I could post up on my wall next to Nielsen's 10 Usability Heuristics, so I took the manifesto text and turned it into something that I can pin up on my wall.  Hopefully it is useful to someone else.

Lean UX Manifesto, PDF

While I was at it, I did the same thing to the Agile Manifesto.

Agile Manifesto, PDF

EDIT: 4/26/2016

I couldn't find the PDF version of Nielsen's Usability Heuristics, so I re-created that as a PDF as well.

Nielsen's 10 Usability Heuristics, PDF

Wednesday, April 9, 2014

Exposing global libraries as module constants in angularjs

I want to use the awesome underscore library with angular.  Although you can just drop the library in your html file and reference the _ variable as a global, I wanted to pass it into my app's module with angular's dependency injection.  The main advantages here are:

1. Explicit definition of dependencies

Having the objects/services/controllers/etc. that something depends on explicitly defined makes it easier to re-use components elsewhere.  This helps avoid the annoying '_' is not defined type of errors, but also helps prevent some more subtle bugs if you are depending on an undefined global in a part of your code that doesn't run very often.

2. Testability

If we inject underscore, we can mock it out or wrap it during a test.  This would allow us to do cool things like count how many times a certain underscore function was called.
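For example, a sketch of what that might look like with angular-mocks and Jasmine 2.x (the 'MyApp' module name matches the code below; the spied-on map function is just for illustration):

```javascript
// In a unit test, swap the real underscore for a spy-wrapped copy
beforeEach(module('MyApp', function ($provide) {
  $provide.constant('_', {
    map: jasmine.createSpy('map').and.callFake(window._.map)
  });
}));

// Later in a spec, after exercising the code under test:
//   expect(_.map.calls.count()).toBe(1);
```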

How do you do this?  At first I read this article, which discussed wrapping it in a factory.  That certainly works, but since angular has a nice interface for defining injectable constants, I used that instead.  So all the code you need is:

var app = angular.module('MyApp');
app.constant('_', window._ );


Yep.  That's it.  And the first line is just to give you context.

To use this in a controller, do something like this:

app.service('MyService', ['_', function(_){
  // Your code goes here
}])


Pretty simple.

Saturday, April 5, 2014

Fixing missing VCBuild.exe

From the yeoman generator for angular, when installing socket.io:

MSBUILD : error MSB3428: Could not load the Visual C++ component "VCBuild.exe". To fix this, 1) ins
tall the .NET Framework 2.0 SDK, 2) install Microsoft Visual Studio 2005 or 3) add the location of
the component to the system path if it is installed elsewhere.

These instructions may be useful when encountering less helpful messages about a missing "VCBuild.exe" file in other programs.

You can download the .NET framework v2.0 SDK here.
You can download the Visual C++ 2005 ISO from the link in this blog post.

Then you can use this utility from Microsoft to mount the iso image and run the installer.  To mount the ISO, follow the instructions in the README extracted from the utility.  Make sure to run the Virtual CD ROM Control Panel executable as administrator.

I had problems with permissions with that utility, so I took the easy way out and just burned the Visual Studio ISO to a disk.  VirtualCloneDrive probably would have worked but I didn't feel like messing with it.  After all, this is just a step toward configuring a development environment...

I received a few errors about compatibility issues when installing both Visual Studio 2005 and MSSQL Server Express.  I ignored those and continued with the installation.  The files of interest were placed in C:\Program Files (x86)\Microsoft Visual Studio 8\VC.  I also noticed vcvarsall.bat (a file that gets referenced many times when trying to compile components on Windows) just one directory up, at C:\Program Files (x86)\Microsoft Visual Studio 8.  I added both directories to my PATH.

After those steps, everything worked fine.

EDIT

I ran into this same problem on another machine, and found this solution for installing the 2008 Express Edition of Visual Studio.  Follow the link and run the installer.  This worked as well as installing the 2005 edition but was much faster.

Note that this second solution just installs a 32 bit compiler.  Also it may be necessary to install a different version of the compiler depending on the version of python you are running.

Wednesday, April 2, 2014

Backbone View Event Types

Today I read through Derick Bailey's excellent post on memory management in Backbone view code.  I was a bit confused by all the different ways to register and un-register events and what they were all for.  Specifically I was confused by his close() method, used to clean up view events.

Why was unbind() needed in addition to remove()?   Why was stopListening() not needed?

So I put together a simple guide for myself.

After putting this together the code makes more sense.

You do need to call unbind() in addition to remove() because remove() handles DOM events and unbind() handles Backbone events.

You don't need to call stopListening() because unbind() catches everything stopListening() would catch.  You don't need undelegateEvents() because jQuery's remove() removes all of the events on that DOM element for you.
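For reference, Derick's close() pattern looks roughly like this (the onClose hook comes from his post; the comments are my summary):

```javascript
// Rough sketch of the close() cleanup pattern for Backbone views
Backbone.View.prototype.close = function () {
  this.remove();    // jQuery removes this.el from the DOM, along with its DOM events
  this.unbind();    // clears Backbone events that other objects bound to this view
  if (this.onClose) {
    this.onClose(); // per-view hook for extra cleanup (e.g. model/collection unbinds)
  }
};
```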

Hope that helps.

Sunday, January 19, 2014

Steps I took for moving my Wordpress site to be hosted on Earthlink

My neighborhood's homeowners' association is using Earthlink hosting.  Most of the cheap shared hosting services have some eccentricities that you may run into when trying to do anything outside the standard "1-click installer" type of actions.  Earthlink seems to be especially behind the times.
Some clues:
  • During my fiddling with user database accounts, I managed to get the web panel into a state where I was locked out of doing anything to my phpmyadmin installation.
However, both times I've used their chat support tool I've received fast and helpful replies, so the barrier to switching hosting services stayed just high enough to keep me with Earthlink (for now).

Getting to the point: here's what I did to get the latest version of Wordpress (with updated plugins and a database full of pages) migrated from a server I was running locally.

1. Use the Earthlink wordpress installer

This creates the wp-config.php file and puts it into /private.  It also creates the database, which is not a big deal, but it's nice.  It also probably fixes up .htaccess and some other settings, which you can't edit.

You can't stop here though because you can't update your Wordpress installation, and running with an old version of Wordpress is bad for security and breaks many of the plugins that make Wordpress awesome.

2. Clean up your database dump file

You used mysqldump to get the data from your old database, right?  That's good, but you need to change the urls in the database dump file.  I was able to do this quickly using find/replace on the dump sql file.

Also, because the dump file includes "lock" and "unlock" statements and Earthlink doesn't give your database users those permissions for some reason, you need to remove all of these statements.  I ended up running the commands for each table one at a time, removing the "lock" and "unlock" statements for each table before executing.
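The URL find/replace and lock-stripping steps can also be scripted with sed.  Here's a sketch on a toy stand-in for the dump file (the domains and file name are made up):

```shell
# Toy two-table stand-in for the real mysqldump output
printf 'LOCK TABLES `wp_posts` WRITE;\nINSERT INTO `wp_posts` VALUES ("http://oldsite.local/page");\nUNLOCK TABLES;\n' > dump.sql

# Rewrite the old URL to the new domain
sed -i 's|http://oldsite.local|http://www.example.com|g' dump.sql

# Drop the LOCK/UNLOCK statements the hosted database user isn't allowed to run
sed -i '/^LOCK TABLES/d; /^UNLOCK TABLES/d' dump.sql

cat dump.sql
```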

3. Install phpmyadmin

For some reason I couldn't run any sql commands from the command line.  So I installed phpmyadmin, which worked fine.

One thing to look out for is that you have to associate the phpmyadmin installation with a single database when you add it.  If you delete the user associated with the database through Earthlink's console, the database is also deleted, and this breaks phpmyadmin to a point where you can't re-install, uninstall, or re-associate the phpmyadmin installation with a different database.  At that point, you have to talk to support.

4. Delete all the crap the Wordpress install placed in the /public folder

Yes, you just added it, but this is where your new Wordpress files will go.  Make sure to not delete the /phpmyadmin symlink in that directory.  You need that.

5. Copy all of the files for your local Wordpress install into Earthlink's /public folder

Here "local Wordpress install" means all the files from the installation you already had running somewhere else.  FTP works fine here.


6. Copy the wp-config.php file from /private to /public

This file is pretty important.  You could fiddle around with the wp-load file to get it to find the wp-config file in /private, but I found it simpler just to move the file.


That should be it!  It felt a little unclean, but it got the job done and I'm now running a modern copy of Wordpress that I can easily update because all of the files are in a folder I have FTP access to.