Thunder on the Plains

ThunderPlains 2015 is over, and I was impressed with the quality of the presentations, the event itself, and most of all the OKC community as a whole, particularly since this was only the second year of the event.

The day started with a significant announcement: CodeForOKC.org will hold its first meeting later this month.  Most people who know me know that I am a small-government libertarian, but I am a huge fan of local government (government is best when it is closest to the people it represents.)  This will give coders a chance to serve their local community.  The first meeting will be on October 27th at 6:00 PM; check out the meetup.

Listed below is some of the presentation material, along with links and references mentioned by the presenters (at least for the sessions I attended.)

Mobile Applications with JS & Ionic

So Tell Me Again Why We’re not Using Node.js

Supercharge Your Productivity with Ember.js

Building Massive Angular Apps

Your Grandparents Probably Didn’t Have Node

The Importance of Building Developer Communities

Life is what you make of it

The most difficult aspect of software development for new programmers often has nothing to do with algorithmic complexity or syntactic quirks; it’s all the other “stuff” associated with building, managing, and testing software systems.  When a developer steps into an existing business that already has a software stack in place, the problem can be mitigated by relying on the institutional knowledge the existing developers have built up over the course of maintaining their software.  I haven’t, in most cases, had that fortune in my career, because either a) I was the company’s first software engineer, or b) the existing software engineers had become proficient at an “alternative” software stack (and honestly, alternative is the kindest word I could come up with for Mainframe/Cobol.)

The longer a programming language has been around, the more complex its build & management tools get.  The reasons are pretty simple: the longer a language has been used, the more complex and broad its uses become.  Build tools generally start off pretty simple (the original make was a tiny program, for God’s sake) but they must expand to cover more and more complex setups with more and more non-standard configurations.  In the most extreme cases the support tools even need to consider multiple platforms on multiple hardware configurations.  This problem is exacerbated when a language needs to be “compiled” (and I use the term loosely) or works on “core” systems, meaning closer to the hardware, network, or data layer.*

C suffers from all of the problems listed above, and more.  It has been around for roughly 40 years, in constant use, on every platform ever made (super-computers to toasters), used for hardware drivers, operating systems, core network stacks, and even to create other programming languages; all of which means C may be the most complicated ecosystem ever supported by mankind.  I’m really not kidding about this.  More than one person has pointed out that the Linux kernel (95% pure C code) is many orders of magnitude more complex than sending a man to the moon was, or even than sending a woman to Mars will be.  Anyone who has had to create a Gnu build-chain supported C program from scratch has had to kill themselves learning the intricacies of make, automake, configure, autoconf, m4, autoreconf, cmake, libtool, and autoheader.  Seriously, a “correct” Gnu C project with one header file and one C file has 26 build-chain files supporting it on initial setup.

Recently I have been doing some really interesting C development on micro-mobile platforms.  The first language I did significant (i.e. not a GWBasic MadLibs game) development in was C.***  My college experience with C was relegated to a couple hundred lines and using the up arrow to re-compile the program after changes.  Now my annoyance with the autoconf build tools (and their many, many gotchas) has been replaced with the need to support cross-compiling, manage external libraries, and automate build deployments (a small cross-compiling sketch follows the list below).  I have had to learn each of these tools and what they accomplish for me so I don’t have to re-invent the wheel.  Here are some of the more useful sources of information I have come across:

  • Gnu Autoconf, Automake, and Libtool – by Gary V. Vaughan, Ben Elliston, Tom Tromey, and Ian Lance Taylor.  Available as a web book, it covers the entire build chain and the practical usage of each of the tools.  It also does a great job of showing how modern technological development owes a huge debt to the flexibility and power these tools gave (and continue to give) C developers.
  • Gnu.org amhello – A “Hello World” tutorial for getting Autotools setup and configured in a simple project.  Great example for getting a full build setup running for C.  The full code of which can be found in the automake doc folder on Linux systems, generally something like /usr/share/doc/automake/amhello-x.x.x.tar.gz.
  • Clemson Automake by Example – An old article (the page’s images are all broken) that walks through a simple C program and its build chain.  An excellent tutorial for getting a novice programmer set up with a distributable and effective build environment.
  • Autotools Mythbusters – A practical, if high-level, overview of autotools and its associated components.  There is an appendix with a list of examples that is particularly outstanding.  Think stackoverflow for autotools, aggregated into a cookbook.
  • Simple Makefile Tutorial – A newbie guide to creating Makefiles for building software.  If I include code examples in the project documentation I will generally create a simple Makefile that will build the examples with a “make someexample”.
  • Martin Mann’s HowTo Autotools – The examples are in C++, but the step-by-step process of adding functionality to the autotools build chain is outstanding.  Especially useful if you have already figured out some of the basics.
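
Since cross-compiling is one of the things that pushed me back into these tools, here is a minimal sketch of what that looks like in an autotools project; the arm-linux-gnueabihf toolchain prefix and the staging path are purely illustrative assumptions:

# --host names the system the binaries will RUN on; configure then looks for
# arm-linux-gnueabihf-gcc, arm-linux-gnueabihf-ld, etc. in your PATH.
./configure --host=arm-linux-gnueabihf --prefix=/opt/arm-staging
make
make install   # installs into the staging prefix, not the build machine's /usr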

Finally, because setting up and creating the necessary files for getting a C project started in libtool/automake is so annoying, I decided to create a single-file bash script to do the work for me.  You can find it as a gist on github.  You can download and run it from the Linux command line with:

wget http://tinyurl.com/brockers-cmaker -O ~/bin/cmaker && chmod +x ~/bin/cmaker

Then create your new C/Autoconf project with cmaker init newprojectname.  My primary concern with the script was that it should need NO outside dependencies besides libtool/automake itself, and that it should produce everything needed to start the autoreconf --install, ./configure, make process.  Hopefully I will add some additional functionality to it soon.
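
For anyone curious what the script has to produce, here is a minimal sketch of the two files at the heart of any autotools project (the project name myproj is a placeholder, and this is a simplification of what cmaker actually writes):

# configure.ac -- consumed by autoreconf --install to generate ./configure
AC_INIT([myproj], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am -- consumed by automake to generate Makefile.in
bin_PROGRAMS = myproj
myproj_SOURCES = main.c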

* As an example, look at Perl.  It started as a glue language to let developers piece together software solutions in a single language instead of having to create divergent sed, awk, and grep scripts in sh**.  Then the WWW took off and the little glue language became the core component of the most powerful websites on the planet.  Perl went from being a support language to the core language of all things http.  The number of tools exploded.  CGI.pm, mod_perl, and DBI gave the developer massive power, but managing these libraries in production created a boom of support tools (kids these days forget that cpan was the ruby gems/bundler of its day.)

** As a side note, that last sentence sounds more like a caveman grunting than a discussion of software development tools.

*** OK, technically it was C++, but our CS chair was a former NASA Chief Engineer who basically taught us C using a C++ compiler… with a little Class thrown in.  I think my first object was a linkedList with methods push and pop.

Unless you continue to remember it

The dynamic device interface for Linux is called udev.  Generally it works without complaint or frustration, but it does have some interesting side effects if you are doing more involved system configuration.  The one that tripped me up today is that udev keeps a record of every NIC that has been dynamically created during its lifetime.  For example, if you are using a wireless USB NIC (see my post yesterday) and you plug in a different one than you used before, the new NIC ID is going to be wlan1 instead of wlan0.  Generally nobody would care, but in this case I did.  Thankfully, modifying these records is pretty easy.  The device history is stored in /etc/udev/rules.d/70-persistent-net.rules and can be modified by hand.  Just change the wlan1 to wlan0 and delete the other entry.
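
For reference, each entry in that file looks something like the following (the MAC address here is made up); the NAME value at the end is the part to change or delete:

# /etc/udev/rules.d/70-persistent-net.rules (illustrative entry)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="wlan*", NAME="wlan1"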

Once again, text file configuration is FREAKING AWESOME!

The gap between the ones and the zeroes

Wireless configuration on embedded Linux systems has been pretty well documented for a while now.  If you are running a desktop version of Linux, support for your wireless device (either natively or through the Windows driver compatibility layer NDISwrapper) is likely to be transparent to you.  The situation is slightly different on the embedded side of Linux, where non-native driver support is really not an option.  That said, I have fallen in love with the Edimax Technology wireless USB NICs (they use the RealTek chipset) because they are smaller than my thumbnail, work with any Linux distribution you can think of (even Raspberry Pi), and cost about 10 bucks.  Heck, they even support 802.11n.  Getting this thing enabled and working on Debian from the command line is pretty simple:

apt-get install firmware-realtek
modprobe rtl8192cu
ifconfig wlan0 up

Then iwlist wlan0 scan will show you a list of the available wireless networks.  Basically, apt-get downloads the driver firmware, modprobe loads the driver, and ifconfig turns on the wireless device (otherwise you get wlan0 Interface doesn't support scanning : Network is down when you try to scan).  Not exactly the best error message, but anyway…
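
Actually joining a WPA2 network goes a step beyond what I needed here, but a minimal sketch from the same command line (the SSID and passphrase are placeholders) looks like this:

# Generate a wpa_supplicant config from the network passphrase (values are made up)
wpa_passphrase "MyNetwork" "MySecretPass" > /etc/wpa_supplicant/wlan0.conf
# Associate in the background, then request an address via DHCP
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wlan0.conf
dhclient wlan0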

Forms it never takes, places it can never be

So, after looking around for an answer today, I finally found out where the Debian install CD stores its cd/usb boot menu configuration files.  While I have had a great deal of experience editing grub.conf files by hand, that methodology simply doesn’t work on an “El Torito” Joliet CDROM image.  Debian sets up its boot image (as part of the initial ram disk) inside the /ISO/isolinux/ directory, where ISO is the uncompressed version of the boot image.  Specifically, you can configure things like:

  • The boot option timeout in /isolinux/isolinux.cfg
  • The background splash image in /isolinux/splash.png (640×480 on the default menu set-up)
  • Which sub-menus, options, boot methods, and GUI installs are available via /isolinux/menu.cfg

Honestly, I may be the only person on the planet trying to figure this stuff out; but here it is for future reference, or for anyone who wants to make their very own custom Debian install CD.
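
If you do go down that road, the last step is re-mastering the image.  A minimal sketch with genisoimage looks something like this (the paths are assumptions based on the layout above):

# Rebuild a bootable image from the unpacked ./ISO tree (paths are illustrative)
genisoimage -o custom-debian.iso \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -J -R ./ISO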

GitLab on Fedora 18

I am using a Rackspace cloud server running Fedora 18 to install GitLab as a replacement for GroundWarp’s Gitolite server.  Here are some install instructions for getting it running on CentOS 6/RHEL 6 (the commercial systems based on Fedora.)

Dependency Installation:

Start by doing a yum install/groupinstall of the following packages and any dependencies they find.

  • yum install ruby mysql-server git redis ruby-devel
  • yum groupinstall “Development Tools”
  • yum install mysql libxslt-devel libyaml-devel libxml2-devel gdbm-devel libffi-devel zlib-devel openssl-devel libyaml-devel readline-devel curl-devel openssl-devel pcre-devel memcached-devel valgrind-devel mysql-devel ImageMagick-devel ImageMagick libicu libicu-devel libffi-devel rubygem-bundler

Start your database servers and configure them to start on boot.

systemctl start redis.service
systemctl enable redis.service
systemctl start mysqld.service
systemctl enable mysqld.service

MySQL Server Setup:

Start by logging into the mysql shell:

mysql -u root

Then we create the database we will need, and create a user who can edit/manage that database.

create database gitlabdb;
grant usage on *.* to gitlabuser@localhost identified by 'inventapasswordhere';
grant all privileges on gitlabdb.* to gitlabuser@localhost;
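
As a quick sanity check (my own addition, not part of the GitLab instructions), make sure the new user can actually log in and see the database before moving on:

mysql -u gitlabuser -p gitlabdb -e 'select 1;'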

Service User and SSH Configuration:

Create the user account that the GitLab service will run under.  This account will also be used to build some of the necessary dependencies.  After creating the user, switch to that user’s login.

useradd git
passwd git
su -l git

From here forward (unless otherwise specified) make sure you are logged in as the git user.  The remaining configuration is done as that user.

The next steps set up the SSH key pairs for your user account (git uses SSH in the background for most of its tasks.)  Choose the defaults, but make sure to supply a passphrase (not just a password) when prompted by ssh-keygen.

ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

GitLab Shell:

gitlab-shell needs to be built and installed for this user.  The following steps set everything up and build the package directly from github (which is honestly the easiest way.)

cd ~
mkdir gitlab-shell
git clone https://github.com/gitlabhq/gitlab-shell.git gitlab-shell/
cd gitlab-shell
cp config.yml.example config.yml

Now you are going to want to use a text editor (nano, vim, etc…) to make the following change to your config.yml file.

gitlab_url: "http://yourdomainforgitlab.com/"

Now run the gitlab-shell installer:

./bin/install

Download & Configure GitLab:

Next we will set up our gitlab folders, download GitLab, and begin configuring it for our set-up.

cd ~
mkdir gitlab-satellites
mkdir gitlab
git clone https://github.com/gitlabhq/gitlabhq.git gitlab/
cd gitlab
git checkout 5-2-stable
mkdir tmp/pids/
mkdir public/uploads/
mkdir tmp/sockets/
chown -R git.git log/ tmp/
chmod -R u+rX log/ tmp/ public/uploads/
cp config/gitlab.yml.example config/gitlab.yml

Next you will need to make changes (again with your text editor) to the newly copied config/gitlab.yml file.

host: git.yourdomain.com
email_from: gitlab-noreply@git.yourdomain.com
support_email: you@yourdomain.com

Database Configuration:

Our next steps involve configuring the GitLab application database interfaces (remember there are two) and creating initial data entries.

cd ~/gitlab
cp config/puma.rb.example config/puma.rb
cp config/database.yml.mysql config/database.yml

Now configure your database file (the database.yml file you just copied) with your text editor, changing the following settings under the production header (comment out all the lines under development and testing):

database: gitlabdb
username: gitlabuser
password: "YourSuperSecretPasswordFromTheDatabaseSetupAbove"

We need to install a couple of Ruby gems and then initialize the database.  You do this by running the following commands (note that the --without flag tells bundler NOT to install the PostgreSQL gems):

gem install charlock_holmes --version '0.6.9.4'
gem install thor --version '0.18.1'

gem install rb-inotify
bundle install --deployment --without development test postgres
bundle exec rake gitlab:setup RAILS_ENV=production

Git Environment Setup:

We need to set up a couple of environment variables for our local git user.  Simply type this:

git config --global user.name "GitLab"
git config --global user.email "gitlab-noreply@git.yourdomain.com"

A Date Which Will Live In Infamy

Date string conversion is fairly painless in JavaScript, but sometimes the sheer number of options can be a little annoying to remember.  Below is a table of date display/conversion functions generated from:

new Date("2013-02-19T21:03:39.818Z")

Hopefully this is helpful to someone else who doesn’t want to look up the output of each of these options.  One more note: these are the outputs for US locales in the Central time zone; other locales and other time zones will vary accordingly.

Date Function         Output
toString()            Tue Feb 19 2013 15:03:39 GMT-0600 (CST)
toDateString()        Tue Feb 19 2013
toGMTString()         Tue, 19 Feb 2013 21:03:39 GMT
toISOString()         2013-02-19T21:03:39.818Z
toUTCString()         Tue, 19 Feb 2013 21:03:39 GMT
toTimeString()        15:03:39 GMT-0600 (CST)
toLocaleString()      Tue 19 Feb 2013 03:03:39 PM CST
toLocaleDateString()  02/19/2013
toLocaleTimeString()  03:03:39 PM
toJSON()              2013-02-19T21:03:39.818Z
valueOf()             1361307819818
toSource()            (new Date(1361307819818))

Trust the Engineer

A client project, along with my “Hacking & Countermeasures” class, recently created the need for my own VPN for use in wireless applications.  I needed to connect the VPN to my server rack, and the system needed to be an “in-house” system I could stand up myself (sorry Cisco, no ASA for me.)  Finally, it needed to be an SSL-based VPN solution, as I have had entirely too many issues with locations filtering nonstandard Internet traffic, effectively blocking IPSec VPN access on their networks.

I use Rackspace for my server infrastructure, so it only took me about 15 minutes to get the physical (errr… cloud… damn… whatever) Linux machine (Fedora 17 x64) up and running, but actually setting up OpenVPN was significantly more challenging than I had originally anticipated.  The problem wasn’t a lack of documentation (generally the opposite, actually.)  The problem is that VPN connectivity is so inherently picky, and there are SO many options, that getting a specific configuration running for a specific distribution can be a little overwhelming.

So, for my own personal benefit, here is some of the information I needed to get OpenVPN working on a Fedora 17 server, routing http traffic as well as direct traffic to my private subnet.  OpenVPN will be configured to use port 443 (the standard web SSL port) over the TCP protocol.  As OpenVPN uses SSL, and we will be using TCP on the HTTPS port, all the traffic will look like standard secure web traffic to the network, effectively keeping it from being filtered.

On the Server (as root):

  • Start by installing openvpn and other support packages:
    • yum install openvpn pkcs11-tools pkcs11-dump
  • We will use the easy-rsa script toolkit to create our shared keys.  So start by copying the example easy-rsa files into your home directory:
    • cp -ai /usr/share/openvpn/easy-rsa/2.0 ~/easy-rsa
    • cd ~/easy-rsa
  • Next you will need to edit the vars file.  Basically, it is ID information for your server certificate.  The values other than PKCS11_MODULE_PATH (which should be set to /usr/lib64/ on x64 machines) are not particularly critical, but don’t leave them blank!  Mine looked something like this:

export KEY_COUNTRY="US"
export KEY_PROVINCE="OK"
export KEY_CITY="Norman"
export KEY_ORG="Rockerssoft"
export KEY_EMAIL="name@emailaddress.com"
export KEY_CN=rockerssoft-vpn
export KEY_NAME=rockerssoft-vpn-key
export KEY_OU=rockerssoft-vpn
export PKCS11_MODULE_PATH=/usr/lib64/

  • Now we generate our server keys and set up our openvpn service directories:
    • . vars
    • ./clean-all
    • ./build-ca
    • ./build-inter $( hostname | cut -d. -f1 )
    • ./build-dh
    • mkdir /etc/openvpn/keys
  • Now with our keys built, we need to copy all of them (along with our certificates and template configuration information) into our service directory.
    • cp -ai keys/$( hostname | cut -d. -f1 ).{crt,key} keys/ca.crt keys/dh*.pem /etc/openvpn/keys/
    • cp -ai /usr/share/doc/openvpn-*/sample-config-files/roadwarrior-server.conf /etc/openvpn/server.conf
  • The config file we just copied to /etc/openvpn/server.conf will need to be edited for your specific server configuration.  If you have problems connecting later on, it is most likely an issue with either the server configuration file or the client configuration file not matching.  As we want the system to be a full VPN proxy for all internet traffic, start by adding the following to the BOTTOM of your config file:
    • comp-lzo yes
    • push "redirect-gateway def1"
  • In /etc/openvpn/server.conf, edit the port number and add a line to have openvpn use TCP instead of UDP on port 443.  This should be somewhere between lines 9 and 12, and should look something like this when you are done:

port 443
proto tcp-server

  • In /etc/openvpn/server.conf, edit the cert and key file location names somewhere between lines 17 and 20.  Add the full path to the key/cert files we moved two steps previous.  They should look something like this (notice the /etc/openvpn/keys path preceding each entry):

tls-server
ca /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/bob-vpn-1.crt
key /etc/openvpn/keys/bob-vpn-1.key
dh /etc/openvpn/keys/dh1024.pem

  • After you have modified your server configuration files, you will need to tell the Linux security subsystem (aka SELinux) to recognize the new file layout.  To do this, type the following command:
    • restorecon -Rv /etc/openvpn
  • If you need to test your server settings (say, to debug your config file), you can run openvpn directly this way (press Ctrl+c to stop it):
    • openvpn /etc/openvpn/server.conf
  • Finally, you can turn the openvpn server on and enable it so that it starts during future reboots as well.
    • systemctl enable openvpn@server.service
    • systemctl start openvpn@server.service
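  • At this point (this sanity check is my own addition, not part of the original write-up) it is worth confirming the daemon is actually listening on port 443 before touching the firewall:
    • systemctl status openvpn@server.service
    • ss -tlnp | grep ':443'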
  • Now that the server is running, you will need to configure the firewall to allow vpn connections AND route all your traffic through the system (via Network Address Translation.)  Start by backing up your old iptables configuration and enabling NAT forwarding in the Linux kernel:
    • mv /etc/sysconfig/iptables /etc/sysconfig/iptables.old
    • sysctl -w net.ipv4.ip_forward=1
  • Open up your favorite text editor and copy the following iptables rules into the file.  You will need to save the file as /etc/sysconfig/iptables.  This configuration assumes that eth0 is your public interface and eth1 is your private one; if this is backwards, just swap eth0 and eth1.  It also keeps port 22 open for ssh connectivity.

# Modified from iptables-saved by Bob Rockers
*nat
:PREROUTING ACCEPT [15:1166]
:INPUT ACCEPT [4:422]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [118860:18883888]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i tun+ -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -i tun+ -j ACCEPT
-A FORWARD -i eth1 -o tun+ -j ACCEPT
-A FORWARD -i eth0 -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

  • To make NAT work across reboots, you will need to modify the /etc/sysctl.conf file and change the line net.ipv4.ip_forward = 0 to the following:
    • net.ipv4.ip_forward = 1
  • To make everything permanent type the following:
    • sysctl -p /etc/sysctl.conf
  • Now restart your firewall configuration:
    • systemctl restart iptables.service

That should take care of our server configuration.  I will follow this post up with client configurations for Windows and Fedora 17 KDE installs.  Please feel free to email any fixes/updates to the above configuration if you see something.

Finally, the above solution is susceptible to a man-in-the-middle attack from another client impersonating the server (not a problem for my setup, as I personally know everyone to whom I have issued client certificates.)  The solution is to sign the server certificate with a tls-server-only key and force clients to check this status on connection.  There is more documentation for this setup here and specifics about the easy-rsa setup here.  At some point I will update this tutorial to fix that issue but, for now, this has been a long enough post.
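
While the client write-up is still forthcoming, a minimal client config for this server would look roughly like the sketch below (the hostname and certificate names are placeholders; remote-cert-tls server is the client-side check mentioned above):

# client.conf (hypothetical names; matches the tcp/443 server above)
client
dev tun
proto tcp-client
remote vpn.yourdomain.com 443
ca ca.crt
cert client1.crt
key client1.key
comp-lzo yes
remote-cert-tls server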

Starting Dropbox

My brother wanted a quick explanation of how to create an executable script to start Dropbox.  While I was helping him, he was kind enough to mock my freakishly awesome IBM Model M Unicomp keyboard… the greatest keyboard in the world.  This setup is designed to work with the local tar.gz install of Dropbox on Linux and NOT the rpm-based install (which requires Gnome for the file manager.)

Create a new file in your ~/bin directory called startdropbox.sh with the following content:

#!/bin/bash
~/.dropbox-dist/dropboxd &

After you have saved the file, make it executable by typing:

chmod 755 ~/bin/startdropbox.sh

Now you can start up Dropbox by running that script at any time.
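
To confirm it actually came up (my own addition), look for the daemon process:

pgrep -f dropboxd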

AND I LOVE MY CLICKY KEYBOARD BITCHES!!!

To Any Place Worth Going

One of the best parts of Unix systems is that, fundamentally, they are built as development platforms.  The most common text command interface for Unix is called Bash (the Bourne Again Shell), and it is a full-blown scriptable interface allowing direct interaction with command line programs and giving the user the ability to string those programs together into really powerful applications.  Because of the power of this interface, developers have spent many years improving the ability to use it directly as well.  Things like <tab> completion are well known, but how about reverse command searches, a built-in text editor mode, and shortcuts galore?  I have been trying to use more and more of this “built-in” bash functionality, so below are some of my favorite shortcuts and features.

Shortcuts:

Ctrl + A Go to the beginning of the line you are currently typing on
Ctrl + E Go to the end of the line you are currently typing on
Ctrl + L Clears the Screen, similar to the clear command
Ctrl + U Clears the line before the cursor position. If you are at the end of the line, clears the entire line.  Especially useful when you know you’ve mis-typed a password and want to start again.
Ctrl + K Cut the line after the cursor, inverse of the Ctrl + U
Ctrl + Y Pastes the content from a previous Ctrl + K or Ctrl + U cut.
Ctrl + H Same as backspace
Ctrl + R Search through previously used commands
Ctrl + C Sends SIGINT to whatever you are running (effectively terminating the program.)
Ctrl + D Exit the current shell
Ctrl + Z Puts whatever you are running into a suspended background process. fg restores it.
Ctrl + W Delete the word before the cursor
Ctrl + T Swap the last two characters before the cursor
Alt + T Swap the last two words before the cursor
Alt + F Move cursor forward one word on the current line
Alt + B Move cursor backward one word on the current line
Tab Auto-complete files and folder names (if there is a multiple-option match, hitting Tab twice will list all possible values.)
Alt + . Paste the previous command’s final argument (great for running different commands on the same file path.)

To see a complete list of all bound bash shortcuts you can type

bind -P | less

but you may need to look up some bash hex character values to understand all of them.  What’s more, you can actually bind shortcuts to almost anything you can think of, including actual applications.  For example:

$ bind -x '"\C-e":firefox'

will launch the Firefox web browser from the command line when you hit Ctrl + e.

Another one of my favorite commands is fc (fix command.) If you simply type

fc

fc will copy your most recent command from the bash history into your preferred editor (vi by default on most systems) and allow you to edit it there.  If you save and exit the editor, it will automatically copy the contents back into the bash session and hit enter.  Additionally, if you are interested in editing some other history item, you can type

fc -l

to get a full history with numbers beside them.  Then type

fc <num>

where <num> is the history number you want to edit.  In a former life, my bash terminal and fc were all I needed for most SQL testing.
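
As a quick illustration (the history numbers and commands here are made up), a typical fc session looks like this:

$ fc -l
100  vim schema.sql
101  mysql -u dev testdb < schema.sql
102  mysql -u dev testdb -e 'select count(*) from users;'
$ fc 102    # opens command 102 in your $EDITOR; save and quit to re-run it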