- Architectural design of performant, easily maintained systems supporting high transaction rates and large data volumes;
- Technical leadership of small teams dedicated to producing great software;
- Design and implementation of sophisticated, high-availability, high-performance server-side Web Services and Web Applications using Java technologies;
- Optimising server-side performance and security;
- Physical and logical database design and optimisation;
- Product delivery using Agile and related methodologies on time, to budget, and to specification;
- Writing high-quality technical documentation;
- Communicating and negotiating with stakeholders to optimise design and delivery;
- Java and Web Technologies
- J2EE, JSF, Struts, Spring, Hibernate, EclipseLink, AWS SWF using Flow, Hadoop, AWS EC2 and EMR
- Apache, HTML/XHTML, CSS, JSP, XML, JSON, XSLT, XML Schema, XSL, Servlets, Web Services
- Application Servers
- JBoss, Glassfish, Jetty, Tomcat, Orion
- Operating Systems
- HP-UX, Solaris, Linux (RHEL, CentOS, Ubuntu), BSD, macOS, Windows, MS-DOS, Xenix
- Databases
- DynamoDB, Cassandra, Oracle, Postgres, Ingres, MySQL, AWS RDS, xBase
- Networking and Messaging
- Java sockets (TCP, UDP), HTTP, JMS, SOAP, REST (Jersey + JSON), JAX-WS, Axis
- Tools
- AWS, Docker, Terraform, Ansible, Vagrant, NetBeans, Eclipse, IntelliJ, JBuilder, Xcode, Gradle, Spock, Maven, Ant
- Methodologies
- Agile, Test Driven Development, Kanban, Waterfall, OOD
- Version Control
- Git, Subversion, CodeCommit, Visual SourceSafe
Chief Technical Officer, Chrysalis Analytics,
November 2017 — present
Co-founder of Chrysalis Analytics, a member of the Leap Beyond Analytics group. Responsible for the management and direction of the company, while continuing to provide senior engineering and security consultancy through Leap Beyond Analytics.
Senior Data Engineer, Think Big Analytics,
January 2017 — November 2017
At Think Big, I was part of teams engaged with clients across several sectors — Banking, Health Care, Telecoms and Insurance — delivering consultancy services and software implementations based around analytics over large volumes of Big Data using open-source technologies. Much of my work focussed on DevOps activities and on security planning and implementation; engagements frequently involved Hortonworks Hadoop technologies, Kylo and Apache NiFi.
DevOps activity primarily centred on AWS technologies, with a mixture of Terraform, Ansible, Jenkins and Git providing software-defined infrastructure (Infrastructure as Code). I also used my time here to begin investigating Scala, and to brush up my knowledge of Spark and associated technologies.
Senior Software Engineer, Camelot Global,
October 2015 — November 2016
At Camelot I worked within the Instant Win Game (IWG) team on a set of web services providing the IWG service. In addition, I designed and built an Event Logging Service that collated and distributed key business events from across the suite of services comprising the overall platform.
These services were characterised by high transaction rates and a need to be extremely reliable. Providing guarantees about the state of customer transactions was as important as providing a very secure system. Part of the strategy was to ensure there was a reliable single point of truth (the Cassandra database), addressed by an arbitrary number of stateless, horizontally scalable service instances, married to a solid understanding of the latency between receipt of a request and the point at which a consistent read was available from the data store.
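The read-after-write handling described above can be sketched in outline. The following is a minimal, hypothetical illustration of the pattern (poll a read until it satisfies a consistency check, or give up once a deadline passes); the names and structure are mine, not Camelot's actual code, and a real implementation would read from Cassandra rather than an arbitrary supplier:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Optional;
import java.util.function.Predicate;
import java.util.function.Supplier;

/** Sketch of a read-after-write helper: poll a read until it satisfies a
    consistency predicate, or give up once a deadline passes. Hypothetical
    names; in production the supplier would wrap a Cassandra read. */
final class ConsistentRead {
    static <T> Optional<T> await(Supplier<T> read,
                                 Predicate<T> isConsistent,
                                 Duration timeout,
                                 Duration pollInterval) {
        Instant deadline = Instant.now().plus(timeout);
        while (true) {
            T value = read.get();
            if (value != null && isConsistent.test(value)) {
                return Optional.of(value);   // consistent read observed
            }
            if (Instant.now().isAfter(deadline)) {
                return Optional.empty();     // caller decides how to degrade
            }
            try {
                Thread.sleep(pollInterval.toMillis());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return Optional.empty();
            }
        }
    }

    private ConsistentRead() {}
}
```

Knowing the measured latency between a write and a consistent read lets the timeout and poll interval be chosen deliberately rather than guessed.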
The code was written in a mixture of Java 8 with Spring 4.x and Groovy, with Groovy used mainly for unit and integration tests run by the Spock framework. All services were deployed into Tomcat containers and, where appropriate, were backed by Cassandra databases. REST interfaces were brokered using Apache CXF, with assistance from the Jackson libraries for serialisation of JSON. The build process used Gradle, providing a consistent and seamless development cycle all the way from unit and integration tests on the desktop through to the construction and deployment of RPMs into QA via the Jenkins CI environment.
In this role I was responsible for the correctness of implementations of designs passed across from the architectural team, and for contributing to the technical correctness and suitability of architectural decisions. In addition, I had day-to-day responsibility for ensuring that the code base was clean, solid, and suitable for deployment to a QA environment at any time, and for providing third-line engineering support for production instances of the services.
My latest project was to work with the DevOps team and others to deploy these services into the Camelot UK environment. In keeping with Camelot Global's strategy of focusing on SaaS, or on deploying 'black boxes' into a customer's environment, we bundled the services and a variety of supporting technologies into a distributed Docker Swarm cluster. This involved very rapid learning of a large number of technologies in this area: management and provisioning of virtual machines into various environments and cloud vendors was performed by Ansible, which in turn used Docker Swarm to create a distributed cluster into which we deployed the applications. The cluster included a Cassandra cluster, and we made use of Consul for DNS resolution and for distributing state across the whole cluster. The services were fronted inside the cluster by HAProxy, allowing us to expose a single point of entry while providing very high availability within the cluster. Configuration of the wiring within the cluster was dynamic and automatic, relying on Consul as the point of truth for cluster state.
My role in this project was to rework the service projects to build Docker images automatically using Gradle in our Jenkins CI pipeline, and to coordinate the technical resources to keep the project on track to deliver a solution allowing automated and repeatable deploys into desktop, testing and production environments. This project also served as a template for me to evangelise the use of Docker and related technologies across the rest of the projects, assisting other teams and the QA team to rebuild the development and testing framework around a Docker Swarm cluster as the target deployment environment for all services.
The biggest challenge in this role came over the first few weeks, when I had to rapidly learn Groovy, Gradle, Spock and CXF, having had no previous exposure to these technologies. From a standing start, I can confidently say that I was fully competent within the first four weeks. I also had to get to grips with the different requirements for logical and physical data design using Cassandra, which has slightly different semantics and emphases to other NoSQL databases I have used.
The latest project required me to rapidly assimilate and integrate a cornucopia of technologies around Docker - the Docker suite itself, Ansible, Consul, Registrator and Vagrant, to name a few. This was an interesting experience: learning these technologies in depth under tight time constraints for delivery, with little scope for error in deployment.
Technical Design Authority, Lithient,
November 2014 — October 2015
Lead Java Developer, Lithient,
March 2014 — November 2014
Senior Developer, Somo,
February 2012 — March 2014
I came on board with Somo early in the Apptimiser project (later renamed Lithient) as the second technical hire. During my time I helped build out the engineering team, while at the same time rapidly constructing a sophisticated, highly performant and highly resilient system to support analytics in the Mobile Marketing arena.
As the team expanded, so did my role, culminating in the position of Technical Design Authority. In this role I had responsibility for all technical design, taking business requirements through design and articulation to the point of concrete change requests. The reverse also held: I constantly contributed to the direction and prioritisation of business requests to ensure they were achievable and desirable within the broad development roadmap. I was also responsible for maintaining standards of quality and process, developing new processes where required, and the day-to-day management of the development team's efforts. My team leadership responsibilities included balancing work across the team to deliver Agile sprints effectively, developing the skills of team members, and coordinating the development team's efforts with other specialist groups within Lithient. Finally, I was responsible for ensuring successful, frequent software releases to the production environments, and for ensuring that production and other runtime environments were monitored and maintained.
I am proud of the efforts I made to promote an environment dedicated to building out extremely high quality code. To support this I introduced and enforced rigorous coding and design standards in a TDD-focused Agile environment. I placed an emphasis on peer code review using Fisheye and Crucible and backed this with automated static code analysis using tools such as FindBugs, PMD and Checkstyle running within a CI environment managed by Jenkins.
To provide a system that was low-maintenance, able to support our high transaction rates, and indefinitely scalable, I built out an architecture largely using J2SE, with J2EE elements used cautiously and abstracted away. This was a deliberate decision to allow us to deploy to very lightweight application containers (Jetty, after evaluating Geronimo and Grizzly). A welcome side effect of this decision was a lower barrier to entry for junior coders, who could produce good solutions without having to engage with the broader complexity of J2EE or Spring.
An example of this restrained adoption of key J2EE technologies was moving the persistence layer to EclipseLink, providing ORM via JPA, and moving our messaging to a stand-alone clustered HornetQ installation for use with JMS. In both instances I provided an abstraction layer that removed all complexity of JMS and JPA, reducing the interface to simple get/put bean semantics.
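The shape of such an abstraction can be sketched as follows. The interface and names here are hypothetical, and an in-memory map stands in for the real EclipseLink/JPA-backed implementation so that the sketch is self-contained:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch: simple get/put bean semantics hiding the
    persistence machinery from callers. */
interface BeanStore<K, V> {
    void put(K key, V bean);    // in the real layer: a JPA persist/merge
    Optional<V> get(K key);     // in the real layer: an entity-manager find
}

/** In-memory stand-in used here so the sketch is runnable; the production
    implementation delegated to EclipseLink behind the same interface. */
final class InMemoryBeanStore<K, V> implements BeanStore<K, V> {
    private final Map<K, V> beans = new ConcurrentHashMap<>();

    @Override public void put(K key, V bean) { beans.put(key, bean); }
    @Override public Optional<V> get(K key) { return Optional.ofNullable(beans.get(key)); }
}
```

The value of this design is that callers never touch an EntityManager or a JMS session directly, and the backing technology can be swapped without changing call sites.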
In addition to code and system design and implementation, I was instrumental in the design of logical and physical data models, and in implementing controlled development methodologies for maintaining and updating the databases using database migrations. Initially these were simple MySQL 5.x databases running on dedicated servers, but as part of a general migration of services into the Amazon AWS cloud they became managed RDS instances. Alongside these traditional relational databases, I introduced Amazon's DynamoDB to hold our transactional data.
During my tenure, I took a proof-of-concept of Apptimiser written by third parties and turned it into a sophisticated, robust, enterprise-quality service. When I began, the system could handle at most around 70 transactions per minute; I left it able to handle over 8000 transactions per minute, scalable indefinitely by adding more server instances. The system had multiple redundancies and self-healing strategies built in, and we were close to being able to introduce Continuous Deployment driven from the Continuous Integration environment. As it was, many of our periodic upgrades did not require a service outage, and we had over 99.9% uptime.
I introduced the following technologies, services and practices to Lithient and the engineering team, and undertook appropriate training of the team, creation of usage and maintenance documentation, and evaluation of alternatives prior to adoption:
- Jetty as the standard lightweight servlet container for deployment, including a migration from JBoss and transitioning from that server's embedded HornetQ to a stand-alone load balanced HornetQ cluster;
- REST as a strategy and philosophy for the design and delivery of communication between distributed and scalable services;
- Jersey to provide RESTful interfaces to our services using JAX-RS and JAXB. Automated testing around this was mainly managed using Grizzly;
- Jackson for JAXB serialisation of objects to and from JSON;
- Varnish to provide caching, load-balancing and fail-over semantics in front of the RESTful web services;
- Crucible and Fisheye to facilitate peer code reviews and simplify management and monitoring of code changes;
- JaCoCo to measure unit and integration test coverage, automatically managed by Maven - this included automatic build failures in the CI environment if test coverage did not meet mandated standards;
- Amazon S3 for a scalable, high-performance file store. This was chosen to allow exposure of some files as web resources, and as a data source for Pig scripts running in Amazon EMR clusters. It also gave us a low-cost, long-term archive of our transactional data persisted as JSON objects;
- Amazon SWF and Flow as a workflow management system which coordinates complex workflows performing our batch-mode data analysis and aggregation activities;
- Amazon DynamoDB to provide a high-performance data store for very large volumes of transactional data;
- JMeter to allow for the development of repeatable and reliable regression test suites, and performance and stress testing;
- RunDeck to provide a web interface, with appropriate access control, allowing QA and other internal staff to run operational jobs;
- Full use of the Maven lifecycle to manage software releases, including publication of reports and documentation via the site plugin;
- Push-button software releases via Jenkins;
- Formal software release plans and release notes;
- Database migrations using Flyway;
- Mockito as a JUnit mocking framework;
- slf4j to stabilise and unify logging;
- Joda-Time as a replacement for Java's non-thread-safe Date/Time classes;
- HSQLDB to support automated integration testing managed by Maven.
One great thing about my time with Lithient was the chance to learn the details of Mobile Marketing from the very best practitioners. On the technical side, the main opportunity was to learn many of the services in the Amazon AWS stack, particularly DynamoDB, SWF, S3, EMR and EC2. I also gained some exposure to ELB and RDS, but was not directly responsible for the design or implementation of these.
I had some opportunity to dip my toes into Android coding, and was fortunate enough to undertake a formal iOS training course. This left me able to maintain the Lithient SDKs for both platforms, and to specify and review changes to those SDKs.
Software Engineer, Transaction Network Services,
May 2010 — November 2011
In this role I worked in the Continuous Engineering team to enhance, maintain and support a range of cutting-edge and legacy J2EE products for the Payment Industry. The role required sophisticated and rapid problem-solving, prioritisation and resolution skills, and a pragmatic approach to providing solutions that satisfied both the end customer and the enterprise, coupled with exceptional Java development skills. A significant component of this role was oversight and maintenance of software standards for new products, and continuous improvement of the security, reliability and maintainability of legacy systems.
The products in place were all internet-facing and oriented around secure communication of transaction data, backed by strict adherence to and compliance with PCI-DSS. All products were written in Java, leveraging Spring for resource injection along with a variety of other modern technologies including JMS, JPA and JAAS. The development environment was Agile, with a very strong emphasis on automated unit, integration and regression testing coupled with a traditional staged release environment. Kanban was used for managing maintenance activity.
Both Eclipse and IntelliJ were used for development, with builds brokered by a mixture of Ant and Maven. All development and maintenance activity was performed against a Subversion repository, and deployed onto system and integration test hosts via a continuous integration environment based around Hudson. In-house documentation was written and published via a Confluence CMS instance, and I was a significant and avid contributor to this documentation.
During my time at TNS, I participated in PCI-DSS mandated security training, and kept well abreast of current issues and solutions related to web-facing systems, in particular the security, confidentiality and auditing of financial systems.
Successes at TNS include:
- significantly improving the security and reliability of key products;
- implementing a controlled and documented production mirror for the CE team;
- improving the performance of a financial batch management system by an order of magnitude;
- significantly increasing and improving internal documentation and standards.
Software Engineer, Salmat/HPA,
2003 — May 2010
Responsible for the design and development of products oriented around fast, high-availability, complex J2EE Web Services and Web Applications to support the organisation’s business process outsourcing activity. These products were characterised by the need to support very large data sets and high transaction rates.
The bulk of the products were developed as a set of loosely connected J2EE web services running within a full J2EE environment (JBoss, Tomcat and Orion) and communicating via SOAP. Those services backed by a database used a mixture of Hibernate and JDBC for persistence against Oracle, Postgres and MySql databases. A number of the products made extensive use of XML for data interchange, and XSLT for presentation.
Successes at Salmat/HPA include:
- design and creation of a user authorisation and security service; a complex tool for tracking physical materials and virtual documents throughout the enterprise; and systems for persisting very large volumes of scanned materials;
- design and creation of a sophisticated multi-user system supporting the national NAPLAN initiative, extensible to any student assessment initiative;
- leading the NAPLAN development team to deliver the correct solution, on-time and to very aggressive performance requirements;
- promoting the use of Continuous Integration via Hudson, and a TDD approach;
- research and implementation of standards for development using industry best practice in the J2EE and Web Services realm;
- devising and promoting standards for service installation, configuration, documentation and management.
My dedication to quality and robustness, and a marked willingness to work whatever hours were necessary to fulfil corporate objectives and requirements, saw me lauded on several occasions through the national Employee Recognition program.
Database Administrator / Programmer, Qld Police Service,
1998 — 2003
Primarily dedicated to the creation of tools to support very large-scale data conversion and cleansing activities. In addition, responsible for the design and implementation of Database Administration tools and processes, and for database design in support of other development activity. I also designed and managed the corporate-wide rollout of Ingres II 2.0 to replace a mix of older RDBMS.
Tools were created in a mixture of C, C++, Unix shell scripting and Ingres ABF. The nature of the enterprise meant the overall development methodology was a traditional Waterfall; however, through leading small project teams, I was able to begin introducing aspects of XP and other Agile methodologies. The tools and processes developed needed to support very large data sets and extremely tight security requirements, and to meet very aggressive performance requirements.
Database Administrator / Programmer, Qld Department of Natural Resources,
1995 — 1998
Responsible for database design and analysis against very large Ingres installations, and creation of tools and procedures for performing maintenance, analysis and data conversion/cleansing against those large data sets. Worked on the IVAS, IVASe and LGIP projects, to design and create a suitable tool set in a mixture of C, C++, Ingres ABF and Unix Shell Scripting.
Senior Analyst/Programmer, Database Administrator, Pine Rivers Shire Council,
1989 — 1995
Responsible for the design and maintenance of a broad range of local government administration and financial systems, initially using MUMPS, but in later years working in C and Ingres ABF. I was responsible for creating and promoting standards and processes for the use of Ingres within the organisation.
Programmer / Technical Support, Shannon Robertson Systems,
1988 — 1989
Development and support of MS-DOS based small business systems and support systems for the agricultural industry, including debtors/creditors systems, feedlot management products, and stock breeding/stock book programs designed to integrate with the ABRI Breedplan project.
Secondary School Teacher, Mathematics and Science, Qld Department of Education,
1987 — 1987
Having taught secondary Mathematics and Science in a remote outback town, I acquired excellent communication, negotiation and time management skills. I maintain a professional interest in educational and didactic techniques, policies and trends.
- Bachelor of Science (Mathematics, Physics, Instrumentation/Computing) Griffith University, 1983 — 1986
- Graduate Diploma of Teaching (Secondary School, Mathematics/Science), Queensland University of Technology, 1985 — 1986
- Senior Certificate (990 TE score) Mitchelton State High School, 1979 — 1982