Why Automate?

Time and time again we repeat processes manually, often believing that it's faster "just to do it" than to spend time working out how to make a computer do it. While this may be fine the first time, what about the second, third and fourth time? Doing things manually is not only inefficient but also prone to human error, and some errors can be catastrophic.
Automation brings with it what we call the "six side-effects of automation":

  1. faster time to market
  2. faster user/customer feedback
  3. improved productivity
  4. reduced technical risk
  5. improved quality
  6. improved user/customer satisfaction

The way forward often requires careful navigation along two independent paths: finding your way through the capabilities of the myriad, ever-changing tools available, and deciding how best to change your development process so that you can take full advantage of industry best-practice techniques and development strategies.

Whatever you do, you should be thinking about ensuring you have a clean build and a shippable product at least once every day. In order to thrive, every company needs to develop and deliver quality solutions faster than its competitors, and one generally recognised benefit of automation is that it "delivers value faster than it adds cost".

How can Informatics Matters help you?

Our staff have helped realise the "six side-effects of automation" on numerous projects, drawing on decades of professional experience building large, complex, cutting-edge, customer-specific automation tools, as well as working with a number of emerging off-the-shelf tools and frameworks. Our experience comes from a diverse project portfolio that spans the fields of research, commerce and the military.

Automation is built into our philosophy, all the way from automated build tools through testing and deployment. The tools we have chosen facilitate this; we know how to use them together, and how to do so whilst keeping the physical and financial overhead to a minimum.

The experienced individuals at Informatics Matters can provide you with advice and solutions, either on-site or remotely hosted.

Continuous integration

Essentially, the goal is to build and test your product at every opportunity, so that you always have a product build ready to ship, following best-practice Test-Driven or even Behaviour-Driven Development methodologies supported by tools like Cucumber. Testing needs to move beyond the basics by employing advanced analysers and hardening tests to identify faults at the earliest opportunity.
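
To give a flavour of the behaviour-driven style, here is a minimal sketch of a specification written with Spock, a Groovy BDD framework (Cucumber is the tool named above; Spock simply makes for a compact JVM-side illustration, assuming spock-core is on the test classpath). The Basket and Item classes are hypothetical, defined inline to keep the sketch self-contained:

    import spock.lang.Specification

    // Hypothetical classes under test, defined inline so the sketch
    // is self-contained.
    class Item { String name; BigDecimal price }
    class Basket {
        private final items = []
        void add(Item item) { items << item }
        BigDecimal getTotal() { items*.price.sum() ?: 0.0G }
    }

    // The behaviour is described first, in given/when/then blocks,
    // and the implementation is then written to make it pass.
    class BasketSpec extends Specification {
        def "adding an item increases the basket total"() {
            given: "an empty basket"
            def basket = new Basket()

            when: "an item costing 2.50 is added"
            basket.add(new Item(name: 'widget', price: 2.50G))

            then: "the total reflects the new item"
            basket.total == 2.50G
        }
    }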

According to Larry Bernstein of Bell Communications Research:

"it takes three times the effort to find and fix bugs found during system testing than when found by the developer. It also takes ten times the effort to find and fix bugs in the field than during system testing."

"I think test-driven design is great. But you can test all you want and if you don't know how to approach the problem, you're not going to get a solution."

(Peter Norvig)

We have the experience to show you how best to utilise your resources in continuous integration.

Although our developers have access to the tests, and are encouraged to add tests as they add features, we defer the majority of testing to automated build-management schedulers like Jenkins, Travis CI and GitLab Runners, with build artefacts stored in a Docker Registry or in binary repositories like Artifactory and Nexus. Jenkins not only builds our code in "jobs"; we also execute static analysers to help find obscure bugs, and code coverage and profiling tools to highlight vulnerable, untested areas of the product.
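
As an illustration of the kind of "job" we mean, here is a minimal sketch of a declarative Jenkinsfile; the stage names, Gradle tasks and email address are purely illustrative rather than a real pipeline of ours:

    // A minimal declarative Jenkins pipeline (Jenkinsfile).
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './gradlew assemble'
                }
            }
            stage('Analyse & test') {
                steps {
                    // Run the test suite together with (hypothetical)
                    // static-analysis and coverage tasks.
                    sh './gradlew check jacocoTestReport'
                }
            }
        }
        post {
            // Stay quiet on success; email only when something fails.
            failure {
                mail to: 'team@example.com',
                     subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                     body: "See ${env.BUILD_URL}"
            }
        }
    }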

Our automated "jobs" utilise rich, modern build tools like Gradle and Groovy, and can run our build processes in parallel on any number of dynamically provisioned servers, with each task triggered by the success of its up-stream dependencies. If all goes well we hear nothing from our continuous integration framework; when things fail we receive an email. In all instances we have access to detailed reports.
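
For flavour, here is a small build.gradle sketch showing the kind of up-stream task wiring described above (it assumes the standard Java and JaCoCo plugins; the dependencies are illustrative):

    // build.gradle - an illustrative sketch of dependent tasks.
    plugins {
        id 'java'
        id 'jacoco'   // code-coverage reports
    }

    repositories { mavenCentral() }

    dependencies {
        testImplementation 'junit:junit:4.13.2'
    }

    // Only produce the coverage report once the up-stream tests pass.
    jacocoTestReport {
        dependsOn test
    }

    // 'check' is Gradle's aggregate verification task; wiring the
    // report into it means a plain './gradlew check' runs everything.
    check.dependsOn jacocoTestReport

In a multi-project build, running Gradle with --parallel then lets decoupled projects build concurrently.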

Our experience:
  • TDD/BDD, Cucumber
  • Hardening
  • Jenkins, Travis, GitLab
  • Binary repositories, Artifactory, Nexus
  • Static analysis, code coverage & profiling
  • Maven, Gradle & Groovy
  • Dynamically provisioned servers
  • Detailed reports

Whatever your language, if you need help with continuous integration in your project we have the experience to help.

Continuous deployment

Once you are in the position of deploying your product components into a staging environment using fully automated tools and strategies, as we do here at Informatics Matters, you are able to test the wider-scale business logic, allowing you to deliver your application to production platforms, with confidence, at the "click of a button".

Delivery is, ideally, a series of tests that take place in a production-like environment: deployment to staging environments, ideally created and provisioned from clean compute instances, followed by the execution of application-level acceptance tests and a final, manual step that delivers the release to your customer.
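
A minimal sketch of that flow as a declarative Jenkins pipeline; the deployment scripts and environment names are hypothetical:

    // Illustrative continuous-delivery stages: automated staging
    // deployment and acceptance tests, then a manual gate before
    // anything reaches production.
    pipeline {
        agent any
        stages {
            stage('Deploy to staging') {
                steps {
                    sh './deploy.sh staging'   // hypothetical script
                }
            }
            stage('Acceptance tests') {
                steps {
                    sh './run-acceptance-tests.sh staging'
                }
            }
            stage('Release') {
                steps {
                    // The final "click of a button".
                    input message: 'Deploy to production?'
                    sh './deploy.sh production'
                }
            }
        }
    }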

"There is an old saying with software that three years from now, no one will remember if you shipped an awesome software release a few months late. What customers will still remember three years from now is if you shipped a software release that wasn't ready a few months too soon. It takes multiple product releases to change people's quality perception about one bad release."

Scott Guthrie (VP, Cloud & Enterprise Group, Microsoft)

We almost exclusively employ containers to package our application components, and this allows us to be confident not just about the platform but also about the provenance of the delivered application images. Our staging and production environments are built on scalable on-site and cloud-based compute instances, organised into a resilient cluster managed by Red Hat's OpenShift Container Platform.

Its open-source upstream project, OpenShift Origin, is

"built around a core of Docker container packaging and Kubernetes container cluster management, Origin is also augmented by application lifecycle management functionality and DevOps tooling."

Our experience:
  • Deployment and delivery
  • Staging environments
  • Acceptance testing
  • Containers
  • Kubernetes and OpenShift

Whether your production environment utilises containers, packages or binaries, if you need help with continuous deployment in your project we have the experience to help.

Orchestration

Provisioning your compute resources on demand, from known "clean" sources, is an important requirement in order to create a reliable testing environment. Having precise control over the operating system, its patches, and all the libraries and tools installed is crucial in order to avoid the "it works on my machine" dilemma. This further reduces testing effort and technical risk while also improving product quality.

"A 2% reduction in defects is usually accompanied by a 10% increase in productivity."

(Lynas, Harvard Business Review, August 1981)

This area of automation is commonly referred to as "Infrastructure as Code", or IaC: a collection of tools that simplify the definition and creation of execution platforms.

While running tests on your development machine is possible, it is not recommended. Here at Informatics Matters we make extensive use of IaC tooling to dynamically provision our test, staging and production units.

We use HashiCorp Packer's JSON-based configuration to build reference snapshot images for our platforms, rolling out new baseline images in one or two minutes. With a few HCL configuration files we employ HashiCorp's Terraform to dynamically build and update our cloud-based compute clusters, where a fully networked set of servers can be provisioned in less than 60 seconds. This works in tandem with Ansible's powerful Python-based orchestration engine, which we use to deploy our container execution platform of choice, Red Hat's OpenShift Origin.
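
To make the sequence concrete, here is a toy Groovy sketch of the chain just described; the file names are hypothetical and all of the real work happens inside each tool:

    // A toy driver showing the order in which the tools are chained:
    // bake a base image, provision the cluster, then configure it.
    def run(String command) {
        println command
        def proc = command.execute()
        proc.waitFor()
        if (proc.exitValue() != 0) {
            throw new RuntimeException("Failed: ${command}")
        }
    }

    run('packer build base-image.json')    // reference snapshot image
    run('terraform apply -auto-approve')   // cloud compute cluster
    run('ansible-playbook site.yml')       // deploy OpenShift on top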

We also have experience with other orchestration platforms, like Puppet's Ruby-based framework, and with automated virtual machine creation using Vagrant.

We execute application software on AWS EC2 instances and Digital Ocean Droplets, as well as on Scaleway and OpenStack infrastructures.

Our experience:
  • IaC & orchestration
  • JSON, YAML, Python, Ruby
  • Packer, Terraform, Vagrant
  • Ansible, Puppet
  • AWS EC2, Scaleway, Digital Ocean, OpenStack
  • OpenShift Origin
  • Virtual machines

Whether your deployment platform is proprietary, embedded or a conventional compute resource, if you need help simplifying and automating your infrastructure using industry-leading orchestration tools we have the experience to help.

Monitoring

Whether it is monitoring and analytics with off-the-shelf frameworks like Datadog's Python-based platform, OpenShift's logging and metrics using Kibana and Prometheus, or proprietary complex event processing ("CEP") solutions, we can help monitor the health of your application. We provide monitoring, pattern recognition, and machine- and deep-learning techniques to automate any remedial actions, in order to rapidly return a faulty system to an operational state.
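
As a much-simplified illustration of automated remedial action, here is a toy Groovy sketch that polls a hypothetical health endpoint and restarts a hypothetical service when it stops responding; a real deployment would delegate this to a monitoring framework:

    // Poll a health endpoint; take remedial action when it fails.
    // The endpoint and the service name are hypothetical.
    boolean healthy(String endpoint) {
        try {
            def conn = new URL(endpoint).openConnection()
            conn.connectTimeout = 2000
            conn.readTimeout = 2000
            return conn.responseCode == 200
        } catch (IOException ignored) {
            return false
        }
    }

    while (true) {
        if (!healthy('http://localhost:8080/health')) {
            println 'Service unhealthy; restarting...'
            'systemctl restart my-service'.execute().waitFor()
        }
        sleep 30000   // poll every 30 seconds
    }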

"When it comes to technology, everything can and will fail. The secret to surviving failures is to expect everything to fail and design for those failures".

Gordon Bell (Disaster Recovery Planning, Chapter 13)

We have experience building error detection and recovery systems and are comfortable with the needs of big data applications for research, commercial and telecommunications establishments.

Our experience:
  • CEP
  • Realtime monitoring
  • Pattern recognition, machine and deep learning
  • Big data, telecommunications
  • Datadog, OpenShift

We are experts in realtime analytics and can call upon decades of experience developing complex high-performance realtime pattern recognition applications in order to deliver a solution that's right for you.