On the 11th of April 2019 part of the Proteon crew drove out to Paris to attend the local OpenShift meet-up. With over 50 attendees, a lot of OpenShift enthusiasts came out to discuss the past, present and future of OpenShift. The presentations were mostly use cases and examples of how one can implement OpenShift.
OpenShift at Société Générale
Soufiane Matine and Eric Larrouy talked about the recent OpenShift developments at Société Générale. Société Générale started their digital transformation programme in 2014. The goal was to shorten the time-to-market while continuing their quality-improvement drive.
It has been Société Générale’s ambition to move within five years to an environment built on Microsoft Azure and OpenShift. This meant standardising on Azure after a period of mixed AWS and Azure use. The programme rests on three basic elements: Continuous Delivery, cloud hosting and DevOps coaching. The main driver is standardisation of tooling: the dozens of developers across the different projects, previously working with Docker UCP and plain Kubernetes, were coached to use the same OpenShift versions.
The OpenShift environment allows building Proofs of Concept (POCs) off-premises and deploying them anywhere: a POC can be developed on an external platform to demonstrate its feasibility. Another advantage of the OpenShift environment is that it lets developers teach themselves a wide variety of tools. The essential step forward is that applications in the OpenShift setup can “get started in minutes”.
OpenShift at GameRefinery
Tero Ahonen of GameRefinery is a long-time OpenShift user and talked about the five iterations of their OpenShift architecture. GameRefinery has been using OpenShift Online since OpenShift 2. Tero explained how they run their SaaS service and how they use pipelines to deploy their application stack to multiple OpenShift Online clusters.
At first, they used OpenShift 2 (the one before Kubernetes) for everything: the data-processing builds, the application and the MongoDB database. When OpenShift 3 came out, they migrated to it and switched to an external database service. They selected MongoDB Atlas because MongoDB run inside OpenShift is not as fast and resilient as a specialised database service. It helped that MongoDB Atlas runs in the same AWS region as their OpenShift 3 cluster, preventing latency issues.
As a next step, they moved the batch jobs out to bare metal to use as much processing capacity as possible.
They then added a US region to reduce latency for US customers. The hard part of multi-cloud is the data, so the US region uses a read replica, a MongoDB Atlas feature.
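With a read replica in place, routing reads to the nearest replica is a matter of the standard MongoDB connection-string option `readPreference`; writes still go to the primary. A minimal sketch, with a hypothetical cluster host and database name:

```text
mongodb+srv://app-user:<password>@cluster0.mongodb.net/gamedb?readPreference=nearest
```

With `readPreference=nearest`, a client in the US reads from the US replica while a client near the primary reads locally, which is what makes the read-replica setup pay off for latency.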
Finally, they added Cloudflare to cut latency for resources that are easily cacheable. He made a very simple calculation: Cloudflare costs $70 per month and handles 50% of the load; you cannot spend your money more wisely!
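Tero’s back-of-the-envelope reasoning can be sketched in a few lines. The $70/month plan price and the ~50% offload are from the talk; the monthly request volume is a hypothetical number purely for illustration:

```python
# Back-of-the-envelope CDN cost check.
# $70/month and ~50% offload are from the talk; the traffic volume is hypothetical.
total_requests_per_month = 10_000_000  # assumed traffic volume
cdn_share = 0.5                        # fraction of requests Cloudflare serves from cache
cdn_cost = 70.0                        # USD per month

offloaded = total_requests_per_month * cdn_share
cost_per_million_offloaded = cdn_cost / (offloaded / 1_000_000)
print(f"{offloaded:,.0f} requests offloaded, "
      f"${cost_per_million_offloaded:.2f} per million offloaded requests")
```

At these assumed numbers the CDN serves five million requests for $14 per million, before even counting the origin capacity you no longer need to provision.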
The lessons Tero has learnt and wants to share with us:
- Monitor and Profile your setup
- Deploying to multiple clouds is easy, but you need to solve the data issue
- Images and routing (in OpenShift) are king
- Put your K8s YAMLs in version control
- Automate as much as possible (but be smart)
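The version-control lesson takes only a couple of commands. A minimal sketch, assuming a hypothetical app called “myapp”; against a live cluster you would export the real object with `oc get ... -o yaml` instead of writing the manifest by hand:

```shell
# Track Kubernetes/OpenShift manifests in git (hypothetical app "myapp").
mkdir -p manifests
git init -q manifests
git -C manifests config user.email "you@example.com"
git -C manifests config user.name "Your Name"

# Against a live cluster you would export the object instead, e.g.:
#   oc get deployment myapp -o yaml > manifests/myapp-deployment.yaml
cat > manifests/myapp-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
EOF

git -C manifests add myapp-deployment.yaml
git -C manifests commit -q -m "Track myapp Deployment manifest"
```

From there, every change to the cluster configuration leaves a reviewable history, which is exactly what makes redeploying to a second cluster (or a second cloud) reproducible.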