How CIOs can build the future they want with open source
By Chris Wright, CTO and senior VP of Global Engineering, Red Hat
Friday, 16 September, 2022
Just over a decade ago, Marc Andreessen famously stated that software was eating the world. Today, we can definitively update his quote to be more accurate and say “software ate the world”. Simply put, software has taken over our businesses and how we create value for our customers.
Our organisation, Red Hat, is a software factory — and we help our customers become software factories too. With these capabilities, enterprises can build the future they want. For example, organisations can choose where to run their applications based on their business needs with the flexibility of hybrid cloud. A common platform brings repeatability to software production pipelines, helping to avoid hand-crafted mistakes. Organisations can tame the complexity of distributed systems with automation, while security risk in software supply chains can be addressed with modern tools.
Since software is now everyone’s business, developers are essential and the ability to move rapidly from experiment to production is the hallmark of a high-velocity development team.
Hello artificial intelligence
Which leads us to the next big transformation: where software ate the world, artificial intelligence (AI) is now eating software. This is happening because businesses are looking to derive value from data and are leveraging AI to deliver insights from oceans of information. Not only can organisations make better use of data to make more informed decisions, they can also deliver better customer experiences by embedding intelligence into their products and services.
However, AI is not a one-and-done endeavour. You may have a software development pipeline, but have you created an AI development pipeline?
Think about it: your source code is analogous to data and your deployed applications are analogous to deployed machine learning (ML) models. The discipline of going from source code to testing to production of software at scale is well understood but, with AI, are you applying that same discipline from development to deployment, which also needs to happen at scale?
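That "same discipline" can be sketched as a promotion gate: a candidate model only moves to production if its evaluation metrics clear agreed thresholds, just as code only ships after its tests pass. The metric names and thresholds below are illustrative assumptions, not any particular product's API:

```python
# Illustrative sketch: a promotion gate for ML models, mirroring the
# "test before production" discipline of a software pipeline.
# Metric names and thresholds are hypothetical examples.

THRESHOLDS = {"accuracy": 0.90, "auc": 0.85}

def ready_for_production(metrics: dict) -> bool:
    """Promote a candidate model only if every tracked metric
    meets or exceeds its agreed threshold."""
    return all(metrics.get(name, 0.0) >= floor
               for name, floor in THRESHOLDS.items())

candidate = {"accuracy": 0.93, "auc": 0.88}
regressed = {"accuracy": 0.93, "auc": 0.80}

print(ready_for_production(candidate))  # True: clears both gates
print(ready_for_production(regressed))  # False: AUC regressed
```

The point is not the two numbers but the habit: models, like builds, get an automated go/no-go check rather than an ad hoc decision.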
If an average enterprise is made up of a few thousand applications, it will soon be made up of thousands of machine learning models as well. As decisions rely more and more on AI/ML, I want to establish trust in the model making those decisions so I can act on its output with confidence.
Part of building that trust is through:
- Collaboration — helping to build the model
- Transparency — understanding what went into the model
- Auditability — seeing what changes were made to models and the impacts those changes had on the outcomes.
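The transparency and auditability points above can be sketched as a minimal audit record: each model version carries its training inputs and a tamper-evident content hash, so any change to the deployed model is visible and attributable. The field names here are assumptions for illustration, not a standard schema:

```python
import hashlib
from datetime import datetime, timezone

def audit_record(model_bytes: bytes, dataset_id: str, author: str) -> dict:
    """Build a minimal audit entry for one model version.
    Field names are illustrative, not a standard schema."""
    return {
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "dataset_id": dataset_id,
        "author": author,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

v1 = audit_record(b"model-weights-v1", dataset_id="sales-2022Q1", author="data-team")
v2 = audit_record(b"model-weights-v2", dataset_id="sales-2022Q2", author="data-team")

# Differing hashes make it obvious the deployed model changed,
# and the record says who changed it and with what data.
print(v1["model_sha256"] != v2["model_sha256"])  # True
```

A real model registry tracks far more (lineage, metrics, approvals), but even this small record answers the audit questions: what went in, who changed it, and when.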
Life on the edge
For CIOs, the peace of mind that came from data centres and IT assets remaining safely ensconced within the four walls of headquarters is long gone. The advent of the cloud, blazing-fast processors, improvements in wireless networks, and the spread of far-flung but crucial remote operations have come together to make sure of that. But the technical freedoms we take advantage of today aren’t without challenges and this is where we believe edge computing will be transformative.
Edge computing is the ability to generate insights from data and act on these locally where it matters. Intelligent devices are pushing the boundaries of where computing can happen — on earth, in space and wherever else there’s a benefit to enterprise or even humanity itself.
Edge computing can now take place at or near the physical location of either the user or the source of the data — whether that’s an SUV speeding down the highway, sensors monitoring a natural-gas pipeline in the middle of nowhere, or on-board a satellite orbiting the earth.
That is hybrid, and that is the future.
With open source technology, your workloads can span the typical IT footprints — from data centres, to clouds, to the edge. Innovation from open source communities provides the consistency and flexibility to choose where and how you securely build and deploy your applications and ML models.
Risk management
Now let’s talk about security, as there’s plenty of risk out there. The Apache Log4j vulnerabilities demonstrated that enterprises need to be aware of what open source they have deployed and how actively they are managing it.
In fact, 99% of audited codebases contain open source, according to Forrester Research’s ‘State of Application Security 2021’ report. Considering this, we can now claim that ‘open source software ate the world’. But that popularity can also make you a target. In 2021, there was a 650% year-over-year increase in software supply chain attacks aimed at exploiting weaknesses in upstream, open source ecosystems, according to 2021’s ‘State of the Software Supply Chain’ report.
This is without a doubt on the minds of businesses that make software, including ours. And for you, of course, as you continue to use software to grow your business and stand out from the competition. In light of the persistent rise of ransomware and protestware, it also tops the list of government priorities worldwide.
We must make sure that the integrity of software updates is protected and verified across the entire development lifecycle. In other words, the key to enterprise use of open source is knowing what you're using, where it's being used, and how it's being used.
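Verifying integrity can start as simply as checking that an artefact you pull matches the checksum published alongside it before you use it. A minimal sketch — the filename and payload are placeholders standing in for a real downloaded dependency:

```python
import hashlib

def verify_sha256(path: str, expected_hex: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the
    checksum published alongside the artefact."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Placeholder artefact standing in for a downloaded dependency.
with open("artefact.tar.gz", "wb") as f:
    f.write(b"example payload")

published = hashlib.sha256(b"example payload").hexdigest()
print(verify_sha256("artefact.tar.gz", published))  # True
print(verify_sha256("artefact.tar.gz", "0" * 64))   # False: mismatch
```

Production supply chains go further — cryptographic signatures, SBOMs and provenance attestations — but the checksum habit is the floor, not the ceiling.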
So what is the future you want to build? Whatever that future state looks like, we believe it starts with open source software.