Hyperscalers provide. From the outset, the major Cloud Service Provider (CSP) hyperscalers established themselves as Software-as-a-Service (SaaS) entities that supply, serve and sustain. That core ethos hasn’t changed, but two decades of progress in cloud computing have taken us to a higher plane of creativity: we no longer just consume provided cloud services, we also create in and of the cloud itself.
This of course is the current era, an approach we call cloud-native development.
Keen to solidify the virtual software spanner set in its SaaS toolbox (developers actually do call them toolboxes) and provide the new cloud-native era with tools for the job, AWS used a recent user symposium in New York to explain some of what it would like to think of as its most progressive offerings in this space.
Glue guns & code companions
As we mercifully get beyond the point where we need management consultants to tell us that data is the lifeblood of business, AWS says it has recognized the need to provide cloud functions that work with disparate data sources. While some data sits in proprietary, locked-down on-premises repositories held by individual companies, organizations now also want to work with data in data marketplaces, data exchanges and open data sources, as well as with the Large Language Models (LLMs) driving generative AI development.
Realizing that woodworker’s epoxy resin or even superglue variants don’t actually work in the abstracted world of cloud computing, AWS offers AWS Glue to help firms integrate data sources from multiple channels. The technology is founded on serverless infrastructure principles, i.e. engineers do not have to provision the base layers needed to make it operate; they can flick a switch (well, almost) and use as much gas as needed without worrying about the server state behind the service.
Because different users want to ‘glue together’ data in different ways, AWS Glue provides different authoring experiences for data science professionals and cloud-native software engineers to build data integration jobs. Using a degree of Machine Learning (ML) and data analytics, the technology offers a notebook option for data scientists, who tend to run queries interactively and retrieve results immediately as they author data integration jobs.
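To make the idea of an interactively authored data integration job concrete, here is a minimal sketch using only the Python standard library. It is not the AWS Glue API; it simply illustrates the shape of the task a Glue notebook user might iterate on, joining a relational source with a JSON feed. All names and sample data are invented for illustration.

```python
import json
import sqlite3

# Relational source: a stand-in for an on-premises database table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Acme Ltd"), (2, "Globex Inc")])

# JSON source: a stand-in for a feed from a data exchange or open data set.
orders_json = '[{"customer_id": 1, "total": 120.0}, {"customer_id": 2, "total": 75.5}]'
orders = json.loads(orders_json)

# The 'integration job': enrich each order with the customer name
# looked up from the relational source.
names = dict(conn.execute("SELECT id, name FROM customers"))
enriched = [{"customer": names[o["customer_id"]], "total": o["total"]}
            for o in orders]
print(enriched)
```

In a Glue notebook the equivalent steps would typically target Spark DataFrames against managed connectors, but the interactive write-run-inspect loop is the same.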
This interactive experience can accelerate building data integration pipelines, says AWS, with the team also pointing to the general availability of Amazon CodeWhisperer, an AI coding companion that uses foundational models under the hood to improve developer productivity.
What is a foundational model?
For some quite accessible data science learning here, we can remind ourselves that the term foundational model describes Artificial Intelligence (AI) model production where the training data is broad enough to be adapted to a variety of downstream tasks. The Stanford Institute for Human-Centered Artificial Intelligence (HAI) described foundational models (also sometimes called base models) in 2021 as part of the ‘paradigm shift with the rise of models’ and said, “We call these [here detailed] models foundational models to underscore their critically central yet incomplete character.”
Amazon CodeWhisperer works by generating code suggestions in real-time based on developers’ comments in natural language and prior code in their integrated development environment (IDE).
AWS has also announced the Amazon CodeWhisperer Jupyter extension to help Jupyter users by generating real-time, single-line or full-function code suggestions for Python notebooks on Jupyter Lab and Amazon SageMaker Studio. For context here, as noted on GeeksforGeeks, Jupyter Notebook is an open source web application that allows users to create and share typically data-science workload-related documents that contain live code, equations, visualizations and narrative text.
“AWS Glue Studio notebooks now support Amazon CodeWhisperer for AWS Glue users to improve your experience and help boost development productivity. Now, in your Glue Studio notebook, you can write a comment in natural language (in English) that outlines a specific task, such as “Create a Spark DataFrame from a json file”. Based on this information, CodeWhisperer recommends one or more code snippets directly in the notebook that can accomplish the task,” notes AWS, in a technical statement.
Amazon Bedrock
AWS has also now expanded Amazon Bedrock, its fully managed foundational model (as explained above) service, to add Cohere as a foundational model provider, along with the latest foundational models from Anthropic and Stability AI (smaller, independent IT organizations that specialize in AI development, AI safety and deep learning for text-to-image AI) and a new capability for creating fully managed agents in just a few clicks.
Amazon Bedrock is a managed service that makes foundational models from Amazon and AI startups available through an API, so users can find the model that’s best suited for each use case. Clearly happy about this work, AWS says it’s a ‘game-changing’ feature for builders, with no expertise required.
Just to fill in another explanatory gap here then, Cohere is a developer of enterprise AI platforms and foundational models that enable ways to generate, search and summarize information. Command, Cohere’s text generation model, is trained to follow user commands and be useful in practical business applications such as summarization, copywriting, dialogue, extraction and question answering.
Customers (by which we again mean data scientists and software developers) can use Amazon Bedrock to build and scale generative AI applications with a selection of foundational models by accessing a simple Application Programming Interface (API), all in a secure environment and without managing any infrastructure. The expansion of Amazon Bedrock to include a new model provider, new foundational models and the ability to easily manage agents is said to offer users a comprehensive toolset to employ and deploy generative AI for any use case.
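As a rough sketch of what that API access looks like in practice, the fragment below builds a request body for a text-generation model. The model ID and payload fields shown are assumptions drawn from publicly documented Bedrock examples and may change, so treat this as illustrative rather than definitive; the actual network call (shown in a comment) requires boto3 and AWS credentials.

```python
import json

# Illustrative model identifier; check the current Bedrock model catalog.
model_id = "anthropic.claude-v2"

# Request body fields assumed from public Bedrock/Anthropic documentation.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize our Q3 sales notes in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

# With boto3 installed and credentials configured, the call would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(modelId=model_id, body=body,
#                                  contentType="application/json")
#   print(json.loads(response["body"].read()))

print(model_id, len(body))
```

The point the article is making holds regardless of the exact schema: swapping providers is largely a matter of changing the model ID and request body, not re-architecting the application.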
SaaS soothsayer Sivasubramanian
“Generative AI has the potential to transform every application, business and industry. Advancements across data processing, compute and machine learning are expediting the shift from experimentation to deployment for AWS customers of all sizes,” said Swami Sivasubramanian, vice president of database, analytics and machine learning at AWS. “Through services like Amazon Bedrock and collaborations with industry leaders, we are democratizing access to generative AI, so wherever customers are on their machine learning journey, they can use AWS to reimagine experiences and bring new products to life.”
Sivasubramanian and team remind us just how much potential there is today for generative AI to change the way we work. A lot of this is down to the massive proliferation of data, the availability of highly scalable compute capacity – much of it via cloud, but also on-premises – and the advancement of Machine Learning (ML) technologies. However, AWS reminds us, selecting the right model, securely customizing it using sensitive intellectual property or company data and integrating it into an application is a complex process requiring significant time and highly specialized expertise.
Unsurprisingly named to convey some solid grounding on planet Earth amid so much talk of cloud, Amazon Bedrock aims to simplify the processes involved in building and scaling generative AI applications, offering access to a selection of foundational models from the previously mentioned specialists and others. All of these can be accessed via a ‘simple’ Application Programming Interface (API), i.e. the same application-to-application or service-to-service connection that lets you use Google Maps inside Uber or some other favorite ride-hailing service.
Cloud-native made easier
These AWS technologies are not the first set of cloud-native software application development tools to come to market; the journey to this point has been underway since the turn of the millennium and the first era of cloud, if not before. What they do represent is a more finely machined set of SaaS software spanners and, crucially, because they make use of an increasing amount of ML and AI, they will potentially be usable by a broader range of cloud-native software engineers and data scientists.
The company hasn’t missed that point either and it has said it out loud.
“AWS offers the broadest and deepest set of ML services and supporting cloud infrastructure to put ML and AI capabilities in the hands of builders of all levels of experience,” notes the company. Plus, as noted above, we’re now also talking about using cloud-native application development in much closer proximity to building business functions (with Amazon Bedrock adding Cohere as the newest foundational model provider) and creating generative AI applications.
Cloud-native is getting easier and more complex at the same time, but that increasing complexity is being matched by services that will assuage and mitigate any user concerns related to functionality, security, integration or implementation throughout the application development lifecycle.
Clouds are still fluffy, but they continue to solidify.