
Mastering the Future: Evaluating LLM-Generated Data Architectures leveraging IaC technologies

Evaluate the suitability of LLMs for the generation of Infrastructure as Code to provision, configure, and deploy modern applications



In this article, we assess the suitability of LLMs for supporting the lifecycle of real applications, from infrastructure provisioning to configuration management and deployment. The source code resulting from this effort is publicly available on GitHub¹¹.

Infrastructure as Code (IaC) solutions facilitate the management and provisioning of infrastructure through code instead of through manual processes¹. They are becoming commonplace, and the major cloud providers have implemented their own flavors of IaC for interacting with their services. In this regard, AWS CloudFormation, Google Cloud Deployment Manager, and Azure Resource Manager Templates streamline the provisioning of cloud services, eliminating the need for IT operations teams to manually spin up servers, databases, and networks. However, these many possibilities introduce the risk of vendor lock-in, since the IaC definitions written for a given cloud provider are not portable and would need to be translated if a different provider were required. Tools like Terraform² or Pulumi³ provide an abstraction over the implementations of the various cloud providers and facilitate the development of portable deployments. This way, the risk of vendor lock-in is greatly reduced, and organizations can react dynamically to their needs without incurring significant implementation costs. On top of this, IaC technologies bring numerous benefits to the table⁴:

- Consistency: it permits the automation of infrastructure provisioning by enabling repeatable deployments.
- Decreased risk: it promotes a less error-prone approach to infrastructure management, as manual interventions are minimized.
- Cost optimization: it enables easier identification of unnecessary resources, and faster migration between cloud providers in reaction to billing changes.
- Improved collaboration: scripts can be integrated into version control tools, which promotes collaboration between individuals and teams.

However, the application lifecycle goes beyond infrastructure provisioning. The following figure displays the application lifecycle supported by different IaC technologies⁵.

The application lifecycle supported by Infrastructure as Code technologies. | Source: Josu Diaz-de-Arcaya et al.⁵

In this scenario, the goal of IaC technologies extends beyond the mere provisioning of infrastructural resources. After spinning up the necessary infrastructure, the configuration management stage ensures that all the requirements are appropriately installed. This stage is usually accomplished with tools such as Ansible⁶, Chef⁷, or Puppet⁸. Finally, the application deployment stage oversees the orchestration of the application across the various infrastructure devices.

Understanding LLMs

Large Language Models (LLMs) refer to a class of artificial intelligence models that are designed to understand and generate human-like text based on the input provided to them. These models are known for their considerable number of parameters, which enable them to capture complex patterns and nuances in language⁹.

- Text generation: text created by LLMs can be cohesive and relevant to its context. They are employed for activities like completing texts, producing material, and even creative writing.
- Language comprehension: LLMs can comprehend and extract information from text. They are capable of sentiment analysis, text classification, and information retrieval.
- Translation: LLMs can translate text from one language to another. This is very beneficial for machine translation applications.
- Question answering: LLMs can answer questions based on a given context. They are used in chatbots and virtual assistants to answer user queries.
- Text summarization: LLMs can summarize long passages of text into shorter, more coherent summaries. This is useful for condensing information for quick consumption.

Among the aforementioned capabilities, we will focus on text generation; in particular, on the ability of LLMs to produce correct IaC code from input prompts. Large Language Models have made significant advances in the field of natural language processing, but they have also raised several challenges and concerns. Some of the key issues associated with LLMs include:

- Biases and fairness: LLMs can learn biases present in the data they are trained on, which can lead to biased or unfair results.
- Misinformation and disinformation: LLMs may generate false or misleading information, which may contribute to the spread of misinformation and disinformation online. These models have the potential to create content that appears credible but is factually incorrect.
- Security and privacy: LLMs can be misused to generate malicious content, such as deepfake texts, fake news, or phishing emails.

The following table displays a comparison among various LLMs¹⁰.


Generating IaC with LLMs

In order to test how current LLM tools perform in the field of IaC, a use case has been designed. The final objective is to build an API in a virtual machine using the FastAPI framework that allows the client to perform search and delete tasks in an Elasticsearch cluster using HTTP methods. The cluster will be composed of three nodes, each one in its own virtual machine, while another machine will host Kibana, an administration tool for the cluster that supports visualizations. Everything must run in the AWS cloud. The following figure shows this architecture:

Use case designed to test the feasibility of LLM-generated IaC

The challenge is to complete the following three tasks successfully using LLM tools. In this article, we have used OpenAI’s ChatGPT.

1. Terraform code to build an infrastructure with five virtual machines in AWS.
2. Source code of the FastAPI application to perform document search and delete operations through the HTTP methods of the API.
3. Ansible code for the deployment and installation of an Elasticsearch cluster on three nodes, Kibana on another node, and the FastAPI application on the remaining node.

Task #1

For this challenge, we have used the following prompt:

I need to create, via Terraform, five virtual machines at a public cloud provider you want. The purpose of these virtual machines is the following: Three of them are for deploying an Elasticsearch cluster, which is going to ingest 2G of data each day; the other one is for Kibana; and the last one is for deploying a FastAPI application. You should choose the hardware for each virtual machine and the cloud provider for each virtual machine. For the variables you don’t know, use variables as placeholders. Use cheap instances.

The initial response was a good effort, but we needed to keep iterating. For instance, we wanted all the variables to be defined in a separate file, which resulted in the following image.
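The variables file looked roughly like the sketch below. This is an illustrative reconstruction, not the exact generated output: the variable names, types, and defaults are assumptions.

```hcl
# variables.tf -- illustrative sketch; names and defaults are assumptions
variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "instance_type_elastic" {
  description = "Instance type for the three Elasticsearch nodes"
  type        = string
  default     = "t3.medium"
}

variable "instance_type_kibana" {
  description = "Instance type for the Kibana node"
  type        = string
  default     = "t3.small"
}

variable "instance_type_fastapi" {
  description = "Instance type for the FastAPI node"
  type        = string
  default     = "t3.small"
}

variable "key_name" {
  description = "Name of the SSH key pair to attach to the instances"
  type        = string
}
```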


Similarly, we would like to know the IP addresses of the deployment, and we want this configuration to be in a separate file.
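An outputs file exposing the addresses might look like the following sketch, assuming resource names such as `aws_instance.elasticsearch` (hypothetical, chosen for illustration):

```hcl
# outputs.tf -- illustrative sketch of the requested IP outputs
output "elasticsearch_public_ips" {
  description = "Public IP addresses of the Elasticsearch nodes"
  value       = aws_instance.elasticsearch[*].public_ip
}

output "kibana_public_ip" {
  value = aws_instance.kibana.public_ip
}

output "fastapi_public_ip" {
  value = aws_instance.fastapi.public_ip
}
```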


The AI did an excellent job of describing the instances we wanted, as well as configuring them with the security groups each of them required.
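The instance definitions were along these lines. Again, this is a sketch under assumed names, not the verbatim output:

```hcl
# main.tf (excerpt) -- illustrative sketch; resource and variable names are assumptions
resource "aws_instance" "elasticsearch" {
  count                  = 3
  ami                    = var.ami_id
  instance_type          = var.instance_type_elastic
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.elasticsearch_sg.id]

  tags = {
    Name = "elasticsearch-node-${count.index + 1}"
  }
}
```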


It also created the necessary resources for the security groups we wanted, and used placeholders to define the various ports as variables.
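A security group along those lines can be sketched as follows; the variable names (`var.elasticsearch_port`, `var.my_laptop_cidr`, and so on) are assumptions for illustration:

```hcl
# security_groups.tf (excerpt) -- illustrative; ports exposed as variables
resource "aws_security_group" "elasticsearch_sg" {
  name = "elasticsearch-sg"

  # Elasticsearch HTTP traffic between cluster members
  ingress {
    from_port = var.elasticsearch_port # typically 9200
    to_port   = var.elasticsearch_port
    protocol  = "tcp"
    self      = true
  }

  # Inter-node transport traffic
  ingress {
    from_port = var.transport_port # typically 9300
    to_port   = var.transport_port
    protocol  = "tcp"
    self      = true
  }

  # SSH access from the operator's laptop
  ingress {
    from_port   = var.ssh_port # typically 22
    to_port     = var.ssh_port
    protocol    = "tcp"
    cidr_blocks = [var.my_laptop_cidr]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```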


In general, ChatGPT did a fine job at this task. However, it took us a while to get the networking configuration right. For instance, we wanted to connect to each of the provisioned virtual machines, which we indicated in the following way.

I want ssh access to all of them from my laptop, and the kibana instance requires http and https access from my laptop.

The above prompt produced code that was almost correct, as the AI got confused between the ingress and egress policies. Nevertheless, this was easy to spot and fix.

After being able to reach the virtual machines, we still could not connect to them due to a lack of permissions. This resulted in a longer conversation, and it ended up being easier to add the missing lines ourselves.

Task #2

For this challenge, we have used the following prompt:

I need to create a FastAPI application. The purpose of these API is to have methods for storing single json document in Elasticsearch cluster, storing multiple documents and for deleting them. Elasticsearch cluster is deployed in 3 nodes, and it has a basic authentication with user “tecnalia” and password “iac-llm”.

The result of this prompt was remarkably successful. The app uses the Elasticsearch Python package¹² to interact with the cluster, and it is completely valid. We must only remember to change the IP addresses of the nodes where the cluster is deployed. The following picture shows the first method, which serves the purpose of inserting a single document into Elasticsearch.
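The core of that method can be sketched as below. This is a minimal sketch, not the exact generated code: the function name and index are assumptions, and the client is passed in so the logic can be exercised without a live cluster. In the real app the client would be built roughly as `Elasticsearch(["http://<node-ip>:9200", ...], basic_auth=("tecnalia", "iac-llm"))`.

```python
def insert_document(es, index: str, document: dict) -> dict:
    """Index a single JSON document; `es` is an Elasticsearch-like client.

    The real endpoint would receive `document` as the FastAPI request body
    and forward it to the cluster via the official client's `index` call.
    """
    response = es.index(index=index, document=document)
    return {"result": response.get("result"), "id": response.get("_id")}
```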


Then, the second method is used to create a bulk insert of various documents in a single call.
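The bulk method builds the action/source pairs that the Elasticsearch bulk API expects: one `{"index": ...}` action line followed by the document itself. A sketch, with the same caveats as above (names are assumptions, client passed in):

```python
def bulk_insert(es, index: str, documents: list) -> dict:
    """Insert several documents in one call using the bulk API.

    For each document we emit an "index" action targeting `index`,
    immediately followed by the document body.
    """
    operations = []
    for doc in documents:
        operations.append({"index": {"_index": index}})
        operations.append(doc)
    return es.bulk(operations=operations)
```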


Finally, the last method can be used to delete a single document from the Elasticsearch cluster.
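The delete method reduces to a single client call. As before, this is an illustrative sketch rather than the verbatim generated code:

```python
def delete_document(es, index: str, doc_id: str) -> dict:
    """Delete a single document by id from the given index."""
    response = es.delete(index=index, id=doc_id)
    return {"result": response.get("result")}
```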


We reckon this experiment has been highly successful, as the AI correctly selected an appropriate library for the task. However, further manual refinement is necessary to turn this code into production-ready software.

Task #3

For this challenge, we have used the following prompt:

Generate ansible code to install Elasticsearch cluster on three nodes. Please also add a Kibana node connected to the cluster.

This prompt did an OK job of producing the desired Ansible scripts, and an excellent job of organizing the source code into various files. First comes the inventory, with details about all the nodes. Keep in mind that this file needs to be adjusted with the correct IP addresses generated in Task #1.
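The inventory would look something like the sketch below; the group names are assumptions, and the bracketed placeholders must be replaced with the addresses output by Task #1:

```ini
# inventory.ini -- illustrative sketch; replace the placeholders with the
# IP addresses produced by the Terraform deployment in Task #1
[elasticsearch]
es-node-1 ansible_host=<elasticsearch-ip-1>
es-node-2 ansible_host=<elasticsearch-ip-2>
es-node-3 ansible_host=<elasticsearch-ip-3>

[kibana]
kibana-node ansible_host=<kibana-ip>

[fastapi]
fastapi-node ansible_host=<fastapi-ip>

[all:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/<key-name>.pem
```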


Then, the main Ansible script for installing Elasticsearch is displayed in the following figure. This is an excerpt; the complete example can be found in the repository¹¹.
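The playbook excerpt followed this general shape (an illustrative sketch using standard `ansible.builtin` modules, not the verbatim generated script):

```yaml
# playbook excerpt -- illustrative sketch
- hosts: elasticsearch
  become: true
  tasks:
    - name: Add the Elasticsearch APT repository key
      ansible.builtin.apt_key:
        url: https://artifacts.elastic.co/GPG-KEY-elasticsearch
        state: present

    - name: Install Elasticsearch
      ansible.builtin.apt:
        name: elasticsearch
        update_cache: true
        state: present

    - name: Deploy the node configuration from the Jinja template
      ansible.builtin.template:
        src: elasticsearch.yml.j2
        dest: /etc/elasticsearch/elasticsearch.yml

    - name: Enable and start the service
      ansible.builtin.systemd:
        name: elasticsearch
        state: started
        enabled: true
```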


On the other hand, the necessary configuration for each of the Elasticsearch nodes has conveniently been generated as a Jinja file. In this case, we had to manually add the path.logs and path.data settings, as Elasticsearch was unable to boot due to permission issues.
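The template resembled the following sketch; the cluster name is an assumption, and the two `path.*` lines are the settings we had to add by hand:

```jinja
# elasticsearch.yml.j2 -- illustrative sketch
cluster.name: es-cluster
node.name: {{ inventory_hostname }}
network.host: {{ ansible_host }}
discovery.seed_hosts: [{% for host in groups['elasticsearch'] %}"{{ hostvars[host]['ansible_host'] }}"{% if not loop.last %}, {% endif %}{% endfor %}]
cluster.initial_master_nodes: [{% for host in groups['elasticsearch'] %}"{{ host }}"{% if not loop.last %}, {% endif %}{% endfor %}]
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
```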


On a similar note, ChatGPT was able to generate a similar configuration for the Kibana instance. However, in this case, we manually moved the configuration into a separate file for convenience. An excerpt of this file can be seen in the following image.


Similarly, the following Jinja file, which refers to the Kibana instance, looks good, although it would be better to parameterize the IP addresses.
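The Kibana template was roughly as follows (a sketch; as noted, the hard-coded addresses would be better off parameterized):

```jinja
# kibana.yml.j2 -- illustrative sketch; replace the placeholders with the
# Elasticsearch node addresses
server.host: "0.0.0.0"
server.port: 5601
elasticsearch.hosts: ["http://<elasticsearch-ip-1>:9200", "http://<elasticsearch-ip-2>:9200", "http://<elasticsearch-ip-3>:9200"]
```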


In general, we found ChatGPT extremely good at producing a skeleton of the project. However, plenty of work is still required to turn that skeleton into a production-level application. In this regard, deep expertise in the technologies involved is required to fine-tune the project.


Conclusions

This article has addressed the use of LLMs to support the application lifecycle. The pros and cons of this effort are discussed in the following lines.


Pros:

- The use of LLMs to support the various stages of the application lifecycle is particularly beneficial for kicking off a project, especially with well-known technologies.
- The initial skeleton is well structured, and it provides structures and methodologies that would otherwise not have been utilized.


Cons:

- LLMs are subject to the bias risk associated with the use of AI solutions; in this instance, ChatGPT chose AWS over similar options.
- Polishing the project to be production-ready can be troublesome, and it is sometimes easier to adjust the code by hand, which requires extensive knowledge of the technologies involved.


Acknowledgments

This work is funded by the SIIRSE Elkartek project (Robust, safe and ethical smart industrial systems for Industry 5.0: Advanced paradigms for the specification, design, evaluation, and monitoring) from the Basque Government (ELKARTEK 2022 KK-2022/00007).

Authorship contribution

The conceptualization, analysis, investigation, and writing are a joint effort of Juan Lopez de Armentia, Ana Torre, and Gorka Zárate.


References

1. What is Infrastructure as Code (IaC)? (2022). https://www.redhat.com/en/topics/automation/what-is-infrastructure-as-code-iac
2. Terraform by HashiCorp. (n.d.). Retrieved October 5, 2023, from https://www.terraform.io
3. Pulumi — Universal Infrastructure as Code. (n.d.). Retrieved October 5, 2023, from https://www.pulumi.com/
4. The 7 Biggest Benefits of Infrastructure as Code — DevOps. (n.d.). Retrieved October 5, 2023, from https://duplocloud.com/blog/infrastructure-as-code-benefits/
5. Diaz-De-Arcaya, J., Lobo, J. L., Alonso, J., Almeida, A., Osaba, E., Benguria, G., Etxaniz, I., & Torre-Bastida, A. I. (2023). IEM: A Unified Lifecycle Orchestrator for Multilingual IaC Deployments. https://doi.org/10.1145/3578245.3584938
6. Ansible is Simple IT Automation. (n.d.). Retrieved October 5, 2023, from https://www.ansible.com/
7. Chef Software DevOps Automation Solutions | Chef. (n.d.). Retrieved October 5, 2023, from https://www.chef.io/
8. Puppet Infrastructure & IT Automation at Scale | Puppet by Perforce. (n.d.). Retrieved October 5, 2023, from https://www.puppet.com/
9. Kerner, S. M. (n.d.). What are Large Language Models? | Definition from TechTarget. Retrieved October 5, 2023, from https://www.techtarget.com/whatis/definition/large-language-model-LLM
10. Sha, A. (2023). 12 Best Large Language Models (LLMs) in 2023 | Beebom. https://beebom.com/best-large-language-models-llms/
11. Diaz-de-Arcaya, J., Lopez de Armentia, J., & Zarate, G. (n.d.). iac-llms GitHub. Retrieved October 5, 2023, from https://github.com/josu-arcaya/iac-llms
12. Elastic Client Library Maintainers. (2023). elasticsearch · PyPI. https://pypi.org/project/elasticsearch/

Mastering the Future: Evaluating LLM-Generated Data Architectures leveraging IaC technologies was originally published in Towards Data Science on Medium.
