fasten's comments | Hacker News

Most IaC setups will generate a Terraform state, whether in a remote backend (S3 bucket, HCP...) or on the fly. As long as we can access it, we will be able to build a reconciliation at some point. Which framework do you use?
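
As a minimal sketch of what "accessing the state" means in practice: a Terraform state document is just JSON, so once it's fetched from the backend it can be walked to enumerate resources for reconciliation. The fixture below is illustrative (a trimmed v4-schema state, not a real one from S3/HCP):

```python
import json

# Trimmed Terraform state fixture (v4 schema) standing in for a real
# state document pulled from a backend; names and IDs are made up.
state_json = """
{
  "version": 4,
  "resources": [
    {
      "type": "aws_instance",
      "name": "web",
      "instances": [{"attributes": {"id": "i-0abc123", "instance_type": "t3.micro"}}]
    }
  ]
}
"""

def list_resources(raw_state):
    """Return (type, name) pairs for every resource in a state document."""
    state = json.loads(raw_state)
    return [(r["type"], r["name"]) for r in state.get("resources", [])]

print(list_resources(state_json))  # [('aws_instance', 'web')]
```

A reconciliation pass would then diff these pairs against what the cloud provider's APIs actually report.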


thanks for your feedback!


We can handle multiple changes in the same PR thanks to our graph, a digital twin of your infra. We query each change separately, so it can support Terraform files. But you're right on one point: if multiple PRs are open, we don't have a chronological way to treat them (i.e., taking the first PR and its impact into account, and based on that, running the analysis on the second PR, etc.).


This means searching through time and changes. Imagine prod is on fire and the API returns 500s. Often you need to dig through logs, git, cloud consoles, Kubernetes configs, etc. With the time machine, Anyshift will directly return the list of 5 changes that occurred during the week, including the autoscaler one and who made the change.
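
The core of that lookup can be sketched as filtering a unified change log to a time window. Everything below is hypothetical (the records and field names are invented for illustration; a real log would be assembled from git, cloud audit trails, and so on):

```python
from datetime import datetime, timedelta

# Hypothetical unified change log; field names are illustrative only.
changes = [
    {"who": "autoscaler", "what": "scaled web pods 3 -> 10", "at": datetime(2024, 6, 3)},
    {"who": "alice", "what": "bumped api image tag", "at": datetime(2024, 6, 5)},
    {"who": "bob", "what": "rotated db password", "at": datetime(2024, 5, 1)},
]

def changes_since(log, now, window=timedelta(days=7)):
    """Return the changes inside the window, newest first."""
    recent = [c for c in log if now - c["at"] <= window]
    return sorted(recent, key=lambda c: c["at"], reverse=True)

for c in changes_since(changes, now=datetime(2024, 6, 7)):
    print(c["at"].date(), c["who"], "-", c["what"])
```

Here the May change falls outside the 7-day window, so only the autoscaler event and the image bump come back, attributed to who made them.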


We are thinking of adding live monitoring data to it, such as Datadog or Prometheus. What do you use?


Datadog


That's the first one we are thinking about, that's great, thanks!


Yes, directly by signing up on our app: https://app.anyshift.io/sign-in


In our pull request bot, we provide more information with a clear summary of what's going to be impacted. One of our next features is letting you configure which type of information is most critical to you: by resource type, owner (git blame), and tags. Is there one you would prefer in particular?


I guess by type of resources, but env would also be interesting. I don't care about most of the impact on my dev env, tbh.


Super interesting, thanks! Combining those in the config makes sense.


Thanks for the feedback! We already use AI in the PR to explain what's happening and the best practices to adopt. As for the code remediation part: most LLMs fail to generate the right IaC code adapted to your infra because they miss its general context (config, dependencies...). We are building the deterministic part (the context) first, and once we have the context, our plan is to add the fix/recommendation to the change.


How will you be checking the quality of the AI recommendations in your PRs? Do you think that using different models (ChatGPT, Claude, Gemini, Qwen) to challenge the recommendation made by another AI could help?


About having different models challenge each other: I haven't seen anything useful yet, but I understand where you are going. Might be a future direction.


I have the following paper in mind: Self-Taught Evaluators (https://arxiv.org/pdf/2408.02666) by Meta. It is interesting because they get big improvements from an LLM checking and improving solutions. WDYT? I don't know if you could generate a PR using AI with, say, Claude, and then check the quality using ChatGPT or Gemini. I would be interested to know whether that would provide quality and more trust, or the opposite.
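
To make the generate-then-cross-check idea concrete, here is a toy harness. Everything in it is a stand-in: the "models" are plain functions, not real vendor APIs, and the quorum threshold is arbitrary. The point is only the pattern (one model proposes, others vote before the fix is surfaced):

```python
# Stubbed "generator": a stand-in for a real model (e.g. Claude) that
# would propose an IaC fix for a prompt.
def generator_model(prompt):
    return f"proposed fix for: {prompt}"

# Stubbed "judge": a stand-in for a second model (e.g. GPT or Gemini)
# returning an accept/reject verdict. Here it accepts anything non-empty.
def judge_model(judge_name, candidate):
    return bool(candidate.strip())

def cross_checked_fix(prompt, judges=("gpt", "gemini"), quorum=2):
    """Generate with one model; surface the fix only if enough judges approve."""
    candidate = generator_model(prompt)
    votes = sum(judge_model(j, candidate) for j in judges)
    return candidate if votes >= quorum else None

print(cross_checked_fix("tighten s3 bucket ACL"))
```

In a real setup, the judge would return a structured verdict (and ideally a critique, as in the Self-Taught Evaluators paper) rather than a boolean.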


OK, focusing on context makes sense, but I'd challenge the idea that LLMs inherently fail without it. Some teams have used fine-tuned models or hybrid workflows with partial context to generate useful IaC snippets.


Agreed 100%. LLMs are doing a solid job at generating IaC, but in a context where the person using them knows what they're doing. In our case, remediation means an extra level of trust, since your infra is already super sensitive.


We have used some tools to generate Terraform code from our unmanaged cloud resources, for instance, and it worked well.


The tools we are aware of will create a 1-to-1 mapping to some code, but very often with hardcoded values because they lack the full context of your infrastructure. This can lead to potential incidents in the future (broken dependencies / visibility). This is at least the way we are approaching it, and why we want to build this "deterministic" part first and then use it as context to the LLMs.


Thanks for your kind words!!


Nice approach. Highlighting key info like the hostname can definitely help prevent mistakes.

