• 1 Post
  • 2.37K Comments
Joined 1 year ago
Cake day: February 10th, 2025






  • CFD = Computational Fluid Dynamics.

    It is kind of what they said, you’re right. I was more pointing out how they could ‘sense the vibes’ of a CFD result to determine whether it is accurate or whether the model did something weird. Since it’s a chaotic process and also an artificial one, the starting conditions can yield results that are impossible or not based in reality.

    If you look at enough of them you start to notice the kinds of things that go wrong. They would also have a pretty good idea of how their design should perform, and if the simulation shows otherwise they’d first want to troubleshoot the simulation before attempting to re-design whatever system they’re creating.




  • Anfinsen won the Nobel in 1972 for showing that the amino acid sequence is what is responsible for the 3D structure of proteins.

    Since then we’ve been able to determine proteins’ structures using X-ray crystallography, but that is a painstaking process. The ability to accurately predict a protein’s structure from its amino acid sequence remained an unsolved problem until very recently.

    It wasn’t until 2024 that Hassabis, Jumper and Baker won the Nobel for their work in predicting protein structure (using an AI called AlphaFold) and computationally designing new proteins.

    The ability to create arbitrary proteins is new and will revolutionize some fields of medicine (like cancer treatment) and, to me, is a much more impressive use of AI.

    LLMs are interesting but they are incredibly over-hyped as far as ‘changing the world’ goes, imo.


  • My first ‘good’ computer was a Compaq (from Radio Shack!) with 512MB of RAM and a 10GB hard drive! It could run Windows 98 and Starcraft!

    Previously I had a 486DX with 64MB of RAM and a 512MB hard drive. We played QBasic games, like Snake and Gorillas, and I shared a copy of Wing Commander with a friend (and hand-copied the instruction booklet, because the DRM at the time was that the game wouldn’t launch unless you could tell it the 5th word on the 3rd page or whatever).

    Later, I found a modem and was able to dial into BBSs to play MUDs. MajorMUD was the first I found, but they only let you do about 100 commands/day unless you paid ($15/month!).

    On the new PC we had dial-up from a local ISP and I could play MUDs via Telnet (or zMUD 5.55, the version whose DRM broke and didn’t count down the 30-day free trial clock).

    We also used to have to fight off the dinosaurs on the way to school (which we walked to, barefoot, uphill in the snow) of course.



  • Those kinds of simulations are inherently chaotic: tiny changes to the initial conditions can have wildly different outcomes, sometimes to the point of being nonsensical. Also, since they simulate a limited volume, the boundary conditions can cause weird artifacts in some cases.

    If you run a simulation of air over an aircraft wing and the end result is a mess of turbulence instead of smooth flow, then you can assume the simulation was acting weird, not that your wing design is suddenly breaking the rules of physics. When the simulation breaks, it usually does so in ways that are obvious from previous testing with physical models.
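    The sensitivity to initial conditions described above can be illustrated with a much simpler chaotic system than CFD. Below is a sketch using the logistic map (a standard textbook example, not anything from the comment itself): two trajectories start one part in a billion apart, and the gap between them grows to order one within a few dozen steps.

    ```python
    # Toy illustration of chaos: the logistic map x_{n+1} = r * x_n * (1 - x_n)
    # in its chaotic regime (r = 4). Not CFD, just the simplest system with the
    # same "tiny change in, wildly different result out" behavior.

    def logistic_gap(x0, eps=1e-9, r=4.0, steps=60):
        """Iterate two trajectories that start eps apart; return the largest
        gap between them observed over the run."""
        x, y = x0, x0 + eps
        worst = 0.0
        for _ in range(steps):
            x = r * x * (1 - x)
            y = r * y * (1 - y)
            worst = max(worst, abs(x - y))
        return worst

    # A perturbation of one part in a billion grows to an order-one difference.
    print(logistic_gap(0.3))
    ```

    In a real CFD run the same mechanism applies, which is why two runs with near-identical setups can diverge into visibly different flow fields.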


  • I’m failing to see why the creative writing machine is better than a simulation set to ‘rough’.

    The problem is that you saw AI and thought LLM.

    AI is a big field, machine learning is a subset of it, neural networks are a subset of that, and LLMs are a single application of a specific type of neural network (the Transformer) to a specific task (next-token prediction).

    The only reason that LLMs and image generation models are the most visible is that training a neural network requires a large amount of data, and the largest repository of public data, the Internet, is primarily text and images. So text and image models were the first large models to be trained.

    The most exciting and potentially impactful uses of AI are not LLMs. Things like protein folding and robotics will have more of an impact on the world than chatbots.

    In this case, generating fast approximations for physical modeling can save a ton of compute time for engineering work.
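    The idea of trading accuracy for speed in physical modeling can be sketched without any neural network at all. The example below is purely illustrative (the “expensive” model is a made-up drag-like formula, not anything from the comment): sample the expensive model once, then answer later queries from a cheap interpolating surrogate instead of re-running it.

    ```python
    import bisect

    # Hypothetical "expensive" physics model (stand-in for a real solver;
    # constants are made up for illustration: 0.5 * rho * Cd*A * v^2).
    def simulate(v):
        return 0.5 * 1.2 * 0.3 * v * v

    # Build a cheap surrogate: sample the expensive model at a few points once,
    # then answer future queries by linear interpolation between samples.
    xs = [i * 5.0 for i in range(11)]        # velocities 0, 5, ..., 50
    ys = [simulate(x) for x in xs]           # the only "expensive" calls

    def surrogate(v):
        """Piecewise-linear approximation of simulate() over [0, 50]."""
        i = min(max(bisect.bisect_right(xs, v) - 1, 0), len(xs) - 2)
        t = (v - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])

    # Rough but fast: within a few percent between sample points.
    print(simulate(17.3), surrogate(17.3))
    ```

    A learned model plays the same role as the interpolation table here, just in many more dimensions: pay the cost once during training, then evaluate candidate designs nearly for free.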








  • https://www.newsweek.com/avignon-papacy-explained-what-reported-us-threat-to-pope-and-vatican-means-11803877

    As things got heated, Colby reportedly lectured Cardinal Pierre on the military power of the U.S. and told him that the Catholic Church must pick a side, according to Hale, and mentioned the “Avignon papacy.”

    All information about the meeting was obtained by Ferraresi from Vatican and U.S. officials briefed on the Pentagon meeting.

    What Could Such a Reference Indicate?

    There is no doubt that a reference to the “Avignon papacy” by a Pentagon official was meant to make the Vatican feel uneasy about crossing the Trump administration.

    Some have suggested that the reference to this chapter of medieval history might signal the Trump administration’s intention to trigger a new schism, while others think it might suggest a willingness to use military force against the Holy See. Ferraresi, however, said that interpreting it as a military threat is “just absurd.”

    According to Mike Young, author of a newsletter on civic accountability, the mention of the “Avignon papacy” was a reference to “an implicit model for what happens to religious institutions that oppose state power.”

    In a post on X, he wrote: “That’s not a slip of the tongue. That’s a studied historical reference deployed deliberately in a room with the Pope’s senior diplomat. The message was not subtle.”