• 2 Posts
  • 123 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • That’s entirely fair for the use case of a small script or plugin, or even a small website. I’d quickly get annoyed with Python if I had to use it for a larger project, though.

    TypeScript breaks down once a codebase grows past a few thousand lines. I use pure JavaScript on my personal website and it’s not that bad. At work, where the frontend I maintain has 20,000 lines of TypeScript not counting the HTML files, it’s a massive headache.


  • Zangoose@lemmy.world to Programmer Humor@lemmy.ml · Evil Ones
    > This is the case for literally all interpreted languages, and is an inherent part of them being interpreted.

    It’s actually the opposite. The idea of “types” is almost entirely made up by compilers and runtime environments (including interpreters). The only things assembly instructions actually care about are how many bits a binary value has and whether it should be treated as a floating-point number, an integer, or a pointer (I’m oversimplifying here, but the point still stands). Assembly instructions only operate on the data in the registers (or at an address in memory) they’re given.
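
    To make that concrete, here’s a sketch (in TypeScript, though the types play no part): the same 32 bits read back as an integer and as a float. The “type” lives entirely in which instruction the reader chooses, not in the bits themselves.

    ```typescript
    // The same 4 bytes, interpreted two ways. Hardware doesn't tag the
    // bits with a type; the reader decides how to interpret them.
    const buf = new ArrayBuffer(4);
    const view = new DataView(buf);

    // 1078530011 is the bit pattern of the IEEE-754 float for pi.
    view.setInt32(0, 1078530011);

    console.log(view.getInt32(0));   // 1078530011
    console.log(view.getFloat32(0)); // ~3.1415927
    ```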

    There is no part of an interpreted language that requires it to lack type checking. In fact, many languages use runtime environments precisely for better runtime type diagnostics (e.g. Java and C#), checks that couldn’t be enforced in a purely compiled language like C or C++. Purely compiled binaries are pretty much the only environments where automatic runtime type checking can’t be added without essentially embedding a runtime environment in the binary (which is what languages like Go do). The only interpreter that can’t have type checking is your physical CPU.
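
    A minimal sketch of the point: nothing about a dynamic runtime prevents type checks, they just have to be written (or generated) explicitly, the way Java’s and C#’s runtimes do for you. `assertNumber` here is a made-up helper, not from any library:

    ```typescript
    // Runtime type enforcement bolted onto a dynamic runtime.
    function assertNumber(value: unknown): number {
      if (typeof value !== "number" || Number.isNaN(value)) {
        throw new TypeError(`expected a number, got ${typeof value}`);
      }
      return value;
    }

    const n = assertNumber(42);   // fine
    // assertNumber("42");        // throws TypeError at runtime
    console.log(n + 1);           // 43
    ```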

    If you meant that it’s inherent to the language in that it was intended, you could make the case that for smaller-scale languages like Bash, Lua, and in some cases Python, dynamic typing makes them better. Working with large, complex frontends is not one of those cases. Even if this were an intentional feature of JavaScript, the existence of TypeScript at all proves it was a bad one.

    However, while I recognize that can happen, I’ve literally never come across it in my time working with TypeScript. I’m not sure which third-party libraries you’re relying on, but the most popular OAuth libraries, ORMs, frontend component libraries, state management libraries, graphing libraries, etc. are all written in pure TypeScript these days.

    This next example doesn’t directly return `any`, but it’s more ubiquitous than the admittedly niche libraries my code depends on: many HTTP request services in TypeScript will fill in missing fields as undefined, even when the type shouldn’t allow that, because the type requirement doesn’t actually exist at runtime. Languages like Kotlin, C#, and Rust would all raise an error because deserialization failed when something non-nullable had an empty value. Java may also have options for this, depending on the serialization library used.
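
    A minimal sketch of that deserialization gap. The `User` shape is hypothetical, and `JSON.parse` stands in for what `res.json()` does in a real HTTP client:

    ```typescript
    // The User type only exists at compile time.
    interface User {
      id: number;
      email: string; // the type says this is always present
    }

    // Simulating a server response body that's missing `email`:
    const body = '{"id": 1}';

    // The cast compiles fine and performs no runtime validation.
    const user = JSON.parse(body) as User;

    console.log(user.email); // undefined, even though the type forbids it
    // user.email.toLowerCase() would throw a TypeError here, at the use
    // site, long after deserialization "succeeded".
    ```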


  • As a TypeScript dev, I don’t find TypeScript pleasant to work with at all. I don’t love Java or C#, but I’d take them any day of the week over anything JS-based. TypeScript provides the illusion of type safety without actually delivering it, because of the one random library your functionality depends on that takes in and returns `any` instead of using generic types. Unlike pretty much any other statically typed language, compiled TypeScript does nothing to enforce typing at runtime, and won’t error at all if the wrong thing gets passed in until you try to use a method or field it doesn’t have. It just fails silently unless you add runtime type checks to functions/methods that are already annotated as taking your desired types. Java and C# would throw an exception immediately when you cast the value, and Rust and Go wouldn’t even compile unless you either handle the case or panic at that exact location. Pretty much the only language that handles this worse is Python (and maybe Lua? I don’t really know much about Lua, though).
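
    A sketch of that silent failure (the `Dog` interface is made up for illustration): the annotation promises a `Dog`, but once the code compiles to plain JavaScript, nothing checks the promise until the missing method is actually invoked.

    ```typescript
    interface Dog {
      bark(): string;
    }

    function greet(d: Dog): string {
      return d.bark(); // type-checked at compile time only
    }

    // A caller can smuggle anything past the types with a double cast:
    const notADog = { meow: () => "meow" } as unknown as Dog;

    // No error here, and none inside greet until bark() is invoked:
    // greet(notADog); // TypeError: d.bark is not a function
    ```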

    TL;DR: TypeScript in theory is very different from TypeScript in practice, and that difference makes it very annoying to use.

    Bonus meme:


  • Rust binaries are only huge because the language doesn’t have a stable ABI. If it had one (and didn’t have to compile every single dependency statically into the binary), binary sizes would probably drop a lot, to the point where they’re only slightly bigger than a C counterpart.

    Edit: I don’t know whether Go has a stable ABI, but Go also ships a runtime garbage collector in its binaries, so that probably has something to do with its sizes too.



  • > Let people be forced to work and starve so they can learn that capitalism and the traditional ways (outside of religious influence) weren’t so bad.

    Do you really think people aren’t already being forced to work and starve under capitalism? How much do you think your job pays the janitor? What about fast food workers? Mailmen? Do you think they’re all living comfortably and can easily afford food and rent?





  • There were plenty of good shows in 2023, though? Even excluding Frieren and shonen (since I’m assuming, based on what you said, that you aren’t interested in them), there was also Apothecary Diaries, which aired during the same season. Oshi no Ko was pretty good too (the first episode is by far the best; imo the rest of the show still holds up). Those stood out the most to me, but I enjoyed a lot of the 2023 shows I watched.



  • Decentralized/OSS platforms >>> Multiple competing centralized platforms >>> One single centralized platform

    Bluesky and Threads are both bad, but having more options than Twitter/X is still a step in the right direction, especially given the direction Musk is taking it. As much as I like the fediverse (I won’t be using either Threads or Bluesky anytime soon), it still has a lot of problems around ease of use. Lemmy, Mastodon, Misskey, etc. would benefit a lot from streamlining the signup process so that the average user isn’t overwhelmed by picking an instance and understanding how federation works.



  • I’m not trying to say LLMs haven’t gotten better on a technical level, nor am I saying there should have been AGI by now. I’m saying that from a user’s perspective, ChatGPT, Google Gemini, etc. are about as useful as they were when they came out (i.e. not very). Context size might have grown, but what does that actually mean for a user? ChatGPT’s writing is still obviously identifiable and immediately discredits my view of someone when I see it. Same with AI-generated images. In my experience, ChatGPT, Gemini, and all the others still hallucinate facts, which makes them near-useless for learning new topics, since you can’t be sure what’s real and what’s made up.

    Another thing I take issue with is “open source” models that are driven by VCs anyway. The weights of an LLM might be released openly, but is the LLM actually open source? IMO this is one of those things where the definitions haven’t caught up to actual usage. A set of numerical weights obtained by training on millions of pieces of involuntarily taken data, under retroactively modified terms of service, doesn’t seem open source to me, even if the model code itself is. And AI companies have openly admitted they could never have built what they have if they’d had to ask permission. When you say that “open source” LLMs have caught up, is that true, or are these the LLM equivalent of uploading a compiled binary to GitHub and calling that open source?

    ChatGPT still loses OpenAI hundreds of thousands of dollars per day. The only way for a user to be profitable to them is to pay for the subscription and never use it. The service was venture capital hype from the beginning, and the same applies to Copilot and Gemini, and probably to companies like Perplexity too.

    My issue with LLMs isn’t that they’re useless or immoral. It’s that they’re mostly useless and immoral, on top of causing higher emissions and making it harder to find real results as AI-generated slop blends with SEO spam. They’re also normalizing the collection of any and all user data for training, including private data from health-tracking apps, personal emails, and direct messages. Half-baked AI features aren’t making computers better; they’re actively making them worse.



  • My current phone has less utility than the phone I had in 2018, which had a headphone jack, an SD card slot, an IR emitter (I could use it as a TV remote!), a heart-rate sensor, and a decent camera.

    My current laptop is less upgradable than pretty much anything that came out in 2010. The storage uses a technically standard but uncommon drive size, and the Wi-Fi and RAM are both soldered on. It’s faster and has a nicer screen, but DRM in web browsers makes it hard to take advantage of that screen, and bloated Electron apps make it not feel much faster.

    Oh, but here’s the catch! Now, thanks to a significant amount of stolen data being used to train some autocorrect, my computer can generate code that’s significantly worse than what I can write as a junior software dev with under a year of job experience, and that takes twice as long to debug. It can also generate uncanny-valley images that look exactly like what you’d expect from a one-sentence prompt.