• 0 Posts
  • 6 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • I’ve been wondering the same about a lot of right-wing economic policy. Why push for policy that is a net negative for everyone in the long run? I’ve since realized that it does make sense if you look at it not in terms of wealth but in terms of power. The control you have over other people depends not on your absolute wealth but on your wealth relative to everyone else’s, so for someone seeking that kind of power (i.e. most billionaires, as far as I can tell) it wouldn’t matter if something they do hurts everybody, as long as it hurts you more than it hurts them.
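    To make that concrete, here is a toy calculation (all numbers are invented purely for illustration):

    ```python
    # Toy illustration: a policy that destroys wealth for *everyone* can still
    # improve the relative position (and thus the power) of the wealthiest.
    # All numbers are made up for the sake of the example.

    billionaire = 100_000_000_000  # $100B
    median = 100_000               # $100k

    print(f"ratio before: {billionaire / median:,.0f}x")  # 1,000,000x

    # Suppose a policy wipes out 50% of the billionaire's wealth
    # but 90% of everyone else's:
    billionaire *= 0.5
    median *= 0.1

    print(f"ratio after: {billionaire / median:,.0f}x")   # 5,000,000x: relative power went UP
    ```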


  • One issue I’ve heard is that if a restaurant chooses not to use the service, someone else can set up a page in its name without permission, and the platforms often won’t do anything to prevent it. Then confused delivery drivers start showing up, and customers complain to the restaurant about the markups and high prices even though the restaurant isn’t actually involved at all.

    On top of all that, many people just use delivery apps to find local restaurants, so you lose a lot of visibility if you aren’t listed. For that one, though, you can argue you’re actually paying for the service you get (i.e. marketing).


  • I strongly believe that our brains are fundamentally just prediction machines. We strive for a certain level of controlled novelty, but for the most part ‘understanding’ (i.e. being able to predict) the world around us is the goal. Boredom exists to push us past getting too comfortable and simply sitting in the already familiar, and one of the biggest pleasures in life is the ‘aha’ moment when understanding finally clicks into place and we feel we can predict something novel.

    I feel this is also why LLMs (ChatGPT etc.) can be so effective at working with language, and why they occasionally seem to behave so humanlike: the fundamental mechanism is essentially the same as our brains’, if massively more limited. Animal brains continuously adapt to predict sensory input (and, to an extent, their own output), while LLMs learn to predict a sequence of text tokens during a restricted training period.
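    As a toy illustration of what ‘predicting the next token’ means mechanically, here is a tiny bigram model (a drastic simplification of an LLM, with a made-up corpus, but the same objective in spirit):

    ```python
    from collections import Counter, defaultdict

    # Tiny bigram "language model": for each token, count which tokens follow it
    # in the training text, then predict the most frequent follower.

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def predict_next(token):
        """Return the most likely next token seen during 'training'."""
        if token not in follows:
            return None  # never seen before: no prediction possible
        return follows[token].most_common(1)[0][0]

    print(predict_next("on"))   # -> 'the' (always followed 'on' in the corpus)
    print(predict_next("cat"))  # -> 'sat' ('sat' and 'slept' tie; first seen wins)
    ```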

    It also seems to me that the strongest example of this kind of prediction in animals is noticing (and growing wary) when something feels ‘off’ about the environment around us. We can easily sense specific kinds of small changes to our surroundings that signify potential danger, even in seemingly unpredictable natural environments. From an evolutionary perspective this also seems like the most immediately beneficial aspect of this kind of predictive capability. Interestingly, this kind of prediction seems to happen even at the level of individual neurons. As predictive capability improves, it also necessitates an increasingly deep ability to model the world around us, leading to deeper cognition.
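    As a crude sketch, you can treat that ‘feels off’ signal as prediction error: a predictor tracks a running estimate of its input and flags large deviations. (The smoothing factor and threshold here are arbitrary choices for the demo.)

    ```python
    # Flag moments when the input deviates sharply from what a simple predictor
    # expects -- a stand-in for the "something feels off" signal.

    def watch(stream, alpha=0.2, threshold=3.0):
        estimate = stream[0]
        for t, x in enumerate(stream[1:], start=1):
            error = abs(x - estimate)
            if error > threshold:
                print(f"t={t}: expected ~{estimate:.1f}, got {x} -> feels 'off'")
            estimate = (1 - alpha) * estimate + alpha * x  # update the prediction

    # Mostly familiar values, one anomaly:
    watch([10, 11, 10, 12, 11, 25, 11, 10])  # flags only the 25
    ```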


  • It gets better, but learning vocabulary at that level is going to feel very slow no matter what. I would recommend keeping a fairly low bar for just ignoring words and moving on, since keeping up the reading habit is by far the most valuable thing; if reading feels tedious it’s easy to lose interest.

    One to two new words per page sounds frequent enough that you’re bound to get repetition, so you may want to only look up words that seem either important for context or familiar (i.e. they feel like something you’ve seen before) to get the most value. I combine that with spaced repetition (Anki) for words I seem to look up often, but Anki has a bit of a learning curve, so it may or may not suit you.
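    For the curious: the scheduling idea behind spaced repetition is that every successful review pushes the next one further into the future, while a failed review resets the card. Here is a minimal sketch of SM-2, the classic algorithm Anki’s default scheduler is derived from (function and variable names are mine):

    ```python
    # Minimal sketch of the SM-2 spaced-repetition schedule.
    # 'quality' is the self-graded recall score, 0 (blackout) to 5 (perfect).

    def sm2_review(quality, reps, interval, ease=2.5):
        """Return (reps, interval_in_days, ease) after one review."""
        if quality < 3:  # failed recall: start the card over
            return 0, 1, ease
        if reps == 0:
            interval = 1
        elif reps == 1:
            interval = 6
        else:
            interval = round(interval * ease)
        # Ease grows with easy answers, shrinks with hard ones, floor at 1.3.
        ease = max(1.3, ease + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)))
        return reps + 1, interval, ease

    # A card answered well three times in a row: 1 day -> 6 days -> 15 days.
    state = (0, 0, 2.5)
    for q in (4, 4, 4):
        state = sm2_review(q, *state)
        print(state)
    ```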