  • And you can’t tell when something is active/focused or not, because every goddamn app and website wants to use its own “design language”. Wish I had a dollar for every time I saw two options, one light-gray and one dark-gray, with no way to know whether dark or light was supposed to mean “active”.

    I miss old-school Mac OS, when consistency was king. But even Mac OS abandoned consistency about 25 years ago. I’d say the introduction of “brushed metal” was the beginning of the end, and IIRC that was the late 90s. I am old and grumpy.



  • “It’s popular so it must be good/true” is not a compelling argument. I certainly wouldn’t take it on faith just because it has remained largely unquestioned by marketers.

    The closest research I’m familiar with showed the opposite, but it was specifically about the real estate market, so I wouldn’t assume it applies broadly to, say, groceries or consumer goods. A quick search of papers didn’t turn up anything supporting this idea either. Again, if there’s supporting research on this (particularly recent research), I would really like to see it.



  • We find that the MTEs are biased, significantly favoring White-associated names in 85.1% of cases and female-associated names in only 11.1% of cases.

    If you’re planning to use LLMs for anything along these lines, you should filter out irrelevant details like names before any evaluation step (rough sketch at the end of this comment). Honestly, humans should do the same, but it’s impractical. This is, ironically, something LLMs are very well suited for.

    Of course, that doesn’t mean off-the-shelf tools are actually doing that, and there are other potential issues as well, such as biases around cities, schools, or any non-personal info on a resume that might correlate with race/gender/etc.

    I think there’s great potential for LLMs to reduce bias compared to humans, but half-assed implementations are currently the norm, so be careful.
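
    Here’s a minimal sketch of the “strip names before evaluation” idea in plain Python. Everything in it is illustrative: the helper name redact_pii is mine, and the regexes are toys; a real pipeline would use a proper PII/NER model (e.g. spaCy PERSON entities) instead.

        import re

        # Toy patterns for contact info; production code should use a real
        # PII/NER detector rather than regexes like these.
        EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
        PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

        def redact_pii(resume_text: str, candidate_name: str) -> str:
            """Swap identifying details for neutral placeholders so the
            screening model never sees the name, email, or phone number."""
            redacted = resume_text.replace(candidate_name, "[CANDIDATE]")
            redacted = EMAIL_RE.sub("[EMAIL]", redacted)
            redacted = PHONE_RE.sub("[PHONE]", redacted)
            return redacted

        resume = (
            "Jane Doe\n"
            "jane.doe@example.com | +1 555 123 4567\n"
            "10 years of Python, led a team of 5\n"
        )
        # Only the redacted text gets passed to the model scoring the resume.
        print(redact_pii(resume, "Jane Doe"))

    Note this only scrubs what’s explicitly personal; as mentioned above, proxies like schools or cities can still leak demographic signal, so redaction alone doesn’t guarantee an unbiased evaluation.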