• 5 Posts
  • 94 Comments
Joined 7 months ago
Cake day: January 20th, 2024



  • I watched a “let’s play” series of Alan Wake 1 and 2. Just wanna share my thoughts on AW2. There are spoilers below.

    So much reading. Yes, you can skip a lot of it. Yes, the title is Alan Wake 2 and Wake is a writer. But a gamer who very much dislikes reading may be turned off or think, “I may be missing a lot by skipping most of the reading, so I’ll drop this game.”

    It seems Alice is alive. So she’s been living sans Alan for 13 years minimum. Extremely :(. I wanted a :) ending for them. I can imagine an AW2 where the year is 2013 instead of 2023, and Alan was able to escape the “dark place” in 2013 and live with Alice happily. The writer (I dislike Sam Lake’s writing, so I’m not saying Sam Lake) could figure out the plot gymnastics in between and still make the game just as interesting.

    Scratch slew Jaakko Koskela so easily. So powerful. He could just slay Anderson, Casey, the Cult of the Tree and the FBC at the start.

    The FBC’s light arrays being placed far from each other outside the sheriff’s station was plot convenience.

    Anderson aided Wake with writing the ending despite Wake inserting Logan and David into the tale. Meh.

    Sarah and Barry were sidelined. :(

    Sam Lake most likely disliked a fairy-tale :) ending for AW2, but don’t tell me that in horror the hero must pay a price to save his pals (or something like that) like it’s a rule.

    It’s time.com’s best game of 2023, but it’s 👎 for me.



  • Been using Samsung Messages for years, not Google Messages, but I just wanna comment.

    I’m OK with it. It’s been an AI race. It’s natural that Google is taking advantage of the millions of Android phones and of the fact that many are using Google Messages and Gmail. Adding Gemini to Google Messages is sensible to me. I just hope it won’t be annoying.

    Gmail has been my main email, but I haven’t seen Gemini in it. Maybe in the future?


  • Found your comment via Lemmy search.

    Hours ago I watched Civil War. It was entertaining but not great.

    Plemons’ scene was 👍. He’s 👍 at the remorseless-killer role. In my mind I compared that scene to the clichéd action scene in countless movies where there’s so much killing. In that clichéd scene, the deaths had little effect on me. Maybe because the writer didn’t establish some kind of connection with the viewer? While in Plemons’ scene, I didn’t want Jessie and Lee to die, so the two Asian characters’ deaths were effective for me. Also, Jessie, Lee, Joel and Tony didn’t know what to say to Plemons’ question. Does saying something matter? What’s the right answer, so that Plemons will let us go? I guess that type of writing creates tension.

    Jessie walking into the line of fire at the end was lame.








  • I like that the writer thought about climate change. I think it’s been one of the biggest global issues for a long time. I hope there’ll be increasing use of sustainable energy not just for data centers but for the whole tech world in the coming years.

    I think a digital waiter doesn’t need a rendered human face. We have food-ordering kiosks. Those aren’t AI. I think those suffice. A self-checkout grocery kiosk doesn’t need a face either.

    I think “client help” is where AI can at least assist. Imagine a firm that’s been operating for decades and has encountered so many kinds of client complaints. It can feed all that data to a large language model. With the model responding to most of the client complaints, the firm can reduce the number of its client support people. The model would pass the complaints that are too complex, or that it doesn’t know how to address, to the client support people. The model handles the easy and medium complaints; the client support people handle the rest. There’s a rough sketch of this triage idea after this comment.

    Idk whether the government or the public should stop AI from taking human jobs or let it. I’m torn. Optimistically, workers can find new jobs. But we should imagine that at least one person will be fired and can’t find a new job. They’ll be jobless for months. They’ll have an epic headache as they can’t pay next month’s bills.
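    A minimal sketch of that complaint-triage idea, assuming a hypothetical answer_complaint() call that returns a draft reply plus a confidence score (the function name, threshold, and scoring are made up for illustration; a real system would call an actual LLM API):

    ```python
    # Hypothetical complaint-triage sketch: an LLM handles the easy/medium
    # complaints and escalates the rest to human support staff.
    # answer_complaint() is a stub standing in for a real LLM API call.

    def answer_complaint(text: str) -> tuple[str, float]:
        """Stub for an LLM call: returns (draft reply, confidence between 0 and 1)."""
        if "refund" in text.lower():
            return ("We've started your refund; expect it within 5 business days.", 0.9)
        return ("", 0.2)  # the model is unsure about anything else

    def triage(complaint: str, threshold: float = 0.75) -> str:
        """Auto-reply when the model is confident enough, otherwise escalate."""
        reply, confidence = answer_complaint(complaint)
        if confidence >= threshold:
            return f"[auto-reply] {reply}"
        return "[escalated to human support]"

    if __name__ == "__main__":
        print(triage("Where is my refund for order 1234?"))
        print(triage("Your update bricked my device and support keeps hanging up on me."))
    ```

    The hard part in practice would be getting a trustworthy confidence signal; the threshold then decides how much still reaches the human support people.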










  • The article is too long for me. Two of its main ideas are “Everyone using large language models should be aware of AI hallucination and be careful when asking those models for facts” and “Firms that develop large language models shouldn’t downplay the hallucination and shouldn’t force AI into every corner of tech.”

    There was already so much misinformation on the Web before ChatGPT 3.5. There’s still so much misinformation. No need for the hallucination to worsen the situation. We need a reliable source of facts. Optimistically, Google, OpenAI or Anthropic will find a way to reduce or eradicate the hallucination. The Google CEO said they were making progress. Maybe true. Or maybe a generic PR lie so folks would stop following up on the hallucination.