• 0 Posts
  • 37 Comments
Joined 10 months ago
Cake day: September 22nd, 2023

  • But my point is that that is due to one’s choices. You don’t actually have to get married, have kids, or get on the career treadmill. But some don’t even get these choices, and so have no option but to follow a non-traditional path: people with disabilities, people living in places where they are displaced by political instability, poverty, war, etc.

    Someone growing up in Ukraine right now probably won’t consider this to be the happiest time in their life. A person with a disability struggling to establish independence in even basic life activities. People who just say f–k it.


  • Weird article. I think if your middle-aged years are your most unhappy, it’s because you haven’t made the right decisions for yourself. Too many people chase the things they are told will make them happy rather than what they like and know makes them happy. Doing what you are “supposed” to do. Alternative paths have their hardships, but for me at least, I don’t have the kind of complaints I hear from others. You always have to make some kind of compromise…


  • That is pretty interesting, and thanks for posting it. I hear the words and it’s intriguing, but to be honest, I don’t really understand it. I’d have to give it some thought and read more about it. Do you have a place you suggest going to learn more?

    I currently use ChatGPT-4o for learning Python and for help with grammar. I find it does great with grammar, but even with relatively simple Python questions it can produce some “creative” answers. It’s in the ballpark, but it’s not perfect, and for a learner, that’s learning the hard way. To be fair, I don’t use the assistant/code interpreter, which I have no idea about, but based on its name I assume it might be better. So that’s what I’ve based my somewhat skeptical opinion of AI on.
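
    To give a made-up illustration (not an actual ChatGPT transcript, just the flavor of “in the ballpark but not quite right” answer I mean), imagine asking how to grab the last three items of a list:

    ```python
    items = [1, 2, 3, 4, 5]

    # Plausible-looking but wrong: this drops the last three items
    # instead of keeping them.
    wrong = items[:-3]
    print(wrong)   # [1, 2]

    # What the learner actually wanted: the last three items.
    right = items[-3:]
    print(right)   # [3, 4, 5]
    ```

    Both lines look reasonable to a beginner, and that’s the problem: you can’t tell which one is right until you already know the answer.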


  • From my understanding, AI is essentially a statistical method, so naturally it will use a confidence level. It’s hard for me to take the leap of faith that confidence level will correlate with accuracy. Seems to me it would depend more on the data set. If the data contains a commonly held belief that is incorrect, wouldn’t the model have a high confidence level in an answer containing that incorrect info? If we use a highly authoritative data set, it will be very limited, and we’d be back to more of a keyword system than an LLM. I am sure that with time we’ll end up in more of a middle ground where accuracy is better, but how much better? 5%? 3%? 10%? (Toy sketch of what I mean at the end of this comment.)

    I’ll freely admit I am not an expert in this at all.
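
    To make the frequency-vs-truth worry concrete, here’s a toy counting model (purely illustrative, nothing like a real LLM’s architecture) where the made-up “confidence” just tracks how often an answer appears in the data, not whether it’s correct:

    ```python
    from collections import Counter

    # Hypothetical training data where a popular misconception dominates.
    corpus = [
        "goldfish have a three second memory",      # common but wrong
        "goldfish have a three second memory",
        "goldfish have a three second memory",
        "goldfish can remember things for months",  # correct but rare
    ]

    # Count the completions that follow the word "goldfish".
    completions = Counter(line.split(" ", 1)[1] for line in corpus)
    total = sum(completions.values())

    for answer, count in completions.most_common():
        print(f"{count / total:.0%} confident: goldfish {answer}")
    ```

    The toy model is 75% “confident” in the misconception simply because it saw it more often; nothing in the counting step checks whether the claim is true.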


  • That is so funny.

    chatgpt: “Artificial Intelligence (AI) represents a transformative investment opportunity, characterized by robust growth potential and broad applicability across industries. The AI market, projected to exceed $190 billion by 2025, offers substantial upside in sectors such as healthcare, finance, automotive, and e-commerce. As businesses increasingly adopt AI to enhance efficiency and innovation, associated firms are poised for significant returns. Key investment areas include machine learning, natural language processing, robotics, and AI-driven analytics. Despite risks like regulatory challenges and ethical concerns, the strategic deployment of capital in AI technologies holds promise for long-term value creation. Diversification within this space is advisable to mitigate volatility.”



  • Putting aside the crypto aspect, this is a simple story of a lack of zoning and government regulation. I am sure it sucks for those who live near these places, but the problem is that they were allowed to be built near residential areas at all. There will always be noisy or polluting industry, but sensible planning puts these sorts of places away from where they will most harm people and disrupt their lives, and forces them to minimize the amount of noise and pollution they produce in the first place.

    This is just one example of the many reasons we should be willing to put up with government regulation. Trust me, I know how annoying it can be, but we’re doomed without it. Now that the Supreme Court has defanged our institutions, i.e. by gutting Chevron deference, you can expect a lot more of these sorts of problems, with less ability to fight them.


  • The types of crypto that web3 uses are proof-of-stake chains, not proof-of-work chains, so their energy usage is not much different from any web-based service. People don’t do nuance: Bitcoin uses proof of work, proof of work uses a lot of power, and that’s as much as most people know. But there are thousands of different blockchains, and almost all of them besides Bitcoin use proof of stake. So just from that point of view, it’s not significantly different from any other web project as far as the climate is concerned. (Toy sketch of the difference at the end of this comment.)

    That said, I can’t speak to the article posted above, and the other concerns about crypto and AI still apply.
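
    For anyone curious, here’s a toy sketch of where the proof-of-work vs. proof-of-stake energy difference comes from (hypothetical names and numbers, nothing like any real chain’s implementation):

    ```python
    import hashlib
    import random

    def proof_of_work(block_data: str, difficulty: int = 4) -> int:
        """Brute-force a nonce until the hash starts with `difficulty` zeros.
        Every failed attempt is wasted computation, i.e. electricity."""
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce
            nonce += 1

    def proof_of_stake(validators: dict[str, float]) -> str:
        """Pick the next block proposer with probability proportional to stake.
        One weighted random draw, so essentially no extra energy cost."""
        names = list(validators)
        stakes = list(validators.values())
        return random.choices(names, weights=stakes, k=1)[0]

    print(proof_of_work("example block"))                  # tens of thousands of hashes
    print(proof_of_stake({"alice": 32.0, "bob": 16.0}))    # a single draw
    ```

    The real protocols are far more involved, but the shape of the difference is the same: one races through hashes, the other just picks a validator.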




  • I don’t know the source, so it’s hard for me to comment, but logically the problem as stated is plausible, i.e. legacy debt preventing the move to more efficient methods.

    However, the conclusion, i.e. that we should therefore replace humans with humanoid robots, does not follow. And tacking on unionization is just a different subject altogether. You can staff some aspects of a factory with robots, and the humans’ work shifts from production to maintenance. I’ve talked to automation people, and robots can be very problematic; I’d imagine anything “advanced” would be much more so.

    Although it’s not a recent term, some have referred to these robots as “BOBs”: blind one-armed builders. If very well calibrated and designed for a specific task, they can be OK, except when they go wrong. To think some “AI”-driven general-purpose robot is going to substantially replace human labor any time soon… I very seriously doubt that. Especially with that kook as leadership.


  • In my understanding, derivatives amplify the problems and risks. Underlying that are the money people who push on these systems as hard as they can and exploit every angle. Along the same lines of pushing the boundaries, the practice of brokers “loaning” shares seems like another place that’s bound to cause issues at its limits. I really wish the govt would step in and impose much stricter regulation. I’d like to trust that buying stock is investing in a company, rather than feeling like the stock market is a school of small fish swimming with sharks who cheat as much as they believe they can get away with. If the focus were on dividends vs. growth, I think we’d be better off. Maybe I am wrong, but that’s how I see it.

    I think of it like network security. Anything you do not explicitly disallow will be tried, used, and used in ways you probably didn’t think of. It isn’t a matter of expecting people to do the right (or legal) thing; most will, but it’s a certainty that some will not. That’s normal, and it’s why security is a process and systems have to adapt over time in response. (Toy illustration below.)
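
    Spelled out as a toy default-deny check (hypothetical port numbers, nothing to do with any real broker or exchange), the idea looks like this:

    ```python
    # Default-deny: anything not explicitly allowed is rejected.
    ALLOWED_PORTS = {22, 80, 443}  # hypothetical allowlist

    def is_allowed(port: int) -> bool:
        # No special cases: if it isn't on the list, it's denied.
        return port in ALLOWED_PORTS

    for port in (443, 8080, 31337):
        print(port, "allow" if is_allowed(port) else "deny")
    ```

    The point isn’t the firewall itself; it’s that market rules tend to work the other way around, where anything not yet forbidden gets exploited until a regulator notices.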



  • The great thing about the stock market compared to other investments like crypto is that stocks are based on the inherent value of the businesses they represent. Stocks are based on financial fundamentals. You can believe in those investments because they are based on something real and not simply rampant speculation. For example:

    Tesla. Worth more than most of the rest of the car market combined because… reasons?

    PayPal. Lost 80% of its value over the year starting in July 2021 and never recovered. Because of terrible problems? Huge losses? Nope, because it “only” grew at 8-9%.

    2008 US housing, rated as a “AAA” investment, i.e. “good as cash,” based on actual trash.


  • Calling LLMs “AI” is one of the most genius marketing moves I have ever seen. It’s also the reason for the problems you mention.

    I am guessing that a lot of people are just thinking, “Well, AI is just not that smart… yet! It will learn more and get smarter and then, ah ha! Skynet!” That is a fundamental misunderstanding of what LLMs are doing. An LLM may be a partial emulation of intelligence: like humans, it uses its prior memory and experiences (data) to guess what an answer to a new question would look like. But unlike human intelligence, it doesn’t have any idea what the things it says actually mean. (Toy illustration below.)
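
    A toy illustration of what I mean (a word-pair counter, absurdly simpler than a real LLM, but the same “guess what comes next from prior data” idea):

    ```python
    import random
    from collections import Counter, defaultdict

    # Build a tiny "model" that only knows which word tends to follow which.
    text = "the cat sat on the mat the cat ate the fish".split()
    next_word = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        next_word[current][following] += 1

    def generate(start: str, length: int = 5) -> str:
        """Repeatedly guess the next word from observed frequencies."""
        out = [start]
        for _ in range(length):
            counts = next_word[out[-1]]
            if not counts:
                break
            words, weights = zip(*counts.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))  # plausible-looking sequence, zero understanding
    ```

    It produces sequences that look statistically reasonable without any notion of what a cat or a mat is. Scale that up enormously and you get something far more fluent, but it’s still prediction, not understanding.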






  • I didn’t make my point clear. My question wasn’t really where the image was sourced; it was more about the value of what Google is doing by putting an essentially random image next to the text it scraped from a website. Why did it choose that image? Adding a random image like that seems like what a low-grade SEO shop would do to tick the needed boxes, not what you’d expect from a high-quality product from a multi-billion-dollar company. The image in no way enhances the meaning of what I asked. In fact, it does the opposite. It is a bit of Google becoming what it mocked.