• 0 Posts
  • 77 Comments
Joined 1 year ago
Cake day: June 15, 2023



  • You can certainly try to use the power as much as possible, or sell the energy to a country with a deficit. But the problem is that if you build renewables to cover 100% of grid demand now and in the future, you would still need to invest a lot of money to make sure the grid can handle the excess. Centralized fuel sources require far fewer grid changes because the power flows from one place and spreads from there, so infrastructure only needs to be improved close to the source. Renewables, as decentralized power sources, require the grid to be strengthened everywhere they are placed, and often that is not practical, both in financial costs and in the engineers it takes to actually do it.

    Would it be preferable? Yes. Would it happen before we need to be fully carbon neutral? Often not.

    I’d refer you to my other post about the situation in my country. We have a small warehouse, the size of a few football fields, which stores the most radioactive of our unusable nuclear fuel, and it still has more than enough space for centuries. The rest of the fuel is simply re-used until it’s effectively regular waste. Building two new nuclear reactors here also takes only about 10 years, not 20.

    “Rather continue with wind and solar and then batteries for the money.”

    All of these things should happen regardless of nuclear progress. And they do happen. But again, building renewables isn’t just about the price.


  • Some personal thoughts: My own country (the Netherlands), despite a very vocal anti-nuclear movement in the 20th century, has now completely flipped to where the only parties not in favor of nuclear are the Greens, who at times cite that fear as a reason not to do it. As someone who treats climate change as truly existential for a country that lies below projected sea levels, I find it makes them look unreasonable, as if they aren’t taking the issue seriously. We have limited land too, and a housing crisis on top of it. So land usage is a big pain point for renewables, and even if the land is unused, it is often so close to civilization that it affects people’s feelings about their surroundings, which might keep renewables from making it as far as they could unrestricted. A nuclear reactor takes up a fraction of the space and can be relatively hidden from people.

    All the other parties that heavily lean into combating climate change at least acknowledge nuclear as an option that should be (and is being) explored. And even the more climate-skeptical parties see nuclear as something they could stand behind. Having broad support for certain actions is also important to actually getting things done. Our two new nuclear power plants are expected to be running by 2035, only ten years from now, ahead of our climate goal of being net-zero by 2040.


  • People are kind of missing the point of the meme. The point is that nuclear is down there along with renewables in safety and efficiency. It lacks the egregious cover-up of the original meme, even if it has legitimate concerns now. And due to society’s ever-increasing demand for electricity, we will heavily benefit from having a more scalable solution that doesn’t require covering, and potentially disrupting, massive amounts of land before its operations can be scaled up to meet extraordinary demand. Wind turbines and solar panels don’t stop producing when we can’t use their electricity either, so we can’t just build as many of them as we like without risking complications outside peak hours; many electrical networks aren’t built to handle those loads. A nuclear reactor can be scaled down to use less fuel and put less strain on the electrical network when it isn’t needed.

    It should also be said that money can’t always be spent equally everywhere. And depending on the labor required, there is also a limit to how manageable infrastructure is when it scales. The people who maintain and build solar panels, hydro, wind turbines, and nuclear are not the same people. And if we acknowledge that climate change is an existential crisis, we must put our eggs in every basket we can to diversify the energy transition. All four of the safest and most efficient solutions we have should be tapped into. But nuclear is often skipped because of outdated conceptions and fear. It does cost a lot and takes a while to build, but it fits certain shapes in the puzzle that none of the others fit as well.


  • Google Docs, Sheets, and Forms should also get a mention. People forget that before those, the only way to work together on documents was a shared drive with file locking, where only one person could work on a file at a time: complicated and impractical. There are still no massively adopted replacements for them (or they’re made by Microsoft, lol).
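    For illustration, here’s a minimal sketch of that old locking model, assuming a POSIX system (the function and file name are made up for the example):

    ```python
    # Sketch of the shared-drive model: an exclusive (advisory) file lock,
    # so only one writer can hold the document at a time. POSIX-only.
    import fcntl

    def edit_document(path: str, new_text: str) -> None:
        with open(path, "a+") as f:
            # Blocks until no one else holds the lock; everyone else waits.
            fcntl.flock(f, fcntl.LOCK_EX)
            try:
                f.write(new_text)
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)

    edit_document("report.txt", "only one of us can do this at a time\n")
    ```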




  • Yes, it would be much better at mitigating it, and it would beat all humans at truth accuracy in general. And for truths that can be easily and individually proven, and/or remain unchanged forever, it could basically be right 100% of the time. But not all truths are that straightforward.

    What I mentioned can’t really be unlinked from the issue if you want to solve it completely. Have you ever found out later on that something you told someone else as fact turned out not to be so? Essentially, you ‘hallucinated’ a truth that never existed, but you were just that confident it was correct, so you shared and spread it. It’s how we get myths, popular belief, and folklore.

    For those other truths, we simply take as true whatever has reached a likelihood we consider certain. But the ideas and concepts in our minds constantly float around on that scale. And since we cannot really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth that which is not. To avoid that, it would have to be pretty much everywhere at once, personally interpreting information straight from the source. But then things like how fast it can process all of that come into play. Without making guesses about what’s going to happen, you basically can’t function in reality.
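    As a toy illustration of that threshold idea (the prior, likelihoods, and 0.99 cutoff are all invented numbers, not a claim about how minds actually work):

    ```python
    # Treating "truth" as a belief that crosses a confidence threshold,
    # updated with Bayes' rule as supporting observations come in.
    def bayes_update(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
        evidence = p_e_if_true * prior + p_e_if_false * (1 - prior)
        return p_e_if_true * prior / evidence

    belief = 0.5  # start undecided
    for _ in range(5):  # five independent observations supporting the claim
        belief = bayes_update(belief, 0.9, 0.2)

    print(f"belief = {belief:.4f}")  # ~0.9995
    print("treat as true" if belief > 0.99 else "still uncertain")
    ```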


  • Yes, a theoretical future AI that could self-correct would eventually become more powerful than humans, especially if you could give it ways to run orders of magnitude more self-correcting mechanisms at the same time. But it would still be making ever-so-small assumptions wherever there is a gap in the information it has.

    It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be so.

    A round trip around the globe over fiber optics takes hundreds of milliseconds, so even if it has the truth on some matter, there’s no guarantee that truth didn’t change in the milliseconds it needed to become aware of it. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) propagates at most at the speed of light.
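    To put rough numbers on that claim (the fiber speed factor and the equatorial circumference here are back-of-the-envelope assumptions, and real routes add routing and switching delays):

    ```python
    # Rough estimate of a round trip around the globe over optical fiber.
    C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
    FIBER_FACTOR = 0.69              # refractive index ~1.45 -> ~69% of c
    EARTH_CIRCUMFERENCE_KM = 40_075  # equatorial circumference

    one_way_s = EARTH_CIRCUMFERENCE_KM / (C_VACUUM_KM_S * FIBER_FACTOR)
    print(f"round trip: {2 * one_way_s * 1000:.0f} ms")  # ~387 ms
    ```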

    “A big mistake you are making here is stating that it must be fed information that it knows to be true; this is not inherently true. You can train a model on all of the wrong things to do, and as long as it has the capability to understand this, it shouldn’t be a problem.”

    The dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it never will. And if it is incomplete, the model has to make assumptions at some point based on the incomplete data it has, which opens it up to being wrong, which we would call a hallucination.


  • I’m not sure where you think I’m giving it too much credit, because as far as I can tell we already totally agree lol. You’re right, methods exist to diminish the effect of hallucinations; that’s what the scientific method is. Current AI has no physical body and can’t run experiments to verify objective reality. It can’t fact-check itself other than being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them with something probable - which is likely going to be bullshit.

    My point was simply that to truly fix it would be to basically create an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.


  • Hallucinations in AI are fairly well understood as far as I’m aware; they’re explained at a high level on the Wikipedia page for the term. And I’m honestly not making any objective assessment of the technology itself. I’m making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it’s given, but that’s something even a layman might know.)

    How to mitigate hallucinations is definitely something the experts are actively discussing, with limited success so far (and I certainly don’t have an answer there either), but a true fix should be impossible. One common mitigation is sketched below.
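    For example, a hedged sketch of self-consistency voting, one of the mitigation techniques discussed in the literature; `ask_model` is a hypothetical stand-in for whatever LLM call you use, not a real API:

    ```python
    # Sample the same question several times and keep the majority answer;
    # this filters out some (not all) one-off hallucinations.
    from collections import Counter

    def ask_model(question: str) -> str:
        raise NotImplementedError("plug in your model call here")

    def self_consistent_answer(question: str, samples: int = 5) -> str:
        answers = [ask_model(question) for _ in range(samples)]
        best, count = Counter(answers).most_common(1)[0]
        # If no answer dominates, the model is probably guessing.
        if count <= samples // 2:
            return "not sure"
        return best
    ```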

    I can’t exactly say why I’m passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decision about the place AI takes in our society. But I’m also passionate about human psychology and creativity, and what we can learn about ourselves from the quirks we see in these technologies.



  • It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn’t exist in the physical world. Humans hallucinate too - all the time. It’s just that our approximations are usually correct, so we don’t call them hallucinations anymore. But realistically, the signals coming from our feet take longer to arrive than those from our eyes, so our brain has to predict information to create a coherent experience. It’s also why we don’t notice our blinks, or why we don’t see the blind spot our eyes have.

    AI, being a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

    Hallucinations shouldn’t be treated like a bug. They are a feature - just not one the big tech companies wanted.

    When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.


  • ClamDrinker@lemmy.world to memes@lemmy.world · “A bit late” · edited 2 months ago

    The thing is, I’ve seen statements like this before. Except when I heard them, they were being used to justify ignoring women’s experiences and feelings in regard to things like sexual harassment and feeling unsafe, since those are “just feelings” as well. It wasn’t okay then, and it’s not okay the other way around. The truth is that feelings do matter, on both sides. Everyone should feel safe and welcome in their surroundings, and how well we achieve that is reflected in how those people feel.

    Men feeling respected and women feeling safe are not mutually exclusive outcomes. The sad part is that anyone reading this here is far more likely to be an ally than a foe, yet the people who most need to hear the intended message will most likely never hear it nor be bothered by it. There’s a stick being wedged here that is only meant to divide, and oh my god is it working.

    The original post about bears has completely lost all meaning, and any semblance of discussion is lost because the metaphor is inflammatory by design - sometimes that’s a good thing, to highlight a point through absurdity. But metaphors are fragile: if one is very likely to be misunderstood or to offend, the message is lost in emotion. Personally, I think this metaphor is just highly ineffective at getting the message across, as it has driven people who would stand by the original message to the other side through the many uncharitable interpretations it invites. And among the crowd of reasonable people are those who confirm those interpretations and muddy the water, making women seem like misandrists and men like sexual assault deniers. This meme is simply terrible, and perhaps we can move on to a better version that actually gets the message across instead of getting people at each other’s throats.






  • It’s funny how something like this gets posted every few days and people keep falling for it like it’s somehow going to end AI. The people who make these models are acutely aware of how to avoid model collapse.

    It’s totally fine for AI models to train on AI-generated content that is of high enough quality. Part of the research that goes into training models is building datasets with a text description matching the content, and filtering out content that is not organic enough (or even specifically including it as a ‘bad’ example for the AI to avoid). AI can produce material indistinguishable from human work, and it produces material that wasn’t originally in the training data. There’s no reason that can’t be good training data itself.
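    A rough sketch of what that filtering step could look like; `quality_score` and the 0.8 threshold are illustrative stand-ins, not any lab’s actual pipeline:

    ```python
    # Split candidate training texts into accepted samples and explicit
    # negative examples, based on a quality/"organic-ness" score.
    def quality_score(text: str) -> float:
        raise NotImplementedError("e.g. a trained quality classifier")

    def build_training_set(candidates: list[str], threshold: float = 0.8):
        kept, rejected = [], []
        for text in candidates:
            (kept if quality_score(text) >= threshold else rejected).append(text)
        # Low-quality samples can still serve as 'bad' examples to avoid.
        return kept, rejected
    ```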