I couldn’t find the game’s link in the article, so here it is for others: https://store.steampowered.com/app/2468250/silkbulb_test/
Starting off with “we’ve heard your feedback” is something I’ve never heard from an abusive parent?
Many things are designed for engagement, so what’s your point? Some people use Lemmy like Reddit and care about internet points that don’t matter. “The rising number is designed to exploit your behavioral patterns and enforce your engagement.” Instead of daily, it’s multiple times, but the point is you can paint many business models like this.
People download the app to get better at a skill. It’s designed to be effective at doing that. It’s a skill people want to learn. How is that exploitive or manipulative?
Full disclosure: I’ve worked in game design and F2P for about 10 years. I know there’s some personal bias here, but there are much worse examples of this stuff than Duolingo. Painting good actors as bad actors is not correct.
The anecdote part at the end is irrelevant for both of us. I have the opposite experience and don’t even use this app: a bunch of my friends seem to all use it for learning languages. /shrug
Why evil? I’m not a capitalist, but it’s a language learning company being silly; they aren’t causing massive injustice.
About to be a lot of “accidental” falls out of windows.
CPTSD is not that common: some people within psychology don’t even agree that it’s a distinct diagnosis.
I’ve had PTSD since I was 10 due to a violent childhood trauma. My abuser was a parent, and I couldn’t leave. I felt horrible fear daily, struggled to sleep for many years, and have lasting issues that I’m actively working against. Eventually, a therapist told me she believed I had CPTSD, so I spent time researching and learning about it. I was surprised to learn it was a divisive subject (this was in 2019).
I don’t think adding the C does much. I’m not sure if the distinct diagnosis helps. Sometimes, it feels like people add the C to try and validate what they went through as harsher or warranting special care. Pain is pain, and I don’t like comparing pain in that way. Whether it’s one horrible incident, repeated incidents, or a pervasive atmosphere, everyone’s pain in their journey is valid.
BPD is another diagnosis that often gets used or combined with PTSD. In my experience, people suffering from BPD have a specific vibe that’s hard to describe (sorta like wanting relationships but often assuming poorly of others, due to trauma or imbalances). I was diagnosed with BPD at one point, but that didn’t hold water as I sought help.
Anyway…I guess I’m disappointed that it sometimes feels like people are collecting disorders or heightening them for clout or focus without understanding how that can devalue the meaning of the words. Whether you have PTSD, CPTSD, or BPD, it’s not Pokémon. Everyone’s experience is going to be unique, and classifying is there to help you identify treatment or communicate quickly with other humans. But, I don’t like when those classifications are used poorly either.
I believe in UBI, but the Captain Laserhawk show made me aware of how much it could get twisted in fucked up ways. “Don’t watch this show? -$100 from your stipend this month.” I used to think things like that were fear mongering, but the world is all kinds of weird today.
More AI:
Do you hear the denim sing?
Singing a song of jean-clad men?
It is the fabric of the people
Who won’t wear slacks again!

When the stitching in your seams
Echoes the rhythm of the looms
There is a style about to gleam
When tomorrow’s hemline blooms!
The expansion of that abbreviation feels like an Idiocracy joke.
“We store the computer data on VLDs.” “What is a VLD?” “Very Large Disc™. It’s pretty advanced.” And then they just bring out an insanely large disc.
Maybe more apt for me would be, “We don’t need to teach math, because we have calculators.” Like…yeah, maybe a lot of people won’t need the vast amount of domain knowledge that exists in programming, but all this stuff originates from human knowledge. If it breaks, what do you do then?
I think someone else in the thread said good programming is about the architecture (maintainable, scalable, robust, secure). Many LLMs are legit black boxes, and it takes humans to understand what’s coming out, why, and whether it’s valid.
Even if we have a fancy calculator doing things, there still needs to be people who do math and can check. I’ve worked more with analytics than LLMs, and more times than I can count, the data was bad. You have to validate before everything else, otherwise garbage in, garbage out.
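To make that “validate before everything else” habit concrete, here’s a minimal sketch of the idea. The field names and rules are invented for illustration, not from any real analytics pipeline:

```python
# Minimal validate-before-processing sketch. The schema (user_id, event,
# revenue) and the allowed event set are hypothetical examples.

def validate_row(row):
    """Return a list of problems found in one analytics record."""
    problems = []
    if not isinstance(row.get("user_id"), int):
        problems.append("user_id missing or not an int")
    if row.get("event") not in {"install", "purchase", "session_start"}:
        problems.append(f"unknown event: {row.get('event')!r}")
    revenue = row.get("revenue")
    if not isinstance(revenue, (int, float)) or revenue < 0:
        problems.append("revenue missing or negative")
    return problems

rows = [
    {"user_id": 1, "event": "purchase", "revenue": 4.99},
    {"user_id": "oops", "event": "purchase", "revenue": -1},
]

# Filter out bad records *before* any aggregation happens.
clean = [r for r in rows if not validate_row(r)]
```

Only the first record survives; the second would have silently poisoned any downstream aggregate. The point is just that the check comes first, before anything fancier touches the data.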
It sounds like a poignant quote, but it also feels superficial. Like, something a smart person would say to a crowd to make them go, “Ahh!” but that doesn’t hold water for long.
I generally agree. It’ll be interesting to see what happens with models, the datasets behind them (particularly copyright claims), and more localized AI models. There have been tasks where AI greatly helped and sped me up, particularly quick Python scripts to solve a rote problem, along with early / rough documentation.
However, using this output as justification to shed head count is questionable for me because of the further business impacts (succession planning, tribal knowledge, human discussion around creative efforts).
If someone is laying people off specifically to gap fill with AI, they are missing the forest for the trees. Morale impacts whether people want to work somewhere, and I’ve been fortunate enough to enjoy the company of 95% of the people I’ve worked alongside. If our company shed major head count in favor of AI, I would probably have one foot in and one foot out.
This has been my general worry: the tech is not good enough, but it looks convincing to people with no time. People don’t understand you need at least an expert to process the output, and likely a pretty smart person for the inputs. It’s “trust but verify”, like working with a really smart parrot.
Yeah, this phrase makes way more sense within the context of a game or game theory. For me, it goes back to fighting games or sports. People play to win in those settings. The rules are heavily defined, and the players must abide. These other examples are people misusing the phrase.
It’s not as much. GaaS is the predominant model, and you make more on the LiveOps side than the launch recoup period.
Source: Developer of 10 years, ex-Director at a 200-person company.
There was a similar study reported the other day about using fMRI imaging and AI to recreate the “thought content” of someone’s brain. It required training the AI on that specific person’s brain, along with some other training. It does seem these techniques can work with some specialized models, but yeah, it doesn’t seem like hooking someone’s brain up to this would create a movie of their mind or something.
I think the more dangerous part is that this is step 0; this tech would have seemed impossible 10 years ago. Very strange times.
Easy back for me. The original RoA is one of my favorite platform fighters. I’m happy to support Dan and crew for their next venture. I can’t wait till beta opens. :)
Game designer.
I’m a Director of Game Design now.
Metaethics focuses on the underlying framework behind morality. Whenever you’re asking, “But why is it moral?”, that’s metaethics.
Metaethics splits between cognitivism (moral statements can be true or false) and non-cognitivism (moral statements are not true or false). One popular cognitivist branch is natural moral realism, the idea that there are objective moral facts. One popular non-cognitivist branch is emotivism, the idea that moral statements are all complicated “yays” or “yucks” and express emotions rather than true/false statements.
Cognitivism also has anti-realism, which holds that moral statements can be true or false, but only relative to each person or group. My issue is you lose the ability to call out certain behavior as wrong: slavery is wrong; not respecting others is wrong. If you want to believe all morality systems are valid, meaning your morality is no better than some radical thought group’s, then go ahead. On an emotional level, a speciesism level, a rights level, a deontological level, a utilitarian level, and many more, slavery is wrong. Again, some nut job doesn’t invalidate all other thoughts. That’s my take.
As far as I can tell, this product never panned out. It was backed by 132 people to cover 150k GBP in 2017. It was called the “Cyclotron Bike”.