Congrats bizzle! I hope to grow my own someday
You’ll never get good at a new skill if you’re too afraid of screwing up to even try, or if you give up after one go. Don’t expect perfection from yourself; just do the best you can, learn something from the process, and do it slightly better next time.
Fortunately cooking is very forgiving. If you do one or two things slightly wrong, just roll with it. Here is the guide I am using today for my butter … also its text-based counterpart. If it goes well I’ll update.
If it’s your first time, I recommend not using super nice flower; just get some cheap shake so you aren’t losing much if the batch gets scuffed. After a few batches of cheap stuff you can take what you’ve learned and go for high quality flower.
The crock pot will be smelly, so do that part outside or in the garage if you can. There are cheap propane ovens you can use outside for the decarb too; you just need an oven thermometer and to keep an eye on it.
A tool is a tool. It has no say in how it’s used. AI is no different from the computer software you use to browse the internet or do other digital tasks.
When it’s used badly, as an outlet for escapism or a substitute for social connection, it can lead to bad consequences in your personal life.
It’s best used as a tool to help reason through a tough task, or as a step in a creative process: on-demand assistance to aid the disabled, a non-judgmental conversational partner the neurodivergent and emotionally traumatized can open up to, or a rubber duck for a super genius working through novel ideas and complex thought processes. It can improve people’s lives if applied to the right use cases.
It’s about how you choose to interact with it in your personal life, and how society, businesses, and your governing bodies choose to use it in their own processes. And believe me, they will find ways to use it.
I think comparing LLMs to computers in the 90s is accurate. Right now only nerds, professionals, and industry/business/military see their potential. As the tech gets figured out, utility improves, and LLM desktops start getting sold as consumer grade appliances, maybe the attitude will change?
It delivers on what it promises for many of the people who use LLMs. They can be used for coding assistance, setting up automated customer support, tutoring, processing documents, structuring lots of complex information, generally accurate knowledge on many topics, acting as an editor for your writing, and lots more.
It’s a rapidly advancing pioneer technology, like computers were in the 90s, so every 6 months to a year brings a new breakthrough in overall intelligence or a new ability. The newest LLM models can process images or audio as well as text.
The problem for OpenAI is they have serious competitors who will absolutely show up to eat their lunch if they sink as a company: Facebook/Meta with their Llama models, Mistral AI with all their models, Alibaba with Qwen, plus some good smaller competition like the OpenHermes team. All of these big tech companies have open-sourced some models so you can tinker with and finetune them at home, while OpenAI remains closed source, which is ironic given the company name… Most of these AI companies offer cloud access to their models at very competitive pricing, especially Mistral.
The people who say AI is a trendy useless fad don’t know what they are talking about or are just upset at AI. I am part of the local LLM community and have been playing around with open models for months, pushing my computer’s hardware to its limits. It’s very cool seeing just how smart they really are, and what a computer that simulates human thought processes and knows a little bit of everything can actually do to help me in daily life.
Terence Tao, superstar genius mathematician, describes the newest high-end model from OpenAI as improving from an “incompetent graduate” to a “mediocre graduate”, which essentially means AIs are now generally smarter than the average person in many regards.
This month several competitor LLM models released which, while much smaller in size than OpenAI’s o1, somehow beat or equaled that big OpenAI model on many benchmarks.
Neural networks are here and they are only going to get better. We’re in for a wild ride.
Have you tried a DynaVap with an induction heater for concentrates? The Ispire Wand induction heater comes with its own cups and bangers, but the DynaVap works better in it.
I am a part of the Gemini protocol community. Newswaffle is a service hosted on the Gemini protocol that renders web pages as gemtext (a simplified variant of markdown). Newswaffle, the web article scraper, is developed by Acidus. Here is newswaffle’s GitHub.
I’m not a developer for it, but I am one of the few people on this planet who actively use it, and I have had many email conversations with the dev over the years. Some of my suggestions made it into their services, like lowtechmagazine being added to the main newswaffle page and Simple English Wikipedia being added to their Wikipedia gemtext mirror.
The GitHub you linked is actually for portal.mozz.us, which is a separate project that lets me share Gemini protocol stuff like newswaffle over the web with regular people who don’t really know about or understand Gemini and the smallnet. portal.mozz.us is developed and hosted by Michael Lazar (Mozz).
Here you go: a beautiful and open source news site article text scraper called “newswaffle”. Now feel free to browse the tomshardware articles with all that crap cut right out. I love it! Let me know if you are interested in how this works.
“Weed lab”! You make the procedure of baking a ground up plant in the oven for 30 minutes then putting it in a crock pot with coconut oil/butter sound like a Breaking Bad meth cooking operation. Jesse, we need to cook some brownies :)
I respect that processing hemp flower at home isn’t your thing. WNC CBD has always been top tier with its THCA flower, and the edibles from them will almost certainly kick ass. They know what they’re doing.
I suggested homemade edibles or tinctures as an economical and effective option. Edible users usually look to pot for frequent pain relief or stress medicine, and buying premade stuff that’s actually effective gets pricey quick for medical users.
Here in the USA you can buy legal THCA or CBD hemp flower shake dirt cheap right from wholesalers online. There are also specific cooking appliances, like the Magical Butter maker and the Nova FX, that automate the whole process.
In case you give it a second thought, the pot smell released during the oven baking process can be mitigated by sealing the flower in a mason jar while it cooks in the oven. The jar can easily withstand the 240°F temp you decarb the herb at. This also helps recapture active terpenes and cannabinoids that vaporize at low temps.
Good luck, hope you find some awesome stuff.
Learn to make your own edibles; the ones from dispensaries or online are weak and overpriced. You can do capsules, make butter, make alcohol tincture, and more. Just find a good supplier of shake.
It’s not just AI code but AI stuff in general.
It boils down to Lemmy having a disproportionate amount of leftist liberal arts college student types. That’s just the reality of this platform.
Those types tend to see AI as a threat to their independent creative businesses, and they feel slighted that their data may have been used to train a model.
It’s understandable why lots of people denounce AI out of fear, spite, or ignorance. It’s hard to remain fair and open to new technology when it’s threatening your livelihood and its early foundations may have scraped your data non-consensually for training.
So you’ll see an AI hate circlejerk post every couple of days from angry people who want to poison models and cheer for the idea that it’s just trendy nonsense. Don’t debate them. Don’t argue. Just let them vent and move on with your day.
Here’s the funny picture
Thanks for sharing; I knew him from some Numberphile vids, so it’s cool to see he has a Mastodon account. Good to know that LLMs are crawling from “incompetent graduate” to “mediocre graduate”, which basically means they’re already smarter than most people for many kinds of reasoning tasks.
I’m not a big fan of the way the guy speaks, though. As is common for super intelligent academic types, they use overly complicated wording to formally describe even the most basic opinions while mixing in hints of inflated ego and intellectual superiority. He should start experimenting with having o1 as his editor to summarize his toots.
Hey @brucethemoose, hope you don’t mind if I ding you one more time. Today I loaded up Qwen 14B and 32B. Yes, 32B (Q3_K_S). I didn’t do much testing with the 14B, but it spoke well and fast. To be honest, I was more excited to play with the 32B once I found out it would run. It just barely makes the mark of tolerable speed at just under 2 T/s (really more like 1.7 with some context loaded in). I really do mean barely; the people who think 5 T/s is slow would eat their hearts out. But that reasoning and coherence? Off the charts. I like the way it speaks more than Mistral Small too. So wow, just wow, is all I can say. Can’t believe all the good models that came out in such a short time and the leaps made in the past two months. Thank you again for recommending Qwen; I don’t think I would have tried the 32B without your input.
Truck gang
At least their coworkers know who the vampire is.
I am glad to have helped you out, @angrystego! I hope you enjoy SearXNG and it becomes a useful tool in your life. Paulgo is an excellent first instance choice. It was my daily driver when I wrote up that guide and it seems to still hold up well today.
Now I use search.inetol.net and can recommend that as a good alternative in case paulgo isn’t quite what you’re looking for or has too many timeout API errors. As always, it’s a good idea to visit searx.space and try out some of the top instances with the best response times.
The linked paper was a good read. Thank you.
Thanks for the recommendation. Today I tried out Mistral Small IQ4_XS, combined with running kobold in a headless terminal environment to squeeze out that last bit of VRAM. With that, the number of GPU layers offloaded could be bumped up from 28 to 34. Token speed went up from 2.7 T/s to 3.7 T/s, which is about a 37% speed increase. I imagine going to Q3 would get things even faster, or allow for a bump in context size.
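For anyone who wants to sanity-check that speedup figure, it’s just a ratio of the two throughput numbers from my run (the token rates are the only inputs; nothing here depends on kobold itself):

```python
def percent_speedup(old_tps: float, new_tps: float) -> float:
    """Percentage throughput gain when going from old_tps to new_tps tokens/sec."""
    return (new_tps - old_tps) / old_tps * 100

# Numbers from the run above: 28 -> 34 GPU layers offloaded
print(round(percent_speedup(2.7, 3.7), 1))  # prints 37.0
```

So 2.7 to 3.7 T/s works out to roughly a 37% gain, which feels huge when you’re sitting at the low end of tolerable speeds.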
I appreciate you recommending Qwen too; I’ll look into it.
Yeah, I know better than to get involved in debating someone who is more interested in spitting out five-paragraph essays trying to deconstruct and invalidate others’ views one by one than in bothering to double-check whether they’re still talking to the same person.
I believe you aren’t interested in exchanging ideas and different viewpoints; you want to win an argument and validate that your view is the right one. Sorry, I’m not the kind of person who enjoys arguing back and forth, over the internet or in general. Look elsewhere for a debate opponent to sharpen your rhetoric on.
I wish you well in life, whoever you are, but there is no point in us talking. We will just have to see how the future goes over the next 10 years.