• 0 Posts
  • 33 Comments
Joined 2 years ago
Cake day: June 13th, 2023

  • Like Google Plus.

    For me, the Apple ecosystem really cemented that consumers actively enjoy removing their own autonomy, structurally, which is a big part of why this stuff has become so normalized.

    Putting a rootkit on their CDs should have buried Sony. Antitrust should be a thing too. The Mickey Mouse Protection Act should have socially killed Disney, which only found success by exploiting works that no longer held copyright. Etc.

    Those with power have lost all accountability, and all tools, especially AI, will be used against us if we do not cooperatively figure out how to fix the increasing power imbalance.

    The more power someone has, the harder the gavel should fall on them when they fuck the entire planet in whatever way.

    At this point, any new consumer-friendly behaviour comes only to establish territory before hoarding and exploiting once they're enabled to do so.

    Amazon using deceptive design to influence general user behaviours should lead to billions and billions in fines until changed. Etc.

    Build local movements to cooperate at larger scale and fight back. If the general public were ranting about planned obsolescence and general monopolistic behaviours, maybe something could change before people are forced into violent desperation. Instead, people are too busy being mad at each other over one intentionally divisive narrative or another, and the general public just can't give a fuck about affecting the people who actually dictate the shape of society.

    Also, if you burn down all AI this is still true. But it's easier to yell at technology than the system using it to further remove your autonomy.


  • will try to take it in good humour, but i love how i got compared to AI, ADHD (AuDHD would be the real wombo combo here, so you get points), and schizophrenic people.

    and i would hope i don’t confabulate half as much as an LLM.

    although an understanding of the modern situation does require an unfortunately theoretical take, at a time when there's more noise, and more conspiracy theories being socially reified, than most people can remember. but i'd like to think i'm weighting this take via the best available expert consensus that i can find and source. the biggest 'correction' i'd make is that i was beaten black and blue for waiting outside of the library, which was unrelated to the protest.

    if you do actually care, and can handle more than the internet's usual 140-character tweet limit, here's some elaboration.

    the 'sycophancy into delusion' effect i refer to is widely reported on most news sites, where ChatGPT and the like cause a feedback loop into a psychotic break. that's one individual and one machine, but a group that affirms the same things has the same sycophantic effect. predictive processing and the bayesian brain are leading theories in psychology that nest well with other leading theories such as global workspace.

    that global workspace video is a very recent example with michael levin from tufts, who often works with friston's free energy principle and active inference (notes included in the wiki).

    friston has hundreds of thousands of citations, if you care about pedigree. i hope i do not poorly capture or inaccurately represent any of their ideas, but if you’d like to drink from the source, you have my full recommendation.

    that's where the "saving energy" stuff comes from. while the dunning-kruger effect (DKE) might not perfectly or accurately explain the situation, i'm all for better ways to convey that eco-niche-specific intelligence doesn't always transfer, especially if it's 'overfit to a local minimum.' knowing that you need high sample counts to gauge your intelligence in any particular niche is also related to the framework i'm describing. in the bio-world you have overspecialization, like pandas too fit to a specific environment, which may favour skills that don't transfer outside that environment. there's a lot more to gain from the full bayesian perspective, but there is a lot to be gained just by looking at how systems can successfully co-construct, and at the failure states that are inevitable as systems grow apart into new niche environments.

    there's actually an interplay between that 'energy saving' property and putting energy back out, which can be used to explore the environment, build a more robust model, and survive greater environmental shifts. this is explained in active inference; there's a good, if slightly old, textbook on MIT Press, and lots of other online resources for the curious.
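
    to make that interplay concrete, here's a minimal toy sketch (my own illustration, not from the textbook, and all the numbers and option names are made up): an agent scores each action by expected reward (the 'energy saving' exploit side) plus expected information gain (energy put back out to build a more robust model).

    ```python
    import math

    # toy beliefs about two options: how likely each is to pay off,
    # and how many times we've already tried it
    beliefs = {
        "familiar_option": {"p_reward": 0.6, "tries": 50},  # well known, decent payoff
        "novel_option":    {"p_reward": 0.5, "tries": 2},   # barely explored
    }

    def entropy(p):
        """shannon entropy of a bernoulli belief -- our uncertainty about an option."""
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def score(name, curiosity=0.5):
        """pragmatic value (expected reward) plus epistemic value
        (uncertainty left to resolve, shrinking the more we've sampled)."""
        b = beliefs[name]
        epistemic = entropy(b["p_reward"]) / (1 + b["tries"])
        return b["p_reward"] + curiosity * epistemic

    for name in beliefs:
        print(name, round(score(name), 3))
    # familiar_option 0.61, novel_option 0.667 -> with curiosity > 0 the
    # less-certain option wins despite lower expected reward: that's energy
    # spent exploring to buy a more robust model of the environment
    ```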

    i’m saying that meta-awareness of the failure states in these specific system dynamics could do much more general and robust good for society than being socially pressured into climbing the socio-economic hierarchy as hard as possible.

    there’s a term for an imagined AI going rogue due to being overfit to a single goal. this is called a ‘paperclip maximizer.’ i compare the current socio-economic system to that failure. you know, ‘capitalism number go up!’

    i don’t think any studies i’ve seen disagree with that take, but if there’s a relevant expert who’s got a strong weighting i’m unaware of, i’m always open to updating my weights.

    as for learning yourself into some information bubble, or how someone can hold ridiculous beliefs without the need to question them: grand confidence despite low evidence usually comes from taking something you have low evidence about, assigning it high confidence anyway, and then giving it a high weighting. funny enough, friston's dysconnection hypothesis frames schizophrenia as a precision-weighting issue, but i don't think those are the kind i have, TY.
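
    to make that weighting failure concrete, here's a tiny sketch (my own toy, not friston's formalism verbatim): a gaussian belief update where the posterior is a precision-weighted average of prior and evidence. assign enough precision to a low-evidence prior and the same contrary evidence barely moves the belief.

    ```python
    def precision_weighted_update(prior_mean, prior_precision, obs, obs_precision):
        """bayesian update of a gaussian belief: the posterior mean averages
        prior and observation, weighted by their precisions
        (precision = 1/variance = how much confidence each side is assigned)."""
        post_precision = prior_precision + obs_precision
        post_mean = (prior_precision * prior_mean
                     + obs_precision * obs) / post_precision
        return post_mean, post_precision

    # healthy weighting: weak prior, decent evidence -> belief moves toward the data
    print(precision_weighted_update(prior_mean=0.0, prior_precision=1.0,
                                    obs=5.0, obs_precision=4.0))   # mean 4.0

    # bubble weighting: huge confidence assigned to a low-evidence belief
    # -> the same evidence barely moves it
    print(precision_weighted_update(prior_mean=0.0, prior_precision=100.0,
                                    obs=5.0, obs_precision=4.0))   # mean ~0.19
    ```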

    mahault has a phd under friston, and her epistemic papers are essential IMO.

    so there you have it, the larger environment of my thoughts, largely built around one of the most cited neuroscience experts of all time, and michael levin, who, as mentioned, is producing some of the coolest empirical results in modern biology.

    i tried, thank you if you got this far. if nothing else, please stay curious, but beware information silos that disable comms completely, or otherwise create barriers to properly comprehending the systems being represented. 'nothing about us without us' is important for a reason.

    otherwise, i wish i could compress these complex topics into fewer words, but words are a lossy compression format.


  • Love this comment. For anyone who knows anything about machine learning or brains: this resembles modal limitations in learning.

    A lot of our intelligence is shaped around our sensory experience, because we build tools for thinking via the tools we've already built, ever since baby motor-babbling to figure out how our limbs work. It's why Helen Keller had such trouble learning, but once she got an interface she could engage with for communication, things took off.

    We all use different tools, and some people don't see colour. That doesn't mean they're stupid when they describe a rainbow differently.

    It's also why LLMs struggle with visual/physical concepts when the logic requires information that doesn't translate well through text. Etc.

    Point being, on top of how shitty memorization is as the be-all end-all, learning and properly framing issues will have similar blindspots, like not recognizing the anvil cloud.

    This is also why people in informational bubbles can confirm their own model by 'learning' over other people's lived experiences.

    Like most issues, it doesn’t mean throwing the baby out with the bathwater, but epistemic humility is important, and it is important not to ignore the possibility of blindspots, even when confidence is high.

    Always in the context of the robustness of the framing around it, with the same rules applied at that level. That's why "nothing about us without us" is important.

    But also, we gotta stop people from giving high confidence to high-dissonance problems and socializing it into law. We should be past the "MMR causes autism" debate by now, but I'm hearing it from the head of health in the USA.


  • I could see why you'd say that. Stress creates environments of basic survival, which crowds out deeper cognition; more immediate survival is more salient.

    That being said, if you have access to the internet, you have access to countless free educational tools.

    Too much privilege brings sycophantic bubbles of delusion, like billionaires.

    Having all the time and money also lets you fund a whole think tank about how to ruin a country to fit your preferences. See the Heritage Foundation as a prime example.

    That being said, while it is harder for the poor, it's still essential to attempt that open mind and keep learning, so you don't get trapped by a socialized category error applied as fact.

    This is where we need predictive processing and the Bayesian brain to understand how beliefs are weighted and compared, and the failure states that can arise.

    Basically, poor weighting or poor communication between systems leads to over-affirmation of something that should have carried high uncertainty if measured from other directions.

    Instead of seeing high cognitive dissonance as a sign to assign low probability, it gets socialized into acceptance, to save the energy of worrying about or dealing with what, to that system, appears intractable.

    DKE is at least useful in framing how each expertise eco-niche is filled with complexity that doesn't transfer. This is why scientists stick to their expertise, where they have high dimensions of understanding and low dissonance to uphold.

    This can be over-prioritized until there's no dissonance left outside of microscopic niches that act more like data collection than science.

    Experts, however, can work together to find truths that diffuse dissonance generally, to continue building understanding.

    If only the peasants could socialize 'laziness' as a lack of meta-awareness of the greater dissonance-diffusing web of shared expert consensus, instead of laziness being the act of not feeding the socio-economic hierarchy machine, which is famous for maximizing paperclips and crushing orphans.

    Pretty sure I got beaten black and blue waiting for library access. Had to protest to keep a library open when I was in grade school.

    So, growth mindset isn't a privilege, but general access to affordances, pedigree, time, tools, social connections, etc. is, and lacking those adds extra hurdles for growth mindset in impoverished places.

    If there’s no internet access at all, then that’s just a disabled system.

    It's not static with people, and the issue with growth mindset would just be vulnerability to learning yourself into some information bubble that intentionally cuts off communication, so that you can only use that group as a resource for building your world model, bringing you to where the closed brains go just to save energy, and keeping you there forever.

    Groups that are cool with making confident, preference-fueled choices in high-dissonance spaces basically act like fertile soil for socializing strong cult beliefs and structures.

    They also use weird unconscious tools that keep them in the bubble. Listen to almost anyone who's escaped a cult for good elaboration there. Our brains will do a lot to keep us from becoming a social pariah in the environment we've grown into.


  • i think it's a framing issue, and AI development is catching a lot of flak for the general failures of our current socio-economic hierarchy. also, people have been shouting "super intelligence or bust" for decades now. i just keep watching it get better much more quickly than most people's estimates, and i understand the implications of that. i do appreciate discouraging idiot business people from shunting AI into everything that doesn't need it, because it's a buzzword, or because they can use it to exploit something. some likely just used it as an excuse to fire people, but again, that's not actually the AI's fault. that is this shitty system. i guess my issue is people keep framing this as "AI bad" instead of "corpos bad."

    if the loom had never been invented, we would still live in an oppressive society sliding towards fascism. people tend to miss the forest for the trees when looking at tech tools politically. also, people are blind to the environment, which is often more important than the thing itself. and the loom is still useful.

    compression and polysemy grow your dimensions of understanding in a high-dimensional environment that is itself changing shape, and comprehension grows with the erasure of your blindspots. collective intelligence (and how diversity helps cover more blindspots). predictive processing (and how we should embrace lack of confidence, but understand the strength of proper weighting for predictions, even when a single blindspot can shift the entire landscape, making no framework flawless or perfectly reliable). and understanding that everything we know is just the best map of the territory we've figured out so far. if you want to judge how subtle but in-our-face blindspots can be, look up how to test your literal blind spot: you just need 30 seconds and a paper with two small dots to see how blind we are to our blindspots. etc.

    more than fighting the new tools we can use, we need to claim them, and the rest of the world, away from those who ensure that all tools will only exist to exploit us.

    am i shouting to the void? wasting the breath of my digits? will humanity ever learn to stop acting like dumb angry monkeys?


  • let's make another article completely misrepresenting opinions, trajectories, and the general state of things, because we know it'll sell, and it'll get the ignorant fighting with those who actually have an idea of what's going on, because they saw in an article that AI was eating the pets.

    please seek media sources that actually seek to inform rather than provoke or instigate confusion or division through misrepresentation and disinformation.

    these days you can't even try to fix a category error introduced by the media without getting cussed out and blocked from aggregator sites, because you 'support the evil thing' that the article said was evil and that everyone in the group hates, without even an attempt to understand the context, or what part of the thing is even being discussed.

    also, can we talk more about breaking up the big companies so they don’t have a hold on the technology, rather than getting mad at everyone who interacts with modern technology?

    legit, it's as bad as fighting rightwing misinformation about migrant workers and trans people.

    just make people mad, and teach them that communication is a waste of energy.

    we need to learn how to tell who is informing rather than obfuscating, through a track record of accuracy and consensus with other experts from diverse perspectives, not by building tribes around who agrees with us. and don't blame experts for not also learning how to apply a novel and virtually impossible level of compression when explaining their complex expertise, when you don't even want to learn a word or concept. it's like being asked to describe how cameras work, and then getting called an idiot because some analogy used can be imagined in a less useful context that doesn't map 1:1 onto the complex subject being summarized.

    outside of that, find better sources of information. fuck this communication disabling ragebait.

    cause now just having a history of rebuking this garbage gets you dismissed, because a history of interacting with the topic on this platform is a good enough vibe check to just not attempt understanding and interaction.

    TLDR: the quality of the articles and conversation on this subject is so generally ill-informed that it hurts, and it's obviously trying to craft environments of angry engagement rather than to inform.

    also i wonder if anyone will actually engage with this topic rather than get angry, cuss me out, and not hear a single thing being communicated.


  • Or maybe the solution is dissolving the socio-economic class hierarchy, which can only exist as an epistemic paperclip maximizer, rather than also kneecapping useful technology.

    I feel much of the critique and repulsion comes from people without much knowledge of art/art history or of AI, nor of the problems and history of socio-economic policies.

    Monkeys just want to be angry and throw poop at the things they don’t understand. No conversation, no nuance, and no understanding of how such behaviours roll out the red carpet for continued ‘elite’ abuses that shape our every aspect of life.

    The revulsion is justified, but misdirected. Stop blaming technology for the problems of the system, and start going after the system that is the problem.



  • It's the "you stole my style" artists-attacking-artists thing all over again. And "digital art isn't real art"/"cameras are evil"/"CGI isn't real art" all over again, with a more organic and intelligent medium.

    The issue is the same as it has always been. Anything and everything is funneled to the rich, and the poor blame the poor who use technology, because anthropocentric bias makes them easier to vilify than the assholes building our cage around us.

    The Apple "ecosystem" has done much more damage than AI artists, but people can't seem to comprehend how. Also, Disney and the corpos broke copyright so that it's just a way for the rich to own words and names and concepts, so that the poor can't use them to get ahead.

    All art is a remix. Disney only became successful using other artists' hard work in the Commons. Now the Commons is a century further out of grasp, so only the rich can own the artists and hoard the growth of art.

    Also which artists actually have the time and money to litigate? I guess copyright does help some nepo artists.

    Nepotism is the main way to earn the right to invest in becoming an artist without fatiguing toward total life collapse.

    But let’s keep yelling at the technology for being evil.


  • That argument was to be had with Apple twenty years ago as they built their walled garden, which intentionally frustrates people into going all-in on Apple. Still can't get anyone to care about dark patterns/deceptive design, or Disney attacking the creative Commons it parasitically grew out of. AI isn't and has never been the real issue. It just absorbs all the hate the corpos should be getting as they use it, along with every other tool at their disposal, to slowly fuck us into subservience. Honestly, AI is teaching us the importance of diverse perspectives in intelligent systems, and the dangers of overfitting, which exist in our own brains and social/economic systems.

    Same issue, different social ecosystem being hoarded by the wealthy.



  • I see intelligence as filling areas of concept space within an eco-niche in a way that proves functional for actions within that space. I think we are discovering more and more that "nature" has little commitment, and is just optimizing preparedness for expected levels of entropy within the functional eco-niche.

    Most people haven't even started paying attention to distributed systems building shared enactive models, but those are already capable of things that should be considered groundbreaking given the time and finances behind their development.

    That being said, localized narrow generative models are just building large individual models of predictive processing that don't, by default, actively update their information.

    People who attack AI for just being prediction machines really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors.

    But no, corpos are using it, so computer bad, human good, even though the main issue here is the humans who have unlimited power and are encouraged into bad actions by flawed social posturing systems and the conflation of wealth with competency.


  • While I agree about the conflict of interest, I would largely say the same thing despite no such conflict of interest. However I see intelligence as a modular and many dimensional concept. If it scales as anticipated, it will still need to be organized into different forms of informational or computational flow for anything resembling an actively intelligent system.

    On that note, the recent developments with active inference, like RxInfer, are astonishing given the current level of attention being paid. Seeing how LLMs are being treated, I'm almost glad it's not being absorbed into the hype-and-hate cycle.


  • Perhaps instead we could just restructure our epistemically confabulated reality in a way that doesn't inevitably lead to unnecessary conflict due to diverging models that haven't grown the priors necessary to peacefully allow comprehension and the ability to exist simultaneously.

    breath

    We are finally coming to comprehend how our brains work, and how intelligent systems generally work at any scale, in any ecosystem. Subconsciously enacted social systems included.

    We’re seeing developments that make me extremely optimistic, even if everything else is currently on fire. We just need a few more years without self focused turds blowing up the world.


  • The main issue though is the economic system, not the technology.

    My hope is that it shakes things up fast enough that they can’t boil the frog, and something actually changes.

    Having capable AI is a more blatantly valid excuse to demand a change in economic balance and redistribution. The only alternative would be to destroy all technology and return to monkey. I'd rather we just fix the system, so that technological advancements don't seem negative merely because the wealthy have hoarded all the gains of every new technology for the past handful of decades.

    Such power is discreetly weaponized through propaganda, influencing, and economic reorganizing to ensure the equilibrium holds until the world is burned to ash, in sacrifice to the lifestyle of the confidently selfish.

    I mean, we could have just rejected the loom. I don't think we'd actually be better off, but I believe some of the technological gain should have been less hoardable by the existing elite. Almost like they used wealth to prevent any gains from slipping away to the poor. Fixing the issue before it got this bad was the proper answer. Now people don't even want to consider that option, or they say it's too difficult, so we should just destroy the loom.

    There is a markov blanket around the perpetuating lifestyle of modern aristocrats, obviously capable of surviving every perturbation; every gain our society has made has made that reality more true, entirely due to the direction in which new power is distributed. People are afraid of AI turning into a paperclip maximizer, but that's already what happened to our abstracted social reality. Maxima being maximized and minima being minimized in the complex, chaotic system of billions of people leads to an inevitable accumulation of power and wealth wherever it has already been gathered. Unless we can dissolve the political and social barriers maintaining this trend, we will be stuck with our suffering regardless of whether we develop new technology.
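
    To make that dynamic concrete, here's a toy rich-get-richer (preferential attachment) simulation, my own sketch with made-up numbers rather than a claim about real economies: every unit of new wealth lands on an agent with probability proportional to what they already hold, starting from perfect equality.

    ```python
    import random

    random.seed(42)
    wealth = [1.0] * 100            # 100 agents, perfectly equal start

    # award each unit of new wealth with probability proportional
    # to wealth already held (preferential attachment / Polya urn)
    for _ in range(10_000):
        winner = random.choices(range(len(wealth)), weights=wealth)[0]
        wealth[winner] += 1.0

    wealth.sort(reverse=True)
    print(f"top 10% of agents hold {sum(wealth[:10]) / sum(wealth):.0%} of all wealth")
    # equal agents plus proportional gains alone produce heavy concentration;
    # no conspiracy required, just the compounding dynamic itself
    ```

    Real economies add shocks, taxes, and redistribution on top, but the baseline pull toward concentration is the point.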

    Although it doesn't really matter where you are or what system you're in right now. Odds are there's a set of rich assholes working as hard as possible to see that you're kept from any piece of the pie that would destabilize the status quo.

    I’m hoping AI is drastic enough that the actual problem isn’t ignored.



  • I conflate these things because they come from the same intentional source. I associate the copyright-chasing lawyers with the brands that own them; it's just a more generalized example.

    Also, an intern who can give you a song's lyrics was trained on that data. Any effectively advanced future system is largely the same, unless it is just accessing a database or index, like web searching.

    Copyright itself is already a terrible mess that largely serves brands who can afford lawyers to harass or contest infringements. This is especially apparent after companies like Disney have all but murdered the public domain as a concept. See the Mickey Mouse Protection Act, as well as other related legislation.

    This snowballs into an economy where the Disney company and similarly benefited brands can hold on to ancient copyrights, and use their standing value to own and control the development and markets of new intellectual properties.

    Now, a neural net trained on copyrighted material can reference that memory at least as accurately as an intern pulling from memory, unless it is accessing a database to pull the information. To me, suing on that basis ultimately follows logic that would dictate we have copyrighted material removed from our own stochastic memory, since it treats high-dimensional informational storage as a form of copyright infringement whenever anyone instigates the effort to draw on that information.

    Ultimately, I believe our current system of copyright is entirely incompatible with future technologies, and could lead to some scary arguments and actions from the overbearing oligarchy. To argue in favour of these actions is to argue never to let artificial intelligence learn as humans do. Given our need for this technology to survive the near future as a species, or at least to minimize excessive human suffering, I think the ultimate cost of pandering to these companies may be indescribably horrid.


  • Music publishers, sue-happy in the face of any new technological development? You don't say.

    If an intern gives you some song lyrics on demand, do they sue the parents?

    Do we develop all future A.I. technology only when it can completely eschew copyrighted material from its comprehension?

    "I am sorry, I’m not allowed to refer to the brand name you are brandishing. Please buy our brand allowance package #35 for any action or communication regarding this brand content. "

    I dream of a future when we think of the benefit of humanity over the maintenance of our owners’ authoritarian control.


  • Might have to edit this after I’ve actually slept.

    Human emotion and human-style intelligence do not exhaust the entire realm of emotion and intelligence. I define intelligence and sentience on different scales. I consider intelligence the extent of capable utility and function, and emotion just a different set of utilities and functions within a larger intelligent system. Human-style intelligence requires human-style emotion. I consider GPT an intelligence, a calculator an intelligence, and a stomach an intelligence. I believe intelligence can be preconscious or unconscious: a part of consciousness independent from a functional system complex enough for emergent qualia and sentience. Emotions are one part in this system, exclusive to adaptation within the historic human evolutionary environment. I think you might be underestimating the alien nature of abstract intelligences.

    I'm not sure why you are so confident in this statement. You still haven't given any actual reason for this belief. You are presenting it as consensus, so there should be a very clear reason why no successful, considerably intelligent function exists without human-style emotion.

    You have also not defined your interpretation of what intelligence is, you’ve only denied that any function untied to human emotion could be an intelligent system.

    If we had a system that could flawlessly complete François Chollet's Abstraction and Reasoning Corpus, would you suggest it is connected to specifically human emotional traits due to its success? Or is that still not intelligence if it lacks emotion?

    You said neural function is not intelligence. But you would also exclude non-neural informational systems such as collective cooperating cell systems?

    Are you suggesting the real-time ability to preserve contextual information is tied to emotion? Sense interpretation? Spatial mapping with attention? You have me at a loss.

    Even though your stomach cells interacting is an advanced function, it's completely devoid of any intelligent behaviour? Then shouldn't the cells fail to cooperate and dissolve into a non-functioning system? Again, are we only including higher introspective cognitive function? Although you can have emotionally reactive systems without that. At what evolutionary stage do you switch from an environmental reaction to an intelligent system? The moment you start calling it emotion? Qualia?

    I'm lacking the entire basis of your conviction. You still have not made any reference to any aspect of neuroscience, psychology, or even philosophy that explains your reasoning. I've seen the opinion out there, but not in strict form, nor as the consensus you seem to suggest.

    You still have not shown why any functional system capable of addressing complex tasks is distinct from intelligence without human-style emotion. Do you not believe in swarm intelligence? Or, again, do you define intelligence by fully conscious, sentient, and emotional experience? At that point you're just defining intelligence as emotional experience, completely independent from the ability to solve complex problems, complete tasks, or make decisions whose outcomes reduce prediction error. At which point we could have completely unintelligent robots capable of doing science and completing complex tasks beyond human capability.

    At which point, I see no use in your interpretation of intelligence.


  • What aspect of intelligence? The calculative intelligence in a calculator? The basic environmental response we see in amoeba? Are you saying that every single piece of evidence shows a causal relationship between every neuronal function and our exact human emotional experience? Are you suggesting gpt has emotions because it is capable of certain intelligent tasks? Are you specifically tying emotion to abstraction and reasoning beyond gpt?

    I’ve not seen any evidence suggesting what you are suggesting, and I do not understand what you are referencing or how you are defining the causal relationship between intelligence and emotion.

    I also did not say that the system will have nothing resembling the abstract notion of emotion; I'm just noting the specific reasons human emotions developed as they have, and I would consider individual emotions unique forms of intelligence, each serving its own function.

    There is no reason to assume the anthropomorphic emotional inclinations that you are assuming. I also do not agree with your assertion of consensus that all intelligent function is tied specifically to the human emotional experience.

    TLDR: what?