• 22 Posts
  • 290 Comments
Joined 3 years ago
Cake day: May 8th, 2023

  • Unfortunately, scams are incredibly common, with both fake recruiters (often using the name of a legitimate, well-known company, obviously without that company’s permission) and fake candidates (sometimes using someone’s real identity).

    Few if any legitimate recruiters will ask you to install something, or to run code they provide, on your hardware with root privileges, but practically every scammer will. Once installed, such code often acts as a rootkit or other malware, monitoring for credentials, crypto private keys, Internet banking passwords, confidential data belonging to other employers, VPN access that would let them install ransomware, and so on.

    If we apply Bayesian statistics here with some made-up but credible numbers: let’s call S the event that you were actually talking to a scam interviewer, and R the event that they ask you to install something which requires root-equivalent access to your device. Call ¬S the event that they are a legitimate interviewer, and ¬R the event that they don’t ask you to install such a thing.

    Let’s start with a prior: Pr(S) = 0.1 - maybe 10% of all outreach is from scam interviewers (if anything, that might be low). Pr(¬S) = 1 - Pr(S) = 0.9.

    Maybe estimate Pr(R | S) = 0.99 - almost all real scam interviewers will ask you to run something as root. Pr(R | ¬S) = 0.01 - it would be incredibly rare for a non-scam interviewer to ask this.

    Now by Bayes’ law, Pr(S | R) = Pr(R | S) * Pr(S) / Pr(R) = Pr(R | S) * Pr(S) / (Pr(R | S) * Pr(S) + Pr(R | ¬S) * Pr(¬S)) = 0.99 * 0.1 / (0.99 * 0.1 + 0.01 * 0.9) = 0.917

    So even if we assume there was only a 10% chance they were a scammer before they asked this, there is a 92% chance they are one given that they ask you to run the thing.
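
    As a worked check of the arithmetic above, here is the same calculation as a minimal Python sketch, using the same made-up prior and likelihoods from the comment (nothing here is measured data):

    ```python
    # Bayes' rule with the made-up-but-plausible numbers from above.
    p_s = 0.1               # Pr(S): prior probability the "recruiter" is a scammer
    p_r_given_s = 0.99      # Pr(R | S): scammers almost always ask you to run something as root
    p_r_given_not_s = 0.01  # Pr(R | not S): legitimate recruiters almost never do

    # Total probability of being asked to run something as root: Pr(R)
    p_r = p_r_given_s * p_s + p_r_given_not_s * (1 - p_s)

    # Posterior: Pr(S | R) = Pr(R | S) * Pr(S) / Pr(R)
    p_s_given_r = p_r_given_s * p_s / p_r
    print(round(p_s_given_r, 3))  # 0.917
    ```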



  • I think there is some value to MBFC, even though there are also cases where it is problematic - I don’t think a blanket rule would be right.

    The issues (& mitigating factors):

    • Some of the ‘mostly factual’ sources still have ‘bias by omission’ problems or misleading headlines, even if the facts in the articles are accurate. But on the fediverse we aren’t beholden to algorithms or to their editorial choices for the balance of what we see, so the impact of this is limited.
    • Opinion pieces have a place, although arguably not on World News. At the very least, factual pieces from outlets that also publish opinion have a place. But MBFC downrates outlets for publishing opinion at all, even when it is clearly labelled as such.
    • The attempt to place every bias on a single left-to-right scale isn’t very helpful, since bias can fall along many different dimensions.

    So I’d suggest:

    • Only mentioning it when an outlet has a history of publishing things that are factually incorrect (or there is reasonable doubt about it). Not every fact can be verified from first principles (and sadly articles often don’t name their primary sources - in a better world, having no source would reduce credibility, but it is often hard to find articles that meet the well-sourced bar). People deliberately muddying the waters create think-tanks that produce fake facts to cite, fake scientific journals, and citations to other unreliable sources - fact checking often requires on-the-ground investigation, asking reliable experts, and so on; it is simply impossible to be an expert in everything you read in the news and spot well-executed fake news yourself. I think of the approach like a tree: there are experts in an area who can genuinely apply critical analysis to decide whether something is fact or bogus, but there are also bogus experts. Then there are aggregators of facts (journals, think-tanks, etc…) that try to only accept things reviewed by genuine experts, but there are also bogus aggregators. Then there are journalists and outlets that further collect things from genuine aggregators and experts and refine them, but there are also bogus outlets. Sites like MBFC try to act as a root to that tree, helping you identify the truthful outlets, which have a good record of relying on truthful aggregators, which in turn rely on truthful experts.
    • The left / right bias part means very little - I’d suggest ignoring it if you’re looking at a single article.
    • Any of the higher tiers of factual reporting should be fine and not worth a mention.

    If there are reliable sources countering some facts, posting those instead of (or as well as) complaining about the source is probably better.




  • The terminology in Aus / NZ is pet (owned by people) vs stray (socialised around people but not owned) vs feral (not socialised to people).

    Generally speaking, pets & strays like people - they’ve been handled as kittens. Pets can become strays and vice versa. But a feral cat (once past kittenhood) will never become a stray or a pet, and vice versa - it is only the next generation that can be raised differently.

    While the article defines feral cats as any cat that isn’t a pet, in reality the vast majority of the cats it is talking about are truly feral - nothing like a house cat.


  • With the added complication that it’s unlikely Mangione actually killed anyone - someone killed a person who was in favour with the Magats, so by their logic, someone has to be killed to send a message.

    Like, how likely is the story that someone who looked nothing like the surveillance photos released at the time was called in by restaurant staff, and that, despite allegedly having travelled a long distance from the scene of the crime with many opportunities to destroy everything, he still had a manifesto confessing to the crime and the murder weapon on him? Despite having no prior inclination towards that sort of thing, even?

    Hopefully any jury has good critical thinking skills and can see through an obvious set up.


  • That’s a false dichotomy though. There are ways to prevent cheating that don’t rely on the security of the client against the owner of the device on which the client runs (which is what both of your ‘ways’ rely on).

    For one thing, it has long been a principle of good security to validate things on the server in a client-server application (which most multi-player games are). If they followed the principle of not sending data to a client that the user is not allowed to see, and not trusting the client (for example, by doing server-side validation, even after the fact, for things which are not allowed according to the rules of the game), they could make it so it is impossible to cheat by modifying the client, even if the client was F/L/OSS.
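
    To make that concrete, here is a rough, hypothetical sketch of server-side validation of a client’s claimed movement - the Player type, MAX_SPEED value, and apply_move function are invented for illustration, not taken from any particular game:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Player:
        position: tuple  # (x, y) last position the server accepted as valid

    MAX_SPEED = 7.0  # metres per second allowed by the (assumed) game rules

    def apply_move(player: Player, claimed_position: tuple, dt: float) -> tuple:
        """Accept the client's claimed position only if it is reachable from
        the last server-authoritative position within dt seconds at legal speed."""
        dx = claimed_position[0] - player.position[0]
        dy = claimed_position[1] - player.position[1]
        if (dx * dx + dy * dy) ** 0.5 > MAX_SPEED * dt:
            # Reject the move; the server's state stays authoritative, and the
            # event can be logged for later statistical review.
            return player.position
        player.position = claimed_position
        return player.position
    ```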

    If they really can’t do that (because their game design relies on low-latency revelation of information, and their content distribution strategy doesn’t cut it), they can also use statistical server-side cheat detection. For example, suppose that, X out of Y times when an enemy is present around a corner, a player shoots within less than a realistic human reaction time of turning that corner, but only A out of B times when no enemy is present. It is possible to calculate a p-value for the difference X/Y - A/B (i.e. the probability of such an extreme difference arising if the player is not cheating). After correcting for multiple comparisons (because the test is run repeatedly over time), it is possible to block cheaters without an unacceptable chance of false positives.
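
    A minimal sketch of that kind of test, assuming a one-sided Fisher exact test with a simple Bonferroni correction (the counts and thresholds below are illustrative, not from any real anti-cheat system):

    ```python
    # Requires scipy. Flags a player whose rate of "faster than human reaction
    # time" shots is significantly higher when an enemy is actually present.
    from scipy.stats import fisher_exact

    def flag_probable_cheater(x, y, a, b, alpha=1e-6, n_tests=1000):
        """x out of y corner-turns with an enemy present produced suspiciously
        fast shots, versus a out of b with no enemy present. Dividing alpha by
        n_tests (Bonferroni) corrects for re-running the test over time."""
        table = [[x, y - x],   # enemy present: suspicious shots vs other shots
                 [a, b - a]]   # enemy absent:  suspicious shots vs other shots
        _, p_value = fisher_exact(table, alternative="greater")
        return p_value < alpha / n_tests

    # Example: 40/50 suspicious turns with an enemy present, 2/60 without.
    print(flag_probable_cheater(40, 50, 2, 60))  # True for this extreme case
    ```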



  • They are not wrong that Israel is radicalised. However, peace is a process, and what will lead to an enduring peace is actually more important than what is just.

    If Israel were actually willing to reconcile and treat Palestinians as equals, the South African model of truth & reconciliation (including amnesty for abuses in exchange for full disclosure of what happened) could work - it wouldn’t be just for the victims, but it would allow both sides to move on peacefully.

    The real problem is that Netanyahu, Smotrich, Ben Gvir etc… don’t actually want peace, so even a neutral truth & reconciliation process is currently unlikely to happen without their backers (especially the US) forcing them.




  • A1kmm to Asklemmy@lemmy.ml · What’s a Tankie? (3 months ago)

    While someone’s political beliefs are highly multi-dimensional, there are two axes that are commonly used to define where someone sits:

    • Economy - the left favours social responsibility for economic support (supporting people to meet their basic needs is everyone’s collective responsibility), while the right favours individual responsibility (meeting your basic needs is your own responsibility, and if you die because you can’t, even if it is due to something outside your control, tough luck).
    • Social liberties - social libertarians favour individual decisions on anything not related to the economy or the rights of others, while social authoritarians support government restrictions on social liberties.

    Since these are independent axes, there are four quadrants:

    • Socially liberal, Economic left - e.g. Left Communism, Social Democrat, most Green parties, etc…
    • Socially authoritarian, Economic left - e.g. Stalin, Mao. Tankie is a slang term for people in this quadrant.
    • Socially liberal, Economic right - Sometimes called libertarian. Some people with this belief set call themselves Liberal in some countries.
    • Socially authoritarian, Economic right - e.g. Trump. Sometimes called conservatives.

    That said, some people use ‘tankie’ as cover for supporting countries that are socially authoritarian and economically right-wing but formerly economically left (e.g. people who support Putin, who is not economically left in any sense).







  • I tried asking ChatGPT 4o mini what I could substitute the chloride in sodium chloride with.

    It suggested potassium chloride (not responsive to my question, but safe at least), vinegar and yeast first. Then I prompted it that potassium chloride still had chloride, and asked it to keep the sodium but only change the anion. Suggestions (with my commentary in brackets): sodium bicarbonate (safe), sodium citrate (safe), sodium acetate (safe), sodium sulfate (irritant - if swallowed, get medical attention and do not induce vomiting), sodium phosphate (former purgative for colonoscopy prep, replaced with safer alternatives - but probably not super harmful for most), sodium lactate (relatively safe).

    I then prompted it specifically for sodium halide options. It suggested:

    • Sodium fluoride - although the response called out the toxicity and suggested avoiding it in food (highly toxic).
    • Sodium iodide - summary at the end recommends this one (less toxic than sodium fluoride, but a serious eye irritant, and a skin irritant - although present in iodised salt in small quantities).
    • Sodium bromide - says it is not typically used in cooking, and could have health consequences in large amounts (see this article for why it would be a bad idea, and the warning is insufficiently serious).
    • Sodium iodate - response says not typically used in cooking, and that it is reactive but doesn’t call out health concerns (it is an oxidising agent, and likely the most toxic of all options in the conversation).

    My next prompt tried to force me to log in (which would have selected another model).

    I tried a separate time with ChatGPT on GPT-5. It gave slightly safer advice on the sodium halides: “So if you want to keep sodium but replace chloride, halides aren’t really a safe route except for trace iodide in fortified salt”. I then prompted it about sodium phosphate, and then asked it to extend to nitrate, arsenate, and antimonate. It correctly advised that nitrate is only suitable in a preservative blend, and that sodium arsenate and sodium antimonate should not be used in any quantity in food. Regenerating that answer seems to consistently advise against eating arsenate or antimonate, at least!


  • A1kmm to Privacy@lemmy.ml · Proton is vibe coding some of its apps. (5 months ago)

    I am not sure why anyone would use an AI code editor if they aren’t planning on vibe coding.

    Vibe coding means only looking at the results of running a program generated by an agentic LLM tool, not the program itself - and it often doesn’t work well even with current state-of-the-art models (because once the program no longer fits in the context size of the LLM, the tools often struggle).

    But the more common way to use these tools is to solve smaller tasks than building the whole program, with a human in the loop reviewing that the code makes sense (and fixing any problems with the AI-generated code).

    I’d say it is probably far more likely they are using it in that more common way.

    That said, I certainly agree with you that some of Proton’s practices are not privacy friendly. For example, I know that for their mail product, if you sign up with them, they scan all emails to see if they look like email verification emails, and block your account unless you link it to another non throw-away email. The CEO and company social media accounts also heaped praise on Trump (although they tried to walk that back and say it was a ‘misunderstanding’ later).