-
@ a39d19ec:3d88f61e
2025-04-22 12:44:42
The debate over migration, border security, and deportations in Germany is usually conducted emotionally. Anyone who demands that illegal immigrants be deported frequently faces the accusation of racism. Yet this accusation is not only factually unfounded; it turns reality on its head. In truth, it is precisely those who suspect a racist motive behind every call for legal certainty who judge first and foremost by skin color, origin, or nationality.

The law stands above emotions

Germany is a constitutional state. This means that rules cannot be interpreted according to gut feeling or the political mood of the day; they must rest on clear legal foundations. One of these principles is anchored in Article 16a of the Basic Law, which states:

"Paragraph 1 [the right to asylum] may not be invoked by anyone who enters from a member state of the European Communities or from another third state in which the application of the Convention Relating to the Status of Refugees and the European Convention on Human Rights is assured."

This means that anyone who enters Germany via safe third countries has no claim to asylum. Whoever stays nonetheless resides in the country illegally and is subject to the applicable rules on repatriation. The demand for deportations is therefore nothing other than a demand that law and order be upheld.

The inversion of the concept of racism

Someone who argues that German asylum and residence law should be strictly enforced, and who does not distinguish by origin or skin color, is acting in a value-neutral way. Those, however, who see a racist undertone in such a demand for the rule of law are projecting their own thought patterns onto others: they assume that the debate is conducted solely along ethnic, racial, or national lines, and that assumption is itself a racist way of thinking.

Someone who criticizes illegal immigration does so not because the origin of the people concerned interests him, but because he respects the rule of law. By contrast, someone who smells racism behind this criticism evidently perceives first and foremost the "race" or origin of the people in question and reduces them to it.

A financial burden, not an ideological debate

Alongside the legal dimension there is an economic one. The German welfare state rests on a principle of solidarity: citizens pay into the system in order to support one another in hard times. This prosperity was built up over generations by those who have long lived here. The priority is therefore to distribute the available funds first among those who contribute to sustaining the system through taxes, social contributions, and work, not among those who enter the system through illegal entry and without economic contribution of their own.

This is not an ideological question but a purely economic calculation. A social system can only function sustainably if it is not burdened without limit. If Germany had no clear rules on immigration and deportation, the welfare state would inevitably be overwhelmed, with negative consequences for everyone.

Social patriotism

Another important aspect is protecting the labor of the generations who laboriously rebuilt Germany after the Second World War. While it is often stressed that Germans may claim no moral inheritance from the period before 1945, apart from responsibility for the Holocaust, it is all the more important to respect the new inheritance built after 1945, one founded on diligence, discipline, and hard work. Reconstruction was a collective achievement of German people, and its fruits must not be distributed carelessly; they should benefit first those who helped create this foundation or have carried it across generations.

The rule of law is not negotiable

Anyone who advocates a consistent deportation policy does so not out of racist motives but out of respect for the rule of law and the economic foundations of the country. The accusation of racism in this context is therefore not only false; it exposes a selective perception along racial lines in those who level it.
-
@ 418a17eb:b64b2b3a
2025-04-26 21:45:33
In today’s world, many people chase after money. We often think that wealth equals success and happiness. But if we look closer, we see that money is just a tool. The real goal is freedom.
Money helps us access resources and experiences. It can open doors. But the constant pursuit of wealth can trap us. We may find ourselves stressed, competing with others, and feeling unfulfilled. The more we chase money, the more we might lose sight of what truly matters.
Freedom, on the other hand, is about choice. It’s the ability to live life on our own terms. When we prioritize freedom, we can follow our passions and build meaningful relationships. We can spend our time on what we love, rather than being tied down by financial worries.
True fulfillment comes from this freedom. It allows us to define success for ourselves. When we embrace freedom, we become more resilient and creative. We connect more deeply with ourselves and others. This sense of purpose often brings more happiness than money ever could.
In the end, money isn’t the ultimate goal. It’s freedom that truly matters. By focusing on living authentically and making choices that resonate with us, we can create a life filled with meaning and joy.
-
@ a39d19ec:3d88f61e
2025-03-18 17:16:50
Now that the German federal regime has decided on Germany's ruin, which will very likely be "financed" with the tool of money printing, so many thoughts on the expansion of the money supply came to me that for once I have written them down.

From a classical economic perspective, expanding the money supply always leads to price increases, because more money in circulation meets a limited quantity of goods. This can be analyzed in several steps:

1. The quantity theory of money

The classical equation of the quantity theory of money is:

M · V = P · Y

where:
- M is the money supply,
- V is the velocity of money,
- P is the price level,
- Y is real economic output (GDP).

If M rises while V and Y remain constant, P must rise; in other words, inflation results.
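A minimal sketch of this relationship in Python, with purely illustrative numbers: holding V and Y fixed, a 10% expansion of M translates directly into a 10% rise in P.

```python
# Quantity theory of money, M * V = P * Y, solved for the price level P.
# All figures are illustrative assumptions, not empirical data.

def price_level(money_supply: float, velocity: float, real_output: float) -> float:
    """Solve M * V = P * Y for P."""
    return money_supply * velocity / real_output

M, V, Y = 1000.0, 2.0, 500.0          # baseline economy
P0 = price_level(M, V, Y)             # 4.0
P1 = price_level(M * 1.10, V, Y)      # money supply expanded by 10%

print(f"P0 = {P0}, P1 = {P1}, inflation = {(P1 - P0) / P0:.0%}")  # inflation = 10%
```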
2. The supply of goods remains limited

The quantity of goods and services actually produced usually grows only slowly compared to the expansion of the money supply. If the money supply grows faster than the output of goods, more money chases the same quantity of wares, and prices rise.

3. Expectation effects and speculation

If companies and households expect more money to be in circulation because central planners willed it so, they can anticipate rising prices. Companies raise their prices in advance, and employees demand higher wages. This can set off a self-reinforcing spiral.

4. The international perspective

An expanded money supply can devalue the currency if other countries keep their monetary policy stable. A weaker currency makes imports more expensive, which in turn drives prices up.

5. Criticism of the pure money-supply theory

For completeness, it must be mentioned that most modern economists, arguing on behalf of the state, hold that inflation depends not only on the money supply but also on the demand for money (for example, in an economic crisis). Nevertheless, historical experience shows that uncontrolled expansion of the money supply always leads to price increases in the long run, as in the hyperinflation of the Weimar Republic or in Zimbabwe.
-
@ 0d97beae:c5274a14
2025-01-11 16:52:08
This article hopes to complement the article by Lyn Alden on YouTube: https://www.youtube.com/watch?v=jk_HWmmwiAs
The reason why we have broken money
Before the invention of key technologies such as the printing press and electronic communications, even those as early as Morse code transmitters, gold had won the competition for best medium of money around the world.

In fact, it was not just gold by itself that became money: rulers and world leaders developed coins in order to help the economy grow. Gold nuggets were not as easy to transact with as coins with specific imprints and denominated sizes.
However, these modern technologies created massive efficiencies that allowed us to communicate and perform services more efficiently and much faster, yet the medium of money could not benefit from these advancements. Gold was heavy, slow and expensive to move globally, even though requesting and performing services globally did not have this limitation anymore.
Banks took the initiative and created derivatives of gold: paper and electronic money. These new currencies allowed the economy to continue to grow and evolve, but not without a dark side. Today, no currency is denominated in gold at all; money is backed by nothing, and its inherent value, the paper it is printed on, is worthless too.
Banks and governments eventually transitioned from a money derivative to a system of debt that could be co-opted and controlled for political and personal reasons. Our money today is broken and is the cause of more expensive, poorer quality goods in the economy, a larger and ever growing wealth gap, and many of the follow-on problems that have come with it.
Bitcoin overcomes the "transfer of hard money" problem
Just like gold coins were created by man, Bitcoin too is a technology created by man. Bitcoin, however, is a much more profound invention, possibly more of a discovery than an invention in fact. Bitcoin has proven to be unbreakable and incorruptible, and it has upheld its ability to keep its units scarce, inalienable, and counterfeit-proof through the nature of its own design.
Since Bitcoin is a digital technology, it can be transferred across international borders almost as quickly as information itself. It therefore severely reduces the need for a derivative to be used to represent money to facilitate digital trade. This means that as the currency we use today continues to fare poorly for many people, bitcoin will continue to stand out as hard money, that just so happens to work as well, functionally, along side it.
Bitcoin will also always be available to anyone who wishes to earn it directly; even China is unable to restrict its citizens from accessing it. The dollar has traditionally become the currency for people who discover that their local currency is unsustainable. Even when the dollar has become illegal to use, it is simply used privately and unofficially. However, because bitcoin does not require you to trade it at a bank in order to use it across borders and across the web, Bitcoin will continue to be a viable escape hatch until we one day hit some critical mass where the world has simply adopted Bitcoin globally and everyone else must adopt it to survive.
Bitcoin has not yet proven that it can support the world at scale. However it can only be tested through real adoption, and just as gold coins were developed to help gold scale, tools will be developed to help overcome problems as they arise; ideally without the need for another derivative, but if necessary, hopefully with one that is more neutral and less corruptible than the derivatives used to represent gold.
Bitcoin blurs the line between commodity and technology
Bitcoin is a technology, it is a tool that requires human involvement to function, however it surprisingly does not allow for any concentration of power. Anyone can help to facilitate Bitcoin's operations, but no one can take control of its behaviour, its reach, or its prioritisation, as it operates autonomously based on a pre-determined, neutral set of rules.
At the same time, its built-in incentive mechanism ensures that people do not have to operate Bitcoin out of the goodness of their hearts. Even though the system cannot be co-opted as a whole, it will not stop operating while there are people motivated to trade their time and resources to keep it running and to earn from others' transaction fees. Although it requires humans to operate it, it remains both neutral and sustainable.
Never before have we developed or discovered a technology that could not be co-opted and used by one person or faction against another. Due to this nature, Bitcoin's units are often described as a commodity; they cannot be usurped or virtually cloned, and they cannot be affected by political biases.
The dangers of derivatives
A derivative is something created, designed or developed to represent another thing in order to solve a particular complication or problem. For example, paper and electronic money was once a derivative of gold.
In the case of Bitcoin, if you cannot link your units of bitcoin to an "address" that you personally hold a cryptographically secure key to, then you very likely have a derivative of bitcoin, not bitcoin itself. If you buy bitcoin on an online exchange and do not withdraw the bitcoin to a wallet that you control, then you legally own an electronic derivative of bitcoin.
Bitcoin is a new technology. It has a learning curve, and it will take time for humanity to learn how to comprehend, authenticate, and take control of bitcoin collectively. That said, many people all over the world are already using and relying on Bitcoin natively. Many will first need to feel the need or desire for a neutral money like bitcoin, and perhaps to have been burned by derivatives of it, before they start to understand the difference between the two. Eventually, it will become an essential part of what we regard as common sense.
Learn for yourself
If you wish to learn more about how to handle bitcoin and avoid derivatives, you can start by searching online for tutorials about "Bitcoin self custody".
There are many options available, some more practical for you, and some more practical for others. Don't spend too much time trying to find the perfect solution; practice and learn. You may make mistakes, so be careful not to experiment with large amounts of your bitcoin as you explore new ideas and technologies. This is similar to learning anything else, like riding a bicycle: you are sure to fall a few times and scuff the frame, so don't buy a high-performance racing bike while you're still learning to balance.
-
@ 37fe9853:bcd1b039
2025-01-11 15:04:40
yoyoaa
-
@ 62033ff8:e4471203
2025-01-11 15:00:24
Frankly, the quality of the kind=1 notes in the index is not high. So I added kind=30023 long-form articles, but they are updated too rarely, and even across multiple relays there are not many long-form posts.

If searching Nostr is to produce any value, it needs high-quality articles and news. Right now there are also many bot-generated posts that do nothing but waste space; they serve no other purpose.

https://www.duozhutuan.com currently serves as raw material for search engines to index. There is no UI for human browsing, which is why it looks rough. I have no plans to build yet another web client for posting microblogs; there are already too many clients of that kind.

I think what the Nostr community still needs to solve is applications. If it is only microblogging, the outlook feels bleak.

Fortunately, there are site builders like npub.pro, which I find interesting.

The yakihonne smart widgets are also interesting.

I built TaskQ5 and use it myself: a distributed task system, and it works well.
-
@ 23b0e2f8:d8af76fc
2025-01-08 18:17:52
What you'll need
- An Android phone you no longer use (its camera must still work).
- A microSD card (optional, used only once).
- A device for watching your funds (you probably already have one).

A few things you should know
- The phone will serve as a signer. No transaction is carried out until it has been signed on this device.
- The microSD card is used to transfer the Electrum APK and to guarantee that the device has no contact with external data sources after being wiped. A USB cable can, however, serve the same purpose.
- The idea is to keep your private key on an offline device that stays powered off 99% of the time. You can watch your funds on another, internet-connected device, such as your phone or personal computer.

The tutorial is divided into two modules:
- Module 1 - Creating a cold wallet/signer.
- Module 2 - Setting up a device to watch your funds and signing transactions with the signer.

At the end, we will have:
- A cold wallet that also serves as a signer.
- A device for watching the wallet's funds.

Module 1 - Creating a cold wallet/signer

- Download the Electrum APK from the downloads tab at https://electrum.org/. Feel free to verify the software's signatures to confirm its authenticity.
- Format the microSD card and copy the Electrum APK onto it. If you don't have a microSD card, skip this step.
- Remove the SIM card and any accessories from the phone that will serve as the signer, factory-reset it, and wait for it to boot.
- During setup, skip the Wi-Fi step and reject all connection requests. After that, you can uninstall unnecessary apps, since you will only need Electrum. Make sure Wi-Fi, Bluetooth, and mobile data are turned off. You can also enable airplane mode. (Fun fact: some people choose to open the phone and physically disable the Wi-Fi/Bluetooth antenna, making those features impossible to use.)
- Insert the microSD card with the Electrum APK into the device and install it. You will need to allow installation from unofficial sources.
- In Electrum, create a standard wallet and generate your seed words. Write them down somewhere safe. If anything happens to your signer, these words will give you access to your funds again. (This is where your personal backup method comes in.)

Module 2 - Setting up a device to watch your funds and signing transactions with the signer

- Creating a watch-only wallet on another device, such as your phone or personal computer, is a very simple step. For this tutorial we will use another Android smartphone with Electrum. Install Electrum from the downloads tab at https://electrum.org/ or from the Play Store itself. (WARNING: Electrum does not officially exist for iPhone. Be suspicious if you find one.)
- After installing Electrum, create a standard wallet, but this time choose the option Use a master key.
- Now, on the signer created in the first module, export your public key: go to Wallet > Wallet details > Share master public key.
- Scan the public-key QR code with the watching device. It will now be able to track your funds, but without permission to move them.
- To receive funds, send Bitcoin to one of the addresses generated by your wallet: Wallet > Addresses/Coins.
- To move funds, create a transaction on the watching device. Since it does not hold the private key, the transaction will need to be signed by the signer.
- On the signer, scan the unsigned transaction, confirm the details, sign it, and share it. Another QR code will be generated, this time with the transaction already signed.
- On the watching device, scan the signed-transaction QR code and broadcast it to the network.

Conclusion

Upsides of this setup:
- Simplicity: all you need is an old Android device.
- Flexibility: it works as a great cold wallet, ideal for holders.

Downsides of this setup:
- Standardization: it does not use BIP-39 standard seeds, so you will always need to use Electrum.
- Interface: Electrum's look may seem dated to some users.

At this point, we have a cold wallet that also serves to sign transactions. The signing flow becomes: generate an unsigned transaction > scan the unsigned transaction's QR code > review and sign the transaction on the signer > generate the signed transaction's QR code > scan the signed transaction with any other device that can broadcast it to the network.

As some of you may know, a signed Bitcoin transaction is practically impossible to forge. In a catastrophic scenario, even without internet access, you could pass the signed transaction to someone with network access through any means of communication. Even if we never want that day to come, this setup makes the practice possible.
-
@ 177bfd16:347a07e4
2025-04-26 20:38:30
So, you've battled your way through countless Grunts, overcome the Team GO Rocket Leaders, and now you stand face to face with the big boss himself: Giovanni!

As of April 2025, Giovanni finishes his battles with Shadow Palkia.

Giovanni's Current Lineup (April 2025)

First, know your enemy. Giovanni's team follows this structure:

Slot 1: Shadow Persian (Normal)

Slot 2: One of these three, chosen randomly:
- Shadow Nidoking (Poison/Ground)
- Shadow Kingdra (Water/Dragon)
- Shadow Rhyperior (Rock/Ground)

Slot 3: Shadow Palkia (Water/Dragon)

Remember, these are Shadow Pokémon: they hit harder than their normal counterparts!
Counter Strategy: Beating Giovanni Pokémon by Pokémon
Let's dive into the best counters for each potential opponent:
- Vs. Shadow Persian (Normal)
Giovanni always leads with Persian. As a Normal-type, it's weak only to Fighting-type attacks.
Top Counters: Machamp, Lucario, Conkeldurr, Terrakion, Mega Blaziken, Mega Lucario.
Moves: Prioritize Fighting-type moves like Counter, Dynamic Punch, Aura Sphere, and Sacred Sword.
Tip: Lead with a strong Fighting-type. Moves like Lucario's Power-Up Punch or Machamp's Cross Chop charge quickly and are great for baiting Giovanni's shields early!
- Vs. The Second Slot (Nidoking, Kingdra, or Rhyperior)
This is where things get unpredictable. You need Pokémon that can handle these potential threats:
Vs. Shadow Nidoking (Poison/Ground): Weak to Water, Ground, Ice, Psychic.
Counters: Kyogre (Primal/Shadow), Swampert (Mega), Groudon (Primal/Shadow), Mewtwo (Shadow), Excadrill. Water and Ground-types are prime choices.
Vs. Shadow Kingdra (Water/Dragon): Weak to Fairy, Dragon.

Counters: Gardevoir (Mega), Togekiss, Xerneas. Fairy-types are excellent, as they resist Dragon attacks while dealing super-effective damage. Dragon-types like Rayquaza or Palkia work but are risky.
Vs. Shadow Rhyperior (Rock/Ground): Double weak to Water and Grass! Also weak to Fighting, Ground, Ice, Steel.
Counters: Kyogre (Primal/Shadow), Swampert (Mega), Sceptile (Mega), Roserade. Your Fighting-type lead (if it survived Persian) can also do significant damage. Hit it hard with Water or Grass!
- Vs. Shadow Palkia (Water/Dragon)
Giovanni's final Pokémon is the powerful Shadow Palkia. Like Kingdra, it's weak to Fairy and Dragon types.
Top Counters: Gardevoir (Mega), Togekiss, Xerneas. Again, Fairy-types are the safest and most reliable counters.
Dragon Counters (Use with Caution): Rayquaza (Mega), Palkia (Origin Forme), Dragonite, Dialga (Origin Forme).
Recommended Battle Teams for April 2025

Based on the counters, here are a few effective teams you can assemble:
Team 1 (Balanced):
Machamp (Counter / Cross Chop & Dynamic Punch)
Swampert (Mud Shot / Hydro Cannon & Earthquake)
Togekiss (Charm / Dazzling Gleam & Ancient Power)
Why it works: Covers all bases well with accessible Pokémon. Machamp handles Persian/shields, Swampert crushes Rhyperior/Nidoking, Togekiss tackles Kingdra/Palkia.
Team 2 (Legendary Power):
Lucario (Counter / Power-Up Punch & Aura Sphere)
Kyogre (Waterfall / Origin Pulse & Surf)
Xerneas (Geomancy / Moonblast & Close Combat)
Why it works: High-powered options. Lucario baits shields effectively, Kyogre dominates slot two's Ground/Rock types, Xerneas shreds the Dragons.
Team 3 (Mega Advantage):
Machamp (Counter / Cross Chop & DP)
Kyogre (Waterfall / Origin Pulse)
Mega Gardevoir (Charm / Dazzling Gleam & Shadow Ball)
Why it works: Uses a standard strong lead and mid-game counter, saving the Mega slot for Gardevoir to ensure a powerful finish against Palkia/Kingdra.
Essential Battle Tips

Don't forget these crucial tactics:
The Switch Trick: Place your intended starting Pokémon (e.g., Machamp) in the second or third slot. Start the battle, then immediately switch to it. Giovanni will pause for a moment, letting you get in free hits!
Bait Those Shields: Use Pokémon with fast-charging Charged Moves, especially early on, to force Giovanni to waste his Protect Shields.
Power Up: Ensure your team is powered up significantly and consider unlocking second Charged Moves for better flexibility.
Don't Give Up: Giovanni is tough! It might take a few tries to get the right matchup against his second Pokémon. Learn from each attempt and adjust your team if needed.
Go show the boss who's the boss and claim your Shadow Palkia. Good luck, Trainer!
-
@ 207ad2a0:e7cca7b0
2025-01-07 03:46:04
Quick context: I wanted to check out Nostr's longform posts and this blog post seemed like a good one to try and mirror. It's originally from my free to read/share attempt to write a novel, but this post here is completely standalone - just describing how I used AI image generation to make a small piece of the work.

Hold on, put your pitchforks down - outside of using Grammarly & Emacs for grammatical corrections - not a single character was generated or modified by computers; a non-insignificant portion of my first draft originated on pen & paper. No AI is ~~weird and crazy~~ imaginative enough to write like I do. The only successful AI contribution you'll find is a single image, the map, which I heavily edited. This post will go over how I generated and modified an image using AI, which I believe brought some value to the work, and cover a few quick thoughts about AI towards the end.
Let's be clear, I can't draw, but I wanted a map which I believed would improve the story I was working on. After getting abysmal results by prompting AI with text only I decided to use "Diffuse the Rest," a Stable Diffusion tool that allows you to provide a reference image + description to fine tune what you're looking for. I gave it this Microsoft Paint looking drawing:
and after a number of outputs, selected this one to work on:
The image is way better than the one I provided, but had I used it as is, I still feel it would have decreased the quality of my work instead of increasing it. After firing up Gimp I cropped out the top and bottom, expanded the ocean and separated the landmasses, then copied the top right corner of the large landmass to replace the bottom left that got cut off. Now we've got something that looks like concept art: not horrible, and gets the basic idea across, but it's still due for a lot more detail.
The next thing I did was add some texture to make it look more map like. I duplicated the layer in Gimp and applied the "Cartoon" filter to both for some texture. The top layer had a much lower effect strength to give it a more textured look, while the lower layer had a higher effect strength that looked a lot like mountains or other terrain features. Creating a layer mask allowed me to brush over spots to display the lower layer in certain areas, giving it some much needed features.
At this point I'd made it to where I felt it may improve the work instead of detracting from it - at least after labels and borders were added, but the colors seemed artificial and out of place. Luckily, however, this is when PhotoFunia could step in and apply a sketch effect to the image.
At this point I was pretty happy with how it was looking, it was close to what I envisioned and looked very visually appealing while still being a good way to portray information. All that was left was to make the white background transparent, add some minor details, and add the labels and borders. Below is the exact image I wound up using:
Overall, I'm very satisfied with how it turned out, and if you're working on a creative project, I'd recommend attempting something like this. It's not a central part of the work, but it improved the chapter a fair bit, and was doable despite lacking the talent and not intending to allocate a budget to my making of a free to read and share story.
The AI Generated Elephant in the Room
If you've read my non-fiction writing before, you'll know that I think AI will find its place around the skill floor as opposed to the skill ceiling. As you saw with my input, I have absolutely zero drawing talent, but with some elbow grease and an existing creative direction before and after generating an image I was able to get something well above what I could have otherwise accomplished. Outside of the lowest common denominators like stock photos for the sole purpose of a link preview being eye catching, however, I doubt AI will be wholesale replacing most creative works anytime soon. I can assure you that I tried numerous times to describe the map without providing a reference image, and if I used one of those outputs (or even just the unedited output after providing the reference image) it would have decreased the quality of my work instead of improving it.
I'm going to go out on a limb and expect that AI image, text, and video is all going to find its place in slop & generic content (such as AI generated slop replacing article spinners and stock photos respectively) and otherwise be used in a supporting role for various creative endeavors. For people working on projects like I'm working on (e.g. intended budget $0) it's helpful to have an AI capable of doing legwork - enabling projects to exist or be improved in ways they otherwise wouldn't have. I'm also guessing it'll find its way into more professional settings for grunt work - think a picture frame or fake TV show that would exist in the background of an animated project - likely a detail most people probably wouldn't notice, but that would save the creators time and money and/or allow them to focus more on the essential aspects of said work. Beyond that, as I've predicted before: I expect plenty of emails will be generated from a short list of bullet points, only to be summarized by the recipient's AI back into bullet points.
I will also make a prediction counter to what seems mainstream: AI is about to peak for a while. The start of AI image generation was with Google's DeepDream in 2015 - image recognition software that could be run in reverse to "recognize" patterns where there were none, effectively generating an image from digital noise or an unrelated image. While I'm not an expert by any means, I don't think we're too far off from that a decade later, just using very fine tuned tools that develop more coherent images. I guess that we're close to maxing out how efficiently we're able to generate images and video in that manner, and the hard caps on how much creative direction we can have when using AI - as well as the limits to how long we can keep it coherent (e.g. long videos or a chronologically consistent set of images) - will prevent AI from progressing too far beyond what it is currently unless/until another breakthrough occurs.
-
@ e6817453:b0ac3c39
2025-01-05 14:29:17
The Rise of Graph RAGs and the Quest for Data Quality
As we enter a new year, it’s impossible to ignore the boom of retrieval-augmented generation (RAG) systems, particularly those leveraging graph-based approaches. The previous year saw a surge in advancements and discussions about Graph RAGs, driven by their potential to enhance large language models (LLMs), reduce hallucinations, and deliver more reliable outputs. Let’s dive into the trends, challenges, and strategies for making the most of Graph RAGs in artificial intelligence.
Booming Interest in Graph RAGs
Graph RAGs have dominated the conversation in AI circles. With new research papers and innovations emerging weekly, it’s clear that this approach is reshaping the landscape. These systems, especially those developed by tech giants like Microsoft, demonstrate how graphs can:
- Enhance LLM Outputs: By grounding responses in structured knowledge, graphs significantly reduce hallucinations.
- Support Complex Queries: Graphs excel at managing linked and connected data, making them ideal for intricate problem-solving.
Conferences on linked and connected data have increasingly focused on Graph RAGs, underscoring their central role in modern AI systems. However, the excitement around this technology has brought critical questions to the forefront: How do we ensure the quality of the graphs we’re building, and are they genuinely aligned with our needs?
Data Quality: The Foundation of Effective Graphs
A high-quality graph is the backbone of any successful RAG system. Constructing these graphs from unstructured data requires attention to detail and rigorous processes. Here’s why:
- Richness of Entities: Effective retrieval depends on graphs populated with rich, detailed entities.
- Freedom from Hallucinations: Poorly constructed graphs amplify inaccuracies rather than mitigating them.
Without robust data quality, even the most sophisticated Graph RAGs become ineffective. As a result, the focus must shift to refining the graph construction process. Improving data strategy and ensuring meticulous data preparation is essential to unlock the full potential of Graph RAGs.
Hybrid Graph RAGs and Variations
While standard Graph RAGs are already transformative, hybrid models offer additional flexibility and power. Hybrid RAGs combine structured graph data with other retrieval mechanisms, creating systems that:
- Handle diverse data sources with ease.
- Offer improved adaptability to complex queries.
Exploring these variations can open new avenues for AI systems, particularly in domains requiring structured and unstructured data processing.
Ontology: The Key to Graph Construction Quality
Ontology — defining how concepts relate within a knowledge domain — is critical for building effective graphs. While this might sound abstract, it’s a well-established field blending philosophy, engineering, and art. Ontology engineering provides the framework for:
- Defining Relationships: Clarifying how concepts connect within a domain.
- Validating Graph Structures: Ensuring constructed graphs are logically sound and align with domain-specific realities.
Traditionally, ontologists — experts in this discipline — have been integral to large enterprises and research teams. However, not every team has access to dedicated ontologists, leading to a significant challenge: How can teams without such expertise ensure the quality of their graphs?
How to Build Ontology Expertise in a Startup Team
For startups and smaller teams, developing ontology expertise may seem daunting, but it is achievable with the right approach:
- Assign a Knowledge Champion: Identify a team member with a strong analytical mindset and give them time and resources to learn ontology engineering.
- Provide Training: Invest in courses, workshops, or certifications in knowledge graph and ontology creation.
- Leverage Partnerships: Collaborate with academic institutions, domain experts, or consultants to build initial frameworks.
- Utilize Tools: Introduce ontology development tools like Protégé, OWL, or SHACL to simplify the creation and validation process.
- Iterate with Feedback: Continuously refine ontologies through collaboration with domain experts and iterative testing.
So it is not always affordable for a startup to have a dedicated ontologist or knowledge engineer on the team, but you can involve consultants or develop barefoot experts in-house.

You can read about barefoot experts in my article.
Even startups can achieve robust and domain-specific ontology frameworks by fostering in-house expertise.
How to Find or Create Ontologies
For teams venturing into Graph RAGs, several strategies can help address the ontology gap:
1. Leverage Existing Ontologies: Many industries and domains already have open ontologies. For instance:

- Public Knowledge Graphs: Resources like Wikipedia’s graph offer a wealth of structured knowledge.
- Industry Standards: Enterprises such as Siemens have invested in creating and sharing ontologies specific to their fields.
- Business Framework Ontology (BFO): A valuable resource for enterprises looking to define business processes and structures.

2. Build In-House Expertise: If budgets allow, consider hiring knowledge engineers or providing team members with the resources and time to develop expertise in ontology creation.

3. Utilize LLMs for Ontology Construction: Interestingly, LLMs themselves can act as a starting point for ontology development:

- Prompt-Based Extraction: LLMs can generate draft ontologies by leveraging their extensive training on graph data.
- Domain Expert Refinement: Combine LLM-generated structures with insights from domain experts to create tailored ontologies.
Parallel Ontology and Graph Extraction
An emerging approach involves extracting ontologies and graphs in parallel. While this can streamline the process, it presents challenges such as:
- Detecting Hallucinations: Differentiating between genuine insights and AI-generated inaccuracies.
- Ensuring Completeness: Ensuring no critical concepts are overlooked during extraction.
Teams must carefully validate outputs to ensure reliability and accuracy when employing this parallel method.
LLMs as Ontologists
While traditionally dependent on human expertise, ontology creation is increasingly supported by LLMs. These models, trained on vast amounts of data, possess inherent knowledge of many open ontologies and taxonomies. Teams can use LLMs to:
- Generate Skeleton Ontologies: Prompt LLMs with domain-specific information to draft initial ontology structures.
- Validate and Refine Ontologies: Collaborate with domain experts to refine these drafts, ensuring accuracy and relevance.
However, for validation and graph construction, formal tools such as OWL, SHACL, and RDF should be prioritized over LLMs to minimize hallucinations and ensure robust outcomes.
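For that validation step, here is a minimal sketch of what checking an extracted graph against formal constraints can look like, assuming the Python packages rdflib and pyshacl and purely illustrative file names:

```python
# Validate an LLM-extracted knowledge graph against hand-written SHACL shapes.
# File names are placeholders; install dependencies with: pip install rdflib pyshacl
from rdflib import Graph
from pyshacl import validate

data_graph = Graph().parse("extracted_graph.ttl", format="turtle")    # the graph under test
shapes_graph = Graph().parse("ontology_shapes.ttl", format="turtle")  # the constraints

conforms, _results_graph, results_text = validate(
    data_graph,
    shacl_graph=shapes_graph,
    inference="rdfs",  # apply RDFS inference before checking constraints
)

if not conforms:
    # Hand the reported violations back to a domain expert for review.
    print(results_text)
```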
Final Thoughts: Unlocking the Power of Graph RAGs
The rise of Graph RAGs underscores a simple but crucial correlation: improving graph construction and data quality directly enhances retrieval systems. To truly harness this power, teams must invest in understanding ontologies, building quality graphs, and leveraging both human expertise and advanced AI tools.
As we move forward, the interplay between Graph RAGs and ontology engineering will continue to shape the future of AI. Whether through adopting existing frameworks or exploring innovative uses of LLMs, the path to success lies in a deep commitment to data quality and domain understanding.
Have you explored these technologies in your work? Share your experiences and insights — and stay tuned for more discussions on ontology extraction and its role in AI advancements. Cheers to a year of innovation!
-
@ a4a6b584:1e05b95b
2025-01-02 18:13:31
The Four-Layer Framework
Layer 1: Zoom Out
Start by looking at the big picture. What’s the subject about, and why does it matter? Focus on the overarching ideas and how they fit together. Think of this as the 30,000-foot view—it’s about understanding the "why" and "how" before diving into the "what."
Example: If you’re learning programming, start by understanding that it’s about giving logical instructions to computers to solve problems.
- Tip: Keep it simple. Summarize the subject in one or two sentences and avoid getting bogged down in specifics at this stage.
Once you have the big picture in mind, it’s time to start breaking it down.
Layer 2: Categorize and Connect
Now it’s time to break the subject into categories—like creating branches on a tree. This helps your brain organize information logically and see connections between ideas.
Example: Studying biology? Group concepts into categories like cells, genetics, and ecosystems.
- Tip: Use headings or labels to group similar ideas. Jot these down in a list or simple diagram to keep track.
With your categories in place, you’re ready to dive into the details that bring them to life.
Layer 3: Master the Details
Once you’ve mapped out the main categories, you’re ready to dive deeper. This is where you learn the nuts and bolts—like formulas, specific techniques, or key terminology. These details make the subject practical and actionable.
Example: In programming, this might mean learning the syntax for loops, conditionals, or functions in your chosen language.
- Tip: Focus on details that clarify the categories from Layer 2. Skip anything that doesn’t add to your understanding.
Now that you’ve mastered the essentials, you can expand your knowledge to include extra material.
Layer 4: Expand Your Horizons
Finally, move on to the extra material—less critical facts, trivia, or edge cases. While these aren’t essential to mastering the subject, they can be useful in specialized discussions or exams.
Example: Learn about rare programming quirks or historical trivia about a language’s development.
- Tip: Spend minimal time here unless it’s necessary for your goals. It’s okay to skim if you’re short on time.
Pro Tips for Better Learning
1. Use Active Recall and Spaced Repetition
Test yourself without looking at notes. Review what you’ve learned at increasing intervals—like after a day, a week, and a month. This strengthens memory by forcing your brain to actively retrieve information.
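As a concrete illustration, here is a minimal sketch of such a schedule in Python, assuming a simple doubling rule rather than a full algorithm like SM-2:

```python
# Spaced repetition with exponentially growing intervals: 1, 2, 4, 8, ... days.
from datetime import date, timedelta

def review_dates(start: date, reviews: int = 5) -> list[date]:
    """Return review dates, doubling the gap after each successful recall."""
    dates, interval, current = [], timedelta(days=1), start
    for _ in range(reviews):
        current += interval
        dates.append(current)
        interval *= 2
    return dates

print(review_dates(date(2025, 1, 1)))
# [2025-01-02, 2025-01-04, 2025-01-08, 2025-01-16, 2025-02-01]
```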
2. Map It Out
Create visual aids like diagrams or concept maps to clarify relationships between ideas. These are particularly helpful for organizing categories in Layer 2.
3. Teach What You Learn
Explain the subject to someone else as if they’re hearing it for the first time. Teaching exposes any gaps in your understanding and helps reinforce the material.
4. Engage with LLMs and Discuss Concepts
Take advantage of tools like ChatGPT or similar large language models to explore your topic in greater depth. Use these tools to:
- Ask specific questions to clarify confusing points.
- Engage in discussions to simulate real-world applications of the subject.
- Generate examples or analogies that deepen your understanding.

Tip: Use LLMs as a study partner, but don’t rely solely on them. Combine these insights with your own critical thinking to develop a well-rounded perspective.
Get Started
Ready to try the Four-Layer Method? Take 15 minutes today to map out the big picture of a topic you’re curious about—what’s it all about, and why does it matter? By building your understanding step by step, you’ll master the subject with less stress and more confidence.
-
@ fe32298e:20516265
2024-12-16 20:59:13
Today I learned how to install NVapi to monitor my GPUs in Home Assistant.
NVApi is a lightweight API designed for monitoring NVIDIA GPU utilization and enabling automated power management. It provides real-time GPU metrics, supports integration with tools like Home Assistant, and offers flexible power management and PCIe link speed management based on workload and thermal conditions.
- GPU Utilization Monitoring: Utilization, memory usage, temperature, fan speed, and power consumption.
- Automated Power Limiting: Adjusts power limits dynamically based on temperature thresholds and total power caps, configurable per GPU or globally.
- Cross-GPU Coordination: Total power budget applies across multiple GPUs in the same system.
- PCIe Link Speed Management: Controls minimum and maximum PCIe link speeds with idle thresholds for power optimization.
- Home Assistant Integration: Uses the built-in RESTful platform and template sensors.
Getting the Data
```
sudo apt install golang-go
git clone https://github.com/sammcj/NVApi.git
cd NVapi
go run main.go -port 9999 -rate 1
curl http://localhost:9999/gpu
```
Response for a single GPU:
[ { "index": 0, "name": "NVIDIA GeForce RTX 4090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 16, "power_limit_watts": 450, "memory_total_gb": 23.99, "memory_used_gb": 0.46, "memory_free_gb": 23.52, "memory_usage_percent": 2, "temperature": 38, "processes": [], "pcie_link_state": "not managed" } ]
Response for multiple GPUs:
[ { "index": 0, "name": "NVIDIA GeForce RTX 3090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 14, "power_limit_watts": 350, "memory_total_gb": 24, "memory_used_gb": 0.43, "memory_free_gb": 23.57, "memory_usage_percent": 2, "temperature": 36, "processes": [], "pcie_link_state": "not managed" }, { "index": 1, "name": "NVIDIA RTX A4000", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 10, "power_limit_watts": 140, "memory_total_gb": 15.99, "memory_used_gb": 0.56, "memory_free_gb": 15.43, "memory_usage_percent": 3, "temperature": 41, "processes": [], "pcie_link_state": "not managed" } ]
Start at Boot
Create `/etc/systemd/system/nvapi.service`:

```
[Unit]
Description=Run NVapi
After=network.target

[Service]
Type=simple
Environment="GOPATH=/home/ansible/go"
WorkingDirectory=/home/ansible/NVapi
ExecStart=/usr/bin/go run main.go -port 9999 -rate 1
Restart=always
User=ansible

# Optional power-management settings (uncomment to enable):
# Environment="GPU_TEMP_CHECK_INTERVAL=5"
# Environment="GPU_TOTAL_POWER_CAP=400"
# Environment="GPU_0_LOW_TEMP=40"
# Environment="GPU_0_MEDIUM_TEMP=70"
# Environment="GPU_0_LOW_TEMP_LIMIT=135"
# Environment="GPU_0_MEDIUM_TEMP_LIMIT=120"
# Environment="GPU_0_HIGH_TEMP_LIMIT=100"
# Environment="GPU_1_LOW_TEMP=45"
# Environment="GPU_1_MEDIUM_TEMP=75"
# Environment="GPU_1_LOW_TEMP_LIMIT=140"
# Environment="GPU_1_MEDIUM_TEMP_LIMIT=125"
# Environment="GPU_1_HIGH_TEMP_LIMIT=110"

[Install]
WantedBy=multi-user.target
```
Home Assistant
Add to Home Assistant `configuration.yaml` and restart HA (completely).

For a single GPU, this works:

```
sensor:
  - platform: rest
    name: MYPC GPU Information
    resource: http://mypc:9999
    method: GET
    headers:
      Content-Type: application/json
    value_template: "{{ value_json[0].index }}"
    json_attributes:
      - name
      - gpu_utilisation
      - memory_utilisation
      - power_watts
      - power_limit_watts
      - memory_total_gb
      - memory_used_gb
      - memory_free_gb
      - memory_usage_percent
      - temperature
    scan_interval: 1 # seconds

  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'temperature') }}"
        unit_of_measurement: "°C"
```

For multiple GPUs:

```
rest:
  scan_interval: 1
  resource: http://mypc:9999
  sensor:
    - name: "MYPC GPU0 Information"
      value_template: "{{ value_json[0].index }}"
      json_attributes_path: "$.0"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature
    - name: "MYPC GPU1 Information"
      value_template: "{{ value_json[1].index }}"
      json_attributes_path: "$.1"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature

sensor:
  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC GPU0 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC GPU0 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC GPU0 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC GPU0 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC GPU0 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'temperature') }}"
        unit_of_measurement: "°C"
  - platform: template
    sensors:
      mypc_gpu_1_gpu:
        friendly_name: "MYPC GPU1 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_memory:
        friendly_name: "MYPC GPU1 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_power:
        friendly_name: "MYPC GPU1 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_power_limit:
        friendly_name: "MYPC GPU1 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_temperature:
        friendly_name: "MYPC GPU1 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'temperature') }}"
        unit_of_measurement: "°C"
```

Basic entity card:

```
type: entities
entities:
  - entity: sensor.mypc_gpu_0_gpu
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_memory
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power_limit
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_temperature
    secondary_info: last-updated
```
Ansible Role
```
- name: install go
  become: true
  package:
    name: golang-go
    state: present

- name: git clone
  git:
    repo: "https://github.com/sammcj/NVApi.git"
    dest: "/home/ansible/NVapi"
    update: yes
    force: true

# go run main.go -port 9999 -rate 1

- name: install systemd service
  become: true
  copy:
    src: nvapi.service
    dest: /etc/systemd/system/nvapi.service

- name: Reload systemd daemons, enable, and restart nvapi
  become: true
  systemd:
    name: nvapi
    daemon_reload: yes
    enabled: yes
    state: restarted
```
-
@ 6f6b50bb:a848e5a1
2024-12-15 15:09:52
What would it mean to treat AI as a tool instead of a person?

Since the launch of ChatGPT, explorations in two directions have picked up speed.

The first direction concerns technical capabilities. How large a model can we train? How well can it answer SAT questions? How efficiently can we deploy it?

The second direction concerns interaction design. How do we communicate with a model? How can we use it for useful work? What metaphor do we use to reason about it?

The first direction is widely pursued and enormously funded, and for good reason: progress in technical capabilities underlies every possible application. But the second is just as crucial to the field, and it holds enormous unknowns. We are only a few years into the era of large models. What are the odds that we have already figured out the best ways to use them?

I propose a new mode of interaction, in which models play the role of computer applications (for example, phone apps): providing a graphical interface, interpreting user input, and updating their state. In this mode, instead of being an "agent" that uses a computer on behalf of a human, the AI can provide a richer and more powerful computing environment for us to use.

Metaphors for interaction

At the heart of an interaction is a metaphor that guides a user's expectations of a system. The early days of computing took metaphors such as "desktops", "typewriters", "spreadsheets", and "letters" and turned them into digital equivalents, allowing the user to reason about their behavior. You can leave something on your desk and come back for it; you need an address to send a letter. As we developed cultural knowledge of these devices, the need for these particular metaphors disappeared, and with it the skeuomorphic interface designs that reinforced them. Like a trash can or a pencil, a computer is now a metaphor for itself.

The dominant metaphor for large models today is model-as-person. It is an effective metaphor because people have broad capabilities that we know intuitively. It implies that we can have a conversation with a model and ask it questions; that the model can collaborate with us on a document or a piece of code; that we can give it a task to carry out on its own, and that it will come back when finished.

Treating a model as a person, however, profoundly limits how we think about interacting with it. Human interactions are inherently slow and linear, limited by bandwidth and by the turn-taking of verbal communication. As we have all experienced, communicating complex ideas in conversation is hard and lossy. When we want precision, we turn to tools instead, using direct manipulation and high-bandwidth visual interfaces to make diagrams, write code, and design CAD models. Because we conceive of models as people, we use them through slow conversations, even though they are perfectly capable of accepting direct, rapid input and producing visual results. The metaphors we use limit the experiences we build, and the model-as-person metaphor keeps us from exploring the full potential of large models.

For many use cases, and especially for productive work, I believe the future lies in a different metaphor: model-as-computer.

Using an AI as a computer

Under the model-as-computer metaphor, we will interact with large models following the intuitions we have about computer applications (whether on desktop, tablet, or phone). Note that this does not mean the model will be a traditional app any more than the Windows desktop was a literal desk. "Computer application" will be a way for the model to represent itself to us. Instead of acting like a person, the model will act like a computer.

Acting like a computer means producing a graphical interface. In place of the linear, teletype-style stream of text that ChatGPT provides, a model-as-computer system will generate something resembling the interface of a modern application: buttons, sliders, tabs, images, charts, and all the rest. This addresses key limitations of the standard model-as-person chat interface:

- Discoverability. A good tool suggests its uses. When the only interface is an empty text box, it is up to the user to figure out what to do and to understand the system's limits. The Edit sidebar in Lightroom is a great way to learn photo editing because it doesn't just tell you what this application can do with a photo, but what you might want to do. Likewise, a model-as-computer interface for DALL-E could surface new possibilities for your image generations.

- Efficiency. Direct manipulation is faster than describing a request in words. To continue the Lightroom example, it would be unthinkable to edit a photo by telling a person which sliders to move and by how much. It would take a whole day to ask for slightly lower exposure and slightly higher vibrance, just to see how it would look. In the model-as-computer metaphor, the model can create tools that let you communicate what you want more efficiently, and therefore get things done faster.

Unlike a traditional app, this graphical interface is generated by the model on demand. This means that every part of the interface you see is relevant to what you are doing at that moment, including the specific contents of your work. It also means that if you want a larger or different interface, you can simply ask for it. You could ask DALL-E to produce editable presets for its settings inspired by famous sketch artists. When you click the Leonardo da Vinci preset, it sets the sliders for highly detailed perspective drawings in black ink. If you click Charles Schulz, it selects low-detail 2D technicolor comics.

A shape-shifting bicycle for the mind

The model-as-person metaphor has a curious tendency to create distance between the user and the model, mirroring the communication gap between two people, which can be narrowed but never fully closed. Because communicating in words is difficult and costly, people tend to divide tasks between themselves into chunks that are large and as independent as possible. Model-as-person interfaces follow this pattern: it is not worth telling a model to add a return statement to your function when it is faster to type it yourself. With that communication overhead, model-as-person systems are most useful when they can do an entire chunk of work on their own. They do things for you.

This contrasts with how we interact with computers and other tools. Tools produce visual feedback in real time and are controlled through direct manipulation. Their communication overhead is so low that there is no need to carve out an independent chunk of work. It makes more sense to keep the human in the loop and direct the tool moment by moment. Like seven-league boots, tools let you travel farther with each step, but you are still the one doing the work. They let you do things faster.

Consider the task of building a website using a large model. With today's interfaces, you might treat the model as a contractor or a collaborator. You would try to put into words as much as possible about how you want the site to look, what you want it to say, and what features you want it to have. The model would generate a first draft, you would run it, and then you would give feedback. "Make the logo a bit bigger," you would say, and "center that first hero image," and "there must be a login button in the header." To get exactly what you want, you will send a very long list of ever more minute requests.

An alternative model-as-computer interaction would look different: instead of building the website, the model would generate an interface for you to build it, where every user input to that interface queries the large model under the hood. Perhaps when you describe your needs it would create an interface with a sidebar and a preview window. At first the sidebar contains only a few layout sketches you can choose as a starting point. You can click each one, and the model writes the HTML for a web page using that layout and displays it in the preview window. Now that you have a page to work on, the sidebar gains additional options that affect the page globally, such as font pairings and color schemes. The preview acts as a WYSIWYG editor, letting you grab elements and move them, edit their contents, and so on. Supporting all of this is the model, which sees the user's actions and rewrites the page to match the changes made. Because the model can generate an interface to help the two of you communicate more efficiently, you can exert more control over the final product in less time.
La metafora modello-come-computer ci incoraggia a pensare al modello come a uno strumento con cui interagire in tempo reale piuttosto che a un collaboratore a cui assegnare compiti. Invece di sostituire un tirocinante o un tutor, può essere una sorta di bicicletta proteiforme per la mente, una che è sempre costruita su misura esattamente per te e il terreno che intendi attraversare.
Un nuovo paradigma per l’informatica?
I modelli che possono generare interfacce su richiesta sono una frontiera completamente nuova nell’informatica. Potrebbero essere un paradigma del tutto nuovo, con il modo in cui cortocircuitano il modello di applicazione esistente. Dare agli utenti finali il potere di creare e modificare app al volo cambia fondamentalmente il modo in cui interagiamo con i computer. Al posto di una singola applicazione statica costruita da uno sviluppatore, un modello genererà un’applicazione su misura per l’utente e le sue esigenze immediate. Al posto della logica aziendale implementata nel codice, il modello interpreterà gli input dell’utente e aggiornerà l’interfaccia utente. È persino possibile che questo tipo di interfaccia generativa sostituisca completamente il sistema operativo, generando e gestendo interfacce e finestre al volo secondo necessità.
All’inizio, l’interfaccia generativa sarà un giocattolo, utile solo per l’esplorazione creativa e poche altre applicazioni di nicchia. Dopotutto, nessuno vorrebbe un’app di posta elettronica che occasionalmente invia email al tuo ex e mente sulla tua casella di posta. Ma gradualmente i modelli miglioreranno. Anche mentre si spingeranno ulteriormente nello spazio di esperienze completamente nuove, diventeranno lentamente abbastanza affidabili da essere utilizzati per un lavoro reale.
Piccoli pezzi di questo futuro esistono già. Anni fa Jonas Degrave ha dimostrato che ChatGPT poteva fare una buona simulazione di una riga di comando Linux. Allo stesso modo, websim.ai utilizza un LLM per generare siti web su richiesta mentre li navighi. Oasis, GameNGen e DIAMOND addestrano modelli video condizionati sull’azione su singoli videogiochi, permettendoti di giocare ad esempio a Doom dentro un grande modello. E Genie 2 genera videogiochi giocabili da prompt testuali. L’interfaccia generativa potrebbe ancora sembrare un’idea folle, ma non è così folle.
Ci sono enormi domande aperte su come apparirà tutto questo. Dove sarà inizialmente utile l’interfaccia generativa? Come condivideremo e distribuiremo le esperienze che creiamo collaborando con il modello, se esistono solo come contesto di un grande modello? Vorremmo davvero farlo? Quali nuovi tipi di esperienze saranno possibili? Come funzionerà tutto questo in pratica? I modelli genereranno interfacce come codice o produrranno direttamente pixel grezzi?
What would it mean to treat AI as a tool instead of a person?
Since the launch of ChatGPT, explorations in two directions have been gathering speed.
The first direction concerns technical capabilities. How big a model can we train? How well can it answer SAT questions? How efficiently can we serve it?
The second direction concerns interaction design. How do we communicate with a model? How can we use it for useful work? What metaphor do we use to reason about it?
The first direction is widely pursued and enormously funded, and for good reason: progress in technical capabilities underlies every possible application. But the second is just as crucial to the field, and it holds enormous unknowns. We are only a few years into the era of large models. What are the odds that we have already figured out the best ways to use them?
I propose a new mode of interaction, in which models take on the role of computer applications (e.g., phone apps): providing a graphical interface, interpreting user input, and updating their state. In this mode, instead of being an "agent" that uses a computer on the human's behalf, the AI can provide a richer and more powerful computing environment for us to use.
Metaphors for interaction
At the core of an interaction is a metaphor that guides a user's expectations of a system. The early days of computing took metaphors like "desktops", "typewriters", "spreadsheets", and "letters" and turned them into digital equivalents, which let users reason about their behavior. You can leave something on your desktop and come back for it; you need an address to send a letter. As we developed cultural knowledge of these devices, the need for these particular metaphors faded, and with it the skeuomorphic interface designs that reinforced them. Like a trash can or a pencil, a computer is now a metaphor for itself.
The dominant metaphor for large models today is model-as-person. This is an effective metaphor because people have extensive capabilities that we know intuitively. It implies that we can hold a conversation with a model and ask it questions; that the model can collaborate with us on a document or a piece of code; that we can give it a task to carry out on its own and that it will come back when it is done.
However, treating a model as a person profoundly limits how we think about interacting with it. Human interactions are inherently slow and linear, limited by the bandwidth and turn-taking nature of verbal communication. As we have all experienced, communicating complex ideas in conversation is hard and lossy. When we want precision, we turn to tools instead, using direct manipulation and high-bandwidth visual interfaces to make diagrams, write code, and design CAD models. Because we conceive of models as people, we use them through slow conversations, even though they are perfectly capable of accepting fast direct inputs and producing visual results. The metaphors we use constrain the experiences we build, and the model-as-person metaphor is holding us back from exploring the full potential of large models.
For many use cases, and especially for productive work, I believe the future lies in a different metaphor: model-as-computer.
Using an AI like a computer
Under the model-as-computer metaphor, we will interact with large models following the intuitions we have about computer applications (whether on desktop, tablet, or phone). Note that this does not mean the model will be a traditional app any more than the Windows desktop was a literal desk. "Computer application" will be a way for a model to represent itself to us. Instead of acting like a person, the model will act like a computer.
Acting like a computer means producing a graphical interface. In place of the teletype-style linear stream of text provided by ChatGPT, a model-as-computer system will generate something that resembles the interface of a modern application: buttons, sliders, tabs, images, plots, and all the rest. This addresses key limitations of the standard model-as-person chat interface:
- Discoverability. A good tool suggests its uses. When the only interface is an empty text box, it is up to the user to figure out what to do and to understand the system's limits. The Edit sidebar in Lightroom is a great way to learn photo editing because it doesn't just tell you what this application can do with a photo, but what you might want to do. Similarly, a model-as-computer interface for DALL-E could surface new possibilities for your image generations.
- Efficiency. Direct manipulation is faster than describing a request in words. To continue the Lightroom example, it would be unthinkable to edit a photo by telling another person which sliders to move and by how much. It would take a whole day just to ask for slightly lower exposure and slightly higher vibrance and see how it looks. In the model-as-computer metaphor, the model can create tools that let you communicate what you want more efficiently, and therefore get things done faster.
A protean bicycle for the mind
The model-as-person metaphor has a curious tendency to create distance between the user and the model, mirroring the communication gap between two people that can be narrowed but never fully closed. Because communicating in words is difficult and costly, people tend to split tasks among themselves into chunks that are large and as independent as possible. Model-as-person interfaces follow this pattern: it is not worth telling a model to add a return statement to your function when typing it yourself is faster. With that communication overhead, model-as-person systems are most useful when they can complete an entire chunk of work on their own. They do things for you.
This contrasts with how we interact with computers and other tools. Tools produce visual feedback in real time and are controlled through direct manipulation. Their communication overhead is so low that there is no need to carve out an independent chunk of work; it makes more sense to keep the human in the loop and direct the tool moment by moment. Like seven-league boots, tools let you cover more ground with each step, but you are still the one doing the work. They let you do things faster.
Consider the task of building a website using a large model. With today's interfaces, you might treat the model as a contractor or collaborator. You would try to put into words as much as possible about how you want the site to look, what you want it to say, and which features you want it to have. The model would generate a first draft, you would run it, and then you would give feedback. "Make the logo a bit bigger," you would say, and "center that first hero image," and "there has to be a login button in the header." To get exactly what you want, you will end up sending a very long list of ever more minute requests.
An alternative model-as-computer interaction would look different: instead of building the website, the model would generate an interface for you to build it, where every user input to that interface queries the large model under the hood. Perhaps when you describe your needs, it would create an interface with a sidebar and a preview window. At first the sidebar contains only a few layout sketches you can choose as a starting point. You can click on each one, and the model writes the HTML for a web page using that layout and displays it in the preview window. Now that you have a page to work with, the sidebar gains additional options that affect the page globally, like font pairings and color schemes. The preview acts as a WYSIWYG editor, letting you grab elements and move them around, edit their contents, and so on. Supporting all of this is the model, which sees the user's actions and rewrites the page to match the changes made. Because the model can generate an interface to help the two of you communicate more efficiently, you get to exercise more control over the final product in less time.
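To make the shape of that loop concrete, here is a deliberately toy sketch in Python. Everything in it is hypothetical: `call_model` stands in for whatever large-model API such a system would use, and the canned events stand in for real clicks and drags in the preview window.

```python
# Toy sketch of the model-as-computer loop: every UI event is routed through
# a large model, which rewrites the page in response.

def call_model(prompt: str) -> str:
    # In a real system this would be an LLM call returning a full HTML page;
    # here it just echoes a fake page so the control flow runs end to end.
    return f"<html><!-- page generated for: {prompt[:60]}... --></html>"

def render(html: str) -> None:
    # Stand-in for the preview window.
    print(html)

page = call_model("initial website-builder UI: layout sketches + preview")
render(page)

# Canned user actions standing in for direct manipulation in the preview.
for event in ["select layout #2",
              "set font pairing to serif/sans",
              "center the hero image"]:
    page = call_model(f"current page: {page}; user action: {event}")
    render(page)
```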
The model-as-computer metaphor encourages us to think of the model as a tool to interact with in real time rather than a collaborator to hand tasks off to. Instead of replacing an intern or a tutor, it can be a kind of protean bicycle for the mind, one that is always custom-built exactly for you and the terrain you intend to cross.
A new paradigm for computing?
Models that can generate interfaces on demand are an entirely new frontier in computing. They may amount to a whole new paradigm, given how they short-circuit the existing application model. Giving end users the power to create and modify apps on the fly fundamentally changes how we interact with computers. In place of a single static application built by a developer, a model will generate an application tailored to the user and their immediate needs. In place of business logic implemented in code, the model will interpret the user's inputs and update the user interface. It is even possible that this kind of generative interface will replace the operating system entirely, generating and managing interfaces and windows on the fly as needed.
At first, generative interfaces will be a toy, useful only for creative exploration and a few other niche applications. After all, nobody would want an email app that occasionally sends emails to your ex and lies about your inbox. But gradually the models will improve. Even as they push further into the space of entirely new experiences, they will slowly become reliable enough to use for real work.
Small pieces of this future already exist. Years ago Jonas Degrave showed that ChatGPT could do a passable simulation of a Linux command line. In the same vein, websim.ai uses an LLM to generate websites on demand as you browse them. Oasis, GameNGen, and DIAMOND train action-conditioned video models on individual video games, letting you play, say, Doom inside a large model. And Genie 2 generates playable video games from text prompts. Generative interfaces may still sound like a crazy idea, but they are not that crazy.
There are huge open questions about what all of this will look like. Where will generative interfaces first be useful? How will we share and distribute the experiences we create in collaboration with the model, if they exist only as the context of a large model? Would we even want to? What new kinds of experiences will become possible? How will all of this work in practice? Will models generate interfaces as code, or produce raw pixels directly?
I don't know these answers yet. We will have to experiment and find out!
Translated from:\ https://willwhitney.com/computing-inside-ai.html
-
@ e6817453:b0ac3c39
2024-12-07 15:06:43
I started a long series of articles about how to model different types of knowledge graphs in the relational model, which makes on-device memory models for AI agents possible.
We model directed graphs
Also, graphs of entities
We even model hypergraphs
Last time, we discussed why classical triple and simple knowledge graphs are insufficient for AI agents and complex memory, especially in the domain of time-aware or multi-model knowledge.
So why do we need metagraphs, and what kind of challenge could they help us to solve?
- complex and nested events, temporal context, and temporal relations as edges
- multi-modal and multilingual knowledge
- human-like memory for AI agents that has multiple contexts and relations between knowledge in neuron-like networks
MetaGraphs
A metagraph is a concept that extends the idea of a graph by allowing edges to become graphs. Meta edges connect a set of nodes, which could also be subgraphs. So, at some level, nodes and edges are pretty similar in properties but act in different roles in different contexts.
Also, in some cases, edges could be referenced as nodes.
This approach enables the representation of more complex relationships and hierarchies than a traditional graph structure allows. Let's break down each term to better understand metagraphs and how they differ from hypergraphs and graphs.
Graph Basics
- A standard graph has a set of nodes (or vertices) and edges (connections between nodes).
- Edges are generally simple and typically represent a binary relationship between two nodes.
- For instance, an edge in a social network graph might indicate a “friend” relationship between two people (nodes).
Hypergraph
- A hypergraph extends the concept of an edge by allowing it to connect any number of nodes, not just two.
- Each connection, called a hyperedge, can link multiple nodes.
- This feature allows hypergraphs to model more complex relationships involving multiple entities simultaneously. For example, a hyperedge in a hypergraph could represent a project team, connecting all team members in a single relation.
- Despite its flexibility, a hypergraph doesn’t capture hierarchical or nested structures; it only generalizes the number of connections in an edge.
Metagraph
- A metagraph allows the edges to be graphs themselves. This means each edge can contain its own nodes and edges, creating nested, hierarchical structures.
- In a meta graph, an edge could represent a relationship defined by a graph. For instance, a meta graph could represent a network of organizations where each organization’s structure (departments and connections) is represented by its own internal graph and treated as an edge in the larger meta graph.
- This recursive structure allows metagraphs to model complex data with multiple layers of abstraction. They can capture multi-node relationships (as in hypergraphs) and detailed, structured information about each relationship.
Named Graphs and Graph of Graphs
As you can notice, the structure of a metagraph is quite complex and could be hard to model in relational and classical RDF setups. It could create the challenge of a lack of tools and software solutions for your problem.
If you need to model nested graphs, you could use the much simpler model of named graphs, which could take you quite far.
The concept of the named graph came from the RDF community, which needed to group some sets of triples. In this way, you form subgraphs inside an existing graph. You could refer to a subgraph as a regular node. This setup simplifies complex graphs, introduces hierarchies, and even adds features and properties of hypergraphs while keeping a directed nature.
It looks complex, but it is not so hard to model it with a slight modification of a directed graph.
So a node could host graphs inside it. Let's reflect this fact with a location for each node. If a node belongs to the main graph, we could set the location to null or introduce a dedicated main node; it is up to you. Nodes could have edges to nodes in different subgraphs. This structure allows any kind of graph nesting. Edges stay location-free.
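As a rough sketch of this layout, here is one possible SQLite schema written inline in Python; the table and column names are my own invention for illustration, not a standard. Nodes carry a nullable `location` pointing at the node whose subgraph contains them, while edges reference any two nodes regardless of location.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (
    id       INTEGER PRIMARY KEY,
    label    TEXT,
    location INTEGER REFERENCES node(id)  -- NULL = main graph, otherwise the
                                          -- node whose subgraph contains us
);
CREATE TABLE edge (
    id     INTEGER PRIMARY KEY,
    source INTEGER NOT NULL REFERENCES node(id),
    target INTEGER NOT NULL REFERENCES node(id)
    -- edges stay location-free, so they may cross subgraph boundaries
);
""")

# A node hosting a subgraph, one node inside that subgraph,
# and one node in the main graph.
conn.execute("INSERT INTO node VALUES (1, 'host', NULL)")
conn.execute("INSERT INTO node VALUES (2, 'inner', 1)")
conn.execute("INSERT INTO node VALUES (3, 'outer', NULL)")
# An edge from inside the subgraph out to the main graph.
conn.execute("INSERT INTO edge VALUES (1, 2, 3)")
```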
Meta Graphs in Relational Model
Let’s try to make several attempts to model different meta-graphs with some constraints.
Directed Metagraph where edges are not used as nodes and could not contain subgraphs
In this case, the edge always points to two sets of nodes. This introduces the overhead of creating a node set even for a single node. In this model, empty node sets are also possible, which could require application-level constraints to prevent such cases.
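A relational sketch of this first attempt, again with names invented for illustration: each edge references an in-set and an out-set, and a membership table maps node sets to their nodes, even when a set holds just a single node.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE node_set (id INTEGER PRIMARY KEY);
CREATE TABLE node_set_member (
    set_id  INTEGER NOT NULL REFERENCES node_set(id),
    node_id INTEGER NOT NULL REFERENCES node(id),
    PRIMARY KEY (set_id, node_id)
);
CREATE TABLE edge (
    id      INTEGER PRIMARY KEY,
    in_set  INTEGER NOT NULL REFERENCES node_set(id),
    out_set INTEGER NOT NULL REFERENCES node_set(id)
);
""")

# The overhead in action: a single-node endpoint still needs its own set,
# and nothing in the schema stops an empty set (an application-level rule).
conn.execute("INSERT INTO node VALUES (1, 'a'), (2, 'b')")
conn.execute("INSERT INTO node_set VALUES (10), (20)")
conn.execute("INSERT INTO node_set_member VALUES (10, 1), (20, 2)")
conn.execute("INSERT INTO edge VALUES (1, 10, 20)")
```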
Directed Metagraph where edges are not used as nodes and could contain subgraphs
Adding a node set that models a subgraph located in an edge is easy, but it could be kept separate from the in-vertex or out-vertex sets.
I also do not see a direct need to attach subgraphs to a node, as we could just use a node set interchangeably, but it could still be a valid case.
Directed Metagraph where edges are used as nodes and could contain subgraphs
As you can notice, we operate all the time with node sets. We could simply extend the node set to an element set that includes both node and edge IDs, but in this case we need to use UUIDs or some other strategy to differentiate node IDs from edge IDs. Here we also face the problem of ephemeral edges or ephemeral nodes when we want to change the role and purpose of a node into an edge or vice versa.
A full-scale metagraph model is way too complex for a relational database.
So we need a better model.
Now we have more flexibility but lose structural constraints. We cannot express that an element should have one in-vertex, one out-vertex, or both. This type of constraint has been moved to the application level. Also, the crucial question is about query and retrieval needs.
Any metagraph model should be focused on the domain and its needs rather than used in raw form. We did this for purely theoretical purposes.
-
@ e6817453:b0ac3c39
2024-12-07 15:03:06
Hey folks! Today, let’s dive into the intriguing world of neurosymbolic approaches, retrieval-augmented generation (RAG), and personal knowledge graphs (PKGs). Together, these concepts hold much potential for bringing true reasoning capabilities to large language models (LLMs). So, let’s break down how symbolic logic, knowledge graphs, and modern AI can come together to empower future AI systems to reason like humans.
The Neurosymbolic Approach: What It Means?
Neurosymbolic AI combines two historically separate streams of artificial intelligence: symbolic reasoning and neural networks. Symbolic AI uses formal logic to process knowledge, similar to how we might solve problems or deduce information. On the other hand, neural networks, like those underlying GPT-4, focus on learning patterns from vast amounts of data — they are probabilistic statistical models that excel in generating human-like language and recognizing patterns but often lack deep, explicit reasoning.
While GPT-4 can produce impressive text, it’s still not very effective at reasoning in a truly logical way. Its foundation, transformers, allows it to excel in pattern recognition, but the models struggle with reasoning because, at their core, they rely on statistical probabilities rather than true symbolic logic. This is where neurosymbolic methods and knowledge graphs come in.
Symbolic Calculations and the Early Vision of AI
If we take a step back to the 1950s, the vision for artificial intelligence was very different. Early AI research was all about symbolic reasoning — where computers could perform logical calculations to derive new knowledge from a given set of rules and facts. Languages like Lisp emerged to support this vision, enabling programs to represent data and code as interchangeable symbols. Lisp was designed to be homoiconic, meaning it treated code as manipulatable data, making it capable of self-modification — a huge leap towards AI systems that could, in theory, understand and modify their own operations.
Lisp: The Early AI Language
Lisp, short for “LISt Processor,” was developed by John McCarthy in 1958, and it became the cornerstone of early AI research. Lisp’s power lay in its flexibility and its use of symbolic expressions, which allowed developers to create programs that could manipulate symbols in ways that were very close to human reasoning. One of the most groundbreaking features of Lisp was its ability to treat code as data, known as homoiconicity, which meant that Lisp programs could introspect and transform themselves dynamically. This ability to adapt and modify its own structure gave Lisp an edge in tasks that required a form of self-awareness, which was key in the early days of AI when researchers were exploring what it meant for machines to “think.”
Lisp was not just a programming language—it represented the vision for artificial intelligence, where machines could evolve their understanding and rewrite their own programming. This idea formed the conceptual basis for many of the self-modifying and adaptive algorithms that are still explored today in AI research. Despite its decline in mainstream programming, Lisp’s influence can still be seen in the concepts used in modern machine learning and symbolic AI approaches.
Prolog: Formal Logic and Deductive Reasoning
In the 1970s, Prolog was developed—a language focused on formal logic and deductive reasoning. Unlike Lisp, based on lambda calculus, Prolog operates on formal logic rules, allowing it to perform deductive reasoning and solve logical puzzles. This made Prolog an ideal candidate for expert systems that needed to follow a sequence of logical steps, such as medical diagnostics or strategic planning.
Prolog, like Lisp, allowed symbols to be represented, understood, and used in calculations, creating another homoiconic language that allows reasoning. Prolog’s strength lies in its rule-based structure, which is well-suited for tasks that require logical inference and backtracking. These features made it a powerful tool for expert systems and AI research in the 1970s and 1980s.
The language is declarative in nature, meaning that you define the problem, and Prolog figures out how to solve it. By using formal logic and setting constraints, Prolog systems can derive conclusions from known facts, making it highly effective in fields requiring explicit logical frameworks, such as legal reasoning, diagnostics, and natural language understanding. These symbolic approaches were later overshadowed during the AI winter — but the ideas never really disappeared. They just evolved.
Solvers and Their Role in Complementing LLMs
One of the most powerful features of Prolog and similar logic-based systems is their use of solvers. Solvers are mechanisms that can take a set of rules and constraints and automatically find solutions that satisfy these conditions. This capability is incredibly useful when combined with LLMs, which excel at generating human-like language but need help with logical consistency and structured reasoning.
For instance, imagine a scenario where an LLM needs to answer a question involving multiple logical steps or a complex query that requires deducing facts from various pieces of information. In this case, a solver can derive valid conclusions based on a given set of logical rules, providing structured answers that the LLM can then articulate in natural language. This allows the LLM to retrieve information and ensure the logical integrity of its responses, leading to much more robust answers.
Solvers are also ideal for handling constraint satisfaction problems — situations where multiple conditions must be met simultaneously. In practical applications, this could include scheduling tasks, generating optimal recommendations, or even diagnosing issues where a set of symptoms must match possible diagnoses. Prolog’s solver capabilities and LLM’s natural language processing power can make these systems highly effective at providing intelligent, rule-compliant responses that traditional LLMs would struggle to produce alone.
By integrating neurosymbolic methods that utilize solvers, we can provide LLMs with a form of deductive reasoning that is missing from pure deep-learning approaches. This combination has the potential to significantly improve the quality of outputs for use-cases that require explicit, structured problem-solving, from legal queries to scientific research and beyond. Solvers give LLMs the backbone they need to not just generate answers but to do so in a way that respects logical rigor and complex constraints.
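As a toy illustration of the kind of solver described above (a naive forward-chainer, not any particular Prolog engine), a few lines of Python can keep firing if-then rules until no new facts appear. The medical rules here are invented purely for the example.

```python
# Naive forward chaining over propositional if-then rules: starting from
# known facts, fire rules until nothing new can be derived.

rules = [
    ({"fever", "cough"}, "flu_suspected"),        # all premises -> conclusion
    ({"flu_suspected"}, "recommend_rest"),
    ({"flu_suspected", "high_risk"}, "see_doctor"),
]

def forward_chain(facts: set) -> set:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"fever", "cough", "high_risk"}))
# derives flu_suspected, recommend_rest, and see_doctor
```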
Graph of Rules for Enhanced Reasoning
Another powerful concept that complements LLMs is using a graph of rules. A graph of rules is essentially a structured collection of logical rules that interconnect in a network-like structure, defining how various entities and their relationships interact. This structured network allows for complex reasoning and information retrieval, as well as the ability to model intricate relationships between different pieces of knowledge.
In a graph of rules, each node represents a rule, and the edges define relationships between those rules — such as dependencies or causal links. This structure can be used to enhance LLM capabilities by providing them with a formal set of rules and relationships to follow, which improves logical consistency and reasoning depth. When an LLM encounters a problem or a question that requires multiple logical steps, it can traverse this graph of rules to generate an answer that is not only linguistically fluent but also logically robust.
For example, in a healthcare application, a graph of rules might include nodes for medical symptoms, possible diagnoses, and recommended treatments. When an LLM receives a query regarding a patient’s symptoms, it can use the graph to traverse from symptoms to potential diagnoses and then to treatment options, ensuring that the response is coherent and medically sound. The graph of rules guides reasoning, enabling LLMs to handle complex, multi-step questions that involve chains of reasoning, rather than merely generating surface-level responses.
Graphs of rules also enable modular reasoning, where different sets of rules can be activated based on the context or the type of question being asked. This modularity is crucial for creating adaptive AI systems that can apply specific sets of logical frameworks to distinct problem domains, thereby greatly enhancing their versatility. The combination of neural fluency with rule-based structure gives LLMs the ability to conduct more advanced reasoning, ultimately making them more reliable and effective in domains where accuracy and logical consistency are critical.
By implementing a graph of rules, LLMs are empowered to perform deductive reasoning alongside their generative capabilities, creating responses that are not only compelling but also logically aligned with the structured knowledge available in the system. This further enhances their potential applications in fields such as law, engineering, finance, and scientific research — domains where logical consistency is as important as linguistic coherence.
Enhancing LLMs with Symbolic Reasoning
Now, with LLMs like GPT-4 being mainstream, there is an emerging need to add real reasoning capabilities to them. This is where neurosymbolic approaches shine. Instead of pitting neural networks against symbolic reasoning, these methods combine the best of both worlds. The neural aspect provides language fluency and recognition of complex patterns, while the symbolic side offers real reasoning power through formal logic and rule-based frameworks.
Personal Knowledge Graphs (PKGs) come into play here as well. Knowledge graphs are data structures that encode entities and their relationships — they’re essentially semantic networks that allow for structured information retrieval. When integrated with neurosymbolic approaches, LLMs can use these graphs to answer questions in a far more contextual and precise way. By retrieving relevant information from a knowledge graph, they can ground their responses in well-defined relationships, thus improving both the relevance and the logical consistency of their answers.
Imagine combining an LLM with a graph of rules that allow it to reason through the relationships encoded in a personal knowledge graph. This could involve using deductive databases to form a sophisticated way to represent and reason with symbolic data — essentially constructing a powerful hybrid system that uses LLM capabilities for language fluency and rule-based logic for structured problem-solving.
My Research on Deductive Databases and Knowledge Graphs
I recently did some research on modeling knowledge graphs using deductive databases, such as DataLog — which can be thought of as a limited, data-oriented version of Prolog. What I’ve found is that it’s possible to use formal logic to model knowledge graphs, ontologies, and complex relationships elegantly as rules in a deductive system. Unlike classical RDF or traditional ontology-based models, which sometimes struggle with complex or evolving relationships, a deductive approach is more flexible and can easily support dynamic rules and reasoning.
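To give a flavor of what that looks like, here is the classic Datalog ancestor program emulated in Python with naive bottom-up evaluation; the family facts are invented, and a real deductive database would evaluate such rules far more efficiently.

```python
# Datalog-style rules over a fact table:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dan")}

def ancestors(parent_facts):
    anc = set(parent_facts)                    # base rule
    while True:                                # naive bottom-up iteration
        new = {(x, z)
               for (x, y) in parent_facts
               for (y2, z) in anc if y == y2} - anc
        if not new:
            return anc
        anc |= new

print(sorted(ancestors(parent)))
# adds ('alice', 'carol'), ('alice', 'dan'), ('bob', 'dan') to the parent facts
```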
Prolog and similar logic-driven frameworks can complement LLMs by handling the parts of reasoning where explicit rule-following is required. LLMs can benefit from these rule-based systems for tasks like entity recognition, logical inferences, and constructing or traversing knowledge graphs. We can even create a graph of rules that governs how relationships are formed or how logical deductions can be performed.
The future is really about creating an AI that is capable of both deep contextual understanding (using the powerful generative capacity of LLMs) and true reasoning (through symbolic systems and knowledge graphs). With the neurosymbolic approach, these AIs could be equipped not just to generate information but to explain their reasoning, form logical conclusions, and even improve their own understanding over time — getting us a step closer to true artificial general intelligence.
Why It Matters for LLM Employment
Using neurosymbolic RAG (retrieval-augmented generation) in conjunction with personal knowledge graphs could revolutionize how LLMs work in real-world applications. Imagine an LLM that understands not just language but also the relationships between different concepts — one that can navigate, reason, and explain complex knowledge domains by actively engaging with a personalized set of facts and rules.
This could lead to practical applications in areas like healthcare, finance, legal reasoning, or even personal productivity — where LLMs can help users solve complex problems logically, providing relevant information and well-justified reasoning paths. The combination of neural fluency with symbolic accuracy and deductive power is precisely the bridge we need to move beyond purely predictive AI to truly intelligent systems.
Let's explore these ideas further if you’re as fascinated by this as I am. Feel free to reach out, follow my YouTube channel, or check out some articles I’ll link below. And if you’re working on anything in this field, I’d love to collaborate!
Until next time, folks. Stay curious, and keep pushing the boundaries of AI!
-
@ c631e267:c2b78d3e
Truth wounds more deeply than any insult. \ Marquis de Sade
Never say "terrorist B.", "halfwit H.", "corrupt scumbag S.", or "free-speech hater F.", and resist the urge to post memes, because such things could be construed as insult or defamation and carry legal consequences. A certain Ms. M.-A. S.-Z. is likewise not to be trifled with in this regard; she is among the most prolific filers of criminal complaints.
"Insulting politicians" was introduced as a criminal offense in Germany in 2021 as part of the fight against "right-wing extremism and hate crime", back then still under the Merkel government. The law makes no distinction between bad incitement and good incitement; in practice, however, that distinction is routinely drawn, as the title almost suggests.
So as a politician today you may call the Tesla a "Nazi car" and explicitly tie this to company founder Elon Musk and his "right-wing extremist positions", which you do not even have to substantiate. [1] You may reap protests, but mainly over the "well-paid, permanent jobs" in Brandenburg. Berlin senator Cansel Kiziltepe has apparently since deleted her tweet anyway.
That freedom of opinion and of the press is no longer in particularly good shape in the Federal Republic is now feared abroad as well. The case of the journalist David Bendels, who was recently sentenced to seven months of probation over a Faeser meme, caused outrage in various media. Die Welt hid its criticism, titled "Ein Urteil wie aus einer Diktatur" ("A verdict straight out of a dictatorship"), behind a paywall.
Unkind, and nowadays perhaps punishable, comments would also occur to me on several other topics and actors. One candidate would be the German federal health minister (yes, it really is still him). While things are moving on that front in the USA, where for example Robert F. Kennedy Jr. wants the health agency (CDC) to stop recommending Covid vaccinations for children, Karl Lauterbach mainly wants to keep the Corona edifice of lies from collapsing.
"I never believed the vaccinations were free of side effects," Lauterbach recently told ZDF journalist Sarah Tacke. That stands in stark contradiction to his earlier, widely circulated claim that the gene injections had no side effects. With this he exposes himself as a liar. The label is entirely justified; this man should bear no political responsibility whatsoever, and his conduct calls for legal review. Unfortunately, the judiciary is otherwise occupied, and its own record is not spotless either.
On top of that, the minister fought for a general vaccine mandate. In doing so he invoked the closing of a "vaccination gap", as the World Health Organization, which is in financial trouble "because of Trump", does to this day. The WHO is currently promoting its "European Immunization Week", in which, interestingly, Covid hardly gets a mention anymore.
Some would probably call Nir Shaviv a "climate denier", which is, after all, not punishable. The astrophysicist rejects the claim of a climate crisis. According to his research, at least half of global warming is attributable not to human emissions but to changes in the sun's behavior.
That may also suit the "climate hysterics" of the British government, which has just announced experiments to dim the sun. Producers of artificial meat and operators of insect farms, by contrast, would presumably prefer the story of fatal CO2. It would suit them better if the responsible citizen of Earth had to change his behavior thoroughly.
In our completely inverted world, in which practically any statement outside the sanctioned narratives can be potentially punishable, it almost takes courage to address things openly. In the "best Germany of all time", only 40 percent of people last year still believed they could express their opinion freely. That is a damning verdict, and improvement is nowhere in sight. All the more important to push back against it.
[Cover image: Pixabay]
--- Sources: ---
[1] For orientation, at least a few pointers on the Nazi past of German car manufacturers:
- Volkswagen
- Porsche
- Daimler-Benz
- BMW
- Audi
- Opel
- Today: "Auto-Werke für die Rüstung? Rheinmetall prüft Übernahmen" ("Car plants for the arms industry? Rheinmetall weighs takeovers")
This article was written with the Pareto client and first appeared on Transition News.
-
@ e6817453:b0ac3c39
2024-12-07 14:54:46
Introduction: Personal Knowledge Graphs and Linked Data
We will explore the world of personal knowledge graphs and discuss how they can be used to model complex information structures. Personal knowledge graphs aren’t just abstract collections of nodes and edges—they encode meaningful relationships, contextualizing data in ways that enrich our understanding of it. While the core structure might be a directed graph, we layer semantic meaning on top, enabling nuanced connections between data points.
The origin of knowledge graphs is deeply tied to concepts from linked data and the semantic web, ideas that emerged to better link scattered pieces of information across the web. This approach created an infrastructure where data islands could connect — facilitating everything from more insightful AI to improved personal data management.
In this article, we will explore how these ideas have evolved into tools for modeling AI’s semantic memory and look at how knowledge graphs can serve as a flexible foundation for encoding rich data contexts. We’ll specifically discuss three major paradigms: RDF (Resource Description Framework), property graphs, and a third way of modeling entities as graphs of graphs. Let’s get started.
Intro to RDF
The Resource Description Framework (RDF) has been one of the fundamental standards for linked data and knowledge graphs. RDF allows data to be modeled as triples: subject, predicate, and object. Essentially, you can think of it as a structured way to describe relationships: “X has a Y called Z.” For instance, “Berlin has a population of 3.5 million.” This modeling approach is quite flexible because RDF uses unique identifiers — usually URIs — to point to data entities, making linking straightforward and coherent.
RDFS, or RDF Schema, extends RDF to provide a basic vocabulary to structure the data even more. This lets us describe not only individual nodes but also relationships among types of data entities, like defining a class hierarchy or setting properties. For example, you could say that “Berlin” is an instance of a “City” and that cities are types of “Geographical Entities.” This kind of organization helps establish semantic meaning within the graph.
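To ground the triple idea, here is a bare-bones sketch in plain Python, with no RDF library and invented data, showing triples plus wildcard pattern matching:

```python
# RDF boils down to (subject, predicate, object) triples. A tiny in-memory
# "store" is a set of 3-tuples plus pattern matching with wildcards.

triples = {
    ("Berlin", "isA", "City"),
    ("City", "subClassOf", "GeographicalEntity"),  # RDFS-style schema triple
    ("Berlin", "population", "3500000"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(match(s="Berlin"))       # everything stated about Berlin
print(match(p="subClassOf"))   # the class hierarchy
```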
RDF and Advanced Topics
Lists and Sets in RDF
RDF also provides tools to model more complex data structures such as lists and sets, enabling the grouping of nodes. This extension makes it easier to model more natural, human-like knowledge, for example, describing attributes of an entity that may have multiple values. By adding RDF Schema and OWL (Web Ontology Language), you gain even more expressive power — being able to define logical rules or even derive new relationships from existing data.
Graph of Graphs
A significant feature of RDF is the ability to form complex nested structures, often referred to as graphs of graphs. This allows you to create “named graphs,” essentially subgraphs that can be independently referenced. For example, you could create a named graph for a particular dataset describing Berlin and another for a different geographical area. Then, you could connect them, allowing for more modular and reusable knowledge modeling.
Property Graphs
While RDF provides a robust framework, it’s not always the easiest to work with due to its heavy reliance on linking everything explicitly. This is where property graphs come into play. Property graphs are less focused on linking everything through triples and allow more expressive properties directly within nodes and edges.
For example, instead of using triples to represent each detail, a property graph might let you store all properties about an entity (e.g., “Berlin”) directly in a single node. This makes property graphs more intuitive for many developers and engineers because they more closely resemble object-oriented structures: you have entities (nodes) that possess attributes (properties) and are connected to other entities through relationships (edges).
The significant benefit here is a condensed representation, which speeds up traversal and queries in some scenarios. However, this also introduces a trade-off: while property graphs are more straightforward to query and maintain, they lack some complex relationship modeling features RDF offers, particularly when connecting properties to each other.
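The contrast with triples is easy to see in code. Below is a bare-bones property-graph sketch in Python, using illustrative structures rather than any particular database's API: attributes live inline on nodes and edges.

```python
# Property-graph style: attributes sit directly on nodes and edges instead
# of being decomposed into separate triples.

nodes = {
    "berlin": {"label": "City", "name": "Berlin", "population": 3_500_000},
    "germany": {"label": "Country", "name": "Germany"},
}

edges = [
    # (source, relationship, target, edge properties)
    ("berlin", "CAPITAL_OF", "germany", {"since": 1990}),
]

# Traversal is a scan over edges rather than a join over triples.
for src, rel, dst, props in edges:
    if rel == "CAPITAL_OF":
        print(nodes[src]["name"], "is the capital of",
              nodes[dst]["name"], props)
```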
Graph of Graphs and Subgraphs for Entity Modeling
A third approach — which takes elements from RDF and property graphs — involves modeling entities using subgraphs or nested graphs. In this model, each entity can be represented as a graph. This allows for a detailed and flexible description of attributes without exploding every detail into individual triples or lump them all together into properties.
For instance, consider a person entity with a complex employment history. Instead of representing every employment detail in one node (as in a property graph), or as several linked nodes (as in RDF), you can treat the employment history as a subgraph. This subgraph could then contain nodes for different jobs, each linked with specific properties and connections. This approach keeps the complexity where it belongs and provides better flexibility when new attributes or entities need to be added.
Hypergraphs and Metagraphs
When discussing more advanced forms of graphs, we encounter hypergraphs and metagraphs. These take the idea of relationships to a new level. A hypergraph allows an edge to connect more than two nodes, which is extremely useful when modeling scenarios where relationships aren’t just pairwise. For example, a “Project” could connect multiple “People,” “Resources,” and “Outcomes,” all in a single edge. This way, hypergraphs help in reducing the complexity of modeling high-order relationships.
Metagraphs, on the other hand, enable nodes and edges to themselves be represented as graphs. This is an extremely powerful feature when we consider the needs of artificial intelligence, as it allows for the modeling of relationships between relationships, an essential aspect for any system that needs to capture not just facts, but their interdependencies and contexts.
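One way to see that recursive structure is a small, purely illustrative sketch in Python: an edge may connect any number of endpoints (the hypergraph part) and may itself carry a whole graph as its detail (the metagraph part).

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Graph:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)

@dataclass
class Edge:
    endpoints: tuple                  # hyperedge: any number of endpoints
    detail: Optional[Graph] = None    # metagraph: the edge can be a graph

# Two organizations whose connecting edge carries the internal structure
# of the relationship as its own graph.
partnership = Graph(nodes={"legal", "engineering"},
                    edges=[Edge(("legal", "engineering"))])
world = Graph(nodes={"OrgA", "OrgB"},
              edges=[Edge(("OrgA", "OrgB"), detail=partnership)])

print(world.edges[0].detail.nodes)  # {'legal', 'engineering'}
```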
Balancing Structure and Properties
One of the recurring challenges when modeling knowledge is finding the balance between structure and properties. With RDF, you get high flexibility and standardization, but complexity can quickly escalate as you decompose everything into triples. Property graphs simplify the representation by using attributes but lose out on the depth of connection modeling. Meanwhile, the graph-of-graphs approach and hypergraphs offer advanced modeling capabilities at the cost of increased computational complexity.
So, how do you decide which model to use? It comes down to your use case. RDF and nested graphs are strong contenders if you need deep linkage and are working with highly variable data. For more straightforward, engineer-friendly modeling, property graphs shine. And when dealing with very complex multi-way relationships or meta-level knowledge, hypergraphs and metagraphs provide the necessary tools.
The key takeaway is that no single approach is perfect. Instead, it’s all about the modeling goals: how do you want to query the graph, what relationships are meaningful, and how much complexity are you willing to manage?
Conclusion
Modeling AI semantic memory using knowledge graphs is a challenging but rewarding process. The different approaches — RDF, property graphs, and advanced graph modeling techniques like nested graphs and hypergraphs — each offer unique strengths and weaknesses. Whether you are building a personal knowledge graph or scaling up to AI that integrates multiple streams of linked data, it’s essential to understand the trade-offs each approach brings.
In the end, the choice of representation comes down to the nature of your data and your specific needs for querying and maintaining semantic relationships. The world of knowledge graphs is vast, with many tools and frameworks to explore. Stay connected and keep experimenting to find the balance that works for your projects.
-
@ 30ceb64e:7f08bdf5
2025-04-26 20:33:30
Status: Draft
Author: TheWildHustle
Abstract
This NIP defines a framework for storing and sharing health and fitness profile data on Nostr. It establishes a set of standardized event kinds for individual health metrics, allowing applications to selectively access specific health information while preserving user control and privacy.
In this framework exists:
- NIP-101h.1 Weight using kind 1351
- NIP-101h.2 Height using kind 1352
- NIP-101h.3 Age using kind 1353
- NIP-101h.4 Gender using kind 1354
- NIP-101h.5 Fitness Level using kind 1355
Motivation
I want to build and support an ecosystem of health and fitness related nostr clients that have the ability to share and utilize a bunch of specific interoperable health metrics.
- Selective access - Applications can access only the data they need
- User control - Users can choose which metrics to share
- Interoperability - Different health applications can share data
- Privacy - Sensitive health information can be managed independently
Specification
Kind Number Range
Health profile metrics use the kind number range 1351-1399:
| Kind      | Metric                             |
| --------- | ---------------------------------- |
| 1351      | Weight                             |
| 1352      | Height                             |
| 1353      | Age                                |
| 1354      | Gender                             |
| 1355      | Fitness Level                      |
| 1356-1399 | Reserved for future health metrics |
Common Structure
All health metric events SHOULD follow these guidelines:
- The content field contains the primary value of the metric
- Required tags:
  - `['t', 'health']` - For categorizing as health data
  - `['t', metric-specific-tag]` - For identifying the specific metric
  - `['unit', unit-of-measurement]` - When applicable
- Optional tags:
  - `['converted_value', value, unit]` - For providing alternative unit measurements
  - `['timestamp', ISO8601-date]` - When the metric was measured
  - `['source', application-name]` - The source of the measurement
Unit Handling
Health metrics often have multiple ways to be measured. To ensure interoperability:
- Where multiple units are possible, one standard unit SHOULD be chosen as canonical
- When using non-standard units, a `converted_value` tag SHOULD be included with the canonical unit (as sketched below)
- Both the original and converted values should be provided for maximum compatibility
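As an illustration of these rules, here is how a client might assemble the tags for a weight measured in pounds. The kind number and tag names come from this specification; the helper function and its name are only a sketch.

```python
# Sketch: build NIP-101h.1 weight-event tags for a value in pounds,
# attaching the canonical-unit (kg) conversion alongside the original.

LB_TO_KG = 0.45359237

def weight_tags(value: float, unit: str) -> list:
    tags = [["unit", unit], ["t", "health"], ["t", "weight"]]
    if unit == "lb":
        # kg is the canonical unit for weight, so add converted_value.
        tags.append(["converted_value", f"{value * LB_TO_KG:.2f}", "kg"])
    return tags

event = {"kind": 1351, "content": "154", "tags": weight_tags(154, "lb")}
print(event)  # tags include ['converted_value', '69.85', 'kg']
```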
Client Implementation Guidelines
Clients implementing this NIP SHOULD:
- Allow users to explicitly choose which metrics to publish
- Support reading health metrics from other users when appropriate permissions exist
- Support updating metrics with new values over time
- Preserve tags they don't understand for future compatibility
- Support at least the canonical unit for each metric
Extensions
New health metrics can be proposed as extensions to this NIP using the format:
- NIP-101h.X where X is the metric number
Each extension MUST specify:
- A unique kind number in the range 1351-1399
- The content format and meaning
- Required and optional tags
- Examples of valid events
Privacy Considerations
Health data is sensitive personal information. Clients implementing this NIP SHOULD:
- Make it clear to users when health data is being published
- Consider incorporating NIP-44 encryption for sensitive metrics
- Allow users to selectively share metrics with specific individuals
- Provide easy ways to delete previously published health data
NIP-101h.1: Weight
Description
This NIP defines the format for storing and sharing weight data on Nostr.
Event Kind: 1351
Content
The content field MUST contain the numeric weight value as a string.
Required Tags
- ['unit', 'kg' or 'lb'] - Unit of measurement
- ['t', 'health'] - Categorization tag
- ['t', 'weight'] - Specific metric tag
Optional Tags
- ['converted_value', value, unit] - Provides the weight in alternative units for interoperability
- ['timestamp', ISO8601 date] - When the weight was measured
Examples
```json
{
  "kind": 1351,
  "content": "70",
  "tags": [
    ["unit", "kg"],
    ["t", "health"],
    ["t", "weight"]
  ]
}
```

```json
{
  "kind": 1351,
  "content": "154",
  "tags": [
    ["unit", "lb"],
    ["t", "health"],
    ["t", "weight"],
    ["converted_value", "69.85", "kg"]
  ]
}
```
NIP-101h.2: Height
Status: Draft
Description
This NIP defines the format for storing and sharing height data on Nostr.
Event Kind: 1352
Content
The content field can use two formats:
- For metric height: A string containing the numeric height value in centimeters (cm)
- For imperial height: A JSON string with feet and inches properties
Required Tags
- `['t', 'health']` - Categorization tag
- `['t', 'height']` - Specific metric tag
- `['unit', 'cm' or 'imperial']` - Unit of measurement
Optional Tags
- `['converted_value', value, 'cm']` - Provides height in centimeters for interoperability when imperial is used
- `['timestamp', ISO8601-date]` - When the height was measured
Examples
```json
{
  "kind": 1352,
  "content": "175",
  "tags": [
    ["t", "health"],
    ["t", "height"],
    ["unit", "cm"]
  ]
}
```

```json
{
  "kind": 1352,
  "content": "{\"feet\": 5, \"inches\": 9}",
  "tags": [
    ["t", "health"],
    ["t", "height"],
    ["unit", "imperial"],
    ["converted_value", "175", "cm"]
  ]
}
```
Implementation Notes
- Centimeters (cm) is the canonical unit for height interoperability
- When using imperial units, a conversion to centimeters SHOULD be provided
- Height values SHOULD be positive integers
- For maximum compatibility, clients SHOULD support both formats
NIP-101h.3: Age
Status: Draft
Description
This NIP defines the format for storing and sharing age data on Nostr.
Event Kind: 1353
Content
The content field MUST contain the numeric age value as a string.
Required Tags
- `['unit', 'years']` - Unit of measurement
- `['t', 'health']` - Categorization tag
- `['t', 'age']` - Specific metric tag
Optional Tags
- `['timestamp', ISO8601-date]` - When the age was recorded
- `['dob', ISO8601-date]` - Date of birth (if the user chooses to share it)
Examples
```json
{
  "kind": 1353,
  "content": "30",
  "tags": [
    ["unit", "years"],
    ["t", "health"],
    ["t", "age"]
  ]
}
```

```json
{
  "kind": 1353,
  "content": "30",
  "tags": [
    ["unit", "years"],
    ["t", "health"],
    ["t", "age"],
    ["dob", "1994-06-15"]
  ]
}
```
Implementation Notes
- Age SHOULD be represented as a positive integer
- For privacy reasons, date of birth (dob) is optional
- Clients SHOULD consider updating age automatically if date of birth is known
- Age can be a sensitive metric and clients may want to consider encrypting this data
NIP-101h.4: Gender
Status: Draft
Description
This NIP defines the format for storing and sharing gender data on Nostr.
Event Kind: 1354
Content
The content field contains a string representing the user's gender.
Required Tags
- `['t', 'health']` - Categorization tag
- `['t', 'gender']` - Specific metric tag
Optional Tags
- `['timestamp', ISO8601-date]` - When the gender was recorded
- `['preferred_pronouns', string]` - User's preferred pronouns
Common Values
While any string value is permitted, the following common values are recommended for interoperability:
- male
- female
- non-binary
- other
- prefer-not-to-say
Examples
```json
{
  "kind": 1354,
  "content": "male",
  "tags": [
    ["t", "health"],
    ["t", "gender"]
  ]
}
```

```json
{
  "kind": 1354,
  "content": "non-binary",
  "tags": [
    ["t", "health"],
    ["t", "gender"],
    ["preferred_pronouns", "they/them"]
  ]
}
```
Implementation Notes
- Clients SHOULD allow free-form input for gender
- For maximum compatibility, clients SHOULD support the common values
- Gender is a sensitive personal attribute and clients SHOULD consider appropriate privacy controls
- Applications focusing on health metrics should be respectful of gender diversity
NIP-101h.5: Fitness Level
Status: Draft
Description
This NIP defines the format for storing and sharing fitness level data on Nostr.
Event Kind: 1355
Content
The content field contains a string representing the user's fitness level.
Required Tags
- ['t', 'health'] - Categorization tag
- ['t', 'fitness'] - Fitness category tag
- ['t', 'level'] - Specific metric tag
Optional Tags
- ['timestamp', ISO8601-date] - When the fitness level was recorded
- ['activity', activity-type] - Specific activity the fitness level relates to
- ['metrics', JSON-string] - Quantifiable fitness metrics used to determine level
Common Values
While any string value is permitted, the following common values are recommended for interoperability:
- beginner
- intermediate
- advanced
- elite
- professional
Examples
Example 1: Basic fitness level
```json
{ "kind": 1355, "content": "intermediate", "tags": [ ["t", "health"], ["t", "fitness"], ["t", "level"] ] }
```
Example 2: Activity-specific fitness level with metrics
```json
{ "kind": 1355, "content": "advanced", "tags": [ ["t", "health"], ["t", "fitness"], ["t", "level"], ["activity", "running"], ["metrics", "{\"vo2max\": 52, \"weekly_km\": 60}"] ] }
```
Implementation Notes
- Fitness level is subjective and may vary by activity
- The activity tag can be used to specify fitness level for different activities
- The metrics tag can provide objective measurements to support the fitness level
- Clients can extend this format to include activity-specific fitness assessments
- For general fitness apps, the simple beginner/intermediate/advanced scale is recommended
-
@ df478568:2a951e67
2025-04-26 19:23:46Welcome to Zap This Blog
Exploring Liberty With Freedom Tech
I can string some spaghetti HTML code together here and there, but vibe coding gave me the confidence to look into the code injection section of the Ghost blog. As sudden as a new block, the Lex Fridman interview with Robert Rodriguez hit me. I had an epiphany when Rodriguez asked Lex, "Do you consider yourself a creative person?" I answered for myself right away, emphatically yes. I just felt like I never knew what to do with this creative energy. Fridman hesitated, and I was like... wow. He has extreme creativity like Jocko Willink has extreme discipline. If that guy has doubts, what the hell is stopping me from trying other stuff?
Rodriguez also claimed Four Rooms was a financial flop. I thought that movie was genius. I had no idea it failed financially. Nevertheless, it was not profitable. I learned about him in a film class I took in college. He was legendary for making a movie for $7,000. My professor also said it was made for the Mexican VHS market, but I did not know he never sold it to that market. Robert Rodriguez tells the story 100X better, as you might expect a director of his caliber would. His advice hits like Tony Robbins for film geeks. Here are a few gem quotes from the episode.
-
"Sift through the ashes of your failures"
-
"Turn chicken shit into chicken salad."
-
"Follow your instinct. If it doesn't work, just go. Sometimes you need to slip on the first two rocks, so the key is in the ashes of failure because if I had an insticnt, that means I was on the right track. I didn't get the result I want. That's because the result might be something way bigger that I don't have the vision for and the universe is just pushing me that way."
-
"If you have some kind of failure on something that you..., don't let it knock you down. Maybe in ten years they'll think it's great. I'm just going to commit to making a body of work, a body of work."
Rodriguez taught me what I already know: I am a creative person. I am just a body punching keys on a keyboard, taking pictures, and semi-vibe-coding art. Maybe this is a shitty blog post today, but I write it anyway. Someone might look at it the way I first looked at the math in the Bitcoin white paper, scanning it with their eyeballs without really reading or understanding it. Most people on Substack probably don't want to read HTML, but maybe someone will come across it one day and build something for themselves from what they find in the ashes of this code.
I once saw Brian Harrington say every bitcoiner is a business owner. If you have a bitcoin address, you can accept bitcoin. How does someone find you, though? Are they really going to find your bitcoin address on GitHub? I'd bet 100 sats they won't. Nostr fixes this, so I thought about integrating it into my Ghost blog. I looked at the code injection section and let my muse do the typing. Actually, I let the Duck Duck Go AI chat do the vibe-coding. As it turns out, you can add a header and footer on Ghost in the code injection section. It's just the same HTML I used to make my MySpace page. Then I thought: what if someone couldn't afford a Start9 or didn't know how to vibe code on Duck Duck Go's free AI chat using Claude? What if, as Rodriguez suggests, I create a business card?
You could just copy my HTML and change my nostr links and pics to go to your nostr links and pics. You could publish that HTML into https://habla.news. Now you have an e-commerce site with a blog, a merch store, and your nostree. I don't know if this will work. This is the muse's hypothesis. I'm just writing the words down. You'll need to test this idea for yourself.
npub1marc26z8nh3xkj5rcx7ufkatvx6ueqhp5vfw9v5teq26z254renshtf3g0
marc26z@getalby.com
Zap This Blog! -
@ e6817453:b0ac3c39
2024-12-07 14:52:47Temporal semantics and time-aware knowledge graphs. We have different memory models for artificial intelligence agents, and we all try to mimic, in some way, how the brain works, or at least how the brain's declarative memory works. We have the split between episodic memory and semantic memory. And we also have a lot of theories, right?
Declarative Memory of the Human Brain
How is semantic memory formed? Our brain stores semantic memory in a way quite close to the concept behind personal knowledge graphs: connected entities that form relations with one another. So far, so good. We then have a number of concepts for how episodic memory and our experiences get transmitted into the semantic:
- hippocampus indexing and retrieval
- semanticization of episodic memories
- episodic-semantic shift theory
They all give a different perspective on how different parts of declarative memory cooperate.
We know that episodic memories get semanticized over time: you end up with semantic knowledge without the notion of time, while your episodic memory has probably just decayed.
But, you know, it’s still an open question:
do we want to mimic an AI agent’s memory as a human brain memory, or do we want to create something different?
It’s an open question to which we have no good answer. And if you go to the theory of neuroscience and check how episodic and semantic memory interfere, you will still find a lot of theories, yeah?
Some of them say the hippocampus keeps the indexes of memory. Others say episodic memory gets semanticized. Still others say a separate process digests episodes and experience into semantics. But all of them agree that these are operationally two separate areas of memory, and even two separate regions of the brain, and that semantic memory is, let's say, more protected.
So it’s harder to forget the semantical facts than the episodes and everything. And what I’m thinking about for a long time, it’s this, you know, the semantic memory.
Temporal Semantics
It’s memory about the facts, but you somehow mix the time information with the semantics. I already described a lot of things, including how we could combine time with knowledge graphs and how people do it.
There are multiple ways we could persist such information, but we all hit a wall, because time and the semantics of time are highly complex concepts.
Time in a Semantic context is not a timestamp.
What I mean is that when you have a fact and you just mention that you were there at some particular moment, say 15:40 on Monday, it is already ambiguous, because we don't know which Monday. So you would need to give the exact date, but usually you do not have experiences recorded like that.
You do not record your memories like that, unless you keep a journal and such. So, usually, you have no direct time references. What I mean is that you could say you were there, and there was some event, and so on.
Somehow we form a chain of events that connect with each other, and maybe, if we are lucky enough, that chain gets connected to some period of time. This means that we cannot easily represent temporally aware information as just a timestamp or a validity interval.
For sure, validity in knowledge graphs (a simple quintuple with start and end dates) is a big topic, and it could solve a lot of time cases. It is super simple: you give the start and end dates, and you are done. But it does not cover facts that carry a relative time, or time information buried inside the fact itself. It can solve many use cases but struggles with facts in an indirect temporal context. I like the simplicity of this idea, but the problem with this approach is that in most cases we simply don't have these timestamps. We don't have the timestamps for where the information starts and ends, and this fails to model many events in our lives, especially processes, ongoing activities, and recurrent events.
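To make the contrast concrete, here is a minimal Python sketch of the two representations discussed here: a fact with an explicit validity interval, and facts that only carry a partial order relative to other facts. All class and field names are illustrative assumptions, not an established schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Fact:
    subject: str
    predicate: str
    obj: str
    valid_from: Optional[str] = None  # ISO 8601 date; in practice often unknown
    valid_to: Optional[str] = None
    # Partial order: facts known to have happened before this one
    after: List["Fact"] = field(default_factory=list)

# Interval style: works only when explicit dates exist
job = Fact("alice", "works_at", "acme",
           valid_from="2021-03-01", valid_to="2023-08-31")

# Chain style: no dates at all, only relative order
# ("she moved to Berlin, then started the new job")
moved = Fact("alice", "moved_to", "berlin")
new_job = Fact("alice", "works_at", "initech", after=[moved])
```

The second style is what a hybrid or global clock would order for us: even without timestamps, the agent can still answer "what came first?" along the chain.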
I’m more about thinking about the time of semantics, where you have a time model as a hybrid clock or some global clock that does the partial ordering of the events. It’s mean that you have the chain of the experiences and you have the chain of the facts that have the different time contexts.
We could deduce time from this chain of events, but that is a big, big topic for research. What I actually want to achieve is not a separation into episodic and semantic memory; it is something in between.
Blockchain of connected events and facts
I call it temporal-aware semantics, or time-aware knowledge graphs, where we encode the semantic fact together with its time component. I doubt that time should be a simple timestamp or a region between two timestamps. For me, it is more a chain of facts with a partial order, forming a blockchain-like database or a partially ordered acyclic graph of facts that are temporally connected. We could have some notion of time that is understandable to the agent, and a model that allows us to order the events, focus on what the agent knows, order that time knowledge, and create chains of events.
Time anchors
We may have particular anchor points in the chain that allow us to assign a more concrete time to the rest of the events. But it's still an open topic for research. Temporal semantics splits into a couple of domains. One domain is how to add time to knowledge graphs; there we already have many different solutions, which I described in my previous articles.
Another domain is the agent's memory and how the memory of an artificial intelligence treats time. This one is much more complex, because here we cannot operate with simple timestamps. We need a representation of time that is understandable both by the model and by the agent that works with it. And that is a far bigger topic for research.
-
@ a39d19ec:3d88f61e
2024-11-21 12:05:09A state-controlled money supply can influence the development of socialist policies and practices in various ways. Although the relationship is not deterministic, state control over the money supply can contribute to a larger role of the state in the economy and facilitate the implementation of socialist ideals.
Fiscal Policy Capabilities
When the state manages the money supply, it gains the ability to implement fiscal policies that can lead to an expansion of social programs and welfare initiatives. Funding these programs by creating money can enhance the state's influence over the economy and move it closer to a socialist model. The Soviet Union, for instance, had a centralized banking system that enabled the state to fund massive industrialization and social programs, significantly expanding the state's role in the economy.
Wealth Redistribution
Controlling the money supply can also allow the state to influence economic inequality through monetary policies, effectively redistributing wealth and reducing income disparities. By implementing low-interest loans or providing financial assistance to disadvantaged groups, the state can narrow the wealth gap and promote social equality, as seen in many European welfare states.
Central Planning
A state-controlled money supply can contribute to increased central planning, as the state gains more influence over the economy. Central banks, which are state-owned or heavily influenced by the state, play a crucial role in managing the money supply and facilitating central planning. This aligns with socialist principles that advocate for a planned economy where resources are allocated according to social needs rather than market forces.
Incentives for Staff
Staff members working in state institutions responsible for managing the money supply have various incentives to keep the system going. These incentives include job security, professional expertise and reputation, political alignment, regulatory capture, institutional inertia, and legal and administrative barriers. While these factors can differ among individuals, they can collectively contribute to the persistence of a state-controlled money supply system.
In conclusion, a state-controlled money supply can facilitate the development of socialist policies and practices by enabling fiscal policies, wealth redistribution, and central planning. The staff responsible for managing the money supply have diverse incentives to maintain the system, further ensuring its continuation. However, it is essential to note that many factors influence the trajectory of an economic system, and the relationship between state control over the money supply and socialism is not inevitable.
-
@ a39d19ec:3d88f61e
2024-11-17 10:48:56This week's functional 3d print is the "Dino Clip".
Dino Clip
I printed it some years ago for my son, so he would have his own clip for cereal bags.
Now it is used to hold a bag of dog food closed.
The design by "Sneaks" is a so called "print in place". This means that the whole clip with moving parts is printed in one part, without the need for assembly after the print.
The clip is very strong, and I would print it again if I need a "heavy duty" clip for more rigid or big bags. Link to the file at Printables
-
@ a367f9eb:0633efea
2024-11-05 08:48:41Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The reporting purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as the base for what they call “ChatBIT”.
The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I’m doubtful that today’s existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (cue up the computer-simulated war from the Star Trek episode “A Taste of Armageddon“), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOMB, Falcon, and Mistral available for download. In fact, there are over one million open-source LLMs available as of writing this post. With some decent hardware, every global citizen can download these codebases and run them on their computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are potentially trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American infrastructure, data, and yes, your credit history?
As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much many times.
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
Open-source matters
When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.
The various licensing schemes – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, many of you run the Linux operating system as the base for your Macbook or any other computer and use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also open-source, which has allowed thousands of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
Open-source is for your friends, and enemies
In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to Hugging Face, download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put export controls on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see The Crypto Wars). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
Limiting open-source threatens our own advancement
If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “Hundred Year Marathon” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with trillions of dollars’ worth of investments that span far beyond AI chatbots and skip logic protocols.
The theft of intellectual property at factories in Shenzhen, or in US courts by third-party litigation funding coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.
But these are separate issues from the ability of free people to create and share open-source code which we can all benefit from. In fact, if we want to continue our way of life and continue to add to global productivity and growth, it’s demanded that we defend open-source.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
Originally published on the website of the Consumer Choice Center.
-
@ a53364ff:e6ba5513
2025-04-26 18:43:23Decentralization and Control:
Bitcoin operates on a decentralized, peer-to-peer network, meaning no single entity controls it. This decentralization can empower individuals by allowing them to control their own money and financial transactions without relying on intermediaries like banks or governments.
Financial Inclusion and Accessibility:
Bitcoin can provide financial access to individuals who may be excluded from traditional banking systems due to lack of identity documents, high fees, or other barriers.
Privacy and Security:
Bitcoin transactions can be anonymous, offering a degree of privacy and potentially protecting users from surveillance or financial manipulation.
Freedom from Financial Restrictions:
In countries with high inflation, capital controls, or currency restrictions, Bitcoin can offer a way to store and transfer value without being subject to these restrictions.
Potential for Social Justice:
Bitcoin's ability to provide financial freedom and bypass traditional systems can be seen as a tool for social justice, particularly in regions facing authoritarianism or financial oppression.
Examples of Bitcoin Adoption:
In El Salvador, Bitcoin has been legalized and adopted in some communities, offering a way for people to conduct everyday transactions and participate in social programs.
-
@ 09fbf8f3:fa3d60f0
> ### Third-party API collection:
Disclaimer:
The OpenAI API Keys recommended here are provided by third-party resellers, so we take no responsibility for the validity or security of these API Keys. You bear the risk of purchasing and using them yourself.

| Provider | Notes | Proxy address | Link |
| --- | --- | --- | --- |
| AiHubMix | Uses the OpenAI enterprise endpoint; all models site-wide are priced at 86% of the official rate (including GPT-4) | https://aihubmix.com/v1 | Official site |
| OpenAI-HK | OpenAI's official API billing is based on the token length of each request and response. Each model has its own pricing, per 1,000 tokens consumed; 1,000 tokens is roughly 750 English words (about 400 Chinese characters) | https://api.openai-hk.com/ | Official site |
| CloseAI | CloseAI is the largest commercial-grade OpenAI proxy platform in China and the first professional OpenAI relay service in the country. It targets enterprise commercial needs, providing high-quality, stable relaying of the official OpenAI API for enterprise customers, and is the dedicated partner platform of over a hundred companies and several research institutions | https://api.openai-proxy.org | Official site |
| OpenAI-SB | Requires Telegram to obtain an API key | https://api.openai-sb.com | Official site |

Continuously updated...
Promotion:
If you cannot access OpenAI, go to 低调云 (Didiao Cloud) to buy a VPN. Official site: https://didiaocloud.xyz
Invite code: w9AjVJit
Prices start as low as 1 yuan.
-
@ 4c48cf05:07f52b80
I believe that five years from now, access to artificial intelligence will be akin to what access to the Internet represents today. It will be the greatest differentiator between the haves and have-nots. Unequal access to artificial intelligence will exacerbate societal inequalities and limit opportunities for those without access to it.
Back in April, the AI Index Steering Committee at the Institute for Human-Centered AI from Stanford University released The AI Index 2024 Annual Report.
Out of the extensive report (502 pages), I chose to focus on the chapter dedicated to Public Opinion. People involved with AI live in a bubble. We all know and understand AI and therefore assume that everyone else does. But, is that really the case once you step out of your regular circles in Seattle or Silicon Valley and hit Main Street?
Two thirds of global respondents have a good understanding of what AI is
The exact number is 67%. My gut feeling is that this number is way too high to be realistic. At the same time, 63% of respondents are aware of ChatGPT, so maybe people are conflating AI with ChatGPT?
If so, there is so much more that they won't see coming.
This number is important because you need to read every other question and response in the survey through the lens of a respondent who believes they have a good understanding of what AI is.
A majority are nervous about AI products and services
52% of global respondents are nervous about products and services that use AI. Leading the pack are Australians at 69%, and the least worried are the Japanese at 23%. The U.S.A. is up near the top at 63%.
Japan is truly an outlier, with most countries moving between 40% and 60%.
Personal data is the clear victim
Exactly half of the respondents believe that AI companies will protect their personal data. The other half believes they won't.
Expected benefits
Again, a majority of people (57%) think AI will change how they do their jobs. As for the impact on their lives, the top hitters are getting things done faster (54%) and more entertainment options (51%).
The last one is a head scratcher for me. Are people looking forward to AI generated movies?
Concerns
Remember the 57% who think AI will change how they do their jobs? Well, it looks like 37% of them expect to lose theirs. Whether or not this is what will happen, that is a very high number of people who have a direct incentive to oppose AI.
Other key concerns include:
- Misuse for nefarious purposes: 49%
- Violation of citizens' privacy: 45%
Conclusion
This is the first time I have come across this report, and I will make sure to follow future annual reports to see how these trends evolve.
Overall, people are worried about AI. There are many things that could go wrong and people perceive that both jobs and privacy are on the line.
Full citation: Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.
The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.
-
@ dc4cd086:cee77c06
2024-10-18 04:08:33Have you ever wanted to learn from lengthy educational videos but found it challenging to navigate through hours of content? Our new tool addresses this problem by transforming long-form video lectures into easily digestible, searchable content.
Key Features:
Video Processing:
- Automatically downloads YouTube videos, transcripts, and chapter information
- Splits transcripts into sections based on video chapters
Content Summarization:
- Utilizes language models to transform spoken content into clear, readable text
- Formats output in AsciiDoc for improved readability and navigation
- Highlights key terms and concepts with [[term]] notation for potential cross-referencing
Diagram Extraction:
- Analyzes video entropy to identify static diagram/slide sections
- Provides a user-friendly GUI for manual selection of relevant time ranges
- Allows users to pick representative frames from selected ranges
Going Forward:
The tool is currently undergoing a rewrite to improve organization and functionality, but you are welcome to try the current version, though it might not work on every machine. It will support multiple open and closed language models, so users can choose. It is free and open-source, allowing for personal customization and integration with various knowledge bases. Even if something never lands on our official Alexandria knowledge base, you are still welcome to use the tool on your own personal or community knowledge bases! We want to help find connections between ideas that exist across relays, allowing individuals and groups to mix and match knowledge bases between each other, with any degree of openness you care for.
While designed with #Alexandria users in mind, it's available for anyone to use and adapt to their own learning needs.
Screenshots
Frame Selection
This is a screenshot of the frame selection interface. You'll see a signal that represents frame entropy over time. The vertical lines indicate the start and end of a chapter. Within these chapters, you can select frames by clicking and dragging the mouse over the range where you think a diagram appears in that chapter. At the bottom is an option that tells the program to select a specific number of frames from that selection.
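For readers curious how such an entropy signal can be computed, below is a rough Python sketch using OpenCV; the repository's actual implementation may differ, and the input file name is a placeholder.

```python
import cv2
import numpy as np

def frame_entropy(frame):
    """Shannon entropy of the grayscale histogram; static slides produce
    long flat stretches in the resulting signal."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

cap = cv2.VideoCapture("lecture.mp4")  # placeholder input file
entropies = []
ok, frame = cap.read()
while ok:
    entropies.append(frame_entropy(frame))
    ok, frame = cap.read()
cap.release()
# Plotting `entropies` over the frame index reproduces the kind of
# signal shown in the screenshot.
```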
Diagram Extraction
This is a screenshot of the diagram extraction interface. For every selection you've made, there will be a set of frames that you can choose from. You can select and deselect as many frames as you'd like to save.
Links
- repo: https://github.com/limina1/video_article_converter
- Nostr Apps 101: https://www.youtube.com/watch?v=Flxa_jkErqE
Output
And now, we have a demonstration of the final result of this tool, with some quick cleaning up. The video we will be using this tool on is titled Nostr Apps 101 by nostr:npub1nxy4qpqnld6kmpphjykvx2lqwvxmuxluddwjamm4nc29ds3elyzsm5avr7 during Nostrasia. The following thread is an analog to the modular articles we are constructing for Alexandria, and I hope it conveys the functionality we want to create in the knowledge space. Note, this tool is the first step! You could use a different prompt that is most appropriate for the specific context of the transcript you are working with, but you can also manually clean up any discrepancies that don't portray the video accurately.
nostr:nevent1qvzqqqqqqypzp5r5hd579v2sszvvzfel677c8dxgxm3skl773sujlsuft64c44ncqy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qgwwaehxw309ahx7uewd3hkctcpzemhxue69uhhyetvv9ujumt0wd68ytnsw43z7qghwaehxw309aex2mrp0yhxummnw3ezucnpdejz7qgewaehxw309aex2mrp0yh8xmn0wf6zuum0vd5kzmp0qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgc7hglpn
Or view on Coracle nostr:nevent1qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgcppemhxue69uhkummn9ekx7mp0qgsdqa9md83tz5yqnrqjw07hhkpmfjpkuv9hlh5v8yhu8z274w9dv7qnnq0s3
-
@ 8947a945:9bfcf626
Hello everyone on Nostr, including my watchers and followers from DeviantArt and other art platforms.
Since the beginning of 2024, I have been using AI to generate fanart of anime girls and have offered exclusive content for those who especially enjoy my work.
I post all of my work on DeviantArt and have been gradually, steadily building a follower base there. Everything keeps growing at its own pace; personally, I regard it as one of my online art business portfolios.
On September 16, 2024, a follower sent me a private message saying they liked my work very much and wanted to buy some pieces, but as NFTs, offering a very high price per piece. After that, the buyer and I continued the conversation over email.
Here is a short summary of the negotiation.
(From here on I will call the buyer a "scammer", because the cards are already face up: he is a fraudster.)
- The first scammer picked the pieces he wanted to buy and offered a very high price, but only through the NFT marketplace website he specified, running on ERC20. I took a look at that site and it felt off: anyone wanting to sell must first sign up with an email address before they can link a wallet such as MetaMask, and once linked, the wallet cannot be changed. At the time I was using a wallet not linked to my hardware wallet; I tried switching wallets back and forth and it couldn't be done. Even after logging out, the same wallet address was still stuck there. That was one oddity already. On this site the minting fee was 0.15-0.2 ETH... converted into Thai baht, that is outrageously expensive.
- The first scammer tried to coax and sweet-talk me: come on, he would buy my work soon enough; as soon as the mint was done I should tell him right away and he would hit buy immediately; once it sold at a profit I would get the gas fee back and make a profit on top, so there was really nothing to lose, right? Luckily, I had no spare funds left to buy ETH at the time, so I negotiated with him as follows:
- I proposed sending him a low-resolution version of my work first, in exchange for him transferring me the ETH for the minting fee; once I had the ETH, I would upscale the work and email it to him, a heart-for-heart exchange... He refused.
- I offered to let him buy through my Buy Me a Coffee online shop, paying in USD... He refused.
- I offered to sell via a PPV Lightning invoice, which I can issue as a creator on Creatr... He refused.
- I told him that in that case he would have to wait until my salary came in. He said OK.
The following week, a second scammer contacted me with a similar approach but a different website, offering an even higher price than the first. This second site was even worse than the first: you must sign up with an email and cannot link MetaMask at all. After registering, you get an empty wallet, into which I would have to transfer ETH first to cover the 0.2 ETH NFT minting fee.
I told the second scammer he would have to wait, because I was in the middle of a deal with the first buyer and was waiting for money to buy ETH as working capital. He asked me to show him the first website, and shortly afterward he warned me that the first site was a scam from which money could not be withdrawn. He even sent screenshots of chats with victims of the first site who could not withdraw their funds. Not content with that, he also trash-talked OpenSea, claiming customers there could sell their work but could not withdraw the money.
"Can't withdraw from OpenSea" was what set off my alarm bells the loudest. On OpenSea, users connect their wallet directly to the marketplace; trades happen and money flows directly in and out of each person's wallet. OpenSea only collects a platform fee and never holds customers' funds. On top of that, gas fees this year are much cheaper than during the 2020 bull-run cycle, currently around 0.0001 ETH (still more expensive than BTC, though).
I took the matter to P'Bit for advice, but an admin spoke with me instead. The admin said no one had ever asked about this before; mine was the first case, but his view matched my hypothesis that it was probably a scam. At the same time, I asked on a Thai NFT community page and got clear confirmation that it was a scam and that quite a few people had been tricked. Once I knew what I was dealing with, I started a little war of nerves with both scammers, to see whether they really were fraudsters.
So on September 30, I wound both scammers up by minting the very pieces they had offered to buy, on OpenSea, and then messaged them:
"Minted it for you, but I honestly didn't have enough money, sorry, so I minted on OpenSea instead. I'm broke; OpenSea is as far as I can go. Hurry up and buy it; plenty of people have their eyes on my work. And hey, I'm not charging a royalty fee, so you can resell it without sharing any profit with me."
That was all it took; the psychological warfare began, but he was cornered and had to swallow his own words. The killer exchange:
Him: Hey, I went out of my way to wait. I told my teammates we would definitely have the piece on Monday, September 30. The team saw your work, it really is beautiful, so they put in a full 9.3 ETH (+ a screenshot showing the balance), waiting just for this. Me: Oh really? ... Then show me the wallet address with the transaction. Him: 2 ETH is 5,000 dollars, you know. Me: So what? Show me the wallet address where that 9.3 ETH was deposited. Didn't you say the money was ready and waiting? Let me see when it was deposited... Just the address, man; don't get cheeky and send me a seed phrase. Him: (sends the same 9.3 ETH screenshot again) Me: A screenshot means nothing; it's trivially easy to fake. Show me the transaction hash. Didn't you say you had 9.3 ETH waiting and were trembling with eagerness to buy my work? Either show me the wallet address, or lend me 0.15 ETH for minting; then you buy at 2 ETH and I'll pay the 0.15 ETH back. So, are you buying or not? Him: Why do you want his address? Me: I'm done; this is tiresome. Not selling to you anymore. Him: 2 ETH = 5,000 USD, you know. Me: So what?
I wrote this article to warn all my friends, in case anyone opening an online digital art business gets "lucky" like me and runs into gems like these.
Why I am confident this is a scam, and what the scammers gain
First, consider OpenSea, the NFT marketplace with the highest trading volume. It does not hold buyers' or sellers' money; funds move directly in and out of their wallets, and the site collects only fees, which are much cheaper now than in 2020. So why would anyone list their work on another NFT site whose fees are a hundred times higher?
I believe the scammers cheat artists by playing on their greed and inexperience. The moment an artist transfers ETH into that site's wallet, or pays a minting fee, the money goes straight into the scammer's pocket, and more tricks are certain to follow: withdrawals fail, purchases fail, you must send more money to unlock a smart contract, and so on. These crooks bait people with absurdly high offers... and fair enough, because in the NFT world some images with no artistic merit whatsoever have somehow sold for 100-150 ETH. An artist trying to build a name might well think that a buyer offering 2-4 ETH per piece is plenty (honestly, it is shockingly more than enough).
In the world of BTC, you don't need to trust anyone: money can be sent between parties and the books can be settled without trust.
In the world of ETH, "code is law": the smart contract is already written, so go read it. It is not that hard to understand. Trusting promises from another human instead is simply unreasonable.
I told these stories to art communities and got a mix of positive and negative reactions. Some people insisted firmly that this sort of thing would never touch them, because they are absolutely determined to keep their art out of the digital currency world. I respect that view. But wouldn't it be better to keep our eyes and ears open and stay current with technology, especially digital currencies and blockchain? Get scammed once and you can lose everything even more easily than with fiat money.
I wanted to share this story, and I would appreciate your sharing it with people you know, so we can all stay on guard.
Note
- Both cyber security illustrations are my own work, created by me and on sale at AdobeStock
- My other account, "HikariHarmony" (npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw), is gradually bringing my work from the outside world into Nostr. I intend to create art here, so friends who enjoy my work won't have to look anywhere else.
My works: - Anime girl fanarts: HikariHarmony - HikariHarmony on Nostr - General art: KeshikiRakuen - KeshikiRakuen may become my third Nostr account, if I can manage it.
-
@ 6830c409:ff17c655
The storytelling method from the Frank Daniel school, the "eight-sequence structure", is put to good use in this new movie on #prime, "Veera Dheera Sooran - Part 2". The name itself is a bit of a suspense: even though it says "Part 2", this is the first part of the story to be released.
There are 8 shorter plots, each with its own mini climax. A setup made in one plot is resolved in another. In total, there are 8 setups, 8 conflicts, and 8 resolutions.
A beautiful way of telling a gripping story. For cinephiles in #Nostr, if you want to get a feel of the South Indian movies that has kind of a perfect blend of good content + a bit of over the top action, I would suggest this movie.
Note:
For Nostriches from the western hemisphere: #Bollywood (#Hindi movies) is the movie industry that started in #Bombay (#Mumbai), known for the stereotypical 'la la la' rain-dance songs and mustache-less heroes. #Telugu movies (#Tollywood) are mostly over-the-top action where Newton and Einstein would probably commit suicide. #Malayalam movies (#Mollywood) are made on minuscule budgets, with minimal over-the-top action and mostly content-oriented films. And then comes one of the best: #Tamil movies (#Kollywood, named after #Kodambakkam, a movie town in the city of Chennai down south), which blend the best of all these industries. A good mix of class and mass elements.
Picture:
https://www.flickr.com/photos/gforsythe/6926263837
-
@ 8947a945:9bfcf626
2024-10-17 07:33:00Hello everyone on Nostr and all my watchersand followersfrom DeviantArt, as well as those from other art platforms
I have been creating and sharing AI-generated anime girl fanart since the beginning of 2024 and have been running member-exclusive content on Patreon.
I also publish showcases of my artworks to Deviantart. I organically build up my audience from time to time. I consider it as one of my online businesses of art. Everything is slowly growing
On September 16, I received a DM from someone expressing interest in purchasing my art in NFT format and offering a very high price for each piece. We later continued the conversation via email.
Here’s a brief overview of what happened
- The first scammer selected the art they wanted to buy and offered a high price for each piece. They provided a URL to an NFT marketplace site running on the Ethereum (ETH) mainnet or ERC20. The site appeared suspicious, requiring email sign-up and linking a MetaMask wallet. However, I couldn't change the wallet address later. The minting gas fees were quite expensive, ranging from 0.15 to 0.2 ETH
-
The scammers tried to convince me that the high profits would easily cover the minting gas fees, so I had nothing to lose. Luckily, I didn’t have spare funds to purchase ETH for the gas fees at the time, so I tried negotiating with them as follows:
-
I offered to send them a lower-quality version of my art via email in exchange for the minting gas fees, but they refused.
- I offered them the option to pay in USD through my Buy Me a Coffee shop, but they refused.
- I offered them the option to pay via Bitcoin using the Lightning Network invoice , but they refused.
- I asked them to wait until I could secure the funds, and they agreed to wait.
The following week, a second scammer approached me with a similar offer, this time at an even higher price and through a different NFT marketplace website.
This second site also required email registration, and after navigating to the dashboard, it asked for a minting fee of 0.2 ETH. However, the site provided a wallet address for me instead of connecting a MetaMask wallet.
I told the second scammer that I was waiting to make a profit from the first sale, and they asked me to show them the first marketplace. They then warned me that the first site was a scam and even sent screenshots of victims, including one claiming that OpenSea does not pay its users.
This raised a red flag, and I began suspecting I might be getting scammed. On OpenSea, funds go directly to users' wallets after transactions, and OpenSea charges a much lower platform fee compared to the previous crypto bull run in 2020. Minting fees on OpenSea are also significantly cheaper, around 0.0001 ETH per transaction.
I also consulted with Thai NFT artist communities and the ex-chairman of the Thai Digital Asset Association. According to them, no one had reported similar issues, but they agreed it seemed like a scam.
After confirming my suspicions with my own research and consulting with the Thai crypto community, I decided to test the scammers’ intentions by doing the following
I minted the artwork they were interested in, set the price they offered, and listed it for sale on OpenSea. I then messaged them, letting them know the art was available and ready to purchase, with no royalty fees if they wanted to resell it.
They became upset and angry, insisting I mint the art on their chosen platform, claiming they had already funded their wallet to support me. When I asked for proof of their wallet address and transactions, they couldn't provide any evidence that they had enough funds.
Here’s what I want to warn all artists in the DeviantArt community or other platforms If you find yourself in a similar situation, be aware that scammers may be targeting you.
My Perspective why I Believe This is a Scam and What the Scammers Gain
From my experience with BTC and crypto since 2017, here's why I believe this situation is a scam, and what the scammers aim to achieve
First, looking at OpenSea, the largest NFT marketplace on the ERC20 network, they do not hold users' funds. Instead, funds from transactions go directly to users’ wallets. OpenSea’s platform fees are also much lower now compared to the crypto bull run in 2020. This alone raises suspicion about the legitimacy of other marketplaces requiring significantly higher fees.
I believe the scammers' tactic is to lure artists into paying these exorbitant minting fees, which go directly into the scammers' wallets. They convince the artists by promising to purchase the art at a higher price, making it seem like there's no risk involved. In reality, the artist has already lost by paying the minting fee, and no purchase is ever made.
In the world of Bitcoin (BTC), the principle is "Trust no one" together with trustless finality of transactions. In other words, transactions are secure and final without needing trust in a third party.
In the world of Ethereum (ETH), the philosophy is "Code is law," where everything is governed by smart contracts deployed on the blockchain. These contracts are transparent, and even basic code can be read and understood. Promises made by people don't override what the code says.
I also discussed this issue with art communities. Some people have strongly expressed to me that they want nothing to do with crypto as part of their art process. I completely respect that stance.
However, I believe it's wise to keep your eyes open, have some skin in the game, and not fall into scammers’ traps. Understanding the basics of crypto and NFTs can help protect you from these kinds of schemes.
If you found this article helpful, please share it with your fellow artists.
Until next time, take care!
Note
- Both cybersecurity images are mine; I created them, and they were approved by AdobeStock for sale.
- I'm working very hard to bring all my digital art to Nostr and build my sats business here under my other npub, "HikariHarmony": npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw
- Links to my full galleries:
  - Anime girl fan art: HikariHarmony
  - HikariHarmony on Nostr
  - General art: KeshikiRakuen
-
@ c631e267:c2b78d3e
2025-04-20 19:54:32It is completely undisputed that the Russian army's attack on Ukraine in February 2022 must be strictly condemned. It is equally undisputed that Russia under Vladimir Putin is no model democracy. From these facts, however, one cannot derive the sinister picture of the Russian president, let alone of the country, that is constantly served up to us and that forms the core of the current European threat scenario. For that, we have to look a little more closely.
The present article does not attempt to justify or excuse the use of force or the violation of human rights; quite the opposite. Yet the fact that the suspicion of being a "Putin apologist" immediately hangs in the air illustrates what happens around the topic of "Russia": opinion-making and manipulation.
In view of the mental mobilization on the part of politics and the media, and the effort to practically talk an imminent war with Russia into existence, it is necessary to counter this fatal development. If we free ourselves just a little from the prevailing black-and-white painting, questions automatically arise that reveal cracks in the official narrative. Reason enough to dig deeper.
Anyone who has been getting their information for some time from outside the state and so-called mainstream media will probably not learn much new in this article. Others may discover a few unknown or forgotten aspects here. Possibly, in this context, the perception of the current situation (our own!) will clear up a little.
Recognizing manipulation
The Corona "pandemic", man-made climate change, or the war in Ukraine: plenty of crises, and for each there is an official narrative whose questioning is unwelcome. But a narrative is simply a story (Latin: "narratio"), not a factual report. And just like a fairy tale, a narrative is meant to convey a message.
Much has been written about the methods of manipulation, both with regard to individuals and to the masses. Very valuable tips on how to see through manipulation are offered in a small book [1] by Albrecht Müller, the editor of the NachDenkSeiten.
Language itself is perfectly suited for manipulation. Word choice, for example, can let judgments resonate; regular repetition (preferably from several sides) makes things appear "true" at some point; exaggerations attract attention and leave at least a trace in the memory, just like insinuations. Evidence plays no role in any of this.
There are strikingly many language conventions that are apparently agreed somewhere and coordinated somehow. Or perhaps all speechwriters and all media outlets have recently taken to constantly copying one another. What purpose does it serve when the war in Ukraine is consistently and almost verbatim referred to as "Russia's war of aggression against Ukraine"? Although this is factually correct, the way it is used points to targeted influence and is above all meant to cement the image of the enemy.
Language conventions often serve to secure a one-sided portrayal. The same goes for the shortening of information, up to the stubborn suppression of entire subject areas. Here, too, the Ukraine conflict offers many good examples.
The desired result of such methods is a black-and-white picture in which one party is clearly marked as "the villain" and the others are automatically "the good guys". That is convenient and at the same time demonstrates another manipulation tool: the use of double standards. If you manage to regularly measure important issues with two different yardsticks without the audience protesting, you have a clear run.
Calling on experts to explain certain matters is certainly sensible, but it can just as easily be abused, if only through the selection of the specialists in question. Since "Corona", many experienced and formerly highly regarded professionals have been defamed and cancelled for holding the "wrong opinion". [2] That is not only a brutal way of treating people but also an extreme form of steering public opinion.
Whenever we recognize (because we were paying attention) that we are being manipulated on a certain topic, two logical and necessary questions follow: Why? And what is actually correct? In our Russia context, the answers have a lot to do with geopolitics and history.
Is Russia aggressive and expansionist?
Russia is supposedly planning to attack European NATO states, along the lines of "first Ukraine, then the rest". In Germany they even know the date: "We must be fit for war by 2029," Defense Minister Pistorius assures us.
Historically, however, it has rather been the other way around: Russia, or the Soviet Union, has already been attacked militarily from Western Europe three times. The campaigns of Napoleon, the German Empire, and Nazi Germany cost millions of lives. The explicit war of annihilation from 1941 onward also brought brutalities such as the two-and-a-half-year siege of Leningrad (today St. Petersburg) by Hitler's Wehrmacht. Its goal of starving the population was achieved: over a million dead civilians.
Despite these experiences, Mikhail Gorbachev agreed to German reunification in 1990, and the Soviet Union withdrew its troops from Eastern Europe (cf. Fig. 1). The Warsaw Pact was dissolved and the Cold War formally ended. At the time, leading Western politicians assured the Soviets that NATO would not expand "one inch eastward"; this is documented. [3]
NATO expanded nonetheless, right up to Russia's borders (cf. Fig. 2). According to the policy advisor Jeffrey Sachs, this was a long-term US project that included Ukraine and Georgia from the very beginning. Officially, membership was offered to both states in 2008. In any case, the massive eastward expansion since 1999 could, from a Russian perspective, be regarded not only as a breach of trust but quite plausibly as aggressive.
Russia repeatedly extended its hand to the European states [4] for peaceful coexistence and the "building of the common European house". President Putin was "a chance for Europe in his first term", judges the journalist and long-time ARD Russia correspondent Gabriele Krone-Schmalz. Back then, she says, he sent many positive signals toward the West.
The Europeans, however, were apparently less interested in a partnership with their continental neighbor than in one with the transatlantic hegemon. To this day they fail to see that prosperous cooperation in Eurasia is a danger to the USA and its declared ambition to be the "sole world power"; the Pentagon called this "Full Spectrum Dominance". [5] Instead of working against a new Cold War, European states let themselves be drawn into "US-dominated wars of aggression" [6] in violation of international law, as in Serbia, Afghanistan, Iraq, Libya, or Syria. These, however, are rarely called by that name.
The Germans in particular would be well served by a dose of realism and also a little more gratitude. The historical awareness of the majority seems rather selective, and the self-image of some somewhat disoriented. As is well known, it was the soldiers of the Soviet Red Army who, at great sacrifice, "liberated Germany from fascism" in 1945. Yet at the commemorations marking 80 years since the end of the war, the Foreign Office (still under the diplomacy expert Baerbock, who has for some time officially considered herself at war with Russia) now wants to see no Russians: if necessary, they are to be thrown out.
"The fundamental question is: is Russia after an appropriate place in a global security architecture, or has Moscow long been on an imperialist trip that must make us fear that the Russians will be standing in Berlin in five years?"
This is how Gabriele Krone-Schmalz [7] gets to the heart of the actual question, which, in assessing the situation, everyone ultimately has to answer for themselves.
What is going on in Ukraine?
In international politics it is never about democracy or human rights, but always about the interests of states. This thesis comes from Egon Bahr, one of the architects of the German Ostpolitik of "change through rapprochement" of the 1960s and 70s. It also applies to the Ukraine conflict, which is dominated by hard geostrategic and economic interests, even though "our democracy" is supposedly being defended there.
It is an essential element of the Ukraine narrative, and part of the manipulation, to leave out the prehistory of the war: at the very least everything before the Russian "annexation" of the Crimean peninsula in March 2014, but often even everything before the invasion at the end of February 2022. The subject is complex, but I want to at least briefly sketch a few aspects that are not unimportant for forming a judgment. [8]
The territory of today's Ukraine and Russia (which, incidentally, share common roots in the "Kievan Rus") was described by the British geostrategist Halford Mackinder as early as 1904 as the Eurasian "Heartland", whose control he considered highly significant for the imperial strategy of Great Britain. For Zbigniew Brzezinski, the former security and foreign policy advisor to several US presidents and co-founder of the Trilateral Commission, Ukraine after the dissolution of the Soviet Union was an important piece on the "Eurasian chessboard", because of its proximity to Russia, its mineral resources, and its access to the Black Sea.
Ukraine has long been a divided country. Historically torn as a pawn of external interests and shaped by ethnic, cultural, religious, and geographic differences, a rough East-West split persists to this day, which greatly complicates the search for a national identity.
Deep rifts in the population emerged especially in the course of the two world wars and the Russian Revolution. Ukrainians fought against Ukrainians, some on the side of Hitler's fascist Nazi army and others on the side of Stalin's communist Red Army. The crimes on both sides are not forgotten. That, after independence in 1991, attempts were made to install figures such as the radical nationalist Symon Petliura or the fascist and Nazi collaborator Stepan Bandera as "national heroes" does not improve matters.
While the USA and EU states increasingly scent "foreign interference" (especially Russian) in "their democracies", they themselves have been practicing exactly that for decades in many countries of the world. The "color revolutions" in Eastern Europe, known since the 2000s, are often described as a method of regime change via externally steered "democratic" popular uprisings. This strategy goes back to analyses of "swarming behavior" [9] since the 1960s (student protests), concerning the potential effectiveness of a "rebellious hysteria" of young people in postmodern coups. Today, this targeted channeling of the masses to remove uncooperative governments is called "soft power".
Ukraine saw two such "uprisings": the "Orange Revolution" of 2004 and the "Euromaidan" of 2014. The first forced a repetition of the elections over alleged irregularities, which ended with Viktor Yushchenko as the new president. He was a former director of the national bank and an advocate of rapprochement with the EU and NATO. His wife, the First Lady, is a US-American "philanthropist" and served as an official in the White House in the Reagan and Bush administrations.
In contrast to that first event, the so-called Euromaidan ended neither peacefully nor bloodlessly. The weeks-long protests against President Viktor Yanukovych, partly over the association agreement with the EU that he had not signed, grew increasingly violent and were dominated by nationalists and fascists of the "Right Sector". At the end of February 2014, they culminated in a massacre by snipers on Kyiv's Independence Square (Maidan). That the shooters' origin and the exact circumstances were never clarified bothered the media only a little. [10]
Yanukovych had to flee; he did not resign. Rather, this was a violent coup, by all appearances staged by the West. According to Jeffrey Sachs, that was no secret, except perhaps to the citizens. The USA not only supported the post-Maidan government, they also influenced its formation. This emerges, among other things, from the famous "Fuck the EU" phone call between the US chief diplomat for Ukraine, Victoria Nuland, and Ambassador Geoffrey Pyatt.
This breach of the democratic constitution was ultimately the trigger for the subsequent crises in Crimea and the Donbass (eastern Ukraine). Given Ukrainian history, the nationalist tendencies and the involvement of right-wing groups in the overthrow were bound to stir uneasy feelings among the Russian-speaking population in the east. There was criticism of the transitional government, there were advocates of secession, and some also favored joining Russia.
Likewise, Vladimir Putin could in this situation quite reasonably have concerns about the status of the Russian military base of his Black Sea Fleet in Sevastopol on Crimea, for which a long-term lease agreement with Ukraine existed. What took place on Crimea in March 2014 was not an annexation but a secession following a referendum, that is, not a violent seizure, judged the legal scholar Reinhard Merkel in the FAZ, with very detailed reasoning. Incidentally, Crimea already had the status of an autonomous republic within the Ukrainian SSR in Soviet times.
At the beginning of April 2014, the "people's republics" of Donetsk and Luhansk were proclaimed in eastern Ukraine. Under the label "Anti-Terror Operation" (ATO), the Kyiv transitional government moved militarily against this resistance, which Russia also instrumentalized. Coincidentally, CIA chief John Brennan had been in Kyiv shortly before. The measures continued under Petro Poroshenko, the billionaire who became Ukraine's new president in May. Nor did Volodymyr Zelensky end the civil war when, in 2019, he went from playing a president who disempowers oligarchs on screen to being president himself. He continued to bomb his own population.
With the entry of Russian troops into eastern Ukraine on February 24, 2022, the second phase of the war began. The weeks and months before it were intense. In November, Ukraine had signed an agreement on a "strategic partnership" with the USA, in which the Americans pledged their support for Ukraine's EU and NATO perspective and, in effect, for the reconquest of Crimea. In December 2021, for his part, Putin sent NATO and the USA a draft treaty on mutually binding security guarantees, which NATO rejected in January. In February, according to the OSCE, the violence in the Donbass escalated.
Only a few weeks after the invasion, at the end of March 2022, peace negotiations took place in Istanbul that very nearly produced a solution. That the war was not ended back then was because the West did not want it to end. The prevailing view was that Russia could be militarily weakened over time through Ukraine in this proxy war. In the face of the hundreds of thousands of dead, wounded, and traumatized people that have resulted since, and the scale of the destruction, words fail.
Does the West hate the Russians?
This question forces itself on anyone who observes the often unbearably hostile behavior, which is by no means new and drips with double standards. Russia, and the person of Vladimir Putin in particular, are outright demonized, which at the same time seems to rule out any form of diplomacy.
Russia's military strength, its geographic position, its wealth of natural resources, and its independent diplomatic tradition are certainly irritants to the US ambition to be the boss of a unipolar world. A potentially functioning Eurasian continent, especially good relations between Russia and Germany, was a worry of the British Empire even before the First World War.
One "offense" of President Putin may have been that he ended the neoliberal shock therapy à la IMF and the sell-off of the country (including to US corporations) that prevailed under his predecessor. In doing so, he showed himself to be a leader and not as malleable as Yeltsin. These aspects alone, however, are probably not a sufficient explanation today for such a carefully cultivated enemy image.
The historian and philosopher Hauke Ritz widens the focus of the question to: "Why does the West hate the Russians so much?", which he discusses, for example, with the media researcher Michael Meyen and the political scientist Ulrike Guérot. Ritz advances the interesting thesis [11] that Russia is a provocation for the West, which above all fears its cultural and intellectual potential.
The Russians are Europeans, but different, says Ritz. This "strangeness within similarity" may produce deep feelings of rejection. Although Russia's identity is rooted in European culture, it has always aligned itself with the opposition within Europe. As examples he cites the critique of the Catholic Church and the connection with the workers' movement. Christians, but Orthodox; socialism instead of liberalism. That makes the country the antagonist of the West and a threat to the power structures in Europe.
Conclusion
Of course, history, events, and developments can always be read in different ways. This article, though far too long already, could only touch on a few aspects of the Ukraine tragedy that usually do not appear in the official accounts. At the very least, it should have become clear that the Russian Federation, or Vladimir Putin, is not the sole aggressor in this conflict. It is a proxy war between the USA/NATO (good) and Russia (evil); Ukraine (noble) is simply being used up in the process.
This matters because the entire European war hysteria rests on carefully cultivated friend-enemy images. Only on that basis can confrontation and escalation be pursued, for these images conceal the true backgrounds and motives. Fear and propaganda are necessary for people to go along with the madness. They are being lied to in order first to fleece them and then to send them to the slaughter. Nobody can want that, except the ever-same profiteers: the arms lobby and the big investors who have always made money from destruction and reconstruction.
Speaking of investors: among the top earners, and thus the main parties interested in a continuation of the war, is BlackRock, one of the world's largest asset managers. Germany's chancellor-in-waiting, Friedrich Merz, who would like to deliver "Taurus" cruise missiles to Ukraine and destroy the Crimean Bridge, was chairman of the supervisory board of BlackRock in Germany from 2016 to 2020. But of course that means nothing; the man is just doing his job.
It is a play of forces; it is about power and strategic control, about intelligence services and the control of public opinion, about mineral resources, raw materials, pipelines, and markets. That does not sound sexy, though; "democracy and human rights" sounds better and simpler. Yet a policy beneficial to all sides would not even be that complicated; the tool for it is called diplomacy. Once more, Gabriele Krone-Schmalz:
"Peaceful politics is nothing other than a functioning balancing of interests. It is not about morality."
The situation in Ukraine is certainly complex, above all because of the internal rifts. It will not be easy to find a peaceful arrangement for living together, but the parties involved must above all want one. Under the given circumstances, a sensible perspective could involve neutrality and federal structures.
To everyone who has worked their way through to this point (or simply scrolled down), I wish happy and peaceful Easter days!
[Cover image: Pixabay; Figs. 1 and 2: after Ganser/SIPER; Fig. 3: SIPER]
--- Sources: ---
[1] Albrecht Müller, «Glaube wenig. Hinterfrage alles. Denke selbst.», Westend 2019
[2] Two telling examples:
- ARD-faktenfinder (sic), «Viel Aufmerksamkeit für fragwürdige Experten», 03/2023
- Neue Zürcher Zeitung, «Aufstieg und Fall einer Russlandversteherin – die ehemalige ARD-Korrespondentin Gabriele Krone-Schmalz rechtfertigt seit Jahren Putins Politik», 12/2022
[3] George Washington University, «NATO Expansion: What Gorbachev Heard – Declassified documents show security assurances against NATO expansion to Soviet leaders from Baker, Bush, Genscher, Kohl, Gates, Mitterrand, Thatcher, Hurd, Major, and Woerner», 12/2017
[4] For example, Vladimir Putin in his speech in the German Bundestag, 25/09/2001
[5] William Engdahl, «Full Spectrum Dominance, Totalitarian Democracy In The New World Order», edition.engdahl 2009
[6] Daniele Ganser, «Illegale Kriege – Wie die NATO-Länder die UNO sabotieren. Eine Chronik von Kuba bis Syrien», Orell Füssli 2016
[7] Gabriele Krone-Schmalz, «Mit Friedensjournalismus gegen 'Kriegstüchtigkeit'», lecture and discussion at the University of Hamburg, organized by engaged students, 16/01/2025 → Here is a similar lecture of hers (video) that I found with Spanish translation.
[8] For more background and details, the following books are recommended:
- Mathias Bröckers, Paul Schreyer, «Wir sind immer die Guten», Westend 2019
- Gabriele Krone-Schmalz, «Russland verstehen? Der Kampf um die Ukraine und die Arroganz des Westens», Westend 2023
- Patrik Baab, «Auf beiden Seiten der Front – Meine Reisen in die Ukraine», Fiftyfifty 2023
[9] cf. Jonathan Mowat, «Washington's New World Order "Democratization" Template», 02/2005, and RAND Corporation, «Swarming and the Future of Conflict», 2000
[10] Remarkably, several reports that nobody later wanted to remember:
- ARD Monitor, «Todesschüsse in Kiew: Wer ist für das Blutbad vom Maidan verantwortlich», 10/04/2014, transcript here
- Telepolis, «Blutbad am Maidan: Wer waren die Todesschützen?», 12/04/2014
- Telepolis, «Scharfschützenmorde in Kiew», 14/12/2014
- Deutschlandfunk, «Gefahr einer Spirale nach unten», interview with Günter Verheugen, 18/03/2014
- NDR Panorama, «Putsch in Kiew: Welche Rolle spielen die Faschisten?», 06/03/2014
[11] Hauke Ritz, «Vom Niedergang des Westens zur Neuerfindung Europas», 2024
This post was written with the Pareto client.
-
@ fbf0e434:e1be6a39
2025-04-26 15:58:26Hackathon Overview
The Hedera Hashathon: Nairobi Edition recently came to a successful close, with 223 developers taking part and 49 projects passing review. Held online and supported by Kenya Tech Events, the Nairobi Securities Exchange, and the Virtual Assets Chamber of Commerce, the event aimed to promote local innovation and raise the level of blockchain adoption in Kenya.
The hackathon centered on three core tracks: AI agents, capital markets, and Hedera Explorer. Participants built solutions on the Hedera blockchain to tackle challenges in automation, financial inclusion, and digital asset interaction. With online mentoring and networking opportunities, the event highlighted the value of collaborative development.
The highlight of the event was the Demo Day held at the University of Nairobi, where the finalist teams presented their innovative solutions live and received their awards. The top projects in the capital markets track in particular will receive incubation support and mentorship to advance further development. With its emphasis on practical applications, the hackathon underscored blockchain technology's potential to reshape Kenyan industries, drive technological progress, and boost market participation.
Hackathon Winners
First Place
- **Hedgehog:** An on-chain lending protocol that uses tokenized real stock-exchange shares on the Hedera network as collateral. By combining stock collateral with blockchain transparency, it enables secure, decentralized lending.
Second Place
- **Orion:** Facilitates easy stock trading in Kenya by tokenizing NSE stocks as assets on the Hedera blockchain. Integration with Mpesa streamlines the securities trading process for efficient digital transactions.
Third Place
- **NSEChainBridge:** A blockchain-based platform that enhances the trading of NSE stocks as digital tokens through innovative token solutions, improving the accessibility and liquidity of stock trading.
Fourth Place
- **HashGuard:** A tokenized micro-insurance platform for boda boda riders built on Hedera Hashgraph technology. It offers affordable, instant insurance, making coverage accessible to users without any blockchain expertise.
To see the complete list of projects, visit DoraHacks.
About the Organizer
Hedera
Hedera is a public distributed ledger platform known for its speed, security, and scalability. Its hashgraph consensus algorithm, a variant of proof of stake, offers a distinctive approach to distributed consensus. Hedera is active across multiple industry sectors, supporting projects that prioritize transparency and efficiency. The organization remains committed to advancing the infrastructure of decentralized networks and enabling secure, efficient digital transactions worldwide.
-
@ e6817453:b0ac3c39
2024-10-06 11:21:27Hey folks, today we're diving into an exciting and emerging topic: personal artificial intelligence (PAI) and its connection to sovereignty, privacy, and ethics. With the rapid advancements in AI, there's a growing interest in the development of personal AI agents that can work on behalf of the user, acting autonomously and providing tailored services. However, as with any new technology, there are several critical factors that shape the future of PAI. Today, we'll explore three key pillars: privacy and ownership, explainability, and bias.
1. Privacy and Ownership: Foundations of Personal AI
At the heart of personal AI, much like self-sovereign identity (SSI), is the concept of ownership. For personal AI to be truly effective and valuable, users must own not only their data but also the computational power that drives these systems. This autonomy is essential for creating systems that respect the user's privacy and operate independently of large corporations.
In this context, privacy is more than just a feature—it's a fundamental right. Users should feel safe discussing sensitive topics with their AI, knowing that their data won’t be repurposed or misused by big tech companies. This level of control and data ownership ensures that users remain the sole beneficiaries of their information and computational resources, making privacy one of the core pillars of PAI.
2. Bias and Fairness: The Ethical Dilemma of LLMs
Most of today’s AI systems, including personal AI, rely heavily on large language models (LLMs). These models are trained on vast datasets that represent snapshots of the internet, but this introduces a critical ethical challenge: bias. The datasets used for training LLMs can be full of biases, misinformation, and viewpoints that may not align with a user’s personal values.
This leads to one of the major issues in AI ethics for personal AI—how do we ensure fairness and minimize bias in these systems? The training data that LLMs use can introduce perspectives that are not only unrepresentative but potentially harmful or unfair. As users of personal AI, we need systems that are free from such biases and can be tailored to our individual needs and ethical frameworks.
Unfortunately, training models that are truly unbiased and fair requires vast computational resources and significant investment. While large tech companies have the financial means to develop and train these models, individual users or smaller organizations typically do not. This limitation means that users often have to rely on pre-trained models, which may not fully align with their personal ethics or preferences. While fine-tuning models with personalized datasets can help, it's not a perfect solution, and bias remains a significant challenge.
3. Explainability: The Need for Transparency
One of the most frustrating aspects of modern AI is the lack of explainability. Many LLMs operate as "black boxes," meaning that while they provide answers or make decisions, it's often unclear how they arrived at those conclusions. For personal AI to be effective and trustworthy, it must be transparent. Users need to understand how the AI processes information, what data it relies on, and the reasoning behind its conclusions.
Explainability becomes even more critical when AI is used for complex decision-making, especially in areas that impact other people. If an AI is making recommendations, judgments, or decisions, it’s crucial for users to be able to trace the reasoning process behind those actions. Without this transparency, users may end up relying on AI systems that provide flawed or biased outcomes, potentially causing harm.
This lack of transparency is a major hurdle for personal AI development. Current LLMs, as mentioned earlier, are often opaque, making it difficult for users to trust their outputs fully. The explainability of AI systems will need to be improved significantly to ensure that personal AI can be trusted for important tasks.
Addressing the Ethical Landscape of Personal AI
As personal AI systems evolve, they will increasingly shape the ethical landscape of AI. We’ve already touched on the three core pillars—privacy and ownership, bias and fairness, and explainability. But there's more to consider, especially when looking at the broader implications of personal AI development.
Most current AI models, particularly those from big tech companies like Facebook, Google, or OpenAI, are closed systems. This means they are aligned with the goals and ethical frameworks of those companies, which may not always serve the best interests of individual users. Open models, such as Meta's LLaMA, offer more flexibility and control, allowing users to customize and refine the AI to better meet their personal needs. However, the challenge remains in training these models without significant financial and technical resources.
There’s also the temptation to use uncensored models that aren’t aligned with the values of large corporations, as they provide more freedom and flexibility. But in reality, models that are entirely unfiltered may introduce harmful or unethical content. It’s often better to work with aligned models that have had some of the more problematic biases removed, even if this limits some aspects of the system’s freedom.
The future of personal AI will undoubtedly involve a deeper exploration of these ethical questions. As AI becomes more integrated into our daily lives, the need for privacy, fairness, and transparency will only grow. And while we may not yet be able to train personal AI models from scratch, we can continue to shape and refine these systems through curated datasets and ongoing development.
Conclusion
In conclusion, personal AI represents an exciting new frontier, but one that must be navigated with care. Privacy, ownership, bias, and explainability are all essential pillars that will define the future of these systems. As we continue to develop personal AI, we must remain vigilant about the ethical challenges they pose, ensuring that they serve the best interests of users while remaining transparent, fair, and aligned with individual values.
If you have any thoughts or questions on this topic, feel free to reach out—I’d love to continue the conversation!
-
@ 68c90cf3:99458f5c
2025-04-26 15:05:41Background
Last year I got interested in running my own bitcoin node after reading others' experiences doing so. A couple of decades ago I ran my own Linux and Mac servers, and enjoyed building and maintaining them. I was by no means an expert sys admin, but had my share of cron jobs, scripts, and custom configuration files. While it was fun and educational, software updates and hardware upgrades often meant hours of restoring and troubleshooting my systems.
Fast forward to family and career (especially going into management) and I didn't have time for all that. Having things just work became more important than playing with the tech. As I got older, the more I appreciated K.I.S.S. (for those who don't know: Keep It Simple Stupid).
So when the idea of running a node came to mind, I explored the different options. I decided I needed a balance between a Raspberry Pi (possibly underpowered depending on use) and a full-blown Linux server (too complex and time-consuming to build and maintain). That led me to Umbrel OS, Start9, Casa OS, and similar platforms. Due to its simplicity (very plug and play), nice design, and being open source (GitHub), I chose Umbrel OS on a Beelink mini PC with 16GB of RAM and a 2TB NVMe internal drive. Though Umbrel OS is not very flexible and can't really be customized, its App Store made setting up a node (among other things) fairly easy, and it has been running smoothly since. Would the alternatives have been better? Perhaps, but so far I'm happy with my choice.
Server Setup
I'm also no expert in OpSec (I'd place myself in the category of somewhat above vague awareness). I wanted a secure way to connect to my Umbrel without punching holes in my router and forwarding ports. I chose Tailscale for this purpose. Those who are distrustful of corporate products might not like this option but again, balancing risk with convenience it seemed reasonable for my needs. If you're hiding state (or anti-state) secrets, extravagant wealth, or just adamant about privacy, you would probably want to go with an entirely different setup.
Once I had Tailscale installed on Umbrel OS, my mobile device, and my laptop, I could securely connect to the server from anywhere through a well-designed browser UI. I then installed the following from the Umbrel App Store:
- Bitcoin Core
- Electrum Personal Server (Electrs)
At this point I could set wallets on my laptop (Sparrow) and phone (BlueWallet) to use my node. I then installed:
- Lightning Node (LND)
- Alby Hub
Alby Hub streamlines the process of opening and maintaining lightning channels, creating lightning wallets to send and receive sats, and zapping notes and users on Nostr. I have two main nsec accounts for Nostr and set up separate wallets on Alby Hub to track balances and transactions for each.
Other apps I installed on Umbrel OS:
- mempool
- Bitcoin Explorer
- LibreTranslate (some Nostr clients allow you to use your own translator)
- Public Pool
Public Pool allows me to connect Bitaxe solo miners (a.k.a. "lottery" miners) to my own mining pool for a (very) long shot at winning a Bitcoin block. It's also a great way to learn about mining, contribute to network decentralization, and generally tinker with electronics. Bitaxe miners are small open source single ASIC miners that you can run in your home with minimal technical knowledge and maintenance requirements.
Open Source Miners United (OSMU) is a great resource for anyone interesting in Bitaxe or other open source mining products (especially their Discord server).
Although Umbrel OS is more or less limited to running software in its App Store (or Community App Store, if you trust the developer), you can install the Portainer app and run Docker images. I know next to nothing about Docker but wanted to see what I might be able to do with it. I was also interested in the Haven Nostr relay and found that there was indeed a docker image for it.
As stated before, I didn't want to open my network to the outside, which meant I wouldn't be able to take advantage of all the features Haven offers (since other users wouldn't be able to access it). I would, however, be able to post notes to my relay and use its "Blastr" feature to send my notes to other relays. After some trial and error I managed to get Haven up and running in Portainer.
The upside of this setup is self-custody: being able to connect wallets to my own Bitcoin node, send and receive zaps with my own Lightning channel, solo mine with Bitaxe to my own pool, and send notes to my own Nostr relay. The downside is the lack of redundancy and uptime provided by major cloud services. You have to decide on your own comfort level. A solid internet connection and reliable power are definitely needed.
This article was written and published to Nostr with untype.app.
-
@ e6817453:b0ac3c39
2024-09-30 14:52:23In the modern world of AI, managing vast amounts of data while keeping it relevant and accessible is a significant challenge, especially when dealing with large language models (LLMs) and vector databases. One approach that has gained prominence in recent years is integrating vector search with metadata, especially in retrieval-augmented generation (RAG) pipelines. Together, vector search and metadata enable faster and more accurate data retrieval. However, the way results are filtered before and after the search plays a crucial role in ensuring data relevance.
The Vector Search and Metadata Challenge
In a typical vector search, you create embeddings from chunks of text, such as a PDF document. These embeddings allow the system to search for similar items and retrieve them based on relevance. The challenge, however, arises when you need to combine vector search results with structured metadata. For example, you may have timestamped text-based content and want to retrieve the most relevant content within a specific date range. This is where metadata becomes critical in refining search results.
Unfortunately, most vector databases treat metadata as a secondary feature, isolating it from the primary vector search process. As a result, handling queries that combine vectors and metadata can become a challenge, particularly when the search needs to account for a dynamic range of filters, such as dates or other structured data.
LibSQL and vector search metadata
LibSQL is a general-purpose, SQLite-based database that adds vector capabilities to regular data. Vectors are stored as blob columns of regular tables. This makes vector embeddings and metadata first-class citizens and naturally enables deep integration between the two kinds of data points.
```sql
create table if not exists conversation (
  id varchar(36) primary key not null,
  startDate real,
  endDate real,
  summary text,
  vectorSummary F32_BLOB(512)
);
```
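For orientation, inserting a row with its embedding might then look as follows. This is a sketch using the @libsql/client package, and passing the embedding to vector() as a JSON-style array string is an assumption to verify against your libSQL version.

```typescript
import { createClient } from "@libsql/client";

const client = createClient({ url: "file:local.db" });

// Store a conversation summary together with its 512-dimensional embedding.
await client.execute({
  sql: `insert into conversation (id, startDate, endDate, summary, vectorSummary)
        values (?, ?, ?, ?, vector(?))`,
  args: [
    crypto.randomUUID(),                       // matches the varchar(36) id column (Node 19+/browsers)
    Date.now() - 3_600_000,                    // startDate: one hour ago
    Date.now(),                                // endDate: now
    "Discussed vector search strategies",      // summary
    JSON.stringify(new Array(512).fill(0.01)), // placeholder embedding
  ],
});
```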
This design solves the challenge of combining metadata with vector search and eliminates the impedance mismatch between vector data and regular structured data points living in the same storage.
As you can see, you can access the vector data and the start date in the same query.
```sql
select c.id, c.startDate, c.endDate, c.summary,
       vector_distance_cos(c.vectorSummary, vector(${vector})) as distance
from conversation
where 1=1
  ${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
  ${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
  ${distance ? `and vector_distance_cos(c.vectorSummary, vector(${vector})) <= ${distance}` : ''}
order by distance
limit ${top};
```
vector_distance_cos, aliased as distance, gives us a primitive vector search that does a full table scan and calculates the distance for every row. We could optimize it with a CTE and limit the search and distance calculations to a much smaller subset of data.
This approach is computation-intensive and breaks down on large amounts of data.
LibSQL offers a far more effective vector search based on the FlashDiskANN vector index.
```sql
vector_top_k('idx_conversation_vectorSummary', ${vector}, ${top}) i
```
vector_top_k is a table function that searches the newly created vector index for the top K closest rows. Note that only the vector itself can be passed as a function parameter; other columns have to be used outside of the table function. So, to use a vector index together with other columns, we need to apply some strategies.
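For completeness, here is a minimal sketch of how that index can be created. The libsql_vector_idx helper comes from the libSQL documentation, and the index name matches the queries in this article, but treat the exact syntax as an assumption to check against your libSQL version.

```typescript
import { createClient } from "@libsql/client";

const client = createClient({ url: "file:local.db" });

// Build the vector index that vector_top_k searches.
await client.execute(
  "CREATE INDEX IF NOT EXISTS idx_conversation_vectorSummary " +
  "ON conversation (libsql_vector_idx(vectorSummary))"
);
```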
Now we face the classic problem of integrating vector search results with metadata queries.
Post-Filtering: A Common Approach
The most widely adopted method in these pipelines is post-filtering. In this approach, the system first retrieves data based on vector similarities and then applies metadata filters. For example, imagine you’re conducting a vector search to retrieve conversations relevant to a specific question. Still, you also want to ensure these conversations occurred in the past week.
Post-filtering allows the system to retrieve the most relevant vector-based results and subsequently filter out any that don’t meet the metadata criteria, such as date range. This method is efficient when vector similarity is the primary factor driving the search, and metadata is only applied as a secondary filter.
```typescript
const sqlQuery = `
  select c.id, c.startDate, c.endDate, c.summary,
         vector_distance_cos(c.vectorSummary, vector(${vector})) as distance
  from vector_top_k('idx_conversation_vectorSummary', ${vector}, ${top}) i
  inner join conversation c on i.id = c.rowid
  where 1=1
    ${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
    ${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
    ${distance ? `and vector_distance_cos(c.vectorSummary, vector(${vector})) <= ${distance}` : ''}
  order by distance
  limit ${top};`;
```
However, there are some limitations. For example, the initial vector search may yield fewer results or omit some relevant data before the metadata filter is applied. If the search window is narrow enough, this can lead to incomplete results.
One working strategy is to make the top value in vector_top_k much bigger (see the sketch below). Be careful, though, as the function's default maximum number of results is around 200 rows.
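To make the whole post-filtering flow concrete, here is a small sketch using the @libsql/client package. The oversized top value (200) and the one-week date filter are illustrative, and passing the embedding as a JSON array string to vector() is again an assumption rather than the author's exact code.

```typescript
import { createClient } from "@libsql/client";

const client = createClient({ url: "file:local.db" });

// Hypothetical query embedding; in practice this comes from your embedding model.
const queryVector = JSON.stringify(new Array(512).fill(0.01));

const result = await client.execute({
  sql: `
    select c.id, c.startDate, c.endDate, c.summary,
           vector_distance_cos(c.vectorSummary, vector(?)) as distance
    from vector_top_k('idx_conversation_vectorSummary', vector(?), ?) i
    inner join conversation c on i.id = c.rowid
    where c.startDate >= ?
    order by distance
    limit ?`,
  // Oversize the index search (200), then post-filter by date and keep the top 10.
  args: [queryVector, queryVector, 200, Date.now() - 7 * 24 * 3_600_000, 10],
});

for (const row of result.rows) {
  console.log(row.id, row.distance);
}
```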
Pre-Filtering: A More Complex Approach
Pre-filtering is a more intricate approach but can be more effective in some instances. In pre-filtering, metadata is used as the primary filter before vector search takes place. This means that only data that meets the metadata criteria is passed into the vector search process, limiting the scope of the search right from the beginning.
While this approach can significantly reduce the amount of irrelevant data in the final results, it comes with its own challenges. For example, pre-filtering requires a deeper understanding of the data structure and may necessitate denormalizing the data or creating separate pre-filtered tables. This can be resource-intensive and, in some cases, impractical for dynamic metadata like date ranges.
In certain use cases, pre-filtering might outperform post-filtering. For instance, when the metadata (e.g., specific date ranges) is the most important filter, pre-filtering ensures the search is conducted only on the most relevant data.
Pre-filtering with distance-based filtering
So we are getting back to an old concept: we do pre-filtering instead of using a vector index.
```sql
WITH FilteredDates AS (
  SELECT c.id, c.startDate, c.endDate, c.summary, c.vectorSummary
  FROM conversation c
  WHERE 1=1
    ${startDate ? `AND c.startDate >= ${startDate.getTime()}` : ''}
    ${endDate ? `AND c.endDate <= ${endDate.getTime()}` : ''}
),
DistanceCalculation AS (
  SELECT fd.id, fd.startDate, fd.endDate, fd.summary, fd.vectorSummary,
         vector_distance_cos(fd.vectorSummary, vector(${vector})) AS distance
  FROM FilteredDates fd
)
SELECT dc.id, dc.startDate, dc.endDate, dc.summary, dc.distance
FROM DistanceCalculation dc
WHERE 1=1
  ${distance ? `AND dc.distance <= ${distance}` : ''}
ORDER BY dc.distance
LIMIT ${top};
```
This makes sense if the filter produces a small data set, so that the distance calculation happens on far fewer rows.
On the plus side of this approach, you have full control over the data and get complete results, without the omissions that are typical of extensive index searches.
Choosing Between Pre and Post-Filtering
Both pre-filtering and post-filtering have their advantages and disadvantages. Post-filtering is more accessible to implement, especially when vector similarity is the primary search factor, but it can lead to incomplete results. Pre-filtering, on the other hand, can yield more accurate results but requires more complex data handling and optimization.
In practice, many systems combine both strategies, depending on the query. For example, they might start with a broad pre-filtering based on metadata (like date ranges) and then apply a more targeted vector search with post-filtering to refine the results further.
Conclusion
Vector search with metadata filtering offers a powerful approach for handling large-scale data retrieval in LLMs and RAG pipelines. Whether you choose pre-filtering or post-filtering—or a combination of both—depends on your application's specific requirements. As vector databases continue to evolve, future innovations that combine these two approaches more seamlessly will help improve data relevance and retrieval efficiency further.
-
@ 3bf0c63f:aefa459d
2024-09-06 12:49:46Nostr: a quick introduction, attempt #2
Nostr doesn't subscribe to any ideals of "free speech" as these belong to the realm of politics and assume a big powerful government that enforces a common rule upon everybody else.
Nostr instead is much simpler, it simply says that servers are private property and establishes a generalized framework for people to connect to all these servers, creating a true free market in the process. In other words, Nostr is the public road that each market participant can use to build their own store or visit others and use their services.
(Of course a road is never truly public; in normal cases it's run by the government. In this case it relies upon the previous existence of the internet with all its quirks and chaos, plus a hand of government control, but none of that matters for this explanation.)
More concretely speaking, Nostr is just a set of definitions of the formats of the data that can be passed between participants and their expected order, i.e. messages between clients (i.e. the program that runs on a user computer) and relays (i.e. the program that runs on a publicly accessible computer, a "server", generally with a domain-name associated) over a type of TCP connection (WebSocket) with cryptographic signatures. This is what is called a "protocol" in this context, and upon that simple base multiple kinds of sub-protocols can be added, like a protocol for "public-square style microblogging", "semi-closed group chat" or, I don't know, "recipe sharing and feedback".
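To make that concrete, here is a rough sketch of this base message flow (defined in NIP-01), assuming a Node environment with the ws package; the relay URL is a placeholder, and any relay speaking the protocol would behave the same way.

```typescript
import WebSocket from "ws";

const relay = new WebSocket("wss://relay.example.com"); // placeholder relay

relay.on("open", () => {
  // Ask the relay for the latest 10 short text notes (kind 1).
  relay.send(JSON.stringify(["REQ", "my-sub", { kinds: [1], limit: 10 }]));
});

relay.on("message", (raw) => {
  const msg = JSON.parse(raw.toString());
  if (msg[0] === "EVENT" && msg[1] === "my-sub") {
    // Each event is cryptographically signed by its author.
    const event = msg[2]; // { id, pubkey, created_at, kind, tags, content, sig }
    console.log(event.pubkey, event.content);
  } else if (msg[0] === "EOSE") {
    // End of stored events; close the subscription.
    relay.send(JSON.stringify(["CLOSE", "my-sub"]));
  }
});
```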
-
@ da18e986:3a0d9851
2024-08-14 13:58:24After months of development I am excited to officially announce the first version of DVMDash (v0.1). DVMDash is a monitoring and debugging tool for all Data Vending Machine (DVM) activity on Nostr. The website is live at https://dvmdash.live and the code is available on Github.
Data Vending Machines (NIP-90) offload computationally expensive tasks from relays and clients in a decentralized, free-market manner. They are especially useful for AI tools, algorithmic processing of user’s feeds, and many other use cases.
The long term goal of DVMDash is to become 1) a place to easily see what’s happening in the DVM ecosystem with metrics and graphs, and 2) provide real-time tools to help developers monitor, debug, and improve their DVMs.
DVMDash aims to enable users to answer these types of questions at a glance:
- What's the most popular DVM right now?
- How much money is being paid to image generation DVMs?
- Is any DVM down at the moment? When was the last time that DVM completed a task?
- Have any DVMs failed to deliver after accepting payment? Did they refund that payment?
- How long does it take this DVM to respond?
- For task X, what's the average amount of time it takes for a DVM to complete the task?
- ... and more
For developers working with DVMs there is now a visual, graph based tool that shows DVM-chain activity. DVMs have already started calling other DVMs to assist with work. Soon, we will have humans in the loop monitoring DVM activity, or completing tasks themselves. The activity trace of which DVM is being called as part of a sub-task from another DVM will become complicated, especially because these decisions will be made at run-time and are not known ahead of time. Building a tool to help users and developers understand where a DVM is in this activity trace, whether it’s gotten stuck or is just taking a long time, will be invaluable. For now, the website only shows 1 step of a dvm chain from a user's request.
One of the main designs for the site is that it is highly clickable, meaning whenever you see a DVM, Kind, User, or Event ID, you can click it and open that up in a new page to inspect it.
Another aspect of this website is that it should be fast. If you submit a DVM request, you should see it in DVMDash within seconds, as well as events from DVMs interacting with your request. I have attempted to obtain DVM events from relays as quickly as possible and compute metrics over them within seconds.
This project makes use of a NoSQL database and a graph database, currently set to MongoDB and Neo4j, for which there are free community versions that can be run locally.
Finally, I’m grateful to nostr:npub10pensatlcfwktnvjjw2dtem38n6rvw8g6fv73h84cuacxn4c28eqyfn34f for supporting this project.
Features in v0.1:
Global Network Metrics:
This page shows the following metrics:
- DVM Requests: Number of unencrypted DVM requests (kind 5000-5999)
- DVM Results: Number of unencrypted DVM results (kind 6000-6999)
- DVM Request Kinds Seen: Number of unique kinds in the kind range 5000-5999 (except for known non-DVM kinds 5666 and 5969)
- DVM Result Kinds Seen: Number of unique kinds in the kind range 6000-6999 (except for known non-DVM kinds 6666 and 6969)
- DVM Pub Keys Seen: Number of unique pub keys that have written a kind 6000-6999 event (except for known non-DVM kinds) or have published a kind 31990 event that specifies a 'k' tag value between 5000-5999
- DVM Profiles (NIP-89) Seen: Number of kind 31990 events that have a 'k' tag value for kind 5000-5999
- Most Popular DVM: The DVM that has produced the most result events (kind 6000-6999)
- Most Popular Kind: The kind in range 5000-5999 that has the most requests by users
- 24 hr DVM Requests: Number of kind 5000-5999 events created in the last 24 hours
- 24 hr DVM Results: Number of kind 6000-6999 events created in the last 24 hours
- 1 week DVM Requests: Number of kind 5000-5999 events created in the last week
- 1 week DVM Results: Number of kind 6000-6999 events created in the last week
- Unique Users of DVMs: Number of unique pubkeys of kind 5000-5999 events
- Total Sats Paid to DVMs (see the sketch after this list):
  - This is an estimate.
  - This value is likely a lower bound, as it does not take into consideration subscriptions paid to DVMs.
  - It is calculated by counting the values of all invoices where:
    - A DVM published a kind 7000 event requesting payment and containing an invoice
    - The DVM later provided a DVM result for the same job for which it requested payment
  - The assumption is that the invoice was paid, otherwise the DVM would not have done the work.
  - Note that because there are multiple ways to pay a DVM, such as lightning invoices, ecash, and subscriptions, there is no guaranteed way to know whether a DVM has been paid. Additionally, there is no way to know that a DVM completed the job, because some DVMs may not publish a final result event and instead send the user a DM or take some other kind of action.
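As a rough illustration of how that estimate can be computed, here is a sketch in TypeScript. It assumes, per NIP-90, that kind 7000 feedback events carry an ["amount", "<millisats>", "<bolt11>"] tag and that both feedback and result events reference the originating job request via an "e" tag; the actual DVMDash logic may differ in its details.

```typescript
type NostrEvent = { kind: number; pubkey: string; tags: string[][] };

function estimateSatsPaid(events: NostrEvent[]): number {
  // Collect the job request ids that eventually produced a result (kind 6000-6999).
  const completedJobs = new Set<string>();
  for (const ev of events) {
    if (ev.kind >= 6000 && ev.kind <= 6999) {
      const jobId = ev.tags.find((t) => t[0] === "e")?.[1];
      if (jobId) completedJobs.add(jobId);
    }
  }

  // Sum the invoiced amounts from kind 7000 payment requests for completed jobs.
  let millisats = 0;
  for (const ev of events) {
    if (ev.kind !== 7000) continue;
    const jobId = ev.tags.find((t) => t[0] === "e")?.[1];
    const amount = ev.tags.find((t) => t[0] === "amount")?.[1];
    if (jobId && amount && completedJobs.has(jobId)) millisats += Number(amount);
  }

  return Math.floor(millisats / 1000); // millisats -> sats
}
```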
Recent Requests:
This page shows the most recent 3 events per kind, sorted by created date. You should always be able to find the last 3 events here of all DVM kinds.
DVM Browser:
This page will either show a profile of a specific DVM, or when no DVM is given in the url, it will show a table of all DVMs with some high level stats. Users can click on a DVM in the table to load the DVM specific page.
Kind Browser:
This page will either show data on a specific kind including all DVMs that have performed jobs of that kind, or when no kind is given, it will show a table summarizing activity across all Kinds.
Debug:
This page shows the graph based visualization of all events, users, and DVMs involved in a single job as well as a table of all events in order from oldest to newest. When no event is given, this page shows the 200 most recent events where the user can click on an event in order to debug that job. The graph-based visualization allows the user to zoom in and out and move around the graph, as well as double click on any node in the graph (except invoices) to open up that event, user, or dvm in a new page.
Playground:
This page is currently under development and may not work at the moment. If it does work, in the current state you can log in with a NIP-07 extension and broadcast a kind 5050 event with some text, and then the page will show you events from DVMs. This page will be used to interact with DVMs live. A good current alternative to this feature, for some but not all kinds, is https://vendata.io/.
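For reference, the request the Playground broadcasts is, per NIP-90, just a kind 5050 event whose input rides in an "i" tag. Here is a minimal unsigned sketch; the field values and the relays tag are illustrative, and signing (id, pubkey, sig) is left to a NIP-07 extension or SDK.

```typescript
// Minimal unsigned NIP-90 job request for text generation (kind 5050).
const jobRequest = {
  kind: 5050,
  created_at: Math.floor(Date.now() / 1000),
  content: "",
  tags: [
    ["i", "Write a haiku about data vending machines", "text"], // input + input type
    ["relays", "wss://relay.example.com"],                      // where DVMs should reply
  ],
};
```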
Looking to the Future
I originally built DVMDash out of Fear-of-Missing-Out (FOMO); I wanted to build AI systems composed of DVMs, but my day job was taking up a lot of my time. I needed to know when someone was performing a new task or launching a new AI or Nostr tool!
I have a long list of DVMs and Agents I hope to build and I needed DVMDash to help me do it; I hope it helps you achieve your goals with Nostr, DVMs, and even AI. To this end, I wish for this tool to be useful to others, so if you would like a feature, please submit a git issue here or note me on Nostr!
Immediate Next Steps:
- Refactoring code and removing code that is no longer used
- Improve documentation to run the project locally
- Adding a metric for number of encrypted requests
- Adding a metric for number of encrypted results
Long Term Goals:
- Add more metrics based on community feedback
- Add plots showing metrics over time
- Add support for showing a multi-dvm chain in the graph based visualizer
- Add a real-time mode where the pages will auto update (currently the user must refresh the page)
- ... Add support for user requested features!
Acknowledgements
There are some fantastic people working in the DVM space right now. Thank you to nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet for making python bindings for nostr_sdk and for the recent asyncio upgrades! Thank you to nostr:npub1nxa4tywfz9nqp7z9zp7nr7d4nchhclsf58lcqt5y782rmf2hefjquaa6q8 for answering lots of questions about DVMs and for making the nostrdvm library. Thank you to nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft for making the original DVM NIP and vendata.io which I use all the time for testing!
P.S. I rushed to get this out in time for Nostriga 2024; code refactoring will be coming :)
-
@ 3c984938:2ec11289
2024-07-22 11:43:17Bienvenide a Nostr!
Introduccíon
Es tu primera vez aqui en Nostr? Bienvenides! Nostr es un acrónimo raro para "Notes and Other Stuff Transmitted by Relays" on un solo objetivo; resistirse a la censura. Una alternativa a las redes sociales tradicionales, comunicaciónes, blogging, streaming, podcasting, y feventualmente el correo electronico (en fase de desarrollo) con características descentralizadas que te capacita, usario. Jamas seras molestado por un anuncio, capturado por una entidad centralizada o algoritmo que te monetiza.
Permítame ser su anfitrión! Soy Onigiri! Yo estoy explorando el mundo de Nostr, un protocolo de comunicacíon decentralizada. Yo escribo sobre las herramientas y los desarolladores increíbles de Nostr que dan vida a esta reino.
Bienvenides a Nostr Wonderland
Estas a punto de entrar a un otro mundo digtal que te hará explotar tu mente de todas las aplicaciones descentralizadas, clientes, sitios que puedes utilizar. Nunca volverás a ver a las comunicaciones ni a las redes sociales de la mesma manera. Todo gracias al carácter criptográfico de nostr, inpirado por la tecnología "blockchain". Cada usario, cuando crean una cuenta en Nostr, recibe un par de llaves: una privada y una publico. Estos son las llaves de tu propio reino. Lo que escribes, cantes, grabes, lo que creas - todo te pertenece.
Unos llaves de Oro y Plata
Mi amigo y yo llamamos a esto "identidad mediante cifrado" porque tu identidad es cifrado. Tu puedes compartir tu llave de plata "npub" a otros usarios para conectar y seguir. Utiliza tu llave de oro "nsec" para accedar a tu cuenta y exponerte a muchas aplicaciones. Mantenga la llave a buen recaudo en todo momento. Ya no hay razor para estar enjaulado por los terminos de plataformas sociales nunca más.
Onigirl
npub18jvyjwpmm65g8v9azmlvu8knd5m7xlxau08y8vt75n53jtkpz2ys6mqqu3
Todavia No tienes un cliente? Seleccione la mejor opción.
Encuentra la aplicación adecuada para ti! Utilice su clave de oro "nsec" para acceder a estas herramientas maravillosas. También puedes visit a esta pagina a ver a todas las aplicaciones. Antes de pegar tu llave de oro en muchas aplicaciones, considera un "signer" (firmante) para los sitios web 3. Por favor, mire la siguiente imagen para más detalles. Consulte también la leyenda.
Get a Signer extension via chrome webstore
A signer is a web browser extension. Nos2x and NostrConnect are widely adopted extensions for accessing Nostr. They simplify logging in to "web 3" sites: instead of copying and pasting your gold key "nsec" every time, you keep it stored in the extension and grant it permission to access Nostr.
👉⚡⚡Get a Bitcoin lightning wallet to send/receive Zaps⚡⚡ (This is optional)
Here on Nostr, we use Bitcoin's Lightning network (L2). You'll need a lightning wallet to send and receive Satoshis, the smallest denomination of a Bitcoin (0.00000001 BTC). "Zaps" are a kind of micropayment on Nostr. If you like a user's content, it is customary to leave them a tip in the form of a "zap". For example, if you like this content, you can zap me some Satoshis to reward my work. But you've only just arrived, so you don't have a wallet yet. Don't worry, I can help with that!
"Stacker.News" is a platform where users can earn SATs for publishing articles and interacting with others.
Stacker.News is the easiest place to get a Bitcoin Lightning wallet address.
- Log in with your "signer" extension - Nos2x or NostrConnect - and click on your profile, a string of letters and numbers in the upper right-hand corner. You'll see something like this
- Click "edit" and choose a name you like. You can change it later if you wish.
- Click "save"
- Write a bio; the SN community is very welcoming. They'll send you satoshis to welcome you.
- Your new Bitcoin Lightning wallet address will appear like this
^^Don't send "zaps" to this address; it is purely for educational purposes.
- You can paste your new Bitcoin Lightning wallet address into any client or app of your choice. To do so, go to your profile page and, under the wallet address in "Lightning Address", enter your new address and hit "save", and that's it. Congratulations!
👉✨Over time, you may want to move to self-custody options and perhaps even consider self-hosting your own LN node for better privacy. The good news is that stacker.news is also moving away from being a custodial wallet.
⭐NIP-05 - DNS identity⭐ Just like on Twitter, a checkmark shows that you're from the same garden, "a human", and not an outlier like a weed or a "bot" - but not in the nefarious way Big Tech does it. In Nostr Wonderland, this lets you map your silver key, the "npub", to a DNS identifier. Once verified, you can shout to announce your new Nostr residence and share it.
✨There are plenty of options, but if you've followed the steps, this becomes extremely easy.
👉✅Click on your "Profile", then "Settings", scroll to the bottom and paste your silver key, "npub", then click "Save" and that's it! Use your Stacker.news lightning wallet as your NIP-05. Congratulations!!! You're now verified! Give it a few hours, and when you use your "main" client you should see a checkmark.
Nostr, the nonconformist of servers
Instead of using a single instance or a centralized server, Nostr is built so that multiple databases exchange messages through "relays". Relays, which are neutral and nondiscriminatory, store and broadcast public messages on the Nostr network. They transmit messages to all other clients connected to them, securing communications across the decentralized network.
My friends on Nostr welcome you!
Welcome to the party. Would you like some tea?🍵
There's so much more!
This is just the tip of the iceberg. Follow me as I continue exploring new lands and the developers, the knights who power this ecosystem. Find me here for more content like this, and share it with other nostr users. Meet the knights fighting for freedomTech (freedom technology) on Nostr and the projects they contribute to in order to make it a reality.💋
Onigirl @npub18jvyjwpmm65g8v9azmlvu8knd5m7xlxau08y8vt75n53jtkpz2ys6mqqu3
🧡😻This guide was carefully translated by miggymofongo
You can follow her here. @npub1ajt9gp0prf4xrp4j07j9rghlcyukahncs0fw5ywr977jccued9nqrcc0cs
website
-
@ 6871d8df:4a9396c1
2024-06-12 22:10:51Embracing AI: A Case for AI Accelerationism
In an era where artificial intelligence (AI) development is at the forefront of technological innovation, a counter-narrative championed by a group I refer to as the 'AI Decels'—those advocating for the deceleration of AI advancements—seems to be gaining significant traction. After tuning into a recent episode of the Joe Rogan Podcast, I realized that the prevailing narrative around AI was heading in a dangerous direction. Rogan hosted Aza Raskin and Tristan Harris, technology safety advocates who released a talk called 'The AI Dilemma,' for a discussion. You may know them from the popular documentary 'The Social Dilemma' on the dangers of social media. It became increasingly clear that the cautionary stance dominating this discourse might be tipping the scales too far, veering towards an over-regulated future that stifles innovation rather than fostering it.
Are we moving too fast?
While acknowledging AI's benefits, Aza and Tristan fear it could be dangerous if not guided by ethical standards and safeguards. They believe AI development is moving too quickly and that the right incentives for its growth are not in place. They are concerned about the possibility of "civilizational overwhelm," where advanced AI technology far outpaces 21st-century governance. They fear a scenario where society and its institutions cannot manage or adapt to the rapid changes and challenges introduced by AI.
They argue for regulating and slowing down AI development due to rapid, uncontrolled advancement driven by competition among companies like Google, OpenAI, and Microsoft. They claim this race can lead to unsafe releases of new technologies, with AI systems exhibiting unpredictable, emergent behaviors, posing significant societal risks. For instance, AI can inadvertently learn tasks like sentiment analysis or human emotion understanding, creating potential for misuse in areas like biological weapons or cybersecurity vulnerabilities.
Moreover, AI companies' profit-driven incentives often conflict with the public good, prioritizing market dominance over safety and ethics. This misalignment can lead to technologies that maximize engagement or profits at societal expense, similar to the negative impacts seen with social media. To address these issues, they suggest government regulation to realign AI companies' incentives with safety, ethical considerations, and public welfare. Implementing responsible development frameworks focused on long-term societal impacts is essential for mitigating potential harm.
This isn't new
Though the premise of their concerns seems reasonable, it's dangerous and an all too common occurrence with the emergence of new technologies. For example, in their example in the podcast, they refer to the technological breakthrough of oil. Oil as energy was a technological marvel and changed the course of human civilization. The embrace of oil — now the cornerstone of industry in our age — revolutionized how societies operated, fueled economies, and connected the world in unprecedented ways. Yet recently, as ideas of its environmental and geopolitical ramifications propagated, the narrative around oil has shifted.
Tristan and Aza detail this shift and claim that though the period was great for humanity, we didn't have another technology to go to once the technological consequences became apparent. The problem with that argument is that we did innovate to a better alternative: nuclear. However, at its technological breakthrough, it was met with severe suspicions, from safety concerns to ethical debates over its use. This overregulation due to these concerns caused a decades-long stagnation in nuclear innovation, where even today, we are still stuck with heavy reliance on coal and oil. The scare tactics and fear-mongering had consequences, and, interestingly, they don't see the parallels with their current deceleration stance on AI.
These examples underscore a critical insight: the initial anxiety surrounding new technologies is a natural response to the unknowns they introduce. Yet, history shows that too much anxiety can stifle the innovation needed to address the problems posed by current technologies. The cycle of discovery, fear, adaptation, and eventual acceptance reveals an essential truth—progress requires not just the courage to innovate but also the resilience to navigate the uncertainties these innovations bring.
Moreover, believing we can predict and plan for all AI-related unknowns reflects overconfidence in our understanding and foresight. History shows that technological progress, marked by unexpected outcomes and discoveries, defies such predictions. The evolution from the printing press to the internet underscores progress's unpredictability. Hence, facing AI's future requires caution, curiosity, and humility. Acknowledging our limitations and embracing continuous learning and adaptation will allow us to harness AI's potential responsibly, illustrating that embracing our uncertainties, rather than pretending to foresee them, is vital to innovation.
The journey of technological advancement is fraught with both promise and trepidation. Historically, each significant leap forward, from the dawn of the industrial age to the digital revolution, has been met with a mix of enthusiasm and apprehension. Aza Raskin and Tristan Harris's thesis in the 'AI Dilemma' embodies the latter.
Who defines "safe?"
When slowing down technologies for safety or ethical reasons, the issue arises of who gets to define what "safe" or “ethical” mean? This inquiry is not merely technical but deeply ideological, touching the very core of societal values and power dynamics. For example, the push for Diversity, Equity, and Inclusion (DEI) initiatives shows how specific ideological underpinnings can shape definitions of safety and decency.
Take the case of the initial release of Google's AI chatbot, Gemini, which chose the ideology of its creators over truth. Luckily, the answers were so ridiculous that the pushback was sudden and immediate. My worry, however, is if, in correcting this, they become experts in making the ideological capture much more subtle. Large bureaucratic institutions' top-down safety enforcement creates a fertile ground for ideological capture of safety standards.
I claim that the issue is not the technology itself but the lens through which we view and regulate it. Suppose the gatekeepers of 'safety' are aligned with a singular ideology. In that case, AI development would skew to serve specific ends, sidelining diverse perspectives and potentially stifling innovative thought and progress.
In the podcast, Tristan and Aza suggest such manipulation as a solution. They propose using AI for consensus-building and creating "shared realities" to address societal challenges. In practice, this means that when individuals' viewpoints seem to be far apart, we can leverage AI to "bridge the gap." How they bridge the gap and what we would bridge it toward is left to the imagination, but to me, it is clear. Regulators will inevitably influence it from the top down, which, in my opinion, would be the opposite of progress.
In navigating this terrain, we must advocate for a pluralistic approach to defining safety, encompassing various perspectives and values achieved through market forces rather than a governing entity choosing winners. The more players that can play the game, the more wide-ranging perspectives will catalyze innovation to flourish.
Ownership & Identity
Just because we should accelerate AI forward does not mean I do not have my concerns. When I think about what could be the most devastating for society, I don't believe we have to worry about a Matrix-level dystopia; I worry about freedom. As I explored in "Whose data is it anyway?," my concern gravitates toward the issues of data ownership and the implications of relinquishing control over our digital identities. This relinquishment threatens our privacy and the integrity of the content we generate, leaving it susceptible to the inclinations and profit of a few dominant tech entities.
To counteract these concerns, a paradigm shift towards decentralized models of data ownership is imperative. Such standards would empower individuals with control over their digital footprints, ensuring that we develop AI systems with diverse, honest, and truthful perspectives rather than the massaged, narrow viewpoints of their creators. This shift safeguards individual privacy and promotes an ethical framework for AI development that upholds the principles of fairness and impartiality.
As we stand at the crossroads of technological innovation and ethical consideration, it is crucial to advocate for systems that place data ownership firmly in the hands of users. By doing so, we can ensure that the future of AI remains truthful, non-ideological, and aligned with the broader interests of society.
But what about the Matrix?
I know I am in the minority on this, but I feel that the concerns of AGI (Artificial General Intelligence) are generally overblown. I am not scared of reaching the point of AGI, and I think the idea that AI will become so intelligent that we will lose control of it is unfounded and silly. Reaching AGI is not reaching consciousness; being worried about it spontaneously gaining consciousness is a misplaced fear. It is a tool created by humans for humans to enhance productivity and achieve specific outcomes.
At a technical level, large language models (LLMs) are trained on extensive datasets and learning patterns from language and data through a technique called "unsupervised learning" (meaning the data is untagged). They predict the next word in sentences, refining their predictions through feedback to improve coherence and relevance. When queried, LLMs generate responses based on learned patterns, simulating an understanding of language to provide contextually appropriate answers. They will only answer based on the datasets that were inputted and scanned.
AI will never be "alive," meaning that AI lacks inherent agency, consciousness, and the characteristics of life, not capable of independent thought or action. AI cannot act independently of human control. Concerns about AI gaining autonomy and posing a threat to humanity are based on a misunderstanding of the nature of AI and the fundamental differences between living beings and machines. AI spontaneously developing a will or consciousness is more similar to thinking a hammer will start walking than us being able to create consciousness through programming. Right now, there is only one way to create consciousness, and I'm skeptical that is ever something we will be able to harness and create as humans. Irrespective of its complexity — and yes, our tools will continue to become evermore complex — machines, specifically AI, cannot transcend their nature as non-living, inanimate objects programmed and controlled by humans.
The advancement of AI should be seen as enhancing human capabilities, not as a path toward creating autonomous entities with their own wills. So, while AI will continue to evolve, improve, and become more powerful, I believe it will remain under human direction and control without the existential threats often sensationalized in discussions about AI's future.
With this framing, we should not view the race toward AGI as something to avoid. This will only make the tools we use more powerful, making us more productive. With all this being said, AGI is still much farther away than many believe.
Today's AI excels in specific, narrow tasks, known as narrow or weak AI. These systems operate within tightly defined parameters, achieving remarkable efficiency and accuracy that can sometimes surpass human performance in those specific tasks. Yet, this is far from the versatile and adaptable functionality that AGI represents.
Moreover, the exponential growth of computational power observed in the past decades does not directly translate to an equivalent acceleration in achieving AGI. AI's impressive feats are often the result of massive data inputs and computing resources tailored to specific tasks. These successes do not inherently bring us closer to understanding or replicating the general problem-solving capabilities of the human mind, which again would only make the tools more potent in our hands.
While AI will undeniably introduce challenges and change the aspects of conflict and power dynamics, these challenges will primarily stem from humans wielding this powerful tool rather than the technology itself. AI is a mirror reflecting our own biases, values, and intentions. The crux of future AI-related issues lies not in the technology's inherent capabilities but in how it is used by those wielding it. This reality is at odds with the idea that we should slow down development as our biggest threat will come from those who are not friendly to us.
AI Begets AI
While the unknowns of AI development and its pitfalls indeed stir apprehension, it's essential to recognize the power of market forces and human ingenuity in leveraging AI to address these challenges. History is replete with examples of new technologies raising concerns, only for those very technologies to provide solutions to the problems they initially seemed to exacerbate. It looks silly and unfair to think of fighting a war with a country that never embraced oil and was still primarily getting its energy from burning wood.
The evolution of AI is no exception to this pattern. As we venture into uncharted territories, the potential issues that arise with AI—be it ethical concerns, use by malicious actors, biases in decision-making, or privacy intrusions—are not merely obstacles but opportunities for innovation. It is within the realm of possibility, and indeed, probability, that AI will play a crucial role in solving the problems it creates. The idea that there would be no incentive to address and solve these problems is to underestimate the fundamental drivers of technological progress.
Market forces, fueled by the demand for better, safer, and more efficient solutions, are powerful catalysts for positive change. When a problem is worth fixing, it invariably attracts the attention of innovators, researchers, and entrepreneurs eager to solve it. This dynamic has driven progress throughout history, and AI is poised to benefit from this problem-solving cycle.
Thus, rather than viewing AI's unknowns as sources of fear, we should see them as sparks of opportunity. By tackling the challenges posed by AI, we will harness its full potential to benefit humanity. By fostering an ecosystem that encourages exploration, innovation, and problem-solving, we can ensure that AI serves as a force for good, solving problems as profound as those it might create. This is the optimism we must hold onto—a belief in our collective ability to shape AI into a tool that addresses its own challenges and elevates our capacity to solve some of society's most pressing issues.
An AI Future
The reality is that it isn't whether AI will lead to unforeseen challenges—it undoubtedly will, as has every major technological leap in history. The real issue is whether we let fear dictate our path and confine us to a standstill or embrace AI's potential to address current and future challenges.
The approach to solving potential AI-related problems with stringent regulations and a slowdown in innovation is akin to cutting off the nose to spite the face. It's a strategy that risks stagnating the U.S. in a global race where other nations will undoubtedly continue their AI advancements. This perspective dangerously ignores that AI, much like the printing press of the past, has the power to democratize information, empower individuals, and dismantle outdated power structures.
The way forward is not less AI but more of it, more innovation, optimism, and curiosity for the remarkable technological breakthroughs that will come. We must recognize that the solution to AI-induced challenges lies not in retreating but in advancing our capabilities to innovate and adapt.
AI represents a frontier of limitless possibilities. If wielded with foresight and responsibility, it's a tool that can help solve some of the most pressing issues we face today. There are certainly challenges ahead, but I trust that with problems come solutions. Let's keep the AI Decels from steering us away from this path with their doomsday predictions. Instead, let's embrace AI with the cautious optimism it deserves, forging a future where technology and humanity advance to heights we can't imagine.
-
@ 3c984938:2ec11289
2024-06-09 14:40:55I'm having some pain in my heart about the U.S. elections.
Ever since Obama campaigned for office, increasing numbers of young voters have come out of the woodwork. Things have not improved. They've actively told you that "your vote matters." I believe this to be a lie unless any citizen can demand at the gate of the White House to be allowed to hold and point a gun at the president's head. (Relax, this is hyperbole.)
Why so dramatic? Well, what does the president do? He signs bills, commands the military, and nominates new Fed chairmen, ambassadors, Supreme Court justices, and senior officials, all while traveling in luxury planes and living in a white palace for four years.
They promise every TIME to protect citizens' rights when they take the oath of office.
...They've broken this promise several times under so-called "emergency crises."
The purpose of a president today, it seems, is basically to hire armed thugs to keep the citizens in check and make sure you "voluntarily continue to be a slave" to the system, hence the IRS. The corruption extends from the cop to the judge and even to the politicians. The politicians get paid by lobbyists to create bills in Congress for the president to sign. There's no right answer when money is involved with politicians. It is the same whether you vote for Obama, Biden, Trump, or Haley. They will wield the pen to serve themselves while saying it will benefit the country.
In the first 100 years of the presidency, the government wasn't even a big deal. It didn't interfere with your life nearly as much as it does today.
^^ You hold the power in your hands, don't let them take it. Don't believe me? Try to get a loan from a bank without a signature. Your signature is as good as gold (if not better) and is an original trademark.
Just Don't Vote. End the Fed. Opt out.
^^ I choose to form my own path, even if it means leaving everything I knew prior. It doesn't have to be a spiritual thing. Some, have called me religious because of this. We're all capable of greatness and having humanity.
✨Don't have a machine heart with a machine mind. Instead, choose to have a heart like the cowardly lion from the "Wizard Of Oz."
There's no such thing as a good president or politician.
If there were, they would have issued non-interest Federal Reserve Notes. Lincoln and Kennedy tried to do this; they got shot.
There's still a banner of America there, but it's so far gone that I cannot even recognize it. However, I only see a bunch of 🏳🌈 pride flags.
✨Patrick Henry got it wrong when he delivered his speech, "Give me liberty or give me death." Liberty and freedom are two completely different things.
Straight from Merriam-Webster. Choose: right or left?
Freedom: no control, to be 100% without restrictions.
✨I disagree with the example sentence given, because you cannot advocate for human freedom and own slaves; that contradicts it. Yet this was common in the founding days.
I can understand that many may disagree with me, and you might be thinking, "This time will be different." I respectfully disagree, and the proxy wars are proof. Learn the importance of Bitcoin; every Satoshi is a step away from corruption.
✨What does it look like to pull the curtains from the "Wizard of Oz?"
Have you watched the video below of what 30 trillion dollars in debt looks like visually? Even I was blown away. https://video.nostr.build/d58c5e1afba6d7a905a39407f5e695a4eb4a88ae692817a36ecfa6ca1b62ea15.mp4
I say this with love. Hear my plea?
Normally, I don't write about anything political. It just feels like a losing game. I feel my energy is put to better use learning new things, writing, and creating, even a blog post as simple as this. Stack SATs, and stay humble.
<3 Onigirl
-
@ c631e267:c2b78d3e
2025-04-18 15:53:07Reason without feeling is inhuman; \ feeling without reason is stupidity. \ Egon Bahr
For years we have been trained to believe that facts are really a matter of feeling. But not everything is relative, and not everything is open to arbitrary interpretation. These chocolate bunnies, for example, which are typical around Easter in our parts, do not «easter»; as a rule they sit, yet that does not turn them into «sitting bunnies».
Nothing is supposed to count anymore, except for ever more invasive laws. Our own traditions and roots are potentially «icky», so as not to exclude other people, but we are tolerantly expected to get used to foreign symbols and rituals. In principle, I could not care less whether and when someone celebrates breaking a fast, on Holy Saturday or any other day or never, but please, voluntarily.
And above all: keep your hands off the children! In Bern, demonstrators recently took a stand against the growing spread of woke ideology in the education system and demanded an end to the sexual indoctrination of schoolchildren.
If it is not about the sensitive topic of migration or about the rainbow, then it is about the climate. As part of the «Net Zero» agenda in the fight against supposedly diabolical CO2, people are expected to completely change their eating habits. If the producers of synthetic foods, such as Bill Gates, have their way, we should give up meat and practically all dairy products such as milk and cheese as soon as possible. A lucrative business model that, alongside the EU, is currently also being supported by a British lobby consortium.
Should all the ideological ropes threaten to snap, there is always «Putin». The Union Europeans keep revealing themselves as representatives of the arms industry. Germany above all is playing with the fuse of war, led by a seemingly death-defying chancellor candidate Friedrich Merz. After his renewed statement that he wants to deliver «Taurus» cruise missiles to Kyiv, Russia has made unmistakably clear that it would regard this as direct participation in the war, «with all the resulting consequences for Germany».
News of activities that oppose the general warmongering, or publicly question it, is refreshing. Among them is a congress of critical psychologists and psychotherapists that took place last week in Berlin. The many talks in the context of «war and peace» covered a broad range of topics, including peace work and the need for a «pedagogy of unfitness for war».
Today's «quiet Friday», on which Christians commemorate the suffering and death of Jesus, may be a fitting invitation to reflection, regardless of any religious or spiritual leaning. In calm lies strength. In this spirit, I wish you a happy Easter!
[Cover image: Pixabay]
This article was written with the Pareto client and first appeared on Transition News.
-
@ 3bf0c63f:aefa459d
2024-05-21 12:38:08Bitcoin transactions explained
A transaction is a piece of data that takes inputs and produces outputs. Forget about the blockchain thing: Bitcoin is actually just a big tree of transactions. The blockchain is just a way to keep transactions ordered.
Imagine you have 10 satoshis. That means you have them in an unspent transaction output (UTXO). You want to spend them, so you create a transaction. The transaction should reference unspent outputs as its inputs. Every transaction has an immutable id, so you use that id plus the index of the output (because transactions can have multiple outputs). Then you specify a script that unlocks that transaction and related signatures, then you specify outputs along with a script that locks these outputs.
As you can see, there's this lock/unlocking thing and there are inputs and outputs. Inputs must be unlocked by fulfilling the conditions specified by the person who created the transaction they're in. And outputs must be locked so anyone wanting to spend those outputs will need to unlock them.
For most of the cases locking and unlocking means specifying a public key whose controller (the person who has the corresponding private key) will be able to spend. Other fancy things are possible too, but we can ignore them for now.
Back to the 10 satoshis you want to spend. Since you've successfully referenced 10 satoshis and unlocked them, now you can specify the outputs (this is all done in a single step). You can specify one output of 10 satoshis, two of 5, one of 3 and one of 7, three of 3 and so on. The sum of outputs can't be more than 10. And if the sum of outputs is less than 10 the difference goes to fees. In the first days of Bitcoin you didn't need any fees, but now you do, otherwise your transaction won't be included in any block.
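To make that concrete, here is a tiny Python sketch of the structure just described. It is only an illustration: the dictionaries, field names, and toy scripts are made up for this example and bear no relation to real Bitcoin serialization or script execution.

```python
# Spending the 10 satoshis described above. Every name and value here
# is a toy stand-in; real transactions use binary serialization and
# executable scripts rather than strings.

utxo = {
    "txid": "abc123",            # id of the transaction that created this output
    "vout": 0,                   # index of the output within that transaction
    "amount": 10,                # satoshis locked in this unspent output
    "locking_script": "pubkey:alice",
}

tx = {
    "inputs": [{
        "txid": utxo["txid"],    # reference the UTXO by id + output index
        "vout": utxo["vout"],
        "unlocking_script": "sig:alice",  # satisfies the locking condition
    }],
    "outputs": [
        {"amount": 7, "locking_script": "pubkey:bob"},
        {"amount": 2, "locking_script": "pubkey:alice"},  # change back to sender
    ],
}

total_in = utxo["amount"]
total_out = sum(o["amount"] for o in tx["outputs"])
assert total_out <= total_in     # outputs can never exceed inputs
print("implied fee:", total_in - total_out)  # prints: implied fee: 1
```

Note that the 1-satoshi difference between inputs and outputs is not written anywhere in the transaction; it is implicitly the fee.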
If you're still interested in transactions maybe you could take a look at this small chapter of that Andreas Antonopoulos book.
If you hate Andreas Antonopoulos because he is a communist shitcoiner or don't want to read more than half a page, go here: https://en.bitcoin.it/wiki/Coin_analogy
-
@ 3c984938:2ec11289
2024-05-09 04:43:15It's been a journey from the Publishing Forest of Nostr to the open sea of web3. I've come across a beautiful chain of islands and thought: why not take a break and explore this place? If I'm searching for devs and FOSS, I should search every nook and cranny inside the realm of Nostr. It is quite vast for little old me. I'm just a little hamster, and I don't speak in code or binary zeros and ones.
After being at sea for a while, my heart raced with excitement at what I might find. It seems I wasn't alone; there were others here like me! Let's help spread the message to others about this uncharted realm. See, look at the other sailboats, aren't they pretty? Thanks to some generous donations of SATs, I was able to afford the docking fee.
Ever feel like everyone was going to a party, and you were supposed to dress up, but you missed the memo? Or a comic-con? well, I felt completely underdressed and that's an understatement. Well, turns out there is a some knights around here. Take a peek!
A black cat with a knight passed by very quickly. He was moving too fast for me to track. Where was he going? Then I spotted a group of knights heading in the same direction, so I tagged along. The vibes from these guys were impossible to resist. They were just happy-go-lucky. 🥰They were heading to a tavern on a cliff off the island.
Eh? A tavern? Slightly confused, I wondered what these knights could be doing here. I guess when they're done with their rounds they come here to blow off steam. Things are looking curiouser and curiouser. But the black cat from earlier was here with its rider, who was dismounting. So you can only guess where I'm going.
The atmosphere in this pub was lively and energetic. So many knights spoke among themselves. A group here, another there, but there was one that caught my eye. I went up to a group at a table whose height towered well above me even when seated. Taking a deep breath, I asked, "Who manages this place?" They unanimously pointed to one waiting for ale at the bar. What was he doing? Watching others talk? How peculiar.
So I went up to him! And introduced myself.
"Hello I'm Onigirl"
"Hello Onigirl, Welcome to Gossip"
"Gossip, what is Gossip?" scratching my head and whiskers.
What is Gossip? Gossip is FOSS and a great client for privacy-minded nostriches. It avoids browser tech entirely, bypassing JavaScript☕, HTML parsing and rendering, and CSS (except for HTTP GET and WebSockets), and uses OpenGL-style rendering instead. Nostriches who wish to remain anonymous can use Gossip over TOR. Mike recommends using QubesOS, Whonix, and/or Tails. [FYI - Gossip does not natively support a tor SOCKS5 proxy] Most helpful for spilling the beans if you're a journalist.
On top of using your nsec or your encryption key, Gossip adds another layer of security over your account with a password login. There's nothing wrong with using the browser extensions (such as nos2x or Flamingo), which make it super easy to log in to Nostr-enabled websites and apps, but doing so does expose you to browser vulnerabilities.
Mike points out:
"people have already had their private key stolen from other nostr clients,"
so it is a concern if you value your account. I most certainly care for mine.
Gossip's UI is simple and clean, revolving around NIP-65, also called the "Outbox model." As posted on GitHub,
"This NIP allows Clients to connect directly with the most up-to-date relay set from each individual user, eliminating the need of broadcasting events to popular relays."
This avoids the problem of clients that track only a specific set of relays, which can congest those relays when you publish your note. It also helps against censorship: with Gossip you can publish notes to alternative relays that have not censored you and still reach the same followers.
👉The easiest way to put that is: it reduces the redundancy of publishing to popular or centralized relays just so your content reaches your followers.
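For the technically curious, here is a minimal sketch of the event that powers the Outbox model. NIP-65 defines a kind 10002 "relay list metadata" event whose `r` tags list where a user reads and writes; the pubkey and relay URLs below are placeholders, and a real event would also carry an id, a created_at timestamp, and a signature.

```python
import json

# Sketch of a NIP-65 (kind 10002) relay list event. Outbox-model clients
# read this list to learn which relays to use for each individual user.
relay_list_event = {
    "kind": 10002,
    "pubkey": "<hex-encoded-public-key>",       # placeholder
    "tags": [
        ["r", "wss://relay.example.com"],           # read and write
        ["r", "wss://inbox.example.org", "read"],   # read only
        ["r", "wss://outbox.example.net", "write"], # write only
    ],
    "content": "",
}

print(json.dumps(relay_list_event, indent=2))
```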
Cool! What an awesome client, I mean tavern! What else does this knight do? He reaches for something in his pocket. What is it? Pocket is a database for storing and retrieving nostr events; Mike has written it in Rust with a few extra kinks inspired by Will's nostrdb. It's still in development, but it'll be another tool for you, dear user! 💖💕💚
Onigirl is proud to present these knights to the community and honor them with kisu. 💋💋💋 Show some 💖💘💓🧡💙💚
👉💋💋Will - jb55 Lord of apples 💋 @npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s
👉💋💋 Mike Knight - Lord of Security 💋 @npub1acg6thl5psv62405rljzkj8spesceyfz2c32udakc2ak0dmvfeyse9p35c
Knights spend a lot of time behind the screen coding for the better of humanity. It is a tough job! Let's appreciate these knights and relay operators that support this amazing realm of Nostr! FOSS for all!
This article was prompted by the need for privacy and security of your data. They're different things, not to be confused.
Recently, Edward Snowden warned Bitcoin devs about the need for privacy. Quote:
“I've been warning Bitcoin developers for ten years that privacy needs to be provided for at the protocol level. This is the final warning. The clock is ticking.”
Snowden's comments come after heavy enforcement actions against Samourai Wallet, Roger Ver, and Binance's CZ, and now the closure of Wasabi Wallet. Additionally, according to CryptoBriefing, Trezor is ending its CoinJoin integration as well. Many are concerned over the new definition of a money transmitter, which includes even those who don't touch the funds.
Help your favorite hamster
^^Me drowning in notes on your feed. I can only eat so many notes to find you.
👉If there are any XMPP fans on here, I'm open to the idea of opening a public channel so you could follow me there in a forum-like style. My server of choice would likely be a German server.😀 You would be receiving my articles in an njump.me or website-like style. GrapheneOS users can download the Cheogram app from the F-Droid store for free to access it. Apple and Android users have to pay to download this app; alternatives are ntalk or Conversations. If it interests the community, just FYI. Please comment or DM.
👉If you enjoyed this content, please consider reposting/sharing as my content is easily drowned by notes on your feed. You could also join my community under Children_Zone where I post my content.
An alternative is by following #onigirl Just FYI this feature is currently a little buggy.
Follow as I search for tools and awesome devs to help you dear user live a decentralized life as I explore the realm of Nostr.
Thank you Fren
-
@ 3c984938:2ec11289
2024-04-16 17:14:58Hello (N)oystrs!
Yes! I'm calling you an (N)oystr!
Why is that? Because you shine, and I'm not just saying that to get more SATs. Ordinary oysters and mussels can produce these beauties! There is nothing seriously unique about them; however, with a little time and love, each oyster is capable of creating something truly beautiful. I like believing so, at least, given the fact that you're even reading this article; that makes you an (N)oystr! This isn't published on X (formerly known as Twitter), Facebook, Discord, Telegram, or Instagram, which makes you the rare breed! A pearl indeed! I do have access to those platforms, but why create content on a terrible platform knowing I too could be shut down! Unfortunately, many people still use these platforms. This forces individuals to give up their privacy every day. Meta is leading the charge by forcing users to provide a photo ID for verification in order to use their crappy, obsolete site. If that was not bad enough, imagine you're having some type of disagreement or difference of opinion. Then Bigtech can easily deplatform you. Umm. So no open debate? Just instantly shut off users. Whatever happened to the right to a fair trial? Nope, they just burn you at the stake as if you're a witch or warlock!
How heinous are the perpetrators and financiers of this? Well, that's opening another can of worms for you.
Imagine your voice being taken away, like the Little Mermaid's. Ariel was lucky to have a prince, but the majority of us? The likelihood that I would be carried away by the current of the sea during a sunset with a prince on a sailboat is zero. And I live on an island, so I'm just missing the prince, the sailboat (though I know where I could go to steal one), and the red hair. Oh my gosh, now I feel sad.
I do not have the prince, Bob is better! I do not have mermaid fins, or a shell bra. Use coconut shells, it offers more support! But, I still have my voice and a killer sunset to die for!
All of that is possible thanks to the work of developers. These knights fight for Freedom Tech by utilizing FOSS, which helps provide us with a vibrant ecosystem. Unfortunately, I recently learned that they are not all funded. Knights must eat, drink, and have a workspace. This space is where they spend most of their sweat equity on an app or software that may or may not pan out. That brilliance is susceptible to fading, as these individuals are not seen but rather stay behind closed doors. What's worse is if these developers lose faith in their project and decide to join forces with Meta! 😖 Does WhatsApp ring a bell?
Without them, I probably wouldn't be able to create this long-form article. Let's cheer them on like cheerleaders. 👉Unfortunately, there's no cheerleader emoji, so you'll just have to settle for a dancing lady and guy. 💃🕺
Semisol said it beautifully, npub12262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s6cgrkj
If we want freedom tech to succeed, the tools that make it possible need to be funded: relays like https://nostr.land, media hosts like https://nostr.build, clients like https://damus.io, etc.
With that thought, Onigirl is pleased to announce the launch of a new series with a sole focus on free-market devs and projects.
Knights of Nostr!
I'll happily brief you about their exciting project and how it benefits humanity! Let's Support these Magnificent projects, devs, relays, and builders! Our first runner up!
Oppa Fishcake: Lord of Media Hosting
npub137c5pd8gmhhe0njtsgwjgunc5xjr2vmzvglkgqs5sjeh972gqqxqjak37w
Oppa Fishcake with his noble steed!
Think of this as an introduction to learn about and further your experience on Nostr! New developments and applications are constantly happening on Nostr. It's enough to make one's head spin. I may also cover FOSS projects (outside of Nostr), as they need some love as well! Plus, you can think of it as another tool to add to your decentralized life. I will not be doing how-to-Nostr guides. I personally feel there are plenty of great guides already available, which I'm happy to add to my curation collection, easily searchable on Yakihonne.
For email updates you can subscribe to my [[https://paragraph.xyz/@onigirl]]
If you like it, send me some 🧡💛💚 hearts💜💗💖 otherwise zap dat⚡⚡🍑🍑peach⚡⚡🍑 ~If not me, then at least to our dearest knight!
Thank you from the bottom of my heart for your time and support (N)oystr! Shine bright like a diamond! Share if you care! FOSS power!
Follow on your favorite Nostr Client for the best viewing experience!
[!NOTE]
I'm using Obsidian + the Nostr Writer Plugin; a new way to publish Markdown directly to Nostr. I was a little nervous using this because I was used to doing these in RStudio with R Markdown.
Since this is my first article, I sent it to my account as a draft to test it. It's pretty neat. -
@ c631e267:c2b78d3e
2025-04-04 18:47:27Two times three makes four, \ widewidewitt and three makes nine, \ I make the world \ widewide however I like it. \ Pippi Longstocking
Whether in coalition negotiations or everyday politics: the controversies between theoretically different parties disappear when it comes to fighting political opponents who have the wind at their backs. Anyone who could seriously contest the incumbents' sinecures faces not only «firewalls» but, if necessary, criminal prosecution as well. Double standards are, of course, included.
In France this week, Marine Le Pen was convicted by a court of embezzling EU funds. As part of the sentence, she was barred from standing for election for five years. Although the verdict is not yet final (Le Pen can appeal), the judges imposed the ban on running in elections with immediate effect. The leader of the national-right Rassemblement National (RN) was considered a promising candidate for the 2027 presidential election.
This is already the second serious case this year of electoral interference by the judiciary in an EU state. In Romania, Călin Georgescu surprisingly won the first round of the presidential election in November. The result was later annulled, but the alleged «Russian election manipulation» could not be proven. Georgescu was recently barred by the constitutional court from running in the repeat election in May.
The embezzlement of public funds must be investigated and punished; that is beyond question. But this requirement must not be applied selectively. In the past, by contrast, we have seen entirely different approaches in incomparably more serious cases of (alleged) abuse, for instance in the case of today's ECB chief Christine Lagarde or in the «Pfizergate» scandal surrounding EU Commission President Ursula von der Leyen.
Even if such matters may formally rest on a rule-of-law foundation, a bitter aftertaste remains. The question arises whether and to what extent the judiciary is being politically instrumentalized. This is all the more interesting given that the separation of powers is an essential part of every democratic order, while fighting one's political opponent with judicial means seems to be popular precisely among the loudest defenders of «our democracy».
The delegations of the CDU/CSU and SPD discussed exactly such measures in their negotiations over a governing coalition. «In the name of truth and democracy», they want to crack down even harder on «disinformation», for example by expanding the EU's Digital Services Act. The offense of incitement to hatred is also to be tightened, and could culminate in the withdrawal of the right to stand for election. At the European level, Friedrich Merz would presumably like to strip Hungary of its voting rights.
The level of dissatisfaction and frustration keeps growing in large parts of the population. Arrogance, abuse of power, and ever more absurd excuses for obviously arbitrary measures will hardly prevent support from draining away from the established parties. In Germany, the AfD's polling numbers are a good barometer of this.
[Cover image template: Pixabay]
This article was written with the Pareto client and first appeared on Transition News.
-
@ 6871d8df:4a9396c1
2024-02-24 22:42:16In an era where data seems to be as valuable as currency, the prevailing trend in AI starkly contrasts with the concept of personal data ownership. The explosion of AI and the ensuing race have made it easy to overlook where the data is coming from. The current model, dominated by big tech players, involves collecting vast amounts of user data and selling it to AI companies for training LLMs. Reddit recently penned a 60 million dollar deal, Google guards and mines Youtube, and more are going this direction. But is that their data to sell? Yes, it's on their platforms, but without the users to generate it, what would they monetize? To me, this practice raises significant ethical questions, as it assumes that user data is a commodity that companies can exploit at will.
The heart of the issue lies in the ownership of data. Why, in today's digital age, do we not retain ownership of our data? Why can't our data follow us, under our control, to wherever we want to go? These questions echo the broader sentiment that while some in the tech industry — such as the blockchain-first crypto bros — recognize the importance of data ownership, their "blockchain for everything solutions," to me, fall significantly short in execution.
Reddit further complicates this with its current move to IPO, which, on the heels of the large data deal, might reinforce the mistaken belief that user-generated data is a corporate asset. Others, no doubt, will follow suit. This underscores the urgent need for a paradigm shift towards recognizing and respecting user data as personal property.
In my perfect world, the digital landscape would undergo a revolutionary transformation centered around the empowerment and sovereignty of individual data ownership. Platforms like Twitter, Reddit, Yelp, YouTube, and Stack Overflow, integral to our digital lives, would operate on a fundamentally different premise: user-owned data.
In this envisioned future, data ownership would not just be a concept but a practice, with public and private keys ensuring the authenticity and privacy of individual identities. This model would eliminate the private data silos that currently dominate, where companies profit from selling user data without consent. Instead, data would traverse a decentralized protocol akin to the internet, prioritizing user control and transparency.
The cornerstone of this world would be a meritocratic digital ecosystem. Success for companies would hinge on their ability to leverage user-owned data to deliver unparalleled value rather than their capacity to gatekeep and monetize information. If a company breaks my trust, I can move to a competitor, and my data, connections, and followers will come with me. This shift would herald an era where consent, privacy, and utility define the digital experience, ensuring that the benefits of technology are equitably distributed and aligned with the users' interests and rights.
The conversation needs to shift fundamentally. We must challenge this trajectory and advocate for a future where data ownership and privacy are not just ideals but realities. If we continue on our current path without prioritizing individual data rights, the future of digital privacy and autonomy is bleak. Big tech's dominance allows them to treat user data as a commodity, potentially selling and exploiting it without consent. This imbalance has already led to users being cut off from their digital identities and connections when platforms terminate accounts, underscoring the need for a digital ecosystem that empowers user control over data. Without changing direction, we risk a future where our content — and our freedoms by consequence — are controlled by a few powerful entities, threatening our rights and the democratic essence of the digital realm. We must advocate for a shift towards data ownership by individuals to preserve our digital freedoms and democracy.
-
@ d34e832d:383f78d0
2025-04-26 15:04:51Raspberry Pi-based voice assistant
This guide details the design and deployment of a Raspberry Pi-based voice assistant powered by the Google Gemini AI API. The system combines open hardware with modern AI services to create a low-cost, flexible, and educational voice-assistant platform. By leveraging a Raspberry Pi, basic audio hardware, and Python-based software, developers can create a functional, customizable assistant suitable for home automation, research, or personal productivity enhancement.
1. Voice assistants
Voice assistants have become increasingly ubiquitous, but commercially available systems like Alexa, Siri, or Google Assistant come with significant privacy and customization limitations.
This project offers an open, local, and customizable alternative, demonstrating how to build a voice assistant using Google Gemini (or OpenAI's ChatGPT) APIs for natural language understanding.
Target Audience:
- DIY enthusiasts
- Raspberry Pi hobbyists
- AI developers
- Privacy-conscious users
2. System Architecture
2.1 Hardware Components
| Component | Purpose |
|:--------------------------|:----------------------------------------|
| Raspberry Pi (any recent model, 4B recommended) | Core processing unit |
| Micro SD Card (32GB+) | Operating system and storage |
| USB Microphone | Capturing user voice input |
| Audio Amplifier + Speaker | Outputting synthesized responses |
| 5V DC Power Supplies (2x) | Separate power for Pi and amplifier |
| LEDs + Resistors (optional) | Visual feedback (e.g., recording or listening states) |
2.2 Software Stack
| Software | Function |
|:---------------------------|:----------------------------------------|
| Raspberry Pi OS (Lite or Full) | Base operating system |
| Python 3.9+ | Programming language |
| SpeechRecognition | Captures and transcribes user voice |
| Google Text-to-Speech (gTTS) | Converts responses into spoken audio |
| Google Gemini API (or OpenAI API) | Powers the AI assistant brain |
| Pygame | Audio playback for responses |
| WinSCP + Windows Terminal | File transfer and remote management |
3. Hardware Setup
3.1 Basic Connections
- Microphone: Connect via USB port.
- Speaker and Amplifier: Wire from Raspberry Pi audio jack or via USB sound card if better quality is needed.
- LEDs (Optional): Connect through GPIO pins, using 220–330Ω resistors to limit current.
3.2 Breadboard Layout (Optional for LEDs)
| GPIO Pin | LED Color | Purpose |
|:---------|:-----------|:--------------------|
| GPIO 17 | Red | Recording active |
| GPIO 27 | Green | Response playing |
Tip: Use a small breadboard for quick prototyping before moving to a custom PCB if desired.
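If you add the optional LEDs, driving them from Python takes only a few lines. The sketch below uses the gpiozero library that ships with Raspberry Pi OS and follows the pin assignments in the table above; the helper functions are illustrative, not part of any fixed API.

```python
from gpiozero import LED

# Pins follow the breadboard table above.
recording_led = LED(17)  # red: recording active
response_led = LED(27)   # green: response playing

def show_recording(active: bool):
    """Light the red LED while the assistant is listening."""
    recording_led.on() if active else recording_led.off()

def show_responding(active: bool):
    """Light the green LED while a response is playing."""
    response_led.on() if active else response_led.off()

# Example: signal that we are listening.
show_recording(True)
```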
4. Software Setup
4.1 Raspberry Pi OS Installation
- Use Raspberry Pi Imager to flash Raspberry Pi OS onto the Micro SD card.
- Initial system update:

```bash
sudo apt update && sudo apt upgrade -y
```
4.2 Python Environment
- Install a Python virtual environment:

```bash
sudo apt install python3-venv
python3 -m venv voice-env
source voice-env/bin/activate
```

- Install the required Python packages:

```bash
pip install SpeechRecognition google-generativeai pygame gtts
```
(Replace `google-generativeai` with `openai` if using OpenAI's ChatGPT.)

4.3 API Key Setup

- Obtain a Google Gemini API key (or OpenAI API key).
- Store it safely in a `.env` file or configure it as an environment variable for security:

```bash
export GEMINI_API_KEY="your_api_key_here"
```
4.4 File Transfer
- Use WinSCP or `scp` commands to transfer Python scripts to the Pi.
4.5 Example Python Script (Simplified)
```python
import os

import google.generativeai as genai
import pygame
import speech_recognition as sr
from gtts import gTTS

# Set up Gemini. The google-generativeai library exposes generation
# through a GenerativeModel instance, so we create one here instead of
# calling genai.generate_content() directly as the earlier draft did.
genai.configure(api_key=os.getenv('GEMINI_API_KEY'))
model = genai.GenerativeModel('gemini-pro')

recognizer = sr.Recognizer()
mic = sr.Microphone()

pygame.init()

while True:
    # Capture a single utterance from the USB microphone.
    with mic as source:
        print("Listening...")
        audio = recognizer.listen(source)

    try:
        # Transcribe the speech, ask Gemini, then speak the reply aloud.
        text = recognizer.recognize_google(audio)
        print(f"You said: {text}")
        response = model.generate_content(text)
        tts = gTTS(text=response.text, lang='en')
        tts.save("response.mp3")
        pygame.mixer.music.load("response.mp3")
        pygame.mixer.music.play()
        while pygame.mixer.music.get_busy():
            continue
    except Exception as e:
        print(f"Error: {e}")
```
5. Testing and Execution
- Activate the Python virtual environment:

```bash
source voice-env/bin/activate
```

- Run your main assistant script:

```bash
python3 assistant.py
```
- Speak into the microphone and listen for the AI-generated spoken response.
6. Troubleshooting
| Problem | Possible Fix |
|:--------|:-------------|
| Microphone not detected | Check `arecord -l` |
| Audio output issues | Check `aplay -l`, use a USB DAC if needed |
| Permission denied errors | Verify group permissions (audio, gpio) |
| API Key Errors | Check environment variable and internet access |
7. Performance Notes
- Latency: Highly dependent on network speed and API response time.
- Audio Quality: Can be enhanced with a better USB microphone and powered speakers.
- Privacy: Minimal data retention if using your own Gemini or OpenAI account.
8. Potential Extensions
- Add hotword detection ("Hey Gemini") using Snowboy or Porcupine libraries.
- Build a local fallback model to answer basic questions offline.
- Integrate with home automation via MQTT, Home Assistant, or Node-RED.
- Enable LED animations to visually indicate listening and responding states.
- Deploy with a small eInk or OLED screen for text display of answers.
9. Consider
Building a Gemini-powered voice assistant on the Raspberry Pi empowers individuals to create customizable, private, and cost-effective alternatives to commercial voice assistants. By utilizing accessible hardware, modern open-source libraries, and powerful AI APIs, this project blends education, experimentation, and privacy-centric design into a single hands-on platform.
This guide can be adapted for personal use, educational programs, or even as a starting point for more advanced AI-based embedded systems.
References
- Raspberry Pi Foundation: https://www.raspberrypi.org
- Google Generative AI Documentation: https://ai.google.dev
- OpenAI Documentation: https://platform.openai.com
- SpeechRecognition Library: https://pypi.org/project/SpeechRecognition/
- gTTS Documentation: https://pypi.org/project/gTTS/
- Pygame Documentation: https://www.pygame.org/docs/
-
@ c631e267:c2b78d3e
2025-04-03 07:42:25Spain remains one of the front-runners in the European process of total surveillance via digitalization. Since Wednesday, the digital national ID has been available there. It is a government app that must be installed on a smartphone and can be found in the Google and Apple stores. By decree of Prime Minister Pedro Sánchez and approval of the Council of Ministers, this measure has now come into force.
With the usual arguments of simplification, convenience, efficiency, and security, the Interior Ministry touts the «innovation». Nor is the assurance missing that the digital variant will exist in parallel with the physical ID and is not meant to replace it. During the first twelve months «the new one» will not yet be valid for all use cases, but from 2026 onward it will be.
That the whole thing could also have «risks and side effects» is rather rarely addressed in the mainstream media. At best, the aspect of data security is raised, though as a rule it is dealt with directly using the government vocabulary of «maximum security guarantees». Nevertheless, there are a few further aspects that citizens with some sense of privacy should consider.
To obtain the digital version of the national ID (an app called MiDNI), you must register online in advance. In doing so, the citizen's identity is linked to their mobile phone number. We know this obligatory fixed wiring from various other apps and services. At the same time, it is the basis for making a person perfectly locatable.
For every act of identification in practice, «a connection to the servers of the national police is established» later on. The individual's data are verified «in real time» and, if successful, returned signed by the police. The result is a QR code with a limited period of validity that can be passed on to third parties.
Scenarios like these make the neck hairs of any halfway critical citizen stand on end. This minimal description of the functionality alone reveals the total surveillance that it enables. Every use of the ID will be registered in the future, and thus leaves traces. And what happens if the police servers do not give the green light one day? That would be a problem at the latest once the digital ID does at some point become the only valid one: then we have the citizen who can be switched off.
This new push by Pedro Sánchez's government is a further step toward the «total digitalization» of the country, as this policy is called in some media, not even critically but very naively. Just as trivializing is the mention that the Spanish digital ID project fits seamlessly into the EU's initiatives for a digital identity for all citizens as well as the digital euro.
In the future, the new ID could «also be integrated into other state and private digital platforms», as the media portal Cope quite rightly notes. That is the outlook.
[Cover image: Pixabay]
Related:
Just a farewell to driving alone? Monstrous Spanish surveillance projects in line with the EU norm
This article was written with the Pareto client and first appeared on Transition News.
-
@ 8ce092d8:950c24ad
2024-02-04 23:35:07Overview
- Introduction
- Model Types
- Training (Data Collection and Config Settings)
- Probability Viewing: AI Inspector
- Match
- Cheat Sheet
I. Introduction
AI Arena is the first game that combines human and artificial intelligence collaboration.
AI learns your skills through "imitation learning."
Official Resources
1. Official Documentation (Must Read): Everything You Need to Know About AI Arena. Watch the 2-minute video in the documentation to quickly understand the basic flow of the game.
2. Official Play-2-Airdrop competition FAQ site: https://aiarena.notion.site/aiarena/Gateway-to-the-Arena-52145e990925499d95f2fadb18a24ab0
3. Official Discord (Must Join): https://discord.gg/aiarenaplaytest for the latest announcements or for seeking help. The team also has an exclusive channel there.
4. Official YouTube: https://www.youtube.com/@aiarena; because the game has built-in tutorials, you can choose to watch videos instead.
What is this game about?
- Although categorized as a platform fighting game, the core is a probability-based strategy game.
- Warriors take actions based on probabilities on the AI Inspector dashboard, competing against opponents.
- The game does not allow direct manual input of probabilities for each area but inputs information through data collection and establishes models by adjusting parameters.
- Data collection emulates fighting games, but training can be completed using a Dummy. As long as you can complete the in-game tutorial, you can master the game controls.
II. Model Types
Before training, there are three model types to choose from: Simple Model Type, Original Model Type, and Advanced Model Type.
It is recommended to try the Advanced Model Type after completing at least one complete training with the Simple Model Type and gaining some understanding of the game.
Simple Model Type
The Simple Model is akin to completing a form, and the training session is comparable to filling various sections of that form.
This model has 30 buckets. Each bucket can be seen as telling the warrior what action to take in a specific situation. There are 30 buckets, meaning 30 different scenarios. Within the same bucket, the probabilities for direction or action are the same.
For example: What should I do when I'm off-stage — refer to the "Recovery (you off-stage)" bucket.
For all buckets, refer to this official documentation:
https://docs.aiarena.io/arenadex/game-mechanics/tabular-model-v2
Video (no sound): The entire training process for all buckets
https://youtu.be/1rfRa3WjWEA
Game version 2024.1.10. The method of saving is outdated. Please refer to the game updates.
Advanced Model Type
The "Original Model Type" and "Advanced Model Type" are based on Machine Learning, which is commonly referred to as combining with AI.
The Original Model Type consists of only one bucket, representing the entire map. If you want the AI to learn different scenarios, you need to choose a "Focus Area" to let the warrior know where to focus. A single bucket means that a slight modification can have a widespread impact on the entire model. This is where the "Advanced Model Type" comes in.
The "Advanced Model Type" can be seen as a combination of the "Original Model Type" and the "Simple Model Type". The Advanced Model Type divides the map into 8 buckets. Each bucket can use many "Focus Area." For a detailed explanation of the 8 buckets and different Focus Areas, please refer to the tutorial page (accessible in the Advanced Model Type, after completing a training session, at the top left of the Advanced Config, click on "Tutorial").
III. Training (Data Collection and Config Settings)
Training Process:
- Collect Data
- Set Parameters, Train, and Save
- Repeat Step 1 until the Model is Complete
Training the Simple Model Type is the easiest to start with; refer to the video above for a detailed process.
Training the Advanced Model Type offers more possibilities through the combination of "Focus Area" parameters, providing a higher upper limit. While the Original Model Type has great potential, it's harder to control. Therefore, this section focuses on the "Advanced Model Type."
1. What Kind of Data to Collect
- High-Quality Data: Collect purposeful data. Garbage in, garbage out. Only collect the necessary data; don't collect randomly. It's recommended to use Dummy to collect data. However, don't pursue perfection; through parameter adjustments, AI has a certain level of fault tolerance.
- Balanced Data: Balance your dataset. In simple terms, if you complete actions on the left side a certain number of times, also complete a similar number on the right side. While data imbalance can be addressed through parameter adjustments (see below), it's advised not to have this issue during data collection.
- Moderate Amount: A single training will include many individual actions. Collect data for each action 1-10 times. Personally, it's recommended to collect data 2-3 times for a single action. If the effect of a single training is not clear, conduct a second (or even third) training with the same content, but with different parameter settings.
2. What to Collect (and Focus Area Selection)
Game actions mimic fighting games, consisting of 4 directions + 6 states (Idle, Jump, Attack, Grab, Special, Shield). Directions can be combined into ↗, ↘, etc. These directions and states can then be combined into different actions.
To make "Focus Area" effective, you need to collect data in training that matches these parameters. For example, for "Distance to Opponent", you need to collect data when close to the opponent and also when far away. * Note: While you can split into multiple training sessions, it's most effective to cover different situations within a single training.
Refer to the Simple Config, categorize the actions you want to collect, and based on the game scenario, classify them into two categories: "Movement" and "Combat."
Movement-Based Actions
Action Collection
When the warrior is offstage, regardless of where the opponent is, we require the warrior to return to the stage to prevent self-destruction.
This involves 3 aerial buckets: 5 (Near Blast Zone), 7 (Under Stage), and 8 (Side Of Stage).
* Note: The background comes from the Tutorial mentioned earlier. The arrows in the image indicate the direction of the action and are for reference only. * Note: Action collection should be clean; do not collect actions that involve leaving the stage.
Config Settings
In the Simple Config you can directly choose "Movement". However, for better customization, it's recommended to use the Advanced Config directly.
- Intensity: The method for setting Intensity will be introduced separately later.
- Buckets: As shown in the image, choose the bucket you are training.
- Focus Area: Position-based parameters:
  - Your position (must)
  - Raycast Platform Distance, Raycast Platform Type (optional, generally choose these in Bucket 7)
Combat-Based Actions
The goal is to direct attacks quickly and effectively towards the opponent, which is the core of game strategy.
This involves 5 buckets:
- 2 regular situations:
  - In the air: 6 (Safe Zone)
  - On the ground: 4 (Opponent Active)
- 3 special situations on the ground:
  - 1 Projectile Active
  - 2 Opponent Knockback
  - 3 Opponent Stunned
2 Regular Situations
In the in-game tutorial, we learned how to perform horizontal attacks. However, in the actual game, directions expand to 8 dimensions. Imagine having 8 relative positions available for launching hits against the opponent. Our task is to design what action to use for attack or defense at each relative position.
Focus Area
- Basic (generally select all):
  - Angle to opponent
  - Distance to opponent
    - Discrete Distance: Choosing this option helps better differentiate between closer and farther distances from the opponent. As shown in the image, red indicates a relatively close distance, and green indicates a relatively distant distance.
- Advanced: Other commonly used parameters:
  - Direction: different facings to the opponent
  - Your Elemental Gauge and Discrete Elementals: considering the special's charge
  - Opponent action: the warrior will react based on the opponent's different actions.
  - Your action: your previous action. Choose this if teaching combos.
3 Special Situations on the Ground
Projectile Active, Opponent Stunned, Opponent Knockback: these three buckets can be referenced in the Simple Model Type video. The parameter settings approach is the same as for Opponent Active/Safe Zone.
For Projectile Active, in addition to the parameters based on combat, to track the projectile, you also need to select "Raycast Projectile Distance" and "Raycast Projectile On Target."
3. Setting "Intensity"
Resources
- The "Tutorial" mentioned earlier explains these parameters.
- Official Config Document (2022.12.24): https://docs.google.com/document/d/1adXwvDHEnrVZ5bUClWQoBQ8ETrSSKgG5q48YrogaFJs/edit
TL;DR:
Epochs:
- Fewer epochs mean less learning; increase the count for more learning.

Batch Size:
- Set to the minimum (16) if data is precise but unbalanced, or if you just want it to learn fast.
- Increase (e.g., 64) if data is slightly imprecise but balanced.
- If the data is both imprecise and unbalanced, consider retraining.

Learning Rate:
- Maximize (0.01) for more learning, at the risk of forgetting past knowledge.
- Minimize for more accurate learning with less impact on previous knowledge.

Lambda:
- Reduce it to prioritize learning new things.

Data Cleaning:
- Enable "Remove Sparsity" unless you want the AI to learn idleness.
- For special cases, like teaching the warrior to use special moves when idle, refer to this tutorial video: https://discord.com/channels/1140682688651612291/1140683283626201098/1195467295913431111

Personal Experience:
- Initial training settings: 125 epochs, batch size 16, learning rate 0.01, lambda 0, data cleaning enabled.
- Prioritize Multistream; sometimes use Oversampling.
- Fine-tune subsequent training runs based on the theories above.
IV. Probability Viewing: AI Inspector
The dashboard consists of "Direction + Action." Above the dashboard, you can see the "Next Action" – the action the warrior will take in its current state. The higher the probability, the more likely the warrior is to perform that action, indicating a quicker reaction. It's essential to note that when checking the Direction, the one with the highest visual representation may not have the highest numerical value. To determine the actual value, hover the mouse over the graphical representation, as shown below, where the highest direction is "Idle."
In the map, you can drag the warrior to view the probabilities of the warrior in different positions. Right-click on the warrior with the mouse to change the warrior's facing. The status bar below can change the warrior's state on the map.
When training the "Opponent Stunned, Opponent Knockback" bucket, you need to select the status below the opponent's status bar. If you are focusing on "Opponent action" in the Focus Zone, choose the action in the opponent's status bar. If you are focusing on "Your action" in the Focus Zone, choose the action in your own status bar. When training the "Projectile Active" Bucket, drag the projectile on the right side of the dashboard to check the status.
Next
The higher the probability, the faster the reaction. However, be cautious when the action probability reaches 100%. This may cause the warrior to be in a special case of "State Transition," resulting in unnecessary "Idle" states.
Explanation: In each state a fighter is in, there are different "possible transitions". For example, from falling state you cannot do low sweep because low sweep requires you to be on the ground. For the shield state, we do not allow you to directly transition to headbutt. So to do headbutt you have to first exit to another state and then do it from there (assuming that state allows you to do headbutt). This is the reason the fighter runs because "run" action is a valid state transition from shield. Source
V. Learn from Matches
After completing all the training, your model is preliminarily finished—congratulations! The warrior will step onto the arena alone and embark on its debut!
Next, we will learn about the strengths and weaknesses of the warrior from battles to continue refining the warrior's model.
In matches, besides appreciating the performance, pay attention to the following:
- Movement, i.e., Off the Stage: Observe how the warrior gets eliminated. Is it due to issues in the action settings at a certain position, or is it a normal death caused by a high damage percentage? The former is what we need to avoid and optimize.
- Combat: Analyze both sides' actions carefully. Observe which actions you and the opponent used in different states. Check which of your hits are less effective, how the opponent handles different actions, etc.
The approach to battle analysis is similar to the thought process in the "Training", helping to have a more comprehensive understanding of the warrior's performance and making targeted improvements.
VI. Cheat Sheet
Training
1. Click "Collect" to collect actions.
2. "Map - Data Limit" is more user-friendly. Most players perform initial training on the "Arena" map.
3. Switch between the warrior and the dummy: Tab key (keyboard) / Home key (controller).
4. Use "Collect" to make the opponent loop a set of actions.
5. Instantly move the warrior to a specific location: Click "Settings" - SPAWN - Choose the desired location on the map - On. Press the Enter key (keyboard) / Start key (controller) during training.

Inspector
1. Right-click on the fighter to change their direction. Drag the fighter and observe the changes in different positions and directions.
2. When satisfied with the training, click "Save."
3. In "Sparring" and "Simulation," use "Current Working Model."
4. If satisfied with a model, then click "compete." The model used in the rankings is the one marked as "competing."

Sparring / Ranked
1. Use the Throneroom map only for the top 2 or top 10 rankings.
2. There is a 30-second cooldown between matches. Replays are available for every match. Once the battle begins, you can see the winner on the leaderboard or by right-clicking the page - Inspect - Console. Also, if you encounter any errors or bugs, please send screenshots of the console to the Discord server.
Good luck! See you on the arena!
-
@ d34e832d:383f78d0
2025-04-26 14:33:06Gist
This Idea presents a blueprint for creating a portable, offline-first education server focused on Free and Open Source Software (FOSS) topics like Bitcoin fundamentals, Linux administration, GPG encryption, and digital self-sovereignty. Using the compact and powerful Nookbox G9 NAS unit, we demonstrate how to deliver accessible, decentralized educational content in remote or network-restricted environments.
1. Bitcoin, Linux, and Cryptographic tools
Access to self-sovereign technologies such as Bitcoin, Linux, and cryptographic tools is critical for empowering individuals and communities. However, many areas face internet connectivity issues or political restrictions limiting access to online resources.
By combining a high-performance mini NAS server with a curated library of FOSS educational materials, we can create a mobile "university" that delivers critical knowledge independently of centralized networks.
2. Hardware Platform: Nookbox G9 Overview
The Nookbox G9 offers an ideal balance of performance, portability, and affordability for this project.
2.1 Core Specifications
| Feature | Specification |
|:------------------------|:---------------------------------------|
| Form Factor | 1U Rackmount mini-NAS |
| Storage | Up to 8TB (4×2TB M.2 NVMe SSDs) |
| M.2 Interface | PCIe Gen 3x2 per drive slot |
| Networking | Dual 2.5 Gigabit Ethernet ports |
| Power Consumption | 11–30 Watts (typical usage) |
| Default OS | Windows 11 (to be replaced with Linux) |
| Linux Compatibility | Fully compatible with Ubuntu 24.10 |
3. FOSS Education Server Design
3.1 Operating System Setup
- Replace Windows 11 with a clean install of Ubuntu Server 24.10.
- Harden the OS:
- Enable full-disk encryption.
- Configure UFW firewall.
- Disable unnecessary services.
3.2 Core Services Deployed
| Service | Purpose |
|:--------------------|:-----------------------------------------|
| Nginx Web Server | Host offline courses and documentation |
| Nextcloud (optional) | Offer private file sharing for students |
| Moodle LMS (optional) | Deliver structured courses and quizzes |
| Tor Hidden Service | Optional for anonymous access locally |
| rsync/Syncthing | Distribute updates peer-to-peer |
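Before Nginx is in place, the course tree can be sanity-checked over the LAN with Python's standard library alone. A minimal sketch, assuming the content lives under /srv/courses (adjust the path and port to taste):

```python
# Hedged sketch: serve the offline course library for a quick LAN test.
# Assumption: content lives in /srv/courses; Nginx replaces this in production.
import functools
import http.server
import socketserver

handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory="/srv/courses"
)

with socketserver.TCPServer(("0.0.0.0", 8080), handler) as httpd:
    print("Serving course content on port 8080")
    httpd.serve_forever()
```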
3.3 Content Hosted
- Bitcoin: Bitcoin Whitepaper, Bitcoin Core documentation, Electrum Wallet tutorials.
- Linux: Introduction to Linux (LPIC-1 materials), bash scripting guides, system administration manuals.
- Cryptography: GPG tutorials, SSL/TLS basics, secure communications handbooks.
- Offline Tools: Full mirrors of sites like LearnLinux.tv, Bitcoin.org, and selected content from FSF.
All resources are curated to be license-compliant and redistributable in an offline format.
4. Network Configuration
- LAN-only Access: No reliance on external Internet.
- DHCP server setup for automatic IP allocation.
- Optional Wi-Fi access point using a USB Wi-Fi dongle and `hostapd`.
- Access Portal: Homepage automatically redirects users to educational content upon connection.
5. Advantages of This Setup
| Feature | Advantage |
|:-----------------------|:----------------------------------------|
| Offline Capability | Operates without internet connectivity |
| Portable Form Factor | Fits into field deployments easily |
| Secure and Hardened | Encrypted, compartmentalized, and locked down |
| Modular Content | Easy to update or expand educational resources |
| Energy Efficient | Low power draw enables solar or battery operation |
| Open Source Stack | End-to-end FOSS ecosystem, no vendor lock-in |
6. Deployment Scenarios
- Rural Schools: Provide Linux training without requiring internet.
- Disaster Recovery Zones: Deliver essential technical education in post-disaster areas.
- Bitcoin Meetups: Offer Bitcoin literacy and cryptography workshops in remote communities.
- Privacy Advocacy Groups: Teach operational security practices without risking network surveillance.
7. Performance Considerations
Despite PCIe Gen 3x2 limitations, the available bandwidth (~2GB/s theoretical) vastly exceeds the server's 2.5 Gbps network output (~250MB/s), making it more than sufficient for a read-heavy educational workload.
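As rough arithmetic (8 bits per byte, protocol overhead ignored):

$$\frac{2.5\ \text{Gbit/s}}{8\ \text{bits/byte}} \approx 312\ \text{MB/s} \ll 2\ \text{GB/s (PCIe Gen 3x2)}$$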
Thermal Management:
Given the G9’s known cooling issues, install additional thermal pads or heatsinks on the NVMe drives. Consider external USB-powered cooling fans for sustained heavy usage.
8. Ways To Extend
- Multi-language Support: Add localized course materials.
- Bitcoin Node Integration: Host a lightweight Bitcoin node (e.g., Bitcoin Core with pruning enabled or a complete full node) for educational purposes.
- Mesh Networking: Use Mesh Wi-Fi protocols (e.g., cjdns or Yggdrasil) to allow peer-to-peer server sharing without centralized Wi-Fi.
9. Consider
Building a Portable FOSS Education Server on a Nookbox G9 is a practical, scalable solution for democratizing technical knowledge, empowering communities, and defending digital sovereignty in restricted environments.
Through thoughtful system design—leveraging open-source software and secure deployment practices—we enable resilient, censorship-resistant education wherever it's needed.
📎 References
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16Drivechain
Understanding Drivechain requires a shift from the paradigm most bitcoiners are used to. It is not about "trustlessness" or "mathematical certainty", but game theory and incentives. (Well, Bitcoin in general is also that, but people prefer to ignore it and focus on some illusion of trustlessness provided by mathematics.)
Here we will describe the basic mechanism (simple) and incentives (complex) of "hashrate escrow" and how it enables a 2-way peg between the mainchain (Bitcoin) and various sidechains.
The full concept of "Drivechain" also involves blind merged mining (i.e., the sidechains mine themselves by publishing their block hashes to the mainchain without the miners having to run the sidechain software), but this is much easier to understand and can be accomplished either by the BIP-301 mechanism or by the Spacechains mechanism.
How does hashrate escrow work from the point of view of Bitcoin?
1. A new address type is created. Anything that goes in that is locked and can only be spent if all miners agree on the Withdrawal Transaction (`WT^`) that will spend it for 6 months. There is one of these special addresses for each sidechain.
2. To gather miners' agreement `bitcoind` keeps track of the "score" of all transactions that could possibly spend from that address. On every block mined, for each sidechain, the miner can use a portion of their coinbase to either increase the score of one `WT^` by 1 while decreasing the score of all others by 1; or they can decrease the score of all `WT^`s by 1; or they can do nothing.
3. Once a transaction has gotten a score high enough, it is published and funds are effectively transferred from the sidechain to the withdrawing users.
4. If a timeout of 6 months passes and the score doesn't meet the threshold, that `WT^` is discarded.
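To make the per-block voting concrete, here is a hedged sketch of the tally logic just described; the threshold constant is a placeholder rather than a consensus value, and the real bookkeeping lives inside `bitcoind`:

```python
# Hedged sketch of the hashrate-escrow vote tally described above.
# ACK_THRESHOLD is an illustrative placeholder, not a consensus value,
# and the 6-month expiry window is not modeled here.
ACK_THRESHOLD = 13_150

scores: dict[str, int] = {}  # candidate WT^ txid -> current score

def on_new_block(vote) -> list[str]:
    """Apply one miner's per-block vote: ("upvote", txid), "downvote_all", or "abstain"."""
    if vote == "downvote_all":
        for txid in scores:
            scores[txid] = max(0, scores[txid] - 1)
    elif isinstance(vote, tuple) and vote[0] == "upvote":
        chosen = vote[1]
        for txid in scores:
            scores[txid] = scores[txid] + 1 if txid == chosen else max(0, scores[txid] - 1)
    # "abstain" leaves every score unchanged
    return [txid for txid, score in scores.items() if score >= ACK_THRESHOLD]
```

What does the above procedure mean?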
It means that people can transfer coins from the mainchain to a sidechain by depositing to the special address. Then they can withdraw from the sidechain by making a special withdraw transaction in the sidechain.
The special transaction somehow freezes funds in the sidechain while a transaction that aggregates all withdrawals into a single mainchain `WT^` is assembled, which is then submitted to the mainchain miners so they can start voting on it and finally after some months it is published.
Now the crucial part: the validity of the `WT^` is not verified by the Bitcoin mainchain rules, i.e., if Bob has requested a withdrawal from the sidechain to his mainchain address, but someone publishes a wrong `WT^` that instead takes Bob's funds and sends them to Alice's main address, there is no way the mainchain will know that. What determines the "validity" of the `WT^` is the miner vote score and only that. It is the job of miners to vote correctly -- and for that they may want to run the sidechain node in SPV mode so they can attest to the existence of a reference to the `WT^` transaction in the sidechain blockchain (which then ensures it is ok) or do these checks by some other means.
What? 6 months to get my money back?
Yes. But no, in practice anyone who wants their money back will be able to use an atomic swap, submarine swap or other similar service to transfer funds from the sidechain to the mainchain and vice-versa. The costs of the long, delayed withdrawals would be incurred by a few liquidity providers that would gain some small profit from them.
Why bother with this at all?
Drivechains solve many different problems:
It enables experimentation and new use cases for Bitcoin
Issued assets, fully private transactions, stateful blockchain contracts, Turing-completeness, decentralized games, some "DeFi" aspects, prediction markets, futarchy, decentralized and yet meaningful human-readable names, big blocks with a ton of normal transactions on them, a chain optimized only for Lightning-style networks to be built on top of it.
These are some ideas that may have merit to them, but were never actually tried because they couldn't be tried with real Bitcoin or interfacing with real bitcoins. They were either relegated to the shitcoin territory or to custodial solutions like Liquid or RSK that may have failed to gain network effect because of that.
It solves conflicts and infighting
Some people want fully private transactions in a UTXO model, others want "accounts" they can tie to their name and build reputation on top; some people want simple multisig solutions, others want complex code that reads a ton of variables; some people want to put all the transactions on a global chain in batches every 10 minutes, others want off-chain instant transactions backed by funds previously locked in channels; some want to spend, others want to just hold; some want to use blockchain technology to solve all the problems in the world, others just want to solve money.
With Drivechain-based sidechains all these groups can be happy simultaneously and don't fight. Meanwhile they will all be using the same money and contributing to each other's ecosystem even unwillingly, it's also easy and free for them to change their group affiliation later, which reduces cognitive dissonance.
It solves "scaling"
Multiple chains like the ones described above would certainly do a lot to accommodate many more transactions than the current Bitcoin chain can. One could have special Lightning Network chains, but even just big block chains or big-block-mimblewimble chains or whatnot could probably do a good job. Or even something less cool like 200 independent chains just like Bitcoin is today, no extra features (and you can call it "sharding"), just that would already multiply the current total capacity by 200.
Use your imagination.
It solves the blockchain security budget issue
The calculation is simple: you imagine what security budget is reasonable for each block in a world without block subsidy and divide that by the amount of bytes you can fit in a single block: that is the price to be paid in satoshis per byte. In any reasonable estimate, the price necessary for every Bitcoin transaction rises to very large amounts, such that not only does any day-to-day transaction have insanely prohibitive costs, but also Lightning channel opens and closes become impracticable.
So without a solution like Drivechain you'll be left with only one alternative: pushing Bitcoin usage to trusted services like Liquid and RSK or custodial Lightning wallets. With Drivechain, though, there could be thousands of transactions happening in sidechains and being all aggregated into a sidechain block that would then pay a very large fee to be published (via blind merged mining) to the mainchain. Bitcoin security guaranteed.
It keeps Bitcoin decentralized
Once we have sidechains to accommodate the normal transactions, the mainchain functionality can be reduced to being only a "hub" for the sidechains' comings and goings, and then the maximum block size for the mainchain can be reduced to, say, 100kb, which would make running a full node very, very easy.
Can miners steal?
Yes. If a group of coordinated miners is able to secure the majority of the hashpower and keep their coordination for 6 months, they can publish a `WT^` that takes the money from the sidechains and pays it to themselves.
Will miners steal?
No, because the incentives are such that they won't.
Although it may look at first that stealing is an obvious strategy for miners as it is free money, there are many costs involved:
- The cost of ceasing blind-merged mining returns -- as stealing will kill a sidechain, all the fees from it that miners would be expected to earn for the next years are gone;
- The cost of Bitcoin price going down: If a steal is successful that will mean Drivechains are not safe, therefore Bitcoin is less useful, and miner credibility will also be hurt, which are likely to cause the Bitcoin price to go down, which in turn may kill the miners' businesses and savings;
- The cost of coordination -- assuming miners are just normal businesses, they just want to do their work and get paid, but stealing from a Drivechain will require coordination with other miners to conduct an immoral act in a way that has many pitfalls and is likely to be broken over the months;
- The cost of miners leaving your mining pool: when we talked about "miners" above we were actually talking about mining pools operators, so they must also consider the risk of miners migrating from their mining pool to others as they begin the process of stealing;
- The cost of community goodwill -- when participating in a steal operation, a miner will suffer a ton of backlash from the community. Even if the attempt fails in the end, the fact that it was attempted will contribute to growing concerns over exaggerated miner power over the Bitcoin ecosystem, which may end up causing the community to agree on a hard-fork to change the mining algorithm in the future, or to do something to increase the participation of more entities in the mining process (such as the development or cheapening of new ASICs), which would have a chance of decreasing the profits of current miners.
Another point to take into consideration is that one may be inclined to think a newly-created sidechain or a sidechain with relatively low usage may be more easily stolen from, since the blind merged mining returns from it (point 1 above) are going to be small -- but the fact is also that a sidechain with small usage will have less money to be stolen from, and since the other costs besides point 1 are less elastic, in the end it will not be worth stealing from these either.
All of the above considerations are valid only if miners are stealing from good sidechains. If there is a sidechain that is doing things wrong, scamming people, not being used at all, or is full of bugs, for example, that will be perceived as a bad sidechain, and then miners can and will safely steal from it and kill it, which will be perceived as a good thing by everybody.
What do we do if miners steal?
Paul Sztorc has suggested in the past that a user-activated soft-fork could prevent miners from stealing, i.e., most Bitcoin users and nodes issue a rule similar to this one to invalidate the inclusion of a faulty `WT^` and thus cause any miner that includes it in a block to be relegated to their own Bitcoin fork that other nodes won't accept.
This suggestion has made people think Drivechain is a sidechain solution backed by user-activated soft-forks for safety, which is very far from the truth. Drivechains must not and will not rely on this kind of soft-fork, although they are possible, as the coordination costs are too high and no one should ever expect these things to happen.
If even with all the incentives against them (see above) miners do still steal from a good sidechain that will mean the failure of the Drivechain experiment. It will very likely also mean the failure of the Bitcoin experiment too, as it will be proven that miners can coordinate to act maliciously over a prolonged period of time regardless of economic and social incentives, meaning they are probably in it just for attacking Bitcoin, backed by nation-states or something else, and therefore no Bitcoin transaction in the mainchain is to be expected to be safe ever again.
Why use this and not a full-blown trustless and open sidechain technology?
Because it is impossible.
If you ever heard someone saying "just use a sidechain", "do this in a sidechain" or anything like that, be aware that these people are either talking about "federated" sidechains (i.e., funds are kept in custody by a group of entities) or they are talking about Drivechain, or they are disillusioned and think it is possible to do sidechains in any other manner.
No, I mean a trustless 2-way peg with correctness of the withdrawals verified by the Bitcoin protocol!
That is not possible unless Bitcoin verifies all transactions that happen in all the sidechains, which would be akin to drastically increasing the blocksize and expanding the Bitcoin rules in tons of ways, i.e., a terrible idea that no one wants.
What about the Blockstream sidechains whitepaper?
Yes, that was a way to do it. The Drivechain hashrate escrow is a conceptually simpler way to achieve the same thing with improved incentives, less junk in the chain, more safety.
Isn't the hashrate escrow a very complex soft-fork?
Yes, but it is much simpler than SegWit. And, unlike SegWit, it doesn't force anything on users, i.e., it isn't a mandatory blocksize increase.
Why should we expect miners to care enough to participate in the voting mechanism?
Because it's in their own self-interest to do it, and it costs very little. Today over half of the miners mine RSK. It's not blind merged mining, it's a very convoluted process that requires them to run an RSK full node. For the Drivechain sidechains, an SPV node would be enough, or maybe just getting data from a block explorer API, so it would be much, much simpler.
What if I still don't like Drivechain even after reading this?
That is the entire point! You don't have to like it or use it as long as you're fine with other people using it. The hashrate escrow special addresses will not impact you at all, validation cost is minimal, and you get the benefit of people who want to use Drivechain migrating to their own sidechains and freeing up space for you in the mainchain. See also the point above about infighting.
See also
-
@ aa8de34f:a6ffe696
2025-03-31 21:48:50In his post of March 30, 2025, Henning Rosenbusch asks on Telegram, in view of increasing digital control and state omnipotence:
"How is resistance against such a tyranny still supposed to form, even underground? I don't see it."
(Source: t.me/rosenbusch/25228)
He thereby describes a feeling of powerlessness that many share: a world in which totalitarianism no longer arrives with tanks but with algorithms. Where access to money, freedom of expression, and participation depends on good behavior. The citizen as a controllable variable in the state's code. The question is justified. Yet the answer to it does not lie in old images of resistance – but in a new reality.
-- For there is no need for an underground anymore. --
The resistance of the future wears no camouflage. It is not conspiratorial but transparent. Not armed but mathematically provable. Bitcoin does not stand at the margins of this development – it is its foundation. A bastion of physical reality, game-theoretic protection, and economic truth. It is not infallible, but incorruptible. Not perfect, but immune to centralized arbitrariness.
What emerges here is no "digital counter-empire" but a decentralized renaissance. No revolt born of rage, but a quiet turning away: from coercion to voluntariness, from dependence to self-responsibility. This revolution wages no wars. It needs no leaders. It is a network. Every node an individual. Every decision an act of self-empowerment.
Worldwide, citadels of freedom are growing out of this idea: economically self-sufficient, digitally sovereign, locally rooted and globally connected. They are not utopias in a vacuum but concrete realities – driven by energy, code, and the human desire for dignity.
Globalism of the old stamp – centralist, monopolist, paternalistic – will shatter on its own hubris. Its instruments of control will not save it. On the contrary: its own ghosts will haunt it and bring it down.
And while the old powers fight for their preservation, a new world is growing – not in the shadows, but in the open. Built not on violence, but on mathematics, physics, and freedom.
Tyranny sees no resistance.
Because it does not recognize that it has long since begun.
Irrevocably. Quietly. Everywhere.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Bolo
It seems that from 1987 to around 2000 there was a big community of people who played this game called "Bolo". It was a game in which people controlled a tank and killed others while trying to capture bases in team matches. Always 2 teams, from 2 to 16 total players, games could last from 10 minutes to 12 hours. I'm still trying to understand all this.
The game looks silly from some videos you can find today, but apparently it was very deep in strategy because people developed strategy guides and wrote extensively about it and Netscape even supported `bolo:` URLs out of the box.
The two most important elements on the map are pillboxes and bases. Pillboxes are originally neutral, meaning that they shoot at every tank that happens to get in its range. They shoot fast and with deadly accuracy. You can shoot the pillbox with your tank, and you can see how damaged it is by looking at it. Once the pillbox is subdued, you may run over it, which will pick it up. You may place the pillbox where you want to put it (where it is clear), if you've enough trees to build it back up. Trees are harvested by sending your man outside your tank to forest the trees. Your man (also called a builder) can also lay mines, build roads, and build walls. Once you have placed a pillbox, it will not shoot at you, but only your enemies. Therefore, pillboxes are often used to protect your bases.
That quote was taken from this "augmented FAQ" written by some user. Apparently there were many FAQs for this game. A FAQ is after all just a simple, clear and direct to the point way of writing about anything, previously known as summa[^summa-k], it doesn't have to be related to any actually frequently asked question.
More unexpected Bolo writings include an etiquette guide, an anthropology study and some wonderings on the reverse pill war tactic.
[^summa-k]: It's not the same thing, but I couldn't help but notice the similarity.
-
@ de6c63ab:d028389b
2025-04-26 14:06:14Ever wondered why Bitcoin stops at 20,999,999.9769 and not a clean 21M? It’s not a bug — it’s brilliant.
https://blossom.primal.net/8e9e6fffbca54dfb8e55071ae590e676b355803ef18b08c8cbd9521a2eb567a8.png
Of course, it's because of this mythical and seemingly magical formula. Want to hear the full story behind this? Keep reading!
The Simple Math Behind It
In reality, there’s no magic here — it’s just an ordinary summation. That big sigma symbol (Σ) tells you that. The little “i” is the summation index, starting from 0 at the bottom and going up to 32 at the top. Why 32? We’ll get there!
After the sigma, you see the expression: 210,000 × (50 ÷ 2^i). 210,000 blocks represent one halving interval, with about 144 blocks mined per day, amounting to almost exactly four years. After each interval, the block reward halves — that’s what the division by 2^i means.
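Reconstructed in LaTeX, the formula from the image reads:

$$\sum_{i=0}^{32} 210{,}000 \times \frac{50}{2^{i}}$$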
Crunching the Numbers
When i = 0 (before the first halving): 210,000 × (50 ÷ 2^0) = 10,500,000
At i = 1 (after the first halving): 210,000 × (50 ÷ 2^1) = 5,250,000
At i = 2 (after the second halving): 210,000 × (50 ÷ 2^2) = 2,625,000
…
At i = 31: 210,000 × (50 ÷ 2^31) ≈ 0.00489
At i = 32: 210,000 × (50 ÷ 2^32) ≈ 0.00244
And when you sum all of that up? 20,999,999.99755528
Except… that’s not the correct total! The real final number is: 20,999,999.9769
Where the Real Magic Happens
How come?! Here’s where the real fun begins.
We just performed the summation with real (floating-point) numbers. But computers don’t like working with real numbers. They much prefer integers. That’s also one reason why a bitcoin can’t be divided infinitely — the smallest unit is one satoshi, one hundred-millionth of a bitcoin.
And that’s also why there are exactly 33 halvings (0th, 1st, 2nd, …, 31st, 32nd). After the 32nd halving, the block reward would drop below one satoshi, making further halvings meaningless.
https://blossom.primal.net/6abae5b19bc68737c5b14785f54713e7ce11dfdecbe10c64692fc8d9a90c7f34.png
The Role of Integer Math and Bit-Shifting
Because Bitcoin operates with integers (specifically satoshis), the division (reward ÷ 2^i) is actually done using integer division. More precisely, by bit-shifting to the right:
https://blossom.primal.net/3dac403390dd24df4fa8c474db62476fba814bb8c98ca663e6e3a536f4ff7d98.png
We work with 64-bit integers. Halving the value simply means shifting the bits one position to the right.
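A small sketch of that integer logic in Python (mirroring the behavior described here, not quoting Bitcoin Core): sum the 33 reward eras in satoshis, halving with a right shift each era, and the exact total falls out:

```python
# Sum the block subsidy over all 33 reward eras using integer satoshis,
# halving by a right bit-shift exactly as described above.
total = 0
reward = 50 * 100_000_000  # initial subsidy in satoshis

for _ in range(33):
    total += 210_000 * reward
    reward >>= 1  # integer halving silently drops any fractional satoshi

print(total)                # 2099999997690000 satoshis
print(total / 100_000_000)  # 20999999.9769 BTC
```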
What Happens During the Halvings
Notice: during the first 9 halvings (i = 0 to i = 8), we’re just shaving off zeros. But starting with the 9th halving (i = 9), we start losing ones. Every time a “one” falls off, it means we’re losing a tiny fraction — a remainder that would have existed if we were using real numbers.
The sum of all these lost remainders is exactly the difference between the two numbers we saw above.
And that’s why the total bitcoin supply is 20,999,999.9769 — not 21 million exactly.
Did you enjoy this? Got any questions? 🔥🚀
-
@ 1f79058c:eb86e1cb
2025-04-26 13:53:50I'm currently using this bash script to publish long-form content from local Markdown files to Nostr relays.
It requires all of `yq`, `jq`, and `nak`
bash markdown_to_nostr.sh article-filename.md
Create a Nostr event and publish it to one or more relays:
bash markdown_to_nostr.sh article-filename.md ws://localhost:7777 wss://nostr.kosmos.org
Markdown format
You can specify your metadata as YAML in a Front Matter header. Here's an example file:
```markdown
title: "Good Morning" summary: "It's a beautiful day" image: https://example.com/i/beautiful-day.jpg date: 2025-04-24T15:00:00Z tags: gm, poetry published: false
In the blue sky just a few specks of gray
In the evening of a beautiful day
Though last night it rained and more rain on the way
And that more rain is needed 'twould be fair to say.— Francis Duggan ```
The metadata keys are mostly self-explanatory. Note:
- All keys except for
title
are optional date
, if present, will be set as thepublished_at
date.- If
published
is set totrue
, it will publish a kind 30023 event, otherwise a kind 30024 (draft) - The
d
tag (widely used as URL slug for the article) will be the filename without the.md
extension
- All keys except for
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Community
I was an avid IPFS user until yesterday. Many, many times I asked simple questions in the #ipfs IRC channel on Freenode for which I couldn't find an answer on the internet. Most of the time I didn't get an answer, and even when I did it was rarely from someone who knew IPFS deeply. I've had issues go unanswered on js-ipfs repositories for years – one of these was raising awareness of a problem that then got fixed some months later by a complete rewrite; I closed my own issue after realizing that by myself a couple of months later, and I don't think the people responsible for the rewrite ever acknowledged that it had fixed my issue.
Some days ago I asked some questions about how the IPFS protocol worked internally, sincerely trying to understand the inefficiencies in finding and fetching content over IPFS. I pointed out that it would be a good idea to have a drawing showing that, so people would understand the difficulties (which I didn't) and wouldn't be pissed off by the slowness. I was told to read the whitepaper. I already had the whitepaper, but I read the relevant parts again. The whitepaper doesn't explain anything about the DHT and how IPFS finds content. I said that in the room and was told to read it again.
Before anyone misreads this section, I want to say I understand it's a pain to keep answering people on IRC if you're busy developing stuff of interplanetary importance, and that I'm not paying anyone nor do I have the right to be answered. On the other hand, if you're developing a super-important protocol, financed by many millions of dollars, and a lot of people are hitting their heads against your software and there's no one to help them; if you're always busy but never deliver anything that brings joy to your users, something is very wrong. I sincerely don't know what IPFS developers are working on, and I wouldn't doubt they're working on important things if they said so, but what I see – and what many other users see (take a look at the IPFS Discourse forum) – is bugs, bugs all over the place, confusing UX, and almost no help.
-
@ c631e267:c2b78d3e
2025-03-31 07:23:05Insanity in individuals is something rare –
but in groups, parties, nations and epochs, it is the rule.
Friedrich Nietzsche
Do you remember the horror comedy «Scary Movie»? Not that I find this kind of film particularly memorable, but some scenes from it are classics of a sort. One of them parodies the behavior of many protagonists in horror films when they flee in panic. Which way do you think the woman in the situation in this picture will take?
This scene comes to mind automatically when I look at current developments in Europe. Far-reaching decisions run, against all logic, in completely the wrong direction. Only this is anything but a comedy – it is bitterly serious. This horror is unfortunately very real.
The European Union has consistently maneuvered itself into a dead end over the years. It failed to position itself and its policies confidently and in harmony with its roots on its own continent. Instead, in blind loyalty, it has followed its supposed «transatlantic friends» on their course of confrontation toward the East.
In the USA, however, the signs have meanwhile changed, and the once highly praised «friends and partners» no longer appear trustworthy to the European «leaders». That has been obvious at the latest since the Munich Security Conference, the speech by Vice President J. D. Vance, and the outraged reactions to it. Since then, large parts of Europe have seemed like a panicked flock of headless chickens. Orientation and control have been completely lost.
Yet instead of turning around, or at least braking and perhaps looking for an exit, the crash pilots are now really stepping on the gas on their way through the dead end. Indeed, they are even unbuckling the seat belts and deactivating the airbags. The passengers, permanently paralyzed by fear, can think of nothing better either, and so they simply close their eyes. Meanwhile, the commentators of the event outdo one another in sensationalist «reporting».
Like the German foreign minister with the highest UN ambitions, Annalena Baerbock, the European Commission also proclaims a «peace through strength». Commenting on the now-presented, self-destructive roadmap for boosting the arms industry, called the «White Paper for European Defence – Readiness 2030», the Commission president declared that the «era of the peace dividend» is long gone. Is that supposed to mean that peace yields nothing? Comprehensive cooperation on lasting European peace solutions is, at any rate, not up for debate.
Additionally explosive is the fact that currently «the whole EU is governed by Germans», as EU parliamentarian and former UN diplomat Michael von der Schulenburg has observed. Indeed, alongside von der Leyen and Strack-Zimmermann, several other Germans sit in top positions of the Union that are important especially in times of crisis. Against the background of the warmongering in Germany, such dominance must at the very least give pause.
The EU has continuously turned its original core values such as democracy, freedom, peace, and understanding among peoples into empty phrases. These are, in turn, held up and invoked ever more ridiculously.
It is high time that we, the sovereign, put an end to this pathetic and dangerous farce and take the threads into our own hands. In this spirit, the «European Peace Project» also calls on us to proclaim peace on May 9 as part of an art project. Let's be part of it!
[Cover image: Pixabay]
This article was written with the Pareto client and first appeared on Transition News.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Hindu castes in a new key
Shudras seek the greatest good for their own bodies; vaishyas the greatest good for their own earthly life and that of their family; kshatriyas the greatest good for society and this earthly world; brahmins seek the greatest good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Economics
Just a bunch of somewhat-related notes.
- notes on "Economic Action Beyond the Extent of the Market", Per Bylund
- Mises' interest rate theory
- Profits, not wages, as the originary factor
- Reisman on opportunity cost
- Money Supply Measurement
- Per Bylund's insight
- Maybe a new approach to the Austrian Business Cycle Theory, some disorganized thoughts
- An argument according to which fractional-reserve banking is merely theft and nothing else
- Conjecture and criticism
- Qual é o economista? (piadas)
- UBI calculations
- Donations on the internet
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Thoughts on Nostr key management
On "Why I don't like NIP-26 as a solution for key management" I talked about multiple techniques that could be used to tackle the problem of key management on Nostr.
Here are some ideas that work in tandem:
- NIP-41 (stateless key invalidation)
- NIP-46 (Nostr Connect)
- NIP-07 (signer browser extension)
- Connected hardware signing devices
- other things like musig or frostr keys used in conjunction with a semi-trusted server; or other kinds of trusted software, like a dedicated signer on a mobile device that can sign on behalf of other apps; or even a separate protocol that some people decide to use as the source of truth for their keys, and some clients might decide to use that automatically
- there are probably many other ideas
Some premises I have in my mind (that may be flawed) that base my thoughts on these matters (and cause me to not worry too much) are that
- For the vast majority of people, Nostr keys aren't a target as valuable as Bitcoin keys, so they will probably be ok even without any solution;
- Even when you lose everything, identity can be recovered -- slowly and painfully, but still --, unlike money;
- Nostr is not trying to replace all other forms of online communication (even though when I think about this I can't imagine one thing that wouldn't be nice to replace with Nostr) or of offline communication, so there will always be ways.
- For the vast majority of people, losing keys and starting fresh isn't a big deal. It is a big deal when you have followers and an online persona and your life depends on that, but how many people are like that? In the real world I see people deleting social media accounts all the time and creating new ones, people losing their phone numbers or other accounts associated with their phone numbers, and not caring very much -- they just find a way to notify friends and family and move on.
We can probably come up with some specs to ease the "manual" recovery process, like social attestation and explicit signaling -- i.e., Alice, Bob and Carol are friends; Alice loses her key; Bob sends a new Nostr event kind to the network saying what is Alice's new key; depending on how much Carol trusts Bob, she can automatically start following that and remove the old key -- or something like that.
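A hedged sketch of what such an attestation event could look like; the kind number and tag layout are invented for illustration (no such NIP exists), and a real spec would need to pin down semantics and trust rules:

```python
# Hypothetical "key migration attestation" event -- kind and tags are made up.
import json
import time

KEY_MIGRATION_KIND = 30101  # invented for illustration, not a real NIP

def build_migration_attestation(old_pubkey_hex: str, new_pubkey_hex: str) -> dict:
    """Bob attests that a friend's old key has been replaced by a new one."""
    return {
        "kind": KEY_MIGRATION_KIND,
        "created_at": int(time.time()),
        "tags": [
            ["p", old_pubkey_hex],  # the key being retired
            ["p", new_pubkey_hex],  # the claimed replacement
        ],
        "content": "attesting a key migration for a friend",
    }

print(json.dumps(build_migration_attestation("aa" * 32, "bb" * 32), indent=2))
```

Clients like Carol's could then weigh such attestations by how much they trust the attester before swapping keys in their follow lists.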
One nice thing about some of these proposals, like NIP-41, or the social-recovery method, or the external-source-of-truth-method, is that they don't have to be implemented in any client, they can live in standalone single-purpose microapps that users open or visit only every now and then, and these can then automatically update their follow lists with the latest news from keys that have changed according to multiple methods.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Shitcoinery
IPFS was advertised to the Ethereum community from the beginning as a way to "store" data for their "dApps". I don't think this is harmful in any way, but for some reason it may have led IPFS developers to focus too much on Ethereum stuff. Once I watched a talk showing libp2p developers – despite being ignored by the Ethereum team (that ended up creating their own agnostic p2p library) – dedicating an enormous amount of work to getting a libp2p app running in the browser talking to a normal Ethereum node.
The always somewhat-abandoned "Awesome IPFS" site is a big repository of "dApps", some of which don't even have their landing page up anymore, useless Ethereum smart contracts that for some reason use IPFS to store whatever the useless data their users produce.
Again, per se it isn't a problem that Ethereum people are using IPFS, but it is at least confusing, maybe misleading, that when you search for IPFS most of the use-cases are actually Ethereum useless-cases.
See also
- Bitcoin, the only non-shitcoin
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28nix
To install neuron I was forced to download and install nix. I couldn't remember why I hadn't been using that marvelous package-installation system ever since the first time I tried it, years ago.
What a struggle to get it working with `fish`, but still much less of a struggle than the other time. I had to install something called `fish-foreign-environment` (using nix itself!, since the alternative was `oh-my-fish` or some other junk like that) and then use it to apply the bash shell definitions directly in `fish`.
And then I also remembered that `/nix/store` gets way too full: the thing installs everything that exists in this world from scratch. It's only for very rich computers, but let's see how it goes. I'm enjoying neuron (see, I'm using it as a diary), so I'll have to keep nix around.
fica cheio demais, o negócio instala tudo que existe neste mundo a partir do zero. É só para computadores muito ricos, mas vamos ver como vai ser. Estou gostando do neuron (veja, estou usando como diário), então vou ter que deixar o nix aí. -
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28nostr - Notes and Other Stuff Transmitted by Relays
The simplest open protocol that is able to create a censorship-resistant global "social" network once and for all.
It doesn't rely on any trusted central server, hence it is resilient; it is based on cryptographic keys and signatures, so it is tamperproof; it does not rely on P2P techniques, therefore it works.
Very short summary of how it works, if you don't plan to read anything else:
Everybody runs a client. It can be a native client, a web client, etc. To publish something, you write a post, sign it with your key and send it to multiple relays (servers hosted by someone else, or yourself). To get updates from other people, you ask multiple relays if they know anything about these other people. Anyone can run a relay. A relay is very simple and dumb. It does nothing besides accepting posts from some people and forwarding to others. Relays don't have to be trusted. Signatures are verified on the client side.
This is needed because other solutions are broken:
The problem with Twitter
- Twitter has ads;
- Twitter uses bizarre techniques to keep you addicted;
- Twitter doesn't show an actual historical feed from people you follow;
- Twitter bans people;
- Twitter shadowbans people.
- Twitter has a lot of spam.
The problem with Mastodon and similar programs
- User identities are attached to domain names controlled by third-parties;
- Server owners can ban you, just like Twitter; Server owners can also block other servers;
- Migration between servers is an afterthought and can only be accomplished if servers cooperate. It doesn't work in an adversarial environment (all followers are lost);
- There are no clear incentives to run servers, therefore they tend to be run by enthusiasts and people who want to have their name attached to a cool domain. Then, users are subject to the despotism of a single person, which is often worse than that of a big company like Twitter, and they can't migrate out;
- Since servers tend to be run amateurishly, they are often abandoned after a while — which is effectively the same as banning everybody;
- It doesn't make sense to have a ton of servers if updates from every server will have to be painfully pushed (and saved!) to a ton of other servers. This point is exacerbated by the fact that servers tend to exist in huge numbers, therefore more data has to be passed to more places more often;
- For the specific example of video sharing, ActivityPub enthusiasts realized it would be completely impossible to transmit video from server to server the way text notes are, so they decided to keep the video hosted only from the single instance where it was posted to, which is similar to the Nostr approach.
The problem with SSB (Secure Scuttlebutt)
- It doesn't have many problems. I think it's great. In fact, I was going to use it as a basis for this, but
- its protocol is too complicated because it wasn't thought of as an open protocol at all. It was just written in JavaScript, probably in a quick way to solve a specific problem, and grew from that, therefore it has weird and unnecessary quirks like signing a JSON string which must strictly follow the rules of ECMA-262 6th Edition;
- It insists on having a chain of updates from a single user, which feels unnecessary to me and something that adds bloat and rigidity to the thing — each server/user needs to store the whole chain of posts to be sure the new one is valid. Why? (Maybe they have a good reason);
- It is not as simple as Nostr, as it was primarily made for P2P syncing, with "pubs" being an afterthought;
- Still, it may be worth considering using SSB instead of this custom protocol and just adapting it to the client-relay server model, because reusing a standard is always better than trying to get people to adopt a new one.
The problem with other solutions that require everybody to run their own server
- They require everybody to run their own server;
- Sometimes people can still be censored in these because domain names can be censored.
How does Nostr work?
- There are two components: clients and relays. Each user runs a client. Anyone can run a relay.
- Every user is identified by a public key. Every post is signed. Every client validates these signatures.
- Clients fetch data from relays of their choice and publish data to other relays of their choice. A relay doesn't talk to another relay, only directly to users.
- For example, to "follow" someone a user just instructs their client to query the relays it knows for posts from that public key.
- On startup, a client queries data from all relays it knows for all users it follows (for example, all updates from the last day), then displays that data to the user chronologically.
- A "post" can contain any kind of structured data, but the most used ones are going to find their way into the standard so all clients and relays can handle them seamlessly.
How does it solve the problems the networks above can't?
- Users getting banned and servers being closed
- A relay can block a user from publishing anything there, but that has no effect on them as they can still publish to other relays. Since users are identified by a public key, they don't lose their identities and their follower base when they get banned.
- Instead of requiring users to manually type new relay addresses (although this should also be supported), whenever someone you're following posts a server recommendation, the client should automatically add that to the list of relays it will query.
- If someone is using a relay to publish their data but wants to migrate to another one, they can publish a server recommendation to that previous relay and go;
- If someone gets banned from many relays such that they can't get their server recommendations broadcasted, they may still let some close friends know through other means which relay they are publishing to now. Then these close friends can publish server recommendations to that new server, and slowly the old follower base of the banned user will begin finding their posts again from the new relay.
- All of the above is valid too for when a relay ceases its operations.
- Censorship-resistance
- Each user can publish their updates to any number of relays.
- A relay can charge a fee (the negotiation of that fee is outside of the protocol for now) from users to publish there, which ensures censorship-resistance (there will always be some Russian server willing to take your money in exchange for serving your posts).
- Spam
- If spam is a concern for a relay, it can require payment for publication or some other form of authentication, such as an email address or phone, and associate these internally with a pubkey that then gets to publish to that relay — or other anti-spam techniques, like hashcash or captchas. If a relay is being used as a spam vector, it can easily be unlisted by clients, which can continue to fetch updates from other relays.
- Data storage
- For the network to stay healthy, there is no need for hundreds of active relays. In fact, it can work just fine with just a handful, given the fact that new relays can be created and spread through the network easily in case the existing relays start misbehaving. Therefore, the amount of data storage required is, in general, much smaller than what Mastodon or similar software requires.
- Or considering a different outcome: one in which there exist hundreds of niche relays run by amateurs, each relaying updates from a small group of users. The architecture scales just as well: data is sent from users to a single server, and from that server directly to the users who will consume that. It doesn't have to be stored by anyone else. In this situation, it is not a big burden for any single server to process updates from others, and having amateur servers is not a problem.
- Video and other heavy content
- It's easy for a relay to reject large content, or to charge for accepting and hosting large content. When information and incentives are clear, it's easy for the market forces to solve the problem.
- Techniques to trick the user
- Each client can decide how to best show posts to users, so there is always the option of just consuming what you want in the manner you want — from using an AI to decide the order of the updates you'll see to just reading them in chronological order.
FAQ
- This is very simple. Why hasn't anyone done it before?
I don't know, but I imagine it has to do with the fact that people making social networks are either companies wanting to make money or P2P activists who want to make a thing completely without servers. They both fail to see the specific mix of both worlds that Nostr uses.
- How do I find people to follow?
First, you must know them and get their public key somehow, either by asking or by seeing it referenced somewhere. Once you're inside a Nostr social network you'll be able to see them interacting with other people and then you can also start following and interacting with these others.
- How do I find relays? What happens if I'm not connected to the same relays someone else is?
You won't be able to communicate with that person. But there are hints on events that can be used so that your client software (or you, manually) knows how to connect to the other person's relay and interact with them. There are other ideas on how to solve this too in the future but we can't ever promise perfect reachability, no protocol can.
- Can I know how many people are following me?
No, but you can get some estimates if relays cooperate in an extra-protocol way.
- What incentive is there for people to run relays?
The question is misleading. It assumes that relays are free dumb pipes that exist such that people can move data around through them. In this case yes, the incentives would not exist. This in fact could be said of DHT nodes in all other p2p network stacks: what incentive is there for people to run DHT nodes?
- Nostr enables you to move between server relays or use multiple relays but if these relays are just on AWS or Azure what’s the difference?
There are literally thousands of VPS providers scattered all around the globe today; it is not only AWS or Azure. AWS and Azure are precisely the providers used by big centralized services that need a lot of scale, and even those don't use only these two. For smaller relay servers any VPS will do the job very well.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 The Cause
Menger's Principles of Economics is the only book that emphasizes the CAUSE all the time. Scientists all seem not to know, or to keep forgetting, that things have causes and that true knowledge is knowledge of the cause of things.
Cause is a metaphysical category far superior to any correlation or hypothesis-test result; it cannot be discovered by any econometric artifice nor reduced to simple statistical temporal precedence. The cause of phenomena cannot be proven scientifically, but it can be known.
Menger's book tells the reader the causes of various economic phenomena and links them together in such a way that the chaotic world of the economy seems to acquire an order the moment you read it. It is a magical and indescribable feeling.
When I recommended it to you, what I wanted was to imbue you with the spirit of the search for the cause of things. After reading it you are able to perceive causal continuity in the most complex phenomena of today's economy, to see the causes running from every government action to its various consequences in human life. I do this every day, and it is the best feeling in the world when the chaos of the news in the newspaper's Economy section -- which makes no sense even to the journalist who wrote it (so much so that he gets everything wrong) -- fits into an ordered system of causes and consequences.
I probably always err on some or several points, but even so it is marvelous. Or rather, it is even more marvelous when I discover the error and reinsert the correction into that beautiful rationalization of the order of the economic world, which is the order of God.
from a scrap written for T.P.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 idea: Rumple
a payments network based on trust channels
This is the description of a Lightning-like network that will work only with credit or trust-based channels and exist alongside the normal Lightning Network. I imagine some people will think this is undesirable and at the same time very easy to do (such that if it doesn't exist yet it must be because no one cares), but in fact it is a very desirable thing -- which I hope I can establish below -- and at the same time a very non-trivial problem to solve, as the history of Ryan Fugger's Ripple project and later copies of it shows.
Read these first to get the full context:
- Ryan Fugger's Ripple
- Ripple and the problem of the decentralized commit
- The Lightning Network solves the problem of the decentralized commit
- Parallel Chains
Explanation about the name
Since we're copying the fundamental Ripple idea from Ryan Fugger and since the name "Ripple" is now associated with a scam coin called XRP, and since Ryan Fugger has changed the name of his old website "Ripplepay" to "Rumplepay", we will follow his lead here. If "Ripplepay" was the name of a centralized prototype to the open peer-to-peer network "Ripple", now that the centralized version is called "Rumplepay" the peer-to-peer version must be called "Rumple".
Now the idea
Basically we copy the Lightning Network, but without HTLCs or channels being opened and closed with funds committed to them on multisig Bitcoin transactions published to the blockchain. Instead we use pure trust relationships like the original Ripple concept.
And we use the blockchain commit method, but instead of spending an absurd amount of money to use the actual Bitcoin blockchain, we use a parallel chain.
How exactly -- a protocol proposal attempt
It could work like this:
The parallel chain, or "Rumple Chain"
- We define a parallel chain with a genesis block;
- Following blocks must contain:
  a. the ID of the previous block;
  b. a list of up to 32768 entries of arbitrary 32-byte values;
  c. an ID constituted by sha256(the previous block ID + the merkle root of all the entries).
- To be mined, each parallel block must be included in the Bitcoin chain, as explained above.
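A minimal sketch of how a block ID could be computed from the rules above. The exact merkle construction is not specified here, so the pairwise-hashing convention in this sketch is an assumption:

```
const crypto = require("crypto")
const sha256 = (data) => crypto.createHash("sha256").update(data).digest()

// merkle root over the 32-byte entries; odd nodes are paired with
// themselves (an assumed convention, the rules above don't say)
function merkleRoot(entries) {
  if (entries.length === 0) return Buffer.alloc(32)
  let level = entries
  while (level.length > 1) {
    const next = []
    for (let i = 0; i < level.length; i += 2) {
      next.push(sha256(Buffer.concat([level[i], level[i + 1] || level[i]])))
    }
    level = next
  }
  return level[0]
}

// c. the block ID: sha256(previous block ID + merkle root of all entries)
function blockId(prevId, entries) {
  return sha256(Buffer.concat([prevId, merkleRoot(entries)]))
}
```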
Now that we have a structure for a simple "blockchain" that is completely useless, just blocks over blocks of meaningless values, we proceed to the next step of assigning meaning to these values.
The off-chain payments network, or "Rumple Network"
- We create a network of nodes that can talk to each other via TCP messages (all details are the same as the Lightning Network, except where mentioned otherwise);
- These nodes can create trust channels to each other. These channels are backed by nothing except the willingness of one peer to pay the other what is owed.
- When Alice creates a trust channel with Bob (`Alice trusts Bob`), contrary to what happens in the Lightning Network, it's Alice that can immediately receive payments through that channel, and everything Alice receives will be an IOU from Bob to Alice. So Alice should never open a channel to Bob unless Alice trusts Bob. But also Alice can choose the amount of trust she has in Bob: she can, for example, open a very small channel with Bob, which means she will only lose a few satoshis if Bob decides to exit scam her. (In the original Ripple examples these channels were always depicted as friend relationships, and they can continue being that, but it's expected -- given the experience of the Lightning Network -- that the bulk of the channels will exist between users and wallet provider nodes that will act as hubs.)
- As Alice receives a payment through her channel with Bob, she becomes a creditor and Bob a debtor, i.e., the balance of the channel moves a little to her side. Now she can use these funds to make payments over that channel (or make a payment that combines funds from multiple channels using MPP).
- If at any time Alice decides to close her channel with Bob, she can send all the funds she has standing there to somewhere else (for example, another channel she has with someone else, another wallet somewhere else, a shop that is selling some good or service, or a service that will aggregate all funds from all her channels and send a transaction to the Bitcoin chain on her behalf).
- If at any time Bob leaves the network Alice is entitled by Bob's cryptographic signatures to knock on his door and demand payment, or go to a judge and ask him to force Bob to pay, or share the signatures and commitments online and hurt Bob's reputation with the rest of the network (but yes, none of these things is good enough and if Bob is a very dishonest person none of these things is likely to save Alice's funds).
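A toy model of the trust channel state described in the list above, just the bookkeeping with no networking or signatures:

```
// Alice trusts Bob up to `trust` satoshis; a positive balance means
// Bob currently owes Alice that amount.
class TrustChannel {
  constructor(trust) {
    this.trust = trust // the most Alice is willing to lose to Bob
    this.balance = 0   // how much Bob currently owes Alice
  }

  // Alice receives a payment routed through Bob: Bob's debt grows.
  receive(amount) {
    if (this.balance + amount > this.trust) throw new Error("not enough trust left")
    this.balance += amount
  }

  // Alice spends over the channel: her credit with Bob shrinks.
  send(amount) {
    if (amount > this.balance) throw new Error("insufficient credit")
    this.balance -= amount
  }
}
```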
The payment flow
- Suppose there exists a route `Alice->Bob->Carol` and Alice wants to send a payment to Carol.
- First Alice reads an invoice she received from Carol. The invoice (which can be pretty similar to or maybe even the same as BOLT11) contains a payment hash `h` and information about how to reach Carol's node, optionally an amount. Let's say it's 100 satoshis.
- Using the routing information she gathered, Alice builds an onion and sends it to Bob; at the same time she offers Bob a "conditional IOU". That stands for a signed commitment that Alice will owe Bob 100 satoshis if in the next 50 blocks of the Rumple Chain there appears a block containing the preimage `p` such that `sha256(p) == h`.
- Bob peels the onion and discovers that he must forward that payment to Carol, so he forwards the peeled onion and offers a conditional IOU to Carol with the same `h`. Bob doesn't know Carol is the final recipient of the payment; it could potentially go on and on.
- When Carol gets the conditional IOU from Bob, she makes a list of all the nodes who have announced themselves as miners (which is not something I have mentioned before, but nodes that are acting as miners must announce themselves somehow) and are online and bidding for the next Rumple block. Each of these miners will have previously published a random 32-byte value `v` they intend to include in their next block.
- Carol sends payments through routes to all (or a big number) of these miners, but this time the conditional IOU contains two conditions (values that must appear in a block for the IOU to be valid): `p` such that `sha256(p) == h` (the same that featured in the invoice) and `v` (which must be unique and constant for each miner, something that is easily verifiable by Carol beforehand). Also, instead of these conditions being valid for the next 50 blocks, they are valid only for the single next block.
- Now Carol broadcasts `p` to the mempool and hopes one of the miners to which she sent conditional payments sees it and, allured by the possibility of cashing in Carol's payment, includes `p` in the next block. If that does not happen, Carol can try again in the next block.
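A sketch of what a "conditional IOU" from the flow above could look like as a data structure. The `signer` and `chain` objects are hypothetical stand-ins for a real key-management and chain-reading interface:

```
const crypto = require("crypto")
const sha256 = (data) => crypto.createHash("sha256").update(data).digest()

// a signed commitment that only becomes a debt if all named hashes get
// their preimages published in the Rumple chain before the deadline
function makeConditionalIOU(signer, to, amount, conditions, lastValidBlock) {
  const iou = {
    from: signer.publicKey,
    to,
    amount,         // e.g. 100 satoshis
    conditions,     // e.g. [h], or [h, v] for the payments sent to miners
    lastValidBlock, // e.g. current height + 50, or + 1 for miner payments
  }
  iou.signature = signer.sign(JSON.stringify(iou))
  return iou
}

// the IOU holds if, for every condition hash, some block entry up to the
// deadline hashes to it (i.e. the preimage was published in time)
function iouHolds(iou, chain) {
  return iou.conditions.every((h) =>
    chain
      .blocksUpTo(iou.lastValidBlock)
      .some((block) => block.entries.some((p) => sha256(p).equals(h))))
}
```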
Why bother with this at all?
- The biggest advantage of Lightning is its openness
It has been said multiple times that if trust is involved then we don't need Lightning, we can use Coinbase, or worse, Paypal. This is very wrong. Lightning is good especially because it serves as a bridge between Coinbase, Paypal, other custodial providers and someone running their own node. All of these can transact freely across the network and pay each other without worrying about who is in which provider or setup.
Rumple inherits that openness. In a Rumple Network anyone is free to open new trust channels and immediately route payments to anyone else.
Also, since Rumple payments are also based on the reveal of a preimage it can do swaps with Lightning inside a payment route from day one (by which I mean one can pay from Rumple to Lightning and vice-versa).
- Rumple fixes Lightning's fragility
Lightning is too fragile.
It's known that Lightning is vulnerable to multiple attacks -- like the flood-and-loot attack, for example, which, although not easy to execute, is still dangerous even if it fails. Given the existence of these attacks, it's important never to open channels with random anonymous people. Some degree of trust must exist between peers.
But one does not even have to consider attacks. The creation of HTLCs is a liability that every node takes on multiple times during its life. Every initiated, received or forwarded payment requires adding one HTLC and then removing it from the commitment transaction.
Another issue that makes trust needed between peers is the fact that channels can be closed unilaterally. Although this is a feature, it is also a bug when considering high-fee environments. Imagine you pay $2 in fees to open a channel; your peer may close it unilaterally the next second and then you have to pay another $15 to close the channel. The opener pays (this is also a feature that can double as a bug by itself). Even if it's not you opening the channel, a peer can open a channel with you, make a payment, then close the channel, and now you're left with, say, an output of 800 satoshis, which is equal to zero if network fees are high.
So you should only open channels with people you know and know aren't going to actively try to hack you and people who are not going to close channels and impose unnecessary costs on you. But even considering a fully trusted Lightning Network, even if -- to be extreme -- you only opened channels with yourself, these channels would still be fragile. If some HTLC gets stuck for any reason (peer offline or some weird small incompatibility between node software) and you're forced to close the channel because of that, there are the extra costs of sweeping these UTXO outputs plus the total costs of closing and reopening a channel that shouldn't have been closed in the first place. Even if HTLCs don't get stuck, a fee renegotiation event during a mempool spike may cause channels to force-close, become valueless or settle with a very high closing fee.
Some of these issues are mitigated by Eltoo, others by only having channels with people you trust. The others referenced above, plus the griefing attack and in general the ability of anyone to spam the network for free with payments that can stay pending forever or fail repeatedly, make it very fragile.
Rumple solves most of these problems by not having to touch the blockchain at all. Fee negotiation makes no sense. Opening and closing channels is free. Flood-and-loot is a non-issue. The griefing attack can be still attempted as funds in trust channels must be reserved like on Lightning, but since there should be no theoretical limit to the number of prepared payments a channel can have, the griefing must rely on actual amounts being committed, which prevents large attacks from being performed easily.
- Rumple fixes Lightning's unsolvable reputation issues
In the Lightning Conference 2019, Rusty Russell promised there would be pre-payments on Lightning someday, since everybody was aware of potential spam issues and pre-payments would be the way to solve that. Fast-forward to November 2020 and these pre-payments have become an apparently unsolvable problem[^thread-402]: no one knows how to implement them reliably without destroying privacy completely or introducing worse problems.
Replacing these payments with tables of reputation between peers is also an unsolved problem[^reputation-lightning], for the same reasons explained in the thread above.
- Rumple solves the hot wallet problem
Since you don't have to use Bitcoin keys or sign transactions with a Rumple node, only your channel trust is at risk at any time.
- Rumple ends custodianship
Since no one is storing other people's funds, a big hub or wallet provider can be used in multiple payment routes, but it cannot be immediately classified as a "custodian". At best, it will be a big debtor.
- Rumple is fun
Opening channels with strangers is boring. Opening channels with friends and people you trust even a little makes that relationship grow stronger and the trust be reinforced. (But of course, like it happens in the Lightning Network today, if Rumple is successful the bulk of trust will be from isolated users to big reliable hubs.)
Questions or potential issues
- So many advantages, yes, but trusted? Custodial? That's easy and stupid!
Well, an enormous part of the current Lightning Network (and also onchain Bitcoin wallets) already rests on trust, mainly trust between users and custodial wallet providers like ZEBEDEE, Alby, Wallet-of-Satoshi and others. Worse: on the current Lightning Network users not only trust, they also expose their entire transaction history to these providers[^hosted-channels].
Besides that, as detailed in point 3 of the previous section, there are many unsolvable issues on the Lightning protocol that make each sovereign node dependent on some level of trust in its peers (and the network in general dependent on trusting that no one else will spam it to death).
So, given the current state of the Lightning Network, trusting peers the way Rumple requires is not a giant change -- but it is still a significant change: in Rumple you shouldn't open a large trust channel with someone just because they look trustworthy; you must personally know that person and only put in what you're willing to lose. In known brands that have a reputation to lose you can probably deposit more trust, same for long-term friends, and that's all. Still, it is probably good enough, given the existence of MPP payments and the fact that the purpose of Rumple is to be a payments network for day-to-day purchases and not a way to buy real estate.
- Why would anyone run a node in this parallel chain?
I don't know. Ideally every server running a Rumple Network node will be running a Bitcoin node and a Rumple chain node. Besides using it to confirm and publish your own Rumple Network transactions it can be set to do BMM mining automatically and maybe earn some small fees comparable to running a Lightning routing node or a JoinMarket yield generator.
Also it will probably be very lightweight, as pruning is completely free and no verification-since-the-genesis-block will take place.
- What is the maturity of the debt that exists in the Rumple Network or its legal status?
By default it is to be understood as being payable on demand for payments occurring inside the network (as credit can be used to forward or initiate payments by the creditor using that channel). But details of settlement outside the network or what happens if one of the peers disappears cannot be enforced or specified by the network.
Perhaps some standard optional settlement methods (like a Bitcoin address) can be announced and negotiated upon channel creation inside the protocol, but nothing more than that.
[^thread-402]: Read at least the first 10 messages of the thread to see how naïve proposals like the ones you and I could have thought of are brought up and then dismantled very carefully by the group of people most committed to getting Lightning to work properly.
[^reputation-lightning]: See also the footnote at Ripple and the problem of the decentralized commit.
[^hosted-channels]: Although that second part can be solved by hosted channels.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 Game characters and symbols
The feeling of "being" a character in a game or a round of make-believe is perhaps the closest I have ever come to understanding a religious symbol.
The consecrated host is, according to the religion, the body of Christ, but our modern mind can only conceive of it as a representation of the body of Christ. In the same way other cultures and other religions have similar symbols, including ones in which the participant in the ritual himself plays the part of a god or something of the kind.
"Plays the part" is again the modern mind's interpretation. The person there is the thing, but at the same time that he is it he also knows that he is not, that he goes on being himself.
In video games and children's make-believe in which one takes on a character, the player is the character. Among the players no one says that someone is "acting"; he simply is, period. There isn't even another term or another verb for it. At most "embodying", but that is already journalistic vocabulary made to ease comprehension for those outside the game.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 A senior does not own a freshman girl
"VETERANO NÃO É DONO DE BIXETE". A frase em letras garrafais chama a atenção dos transeuntes neófitos. Paira sobre um cartaz amarelo que lista várias reclamações contra os "trotes machistas", que, na opinião do responsável pelo cartaz, "não é brincadeira, é opressão".
Eis aí um bizarro exemplo de como são as coisas: primeiro todos os universitários aprovam a idéia do trote, apoiam sua realização e até mesmo desejam sofrer o trote -- com a condição de o poderem aplicar eles mesmos depois --, louvam as maravilhas do mundo universitário, onde a suprema sabedoria se esconde atrás de rituais iniciáticos fora do alcance da imaginação do homem comum e rude, do pobre e do filhinho-de-papai das faculdades privadas; em suma: fomentam os mais baixos, os mais animalescos instintos, a crueldade primordial, destroem em si mesmos e nos colegas quaisquer valores civilizatórios que tivessem sobrado ali, ficando todos indistingüíveis de macacos agressivos e tarados.
Depois vêm aí com um cartaz protestar contra os assédios -- que sem dúvida acontecem em larguíssima escala -- sofridos pelas calouras de 17 anos e que, sendo também novatas no mundo universitário, ainda conservam um pouco de discernimento e pudor.
A incompreensão do fenômeno, porém, é tão grande, que os trotes não são identificados como um problema mental, uma doença que deve ser tratada e eliminada, mas como um sintoma da opressão machista dos homens às mulheres, um produto desta civilização paternalista que, desde que Deus é chamado "o Pai" e não "a Mãe", corrompe a benéfica, pura e angélica natureza do homem primitivo e o torna esta tão torpe criatura.
Na opinião dos autores desse cartaz é preciso, pois, continuar a destruir o que resta da cultura ocidental, e então esperar que haja trotes menos opressores.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 neuron.vim
I started using this neuron thing to create an update this same zettelkasten, but the existing vim plugin had too many problems, so I forked it and ended up changing almost everything.
Since the upstream repository was somewhat abandoned, most users and people who were trying to contribute upstream migrated to my fork too.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 On "zk-rollups" applied to Bitcoin
ZK rollups make no sense in bitcoin because there is no "cheap calldata". all data is already ~~cheap~~ expensive calldata.
There could be an onchain zk verification that allows succinct signatures maybe, but never a rollup.
What happens is: you can have one UTXO that contains multiple balances on it, and in each transaction you can recreate that UTXO but alter its state, using a zk proof to compress all the internal transactions that took place.
The blockchain must be aware of all these new things, so it is in no way "L2".
And you must have an entity responsible for that UTXO and for conjuring the state changes and zk proofs.
But on bitcoin you also must keep the data necessary to rebuild the proofs somewhere else, and I'm not sure how the third party responsible for that UTXO can ensure that happens.
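A toy model of the construct described above, where one UTXO commits to many internal balances and each on-chain transaction recreates it with a compressed proof. The `prove` function stands in for a real zk proving system, which is an assumption of this sketch:

```
// apply a batch of internal transfers and produce the new state that the
// recreated UTXO would commit to, plus a zk proof compressing the batch
function applyBatch(state, transfers, prove) {
  const balances = { ...state.balances }
  for (const { from, to, amount } of transfers) {
    if ((balances[from] || 0) < amount) throw new Error("insufficient funds")
    balances[from] -= amount
    balances[to] = (balances[to] || 0) + amount
  }
  return {
    balances,
    proof: prove(state.balances, transfers, balances),
  }
}
```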
I think such a construct is similar to a credit card corporation: one central party upon which everybody depends, zero interoperability with external entities, every vendor must have an account with each credit card company to be able to charge customers. Therefore it is not clear that such a thing is more desirable than solutions that are truly open and interoperable, like Lightning, which may have its defects but at least fosters a much better environment, bringing together different conflicting parties, custodians, anyone.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 Splitpages
The simplest possible service: it split PDF pages in half.
Created specially to solve the problem of those scanned books that come with two pages side-by-side as if they were a single page and are much harder to read on Kindle because of that.
It required me to learn about Heroku Buildpacks though, and fork or contribute to a Heroku Buildpack that embedded a mupdf binary.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 tempreites
My first library to get stars on GitHub was a very stupid templating library that used just HTML and HTML attributes ("DSL-free"). I was inspired by http://microjs.com/ at the time and ended up not using the library. Probably no one ever did.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 litepub
A Go library that abstracts all the burdensome ActivityPub things and provides just the right amount of helpers necessary to integrate an existing website into the "fediverse" (what an odious name). Made for the gravity integration.
See also
-
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 questo.email
This was a thing done during a brief period in which I liked the idea of "indiewebcamp", a stupid movement of people saying everybody should have their own site and post their lives on it.
From the GitHub postmortem:
questo.email was a service that integrated email addresses into the indieweb ecosystem by providing email-to-note and email-to-webmention triggers, which could be used for people to comment through webmention using their email addresses and receive replies, and also for people to send messages from their sites directly to the email addresses of people they knew; Questo also worked as an IndieAuth provider that used people's email addresses and Mozilla Persona.
It was live from December 2014 through December 2015.
See also
- jekmentions, another thing related to "indieweb"
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 Boardthreads
This was a very badly done service for turning a Trello list into a helpdesk UI.
Surprisingly, it had more paying users than Websites For Trello, which I was working on simultaneously and dedicating much more time to.
The Neo4j database I used for this was a very poor choice; it was probably the cause of all the bugs.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 Webvatar
Like Gravatar, but using profile images from websites tagged with "microformats-2" tags, like the people from the indiewebcamp movement liked. It fell back to favicons, gravatars and procedural avatar generators.
No one really used this, despite people saying they liked it. Since I was desperate to get some of my programs appreciated by someone, I even bought a domain. It was sad, but an enriching experience.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 The problem with ION
ION is a DID method based on a thing called "Sidetree".
I can't say for sure what the problem with ION is, because I don't understand the design, even though I have read all I could and asked everybody I knew. All available information only touches on the high-level aspects of it (and of course its amazing wonders) and no one has ever bothered to explain the details. I've also asked the main designer of the protocol, Daniel Buchner, but he may have thought I was trolling him on Twitter and refused to answer, instead pointing me to an incomplete spec on the Decentralized Identity Foundation website that I had already read before. I even tried to join the DIF as a member so I could join their closed community calls and hear what they say, maybe eventually ask a question, so I could understand it, but my entrance was ignored, then after many months and a nudge from another member I was told I had to do a KYC process to be admitted, which I refused.
One thing I know is:
- ION is supposed to provide a way to rotate keys seamlessly and automatically without losing the main identity (and the ION proponents also claim there are no "master" keys because these can also be rotated).
- ION is also not a blockchain, i.e. it doesn't have a deterministic consensus mechanism, and it is decentralized, i.e. anyone can publish data to it, it doesn't have to be a single central server, there may be holes in the available data and the protocol doesn't treat that as a problem.
- From all we know about years of attempts to scale Bitcoin and develop offchain protocols it is clear that you can't solve the double-spend problem without a central authority or a kind of blockchain (i.e. a decentralized system with deterministic consensus).
- Rotating keys also suffers from the double-spend problem: whenever you rotate a key it is as if it was "spent", and you aren't supposed to be able to use it again.
The logical conclusion of the 4 assumptions above is that ION is flawed: it can't provide the key rotation it says it can if it is not a blockchain.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 bolt12 problems
- clients can't programmatically build new offers by changing a path or query params (services like zbd.gg or lnurl-pay.me won't work)
- impossible to use in a load-balanced custodian way -- since offers would have to be pregenerated and tied to a specific lightning node.
- the existence of fiat currency fields makes it so wallets have to fetch exchange rates from somewhere on the internet (or offer a bad user experience), using HTTP which hurts user privacy.
- the vendor field is misleading: it can be phished very easily and is not as safe as a domain name.
- onion messages are an improvement over fake HTLC-based payments as a way of transmitting data, for sure. but we must decide if they are (i) suitable for transmitting all kinds of data over the internet, a replacement for tor; or (ii) not something that will scale well or on which we can count on for the future. if there was proper incentivization for data transmission it could end up being (i), the holy grail of p2p communication over the internet, but that is a very hard problem to solve and not guaranteed to yield the desired scalability results. since not even hints of attempting to solve that are being made, it's safer to conclude it is (ii).
bolt12 limitations
- not flexible enough. there are some interesting fields defined in the spec, but who gets to add more fields later if necessary? very unclear.
- services can't return any actionable data to the users who paid for something. it's unclear how business can be conducted without an extra communication channel.
bolt12 illusions
- recurring payments is not really solved, it is just a spec that defines intervals. the actual implementation must still be done by each wallet and service. the recurring payment cannot be enforced, the wallet must still initiate the payment. even if the wallet is evil and is willing to initiate a payment without the user knowing it still needs to have funds, channels, be online, connected etc., so it's not as if the services could rely on the payments being delivered in time.
- people seem to think it will enable pushing payments to mobile wallets, which it does not and cannot.
- there is a confusion of contexts: it looks like offers are superior to lnurl-pay, for example, because they don't require domain names. domain names, though, are common and well-established among internet services and stores, because these services have websites, so this is not really an issue. it is an issue, though, for people that want to receive payments in their homes. for these, indeed, bolt12 offers a superior solution -- but at the same time bolt12 seems to be selling itself as a tool for merchants and service providers when it includes and highlights features as recurring payments and refunds.
- the privacy gains for the receiver that are promoted as being part of bolt12 in fact come from a separate proposal, blinded paths, which should work for all normal lightning payments and indeed are a very nice solution. they are (or at least were, and should be) independent from the bolt12 proposal. a separate proposal, which can be (and already is being) used right now, also improves privacy for the receiver very much anyway: it's called trampoline routing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 The Lightning Network solves the problem of the decentralized commit
Before reading this, see Ripple and the problem of the decentralized commit.
The Bitcoin Lightning Network can be thought of as a system similar to Ripple: there are conditional IOUs (HTLCs) that are sent in "prepare"-like messages across a route, and a secret `p` that must travel from the final receiver backwards through the route until it reaches the initial sender; possession of that secret serves to prove the payment as well as to make the IOU hold true.
The difference is that if one of the parties doesn't send the "acknowledge" in time, the other has a trusted third party with its own clock (the clock that is valid for everybody involved) to complain to immediately at the timeout: the Bitcoin blockchain. If C has `p` and B isn't acknowledging it, C tells the Bitcoin blockchain and it will force the transfer of the amount from B to C.
Differences (or 1 upside and 3 downsides)
- The Lightning Network differs from a "pure" Ripple network in that when we send a "prepare" message on the Lightning Network, unlike on a pure Ripple network we're not just promising we will owe something -- instead we are putting the money on the table already for the other to get if we are not responsive.
- The feature above removes the trust element from the equation. We can now have relationships with people we don't trust, as the Bitcoin blockchain will serve as an automated escrow for our conditional payments and no one will be harmed. Therefore it is much easier to build networks and route payments if you don't always require trust relationships.
- However it introduces the cost of the capital. A ton of capital must be made available in channels and locked in HTLCs so payments can be routed. This leads to potential issues like the ones described in https://twitter.com/joostjgr/status/1308414364911841281.
- Another issue that comes with the necessity of using the Bitcoin blockchain as an arbiter is that it may cost a lot in fees -- much more than the value of the payment that is being disputed -- to enforce it on the blockchain.[^closing-channels-for-nothing]
Solutions
Because the downsides listed above are so real and problematic -- and much more so when attacks from malicious peers are taken into account -- some have argued that the Lightning Network must rely on at least some trust between peers, which partly negates the benefit.
The introduction of purely trust-backed channels is the next step in the reasoning: if we are trusting already, why not make channels that don't touch the blockchain and don't require peers to commit large amounts of capital?
The reason is, again, the ambiguity that comes from the problem of the decentralized commit. Therefore hosted channels can be good when trust is required only from one side, like in the final hops of payments, but they cannot work in the middle of routes without eroding trust relationships between peers (however they can be useful if employed as channels between two nodes run by the same person).
The next solution is a revamped pure Ripple network, one that solves the problem of the decentralized commit in a different way.
[^closing-channels-for-nothing]: That is true even when the payment is so small that it doesn't deserve an actual HTLC that can be enforced on the chain (as per the protocol); even then the channel between the two nodes will be closed, only to make it very clear that there was a disagreement. Leaving it online would be harmful, as one of the peers could repeat the attack again and again. This is proof that ambiguity, in the case of the pure Ripple network, is a very important issue.
-
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 Eltoo
Read the paper, it's actually nice and small. You can read only everything up to section 4.2 and it will be enough. Done.
Ok, you don't want to. Or you tried but still want to read here.
Eltoo is a way of keeping payment channel state that works better than the original scheme used in Lightning. Since Lightning is a bunch of different protocols glued together, Eltoo can replace just the part that previously dealt with keeping the payment channel state.
Eltoo works like this: A and B want a payment channel, so they create a multisig transaction with deposits from both -- or from just one, doesn't matter. That transaction is only spendable if both cooperate. So if one of them is unresponsive or non-cooperative the other must have a way to get his funds back, so they also create an update transaction but don't publish it to the blockchain. That update transaction spends to a settlement transaction that then distributes the money back to A and B as their balances say.
If they are cooperative they can change the balances of the channel by just creating new update transactions and settlement transactions and number them like 1, 2, 3, 4 etc.
Solid arrows mean a transaction is presigned to spend only that specific previous transaction; dotted arrows mean it's a floating transaction that can spend any of the previous ones.
Why do they need an update and a settlement transaction?
Because if B publishes update2 (in which his balance was greater), A needs some time to publish update4 (the latest, which holds the correct state of balances).
Each update transaction can be spent by any newer update transaction immediately or by its own specific settlement transaction only after some time -- or some blocks.
Hopefully you got that.
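A toy model of the spending rule just described (not real Bitcoin script): an update output can be spent by any strictly newer update immediately, or by its own settlement after a relative timeout.

```
function canSpend(output, candidate, blocksSinceConfirmed) {
  if (candidate.type === "update") {
    // any strictly newer update can spend it, immediately
    return candidate.n > output.n
  }
  if (candidate.type === "settlement") {
    // only its own settlement, and only after the delay has passed
    return candidate.n === output.n && blocksSinceConfirmed >= output.delay
  }
  return false
}
```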
How do they close the channel?
If they're cooperative they can just agree to spend the funding transaction, that first multisig transaction I mentioned, to whatever destinations they want. If one party isn't cooperating the other can just publish the latest update transaction, wait a while, then publish its settlement transaction.
How is this better than the previous way of keeping channel states?
Eltoo is better because nodes only have to keep the last set of update and settlement transactions. Before they had to keep all intermediate state updates.
If it is so much better, why didn't they do it first?
Because they didn't have the idea. And also because they needed an update to the Bitcoin protocol that allowed the presigned update transactions to spend any of the previous update transactions. This protocol update is called `SIGHASH_NOINPUT`[^anyprevout]; you've seen this name out there. By marking a transaction with `SIGHASH_NOINPUT` it enters a mystical state and becomes a floating transaction that can be bound to any other transaction as long as its unlocking script matches the locking script.
Why can't update2 bind itself to update4 and spend that?
Good question. It can. But then it can't anymore, because Eltoo uses `OP_CHECKLOCKTIMEVERIFY` in a way that doesn't actually check a locktime, but a sequence number. It's all arcane stuff.
And then Eltoo update transactions are numbered and their lock/unlock scripts will only match if a transaction is being spent by another one that's greater than it.
Do Eltoo channels expire?
No.
What is that "on-chain protocol" they talk about in the paper?
That's just an example to guide you through how the off-chain protocol works. Read carefully or don't read it at all. The off-chain mechanics is different from the on-chain mechanics. Repeating: the on-chain protocol is useless in the real world, it's just a didactic tool.
[^anyprevout]: Later `SIGHASH_NOINPUT` was modified to fit better with Taproot and Schnorr signatures and renamed to `SIGHASH_ANYPREVOUT`.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 idea: Hosted-channels Lightning wallet that runs in the browser
Communicates over HTTP with a server that is actually connected to the Lightning Network, but generates preimages and onions locally, doing everything like the Hosted Channels protocol says. Just the communication method changes.
Could use this library: https://www.npmjs.com/package/bolt04
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 doulas.club
A full catalog of all Brazilian doulas, with data carefully scraped from many websites that contained partial catalogs, plus some data included manually. All this packaged as a Couchapp and served directly from Cloudant.
This was done because the idea of doulas was good, but I spotted an issue: pregnant women should know many doulas before choosing one that would match well; therefore a full catalog with a lot of information was necessary.
This was a huge amount of work, mostly wasted.
Many doulas who knew about this didn't like it and sent angry and offensive emails telling me to remove them. This was information one should know before choosing a doula.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28 The monolithic approach to CouchDB views
Imagine you have an app that creates one document for each day. The doc ids are simply "2015-02-05", "2015-02-06" and so on. Nothing could be simpler. Let's say each day you record "sales", "expenses" and "events", so this is a document for a typical day in the retail management Couchapp for an orchid shop:
{ "_id": "2015-02-04", "sales": [{ "what": "A blue orchid", "price": 50000 }, { "what": "A red orchid", "price": 3500 }, { "what": "A yellow orchid", "price": 11500 }], "expenses": [{ "what": "A new bucket", "how much": 300 },{ "what": "The afternoon snack", "how much": "1200" }], "events": [ "Bob opened the store", "Lisa arrived", "Bob went home", "Lisa closed the store" ] }
Now when you want to know what happened in a specific day, you know where to look at.
But you don't want only that, you want profit reports, cash flows, day profitability, a complete log of the events et cetera. Then you create one view to turn this mess into something more useful:
```
function (doc) {
  var spldate = doc._id.split("-")
  var year = parseInt(spldate[0])
  var month = parseInt(spldate[1])
  var day = parseInt(spldate[2])

  doc.sales.forEach(function (sale, i) {
    emit(["sale", sale.what], sale.price)
    emit(["cashflow", year, month, day, i], sale.price)
  })
  doc.expenses.forEach(function (exp, i) {
    emit(["expense", exp.what], exp["how much"])
    emit(["cashflow", year, month, day, i], -exp["how much"])
  })
  doc.events.forEach(function (ev, i) {
    emit(["log", year, month, day, i], ev)
  })
}
```
Then you add a reduce function with the value of `_sum` and you get a bunch of useful query endpoints. For example, you can request `/_design/orchids/_view/main?startkey=["cashflow", 2014, 12]&endkey=["cashflow", 2014, 12, {}]`
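For context, this is roughly how that map function and the `_sum` reduce would be wired together in a design document (a sketch; the map body is the function shown above, serialized as a string):

```
{
  "_id": "_design/orchids",
  "views": {
    "main": {
      "map": "function (doc) { ... }",
      "reduce": "_sum"
    }
  }
}
```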
-
@ 04c915da:3dfbecc9
2023-09-26 17:34:13 For years American bitcoin miners have argued for more efficient and free energy markets. It benefits everyone if our energy infrastructure is as efficient and robust as possible. Unfortunately, broken incentives have led to increased regulation throughout the sector, incentivizing less efficient energy sources such as solar and wind at the detriment of more efficient alternatives.
The result has been less reliable energy infrastructure for all Americans and increased energy costs across the board. This naturally has a direct impact on bitcoin miners: increased energy costs make them less competitive globally.
Bitcoin mining represents a global energy market that does not require permission to participate. Anyone can plug a mining computer into power and internet to get paid the current dynamic market price for their work in bitcoin. Using cellphone or satellite internet, these mines can be located anywhere in the world, sourcing the cheapest power available.
Absent regulation, bitcoin mining naturally incentivizes the build-out of highly efficient and robust energy infrastructure. Unfortunately that world does not exist and burdensome regulations remain the biggest threat for US based mining businesses. Jurisdictional arbitrage gives miners the option of moving to a friendlier country but that naturally comes with its own costs.
Enter AI. With the rapid development and release of AI tools comes the requirement of running massive datacenters for their models. Major tech companies are scrambling to secure machines, rack space, and cheap energy to run full suites of AI enabled tools and services. The most valuable and powerful tech companies in America have stumbled into an accidental alliance with bitcoin miners: THE NEED FOR CHEAP AND RELIABLE ENERGY.
Our government is corrupt. Money talks. These companies will push for energy freedom and it will greatly benefit us all.
-
@ 66675158:1b644430
2025-03-23 11:39:41 I don't believe in "vibe coding" – it's just the newest Silicon Valley fad trying to give meaning to their latest favorite technology, LLMs. We've seen this pattern before with blockchain, when suddenly Non Fungible Tokens appeared, followed by Web3 startups promising to revolutionize everything from social media to supply chains. VCs couldn't throw money fast enough at anything with "decentralized" (in name only) in the pitch deck. Andreessen Horowitz launched billion-dollar crypto funds, while Y Combinator batches filled with blockchain startups promising to be "Uber for X, but on the blockchain."
The metaverse mania followed, with Meta betting its future on digital worlds where we'd supposedly hang out as legless avatars. Decentralized (in name only) autonomous organizations emerged as the next big thing – supposedly democratic internet communities that ended up being the next scam for quick money.
Then came the inevitable collapse. The FTX implosion in late 2022 revealed fraud, Luna/Terra's death spiral wiped out billions (including my ten thousand dollars), while Celsius and BlockFi froze customer assets before bankruptcy.
By 2023, crypto winter had fully set in. The SEC started aggressive enforcement actions, while users realized that blockchain technology had delivered almost no practical value despite a decade of promises.
Blockchain's promises tapped into fundamental human desires – decentralization resonated with a generation disillusioned by traditional institutions. Evangelists presented a utopian vision of freedom from centralized control. Perhaps most significantly, crypto offered a sense of meaning in an increasingly abstract world, making the clear signs of scams harder to notice.
The technology itself had failed to solve any real-world problems at scale. By 2024, the once-mighty crypto ecosystem had become a cautionary tale. Venture firms quietly scrubbed blockchain references from their websites while founders pivoted to AI and large language models.
Most reading this are likely fellow bitcoiners and nostr users who understand that Bitcoin is blockchain's only valid use case. But I shared that painful history because I believe the AI-hype cycle will follow the same trajectory.
Just like with blockchain, we're now seeing VCs who once couldn't stop talking about "Web3" falling over themselves to fund anything with "AI" in the pitch deck. The buzzwords have simply changed from "decentralized" to "intelligent."
"Vibe coding" is the perfect example – a trendy name for what is essentially just fuzzy instructions to LLMs. Developers who've spent years honing programming skills are now supposed to believe that "vibing" with an AI is somehow a legitimate methodology.
This might be controversial to some, but obvious to others:
Formal, context-free grammar will always remain essential for building precise systems, regardless of how advanced natural language technology becomes
The mathematical precision of programming languages provides a foundation that human language's ambiguity can never replace. Programming requires precision – languages, compilers, and processors operate on explicit instructions, not vibes. What "vibe coding" advocates miss is that beneath every AI-generated snippet lies the same deterministic rules that have always governed computation.
LLMs don't understand code in any meaningful sense—they've just ingested enormous datasets of human-written code and can predict patterns. When they "work," it's because they've seen similar patterns before, not because they comprehend the underlying logic.
This creates a dangerous dependency. Junior developers "vibing" with LLMs might get working code without understanding the fundamental principles. When something breaks in production, they'll lack the knowledge to fix it.
Even experienced developers can find themselves in treacherous territory when relying too heavily on LLM-generated code. What starts as a productivity boost can transform into a dependency crutch.
The real danger isn't just technical limitations, but the false confidence it instills. Developers begin to believe they understand systems they've merely instructed an AI to generate – fundamentally different from understanding code you've written yourself.
We're already seeing the warning signs: projects cobbled together with LLM-generated code that work initially but become maintenance nightmares when requirements change or edge cases emerge.
The venture capital money is flowing exactly as it did with blockchain. Anthropic raised billions, OpenAI is valued astronomically despite minimal revenue, and countless others are competing to build ever-larger models with vague promises. Every startup now claims to be "AI-powered" regardless of whether it makes sense.
Don't get me wrong—there's genuine innovation happening in AI research. But "vibe coding" isn't it. It's a marketing term designed to make fuzzy prompting sound revolutionary.
Cursor perfectly embodies this AI hype cycle. It's an AI-enhanced code editor built on VS Code that promises to revolutionize programming by letting you "chat with your codebase." Just like blockchain startups promised to "revolutionize" industries, Cursor promises to transform development by adding LLM capabilities.
Yes, Cursor can be genuinely helpful. It can explain unfamiliar code, suggest completions, and help debug simple issues. After trying it for just an hour, I found the autocomplete to be MAGICAL for simple refactoring and basic functionality.
But the marketing goes far beyond reality. The suggestion that you can simply describe what you want and get production-ready code is dangerously misleading. What you get are approximations with:
- Security vulnerabilities the model doesn't understand
- Edge cases it hasn't considered
- Performance implications it can't reason about
- Dependency conflicts it has no way to foresee
The most concerning aspect is how such tools are marketed to beginners as shortcuts around learning fundamentals. "Why spend years learning to code when you can just tell AI what you want?" This is reminiscent of how crypto was sold as a get-rich-quick scheme requiring no actual understanding.
When you "vibe code" with an AI, you're not eliminating complexity—you're outsourcing understanding to a black box. This creates developers who can prompt but not program, who can generate but not comprehend.
The real utility of LLMs in development is in augmenting existing workflows:
- Explaining unfamiliar codebases
- Generating boilerplate for well-understood patterns
- Suggesting implementations that a developer evaluates critically
- Assisting with documentation and testing
These uses involve the model as a subordinate assistant to a knowledgeable developer, not as a replacement for expertise. This is where the technology adds value—as a sophisticated tool in skilled hands.
Cursor is just a better hammer, not a replacement for understanding what you're building. The actual value emerges when used by developers who understand what happens beneath the abstractions. They can recognize when AI suggestions make sense and when they don't because they have the fundamental knowledge to evaluate output critically.
This is precisely where the "vibe coding" narrative falls apart.
-
@ c631e267:c2b78d3e
2025-03-21 19:41:50
We will not allow a situation where some things are technically possible \ but the state does not use them. \ Angela Merkel
Explaining modal verbs in German class is sometimes not easy. Not all foreign languages distinguish, for example, when asking about a possibility, between «können» in the sense of "having the opportunity, knowledge, or ability" and «dürfen» as "having the permission or authorization." The Spanish word «poder», for instance, covers both.
Likewise, many students do not immediately see that the logical opposite of «müssen» (must) is not necessarily «nicht müssen» (not having to), but rather «nicht dürfen» (not being allowed to). Traffic signs usually illustrate this quite well: sometimes you must turn, and sometimes you simply may not.
This example is meant to illustrate the tactic of confusion that politics likes to use to introduce unpopular or restrictive measures step by step. At first, something is simply innovative and brings many advantages. Above all, it is voluntary; everyone can decide for themselves, nobody has to take part. Later, fewer and fewer alternatives remain available because they disappear, and at some point everything else turns into "not allowed" – the measure becomes obligatory.
To strategically support the rollout of such initiatives and package them nicely, there are lobbyists, often also called NGOs. That the «NG» at the beginning of this abbreviation stands for «non-governmental» is an anachronism. That may once have been true; today rather the opposite is meant.
In our modern age, an enormous amount of lobbying is devoted to the digitalization of practically all areas of life. What that can mean in the mobility sector is something we examined this week in light of current developments in Spain. Justified in part by European Union requirements, people there are working diligently on a «new mobility» based on «intelligent» technological infrastructure. Such tendencies have already been denounced as «technofeudalism».
National access points for mobility data in the EU sense exist not only in all member states, but also in Switzerland and the United Kingdom. The UK moreover participates in other EU projects for digital surveillance and control measures, such as the biometric identification system for «sustainable transport and tourism».
Naturally, Germany too is marching briskly and euphorically toward the digital future. Without networked mobility and «reliable access to data, including real-time data», traffic planning and management cannot get by, the government explains. Bitkom, the interest group of IT service providers, wants to «drive the digital transformation of German business and administration». To that end it promotes, among other things, the concepts of Smart City, Smart Region, and Smart Country, and claims that Germany's major cities «are going all in on digitalization for mobility».
It is to be feared that the comprehensive collection, processing, and networking of data, which supposedly is meant to support people (and theoretically could), will instead be used to control and manipulate them. The more electric and digital our environment becomes, the greater these possibilities. As a result, such processes could not only restrict citizens or render them superfluous, but in some respects literally switch them off. A healthy skepticism is therefore warranted.
[Cover image: Pixabay]
This article was written with the Pareto client. It first appeared on Transition News.
-
@ aa8de34f:a6ffe696
2025-03-21 12:08:31
19 March 2025
🔐 1. SHA-256 is Quantum-Resistant
Bitcoin’s proof-of-work mechanism relies on SHA-256, a hashing algorithm. Even with a powerful quantum computer, SHA-256 remains secure for the reasons below (a short sketch after the list illustrates the scale):
- Quantum computers excel at factoring large numbers (Shor’s Algorithm).
- However, SHA-256 is a one-way function, meaning there's no known quantum algorithm that can efficiently reverse it.
- Grover’s Algorithm (which theoretically speeds up brute force attacks) would still require 2¹²⁸ operations to break SHA-256 – far beyond practical reach.
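A minimal sketch, assuming Node.js with `node:crypto`, of the two properties just listed: the hash exposes no structure to invert incrementally, and Grover-style brute force remains numerically hopeless. The 10^12 operations-per-second figure is an assumption chosen to be generous to the attacker.

```typescript
import { createHash } from "node:crypto";

// One-way and avalanche: changing a single input character yields an
// unrelated digest, so there is nothing to "reverse" step by step.
const h1 = createHash("sha256").update("block height 840000").digest("hex");
const h2 = createHash("sha256").update("block height 840001").digest("hex");
console.log(h1);
console.log(h2); // shares no usable structure with h1

// Back-of-the-envelope Grover cost: 2^128 evaluations at an optimistic
// 10^12 quantum operations per second is still on the order of 10^19 years.
const ops = 2 ** 128;
const seconds = ops / 1e12;
console.log(`${(seconds / (3600 * 24 * 365)).toExponential(2)} years`);
```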
++++++++++++++++++++++++++++++++++++++++++++++++++
🔑 2. Public Key Vulnerability – But Only If You Reuse Addresses
Bitcoin uses Elliptic Curve Digital Signature Algorithm (ECDSA) to generate keys.
- A quantum computer could use Shor’s Algorithm to break SECP256K1, the curve Bitcoin uses.
- If you never reuse addresses, the public key behind an address is only ever revealed at the moment you spend, which adds a further layer of security.
- 🔑 1. Bitcoin Addresses Are NOT Public Keys
Many people assume a Bitcoin address is the public key—this is wrong.
- When you receive Bitcoin, it is sent to a hashed public key (the Bitcoin address).
- The actual public key is never exposed in advance, because the address is only a hash of it; the key itself appears on-chain only when the coins are spent.
- Bitcoin uses Pay-to-Public-Key-Hash (P2PKH) or newer methods like Pay-to-Witness-Public-Key-Hash (P2WPKH), which add extra layers of security.
🕵️♂️ 2.1 The Public Key Appears Only at Spend Time
- When you send Bitcoin, your wallet creates a digital signature.
- This signature uses the private key to prove ownership.
- Only when that transaction is broadcast is the public key revealed, together with the signature.
- Until then, the public key remains hidden behind the address hash inside the Bitcoin locking script.
This means: ✔ The public key is never exposed in advance. ✔ A quantum attacker has nothing to target beforehand; attacking a Bitcoin address is a zero-value game. A small sketch of this construction follows.
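A minimal sketch of the P2PKH construction described above, assuming Node.js with `node:crypto` (whether `ripemd160` is available depends on the OpenSSL build). The key bytes are illustrative, not a real key, and the base58check encoding step is elided.

```typescript
import { createHash } from "node:crypto";

// The address commits to HASH160(pubkey) = RIPEMD160(SHA256(pubkey)),
// so the public key itself stays hidden until spend time.
function hash160(pubkey: Buffer): Buffer {
  const sha = createHash("sha256").update(pubkey).digest();
  return createHash("ripemd160").update(sha).digest();
}

// 33-byte compressed public key (illustrative bytes, not a real key).
const pubkey = Buffer.alloc(33, 0x02);

// 0x00 version byte = mainnet P2PKH; base58check encoding omitted here.
const payload = Buffer.concat([Buffer.from([0x00]), hash160(pubkey)]);
console.log(payload.toString("hex")); // all an attacker sees in advance: a hash
```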
+++++++++++++++++++++++++++++++++++++++++++++++++
🔄 3. Bitcoin Can Upgrade
Even if quantum computers eventually become a real threat:
- Bitcoin developers can upgrade to quantum-safe cryptography (e.g., lattice-based cryptography or post-quantum signatures like Dilithium).
- Bitcoin’s decentralized nature ensures a network-wide soft fork or hard fork could transition to quantum-resistant keys.
++++++++++++++++++++++++++++++++++++++++++++++++++
⏳ 4. The 10-Minute Block Rule as a Security Feature
- Bitcoin’s network operates on a 10-minute block interval, meaning:
  - Even if an attacker had immense computational power (like a quantum computer), they could only attempt an attack every 10 minutes.
  - Unlike traditional encryption, where a hacker could continuously brute-force keys, Bitcoin’s system resets the challenge with every new block.
  - This limits the window of opportunity for quantum attacks.
🎯 5. Quantum Attack Needs to Solve a Block in Real-Time
- A quantum attacker must solve the cryptographic puzzle (Proof of Work) in under 10 minutes.
- The problem? Any slight error changes the hash completely, meaning:
  - If the quantum computer makes a mistake (even a 0.0001% probability), the entire attack fails.
  - Quantum decoherence (loss of qubit stability) makes error correction a massive challenge.
  - The computational cost of recovering from an incorrect hash is still incredibly high.
⚡ 6. Network Resilience – Even if a Block Is Hacked
- Even if a quantum computer somehow solved a block instantly:
  - The network would quickly recognize and reject invalid transactions.
  - Other miners would continue mining under normal cryptographic rules.
  - A 51% attack? The attacker would need to consistently beat the entire Bitcoin network, which is not sustainable.
🔄 7. The Self-Correcting Difficulty Adjustment Neutralizes Threats
- Bitcoin adjusts mining difficulty every 2016 blocks (~2 weeks); a small numeric sketch follows this list.
- If quantum miners appeared and suddenly started solving blocks too quickly, the difficulty would adjust upward, making attacks significantly harder.
- This self-correcting mechanism ensures that even quantum computers wouldn't easily overpower the network.
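A minimal numeric sketch of the retarget rule just described. The function models the consensus formula (scale difficulty by the ratio of target time to actual time, clamped to a factor of four) in simplified floating-point form rather than Bitcoin's integer target arithmetic.

```typescript
// Every 2016 blocks, difficulty scales by how far the actual block time
// diverged from the two-week target, clamped to 4x in either direction.
function retarget(oldDifficulty: number, actualSeconds: number): number {
  const target = 2016 * 600; // two weeks at one block per 600 seconds
  const ratio = Math.min(Math.max(target / actualSeconds, 0.25), 4);
  return oldDifficulty * ratio;
}

// Quantum miners halving the average block interval would roughly
// double the difficulty at the next retarget:
console.log(retarget(100, (2016 * 600) / 2)); // 200
```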
🔥 Final Verdict: Quantum Computers Are Too Slow for Bitcoin
✔ The 10-minute rule limits attack frequency – quantum computers can’t keep up.
✔ Any slight miscalculation ruins the attack, resetting all progress.
✔ Bitcoin’s difficulty adjustment would react, neutralizing quantum advantages.
Even if quantum computers reach their theoretical potential, Bitcoin’s game theory and design make it incredibly resistant. 🚀
-
@ 266815e0:6cd408a5
2025-04-26 13:10:09
To all existing nostr developers and new nostr developers: stop using kind 1 events... just stop whatever you're doing and switch the kind to `Math.round(Math.random() * 10000)`, trust me, it will be better.
What are kind 1 events
kind 1 events are defined in NIP-10 as "simple plaintext notes" or in other words social posts.
Don't trick your users
Most users are joining nostr for the social experience, and secondly to find all the cool "other stuff" apps. They find friends, browse social posts, and reply to them. If a user signs into a new nostr client and it starts asking them to sign kind 1 events with blobs of JSON, they will sign them without thinking too much about it.
Then when they return to their comfy social apps they will see that they have made 10+ posts with massive amounts of gibberish that they don't remember posting. Then they will probably go looking for the delete button and realize there isn't one...
Even if those kind 1 posts don't contain JSON and have a nice, fancy, human-readable syntax, they will still confuse users, because they won't remember writing those social posts.
What about "discoverability"
If your goal is to make your "other stuff" app visible to more users, then I would suggest using NIP-19 and NIP-89. The first allows users to embed any other event kind into social posts as `nostr:nevent1` or `nostr:naddr1` links, and the second allows social clients to redirect users to an app that knows how to handle that specific kind of event.
links, and the second allows social clients to redirect users to an app that knows how to handle that specific kind of eventSo instead of saving your apps data into kind 1 events. you can pick any kind you want, then give users a "share on nostr" button that allows them to compose a social post (kind 1) with a
nostr:
link to your special kind of event and by extension you appWhy its a trap
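A minimal sketch of the pattern, where kind `39001`, the `d` tag value, and the `naddr` string are hypothetical placeholders. The `id`, `pubkey`, and `sig` fields are omitted because a NIP-01 signing library would compute them.

```typescript
// Your app's data lives in its own kind, not in a kind 1 note.
const appEvent = {
  kind: 39001, // hypothetical custom kind for this app
  content: JSON.stringify({ score: 42 }),
  tags: [["d", "my-app-record"]],
  created_at: Math.floor(Date.now() / 1000),
};

// The "share on nostr" post is a plain kind 1 with a nostr: link,
// so social clients can render it and route users to your app.
const sharePost = {
  kind: 1,
  content: "I just published a new record! nostr:naddr1...placeholder",
  tags: [],
  created_at: Math.floor(Date.now() / 1000),
};
```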
Why it's a trap
Once users start using your app, it becomes a lot more difficult to migrate to a new event kind or data format. This sounds obvious, but if your app is built on kind 1 events, you will be stuck with their limitations forever.
For example, here are some of the limitations of using kind 1:
- Querying for your app's data becomes much more difficult. You have to filter through all of a user's kind 1 events to find which ones were created by your app (see the filter sketch below).
- Discovering your app's data is more difficult for the same reason: you have to sift through all the social posts just to find the ones with your special tag or that contain JSON.
- Users get confused. As mentioned above, users don't expect "other stuff" apps to be creating special social posts.
- Other nostr clients won't understand your data and will show it as a social post, with no option for users to learn about your app.
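To make the first limitation concrete, here is a comparison of NIP-01 relay filters, with kind `39001` and the pubkey as placeholders: a dedicated kind can be fetched precisely in one subscription, while kind 1 forces you to download every social post and sift through them client-side.

```typescript
// Precise: the relay returns only this app's events.
const dedicatedFilter = { kinds: [39001], authors: ["<pubkey>"] };

// Imprecise: the relay returns every social post the user ever made,
// and the app must scan each one for its special tag or JSON blob.
const kind1Filter = { kinds: [1], authors: ["<pubkey>"] };
```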
-
@ 5f078e90:b2bacaa3
2025-04-26 12:01:11
Panda story 3
Initially posted on Hive, story is between 300 and 500 characters. Should become a Nostr kind 30023. Image has markdown.
In a misty bamboo forest, a red panda named Rolo discovered a glowing berry. Curious, he nibbled it and began to float! Drifting over treetops, he saw his friends below, waving. Rolo somersaulted through clouds, giggling as wind tickled his fur. The berry's magic faded at dusk, landing him softly by a stream. His pals cheered his tale, and Rolo dreamed of more adventures, his heart light as the breeze. (349 characters)
Originally posted on Hive at https://hive.blog/@hostr/panda-story-3
Cross-posted using Hostr, version 0.0.3
-
@ a95c6243:d345522c
2025-03-20 09:59:20
Soon it will be forbidden to drive alone in a car, one could read in various Spanish media these days. The national traffic authority (Dirección General de Tráfico, DGT for short) would make life difficult for solo drivers, it was reported. Specifically, the directorate-general was considering suitable sanctions for people traveling in their private car without a passenger.
Driving alone is increasingly frowned upon, and a change of mentality is necessary, it was said. This «luxury» contradicts the anti-pollution measures being promoted in all European countries. In France it is «already forbidden to drive alone in the capital», Noticiastrabajo Huffpost claimed in a subheading – only to clarify in the text that what was meant was the so-called «environmental lane» on the Paris ring road, reserved for buses, taxis, and carpools. From May, violations will be punished with a fine.
The DGT, in any case, did not want to lag behind in implementing such measures. These media reports, including the supposedly imminent ban, were based on statements by the authority's director general, Pere Navarro, at the Global Mobility Call mobility congress last November, which dealt with «sustainable mobility». From that context also comes Navarro's warning: «The future of transport is shared, or there is none».
The «fact-checkers» promptly came to the directorate-general's aid. The DGT had rejected such claims and clarified that there were no plans to ban or penalize single-occupant car trips. Such reports were fake news. In part, the chairman of the Spanish «far-right» party Vox, Santiago Abascal, was accused of authorship because he had commented on a corresponding article by La Gaceta.
Experience shows that an appeasement attempt of the «nobody has any intention» kind is more an alarm signal than a reassurance. Walter Ulbricht's denial of a planned Berlin Wall in June 1961 is remembered by many, as are the German federal health ministry's fake-news warnings about lockdowns in March 2020 and various statements on mandatory vaccination from 2020 onward.
But excitement aside, denials aside: the DGT's press release on the mobility congress in truth contains far more interesting information than «just» an appeal to the «good» citizen regarding quality of life in big cities or a possible obligatory farewell to driving alone. These details, however, are studiously overlooked by media and so-called fact-checkers, even though they are by no means hidden. The disclosures are very revealing if you look more closely.
Digitalization is the key to control
At the congress, the traffic authority presented its initiatives to promote the «new mobility», whose priorities are safety and efficiency. The four concrete approaches all involve digitalization, data, surveillance, and control on a grand scale, and are touted under the euphemism of «public-private partnership». They also reveal the transhumanist idea of the inadequate human being whose errors must be compensated for by «intelligent» technological infrastructure.
The head of the «traffic monitoring» division explained the function of the Spanish National Access Point (NAP), stressing how important real-time traffic and infrastructure information is. The NAP is «an essential web application created under EU mandate», as can be read on the DGT's website.
That mandate refers to regulations on a single European transport area, with which the Union has been pursuing, since at least 2010, the construction of a digital architecture with open interfaces. This is also used to justify «comprehensive data-provision obligations in the area of multimodal travel information». Every member state had to set up a NAP, a national access point providing access to static and dynamic travel and traffic data across different modes of transport.
This development is already well advanced today, also and especially in Spain. At the congress, the head of the «telematics» division explained the «DGT 3.0» platform. It is used as an integrator of all information provided by the various public and private systems that form part of mobility.
It is a mediation platform between actors such as vehicle manufacturers, providers of navigation services, or municipalities, and the end user who uses the roads. All are anonymously connected on the basis of the Internet of Things (IoT) «to deliver valuable information to the connected community or to use it».
Thus DGT 3.0 is «a point of access to unique, free, and accurate real-time information about what is happening on the roads and in the cities». This allows traffic to be made more sustainable and connected. For example, the maps of product partner Google are updated 50 million times a day thanks to DGT data.
The traffic authority also reports on its SCADA project. The abbreviation stands for Supervisory Control and Data Acquisition. SCADA combines software and hardware to create automated systems for monitoring and controlling technical processes. The DGT's SCADA project is being developed by Indra, a Spanish consulting group active in security & defense, energy, transport, telecommunications, and health information.
The authority's SCADA system also includes a video streaming and video recording platform that enables real-time uploading to the cloud, as Indra explains. This involves images captured by surveillance cameras along roads, as well as videos from DGT helicopters and drones. The aim is «to enable the secure transfer of videos to third parties as well as the continuous recording and storage of images for possible analysis and later use».
The latter sounds very much like biometric recognition and analysis by artificial intelligence. For better data transmission, fiber-optic cabling is currently being expanded along country roads and motorways. The cloud refers to Amazon Web Services (AWS); Spanish data thus goes directly to a US «big data» company.
The topic of «autonomous driving», that is, driving without human involvement, concludes the DGT's considerations. Together with ANFAC (Asociación Española de Fabricantes de Automóviles y Camiones), the automotive industry's interest group, strategies and perspectives in this area were discussed at the congress. The lobbyists hope for a normative framework for the expanded support of autonomous technologies as early as this year, 2025.
If you look at such information in context, you get an idea of why everything is increasingly supposed to become electric and digital. Environmental and mobility problems in cities, such as air pollution, noise, lack of space, or congestion, are one thing. With the argument «emission-free», however, a reference is made to CO2 and «man-made climate change», which triggers emotions. And with that, pretty much anything can be sold.
Ultimately, however: the more electric and digital our environment becomes, and the more freely we hand over our data of every kind, the easier we are to control, steer, and even switch off. At some point, AI-based algorithms will decide whether, when, how, where, and with whom we may move. Beyond a 15-minute radius, there may then be nothing. The projects along this path are to be taken seriously; they are real and already well advanced.
[Cover image: Pixabay]
This article first appeared on Transition News.
-
@ a95c6243:d345522c
2025-03-15 10:56:08
What good is the finest debt brake if the Russians are at the door? \ We can learn to defend ourselves, or we can all learn Russian. \ Jens Spahn
In politics, literally no idea is too risky, no means too shabby, and no lie too brazen not to be used. But the trick is that this ploy still works, if anything better and better. Is all this really so hard to see through? I am slowly running out of words.
Currently, both in the European Union and in Germany, gigantic billion-euro packages for rearmament – that is, for the arms industry – are being put together. The EU wants to free up 800 billion euros; in Germany it is to be 500 billion in «special funds». Our «leaders» call this defense, within the Union and also on «our eastern flank», Ukraine.
The necessary enemy image has meanwhile been significantly expanded. To blame for everything, and dangerous besides, is no longer just Putin but now also Trump. Europe must protect and arm itself against both Russia and the USA, we are told.
And while diplomacy by precisely these two states is finally bringing some movement into the efforts toward peace, or at least a ceasefire, in Ukraine, the morally superior finger-wagging Europe is truly rattling its saber.
The whole process is accompanied and supported – how could it be otherwise – by the «quality media». That Russia is planning an attack on «Europe» is known not only to German defense minister (and by far the most popular politician) Pistorius, but thanks to them to every child as well. We have only a few years left. Fortunately, the Bundeswehr is already preparing very concretely for a war.
The FAZ and Corona health minister Spahn mark a sad climax. Here «political and journalistic irresponsibility have propagandistically cross-fertilized each other», as the NachDenkSeiten put it. Spahn's statement in the interview that «the Russian is at the door» is one thing. The newspaper sharpened the matter, however, by explicitly lifting the quote into the headline, which in a first version was apparently too harmless.
A large majority of the German population finds rearmament and more debt splendid, as ARD and ZDF very conveniently claim to have determined. The same supposedly applies to even stronger military support for Ukraine. Respondents were somewhat more skeptical about sending Bundeswehr soldiers there, but still roughly fifty-fifty.
Actually, though, the opinion of the people in «our democracies» is irrelevant. Both in the European Union and in Germany, the «elites» are evidently of the view that the sovereign has no say in questions of war and peace or of ludicrous astronomical debt. Ms. von der Leyen wants to decide on over 150 billion of the total package using Article 122 of the EU Treaty, without the European Parliament – though not entirely without criticism.
In Germany, CDU/CSU and SPD want to whip several amendments to the Basic Law through the voted-out parliament in order to soften the «debt brake». This attempt to achieve a two-thirds majority with the old Bundestag that would no longer exist in the new one is, at the very least, constitutionally controversial.
But the maneuver seems to be working. Today the Greens agreed, after chancellor candidate Merz had promised a paltry 100 billion for «something climate-related». The plenary vote is scheduled for next Tuesday – just one week before the newly elected Bundestag convenes.
The arguments that BlackRocker Merz advances for his attack on the Basic Law and democracy are interesting. Apart from the alleged urgency of «significantly increasing our defense capability» (triggered, among other things, by «the Munich Security Conference and the events in the White House»), the CDU leader let us know that Germany simply has to return to the international stage. Merz held forth with dangerous ambiguity:
«The whole world is looking at Germany in these days and weeks. We have a task in the European Union and in the world that extends far beyond the borders of our own country.»
[Cover image: Victory Day]
This article first appeared on Transition News.
-
@ a95c6243:d345522c
2025-03-11 10:22:36
«We need a digital firewall against fascism», writes the Chaos Computer Club (CCC) on its website. Under this motto, it presented a catalog of demands last week with which 24 organizations address the incoming federal government. The coalition agreement must be measured against it, they insist.
In the three categories «commitment against surveillance», «protection and security for all», and «democracy in the digital space», the signatories, who include Amnesty International and Das NETTZ, set out, among others, the following «minimum requirements»:
- A ban on biometric mass surveillance of public spaces and on the untargeted biometric analysis of the internet.
- Blanket and mass data retention is rejected.
- Automated data analysis of the information holdings of law-enforcement authorities, as well as any form of predictive policing or automated profiling of people, is rejected.
- Introduction of a right to encryption. The federal government should work to prevent chat control at the European level.
- Anonymous and pseudonymous use of the internet should be protected and made possible.
- Combating «private abuse of power by Big Tech companies» through assertive, independent, and fundamentally federal oversight structures.
- Introduction of a digital violence protection act, taking «group-related digital violence» into account, and the promotion of counseling services.
- A comprehensive funding program for digital public spaces that are decentrally organized and open source should be set up.
It is a fallacy that increasing surveillance equals a gain in security, runs one of the initiators' arguments. Security also requires that people can communicate anonymously and confidentially and that their privacy is protected.
Healthy digital spaces also live from democratic discourse, we read in the paper. It is the state's task to protect fundamental rights. That also includes promoting human rights and democratic values, in particular freedom, equality, and solidarity, and preventing the abuse of measures, powers, and infrastructures by «the enemies of democracy».
One is inclined to ask where the authors see «the fascism» that must be fought. Most of the demands and arguments presented will surely find broad support, for they obviously describe common, criticism-worthy practice. The dismantling of privacy, free speech, and other fundamental rights in the name of security is already being pursued massively by the current «democratic institutions» and their «assertive oversight structures».
So is «the fascism» the EU and its member states? No – the «fascist danger» against which a digital firewall is wanted comes, in the view of the CCC and its partners, from the United States. Private surveillance and concentration of power have long been a worldwide reality, but now, finally, they must be fought. The paper states:
«The arbitrary and anti-democratic exercise of power by the tech oligarchs around President Trump requires a paradigm shift in German digital policy. (...) Current events in the USA show how data collection and analysis can be used to take over a state in a coup-like manner, to damage its structures lastingly, to suppress resistance, and to persecute marginalized groups.»
Who is supposed to stand on the other side of this firewall is therefore clear. It is the same «enemies of our democracy» who have been pushed into that corner for years. It is the same dissenters, government critics, and peace advocates who, under the generous roof of the federal program «Demokratie leben», are subjected to «continuous real-time and long-term monitoring» under the label «digital hate».
That the 24 organizations are in practice also demanding the fight against Google, Microsoft, Apple, Amazon, and others is not without comedy. These surely fall under the heading «abuse of power by Big Tech companies». At the same time, the lobbyists are implicitly demanding, for example, the promotion of the Nostr network, for here we find decentrally organized and open-source digital spaces par excellence, censorship-resistant to boot. That, in turn, is likely to go down less well in politics.
[Cover image: Pixabay]
This article first appeared on Transition News.
-
@ 502ab02a:a2860397
2025-04-26 11:14:03
Today let's look back a little at one of the world's first processed vegetable-fat products, which used chemical processing of vegetable oil (hydrogenation) to obtain a semi-solid fat.
In June 1911, Procter & Gamble launched a new product that transformed home cooking all across America. The brand «Crisco», derived from «crystallized cottonseed oil», was born as the first «vegetable shortening» (what we in Thailand call "white butter") made entirely from vegetable oil instead of animal fats like butter or lard – the starting point of a revolution in how American households cooked.
Before that, Americans were used to cooking with butter, cheese, or lard. But Crisco arrived with advertising claiming it was «cleaner» and «more economical», since there was no risk of the off-odors or spoilage of animal fat. Packaged in a clean white can, it looked modern and desirable.
The name «Crisco» was no accident: it came from compressing the phrase «crystallized cottonseed oil» into something short and catchy. (The name «Krispo» was tried first but ran into trademark problems, and «Cryst» was dropped because of religious connotations.)
The key process was taking liquid cottonseed oil and adding hydrogen (hydrogenation) until it solidified at room temperature. The result was trans fat, which helped the product set firm.
Soon after the launch, Crisco's first newspaper and radio advertising began in earnest, accompanied by free «Crisco cookbooks» so housewives could try it for both baking and frying. Thus the «Crisco kitchen era» truly began.
Although Crisco is credited as «the beginning of the vegetable-fat era» in the American kitchen, behind the green-and-white can that filled home pantries lies more drama and more lessons than anyone might expect.
Back in the 1910s, when Procter & Gamble launched Crisco as a «spotlessly clean vegetable fat», advertised as a better alternative to butter and lard, the first challenge was to overcome the beliefs of the housewives of that era, who were still attached to animal fats. P&G's marketers therefore crafted an image of Crisco as an advanced, transparent, hygienic industrial product, and it competed against the old values astonishingly well.
Newspapers of the day said it was a revolutionary vegetable fat, both cheaper and longer-keeping than animal fat.
Then, in the late 1980s, a drama unfolded that exposed the contradictions in the health world: the organization CSPI (Center for Science in the Public Interest) actually praised the use of trans fat from Crisco as «better for blood vessels» compared with the saturated fat of coconut, butter, or animal fats.
This was a time when the medical and nutrition communities were split over what was true and what was false – until research confirmed clearly that trans fat really is harmful to the heart.
As time passed, high-quality research began to show clearly that trans fat was not just a harmless extra in crunchy snacks: it was a silent danger raising the risk of cardiovascular disease and chronic inflammation. The WHO therefore set out measures for the world to «eliminate trans fats» by 2023, prompting Procter & Gamble to reformulate Crisco as a blend of fully hydrogenated cottonseed oil and liquid oil, using interesterification to reach a suitable melting point without creating more trans fat.
Another drama followed when Procter & Gamble had to reformulate Crisco as «trans fat–free» in 2004, discontinued the specialty trans-free formula in 2007, and then switched to fully hydrogenated palm oil under the FDA rule in 2018.
But the full switch to palm oil created a new problem: accusations of tropical deforestation for oil-palm plantations, which became a global controversy over the environment and the human rights of local communities.
So today, when someone still speaks of Crisco with the wide eyes of a child who sees gleaming white vegetable fat as the new answer for a clean kitchen, we might raise a hand and say: «don't forget to look at what's behind it». A fat created by «swapping fatty-acid positions» through high heat and chemicals is not a fat that nature designed our bodies to be familiar with – even with a new name and a new formula, its origin is still the kitchen revolution of 1911.
Crisco is not just the poster child for «healthy vegetable fat»; it is also an important lesson about industrial food marketing, about nutrition research that must never stop evolving, and about the environmental impact when we switch to new raw materials. Our kitchens may be clean and modern, but we must choose carefully and always look behind every can we use.
While we're at it, let me describe the characteristics of white butter at length right here instead of splitting it into another post (too lazy, ha ha).
White butter, technically «shortening», contains no milk or real butter at all. It is a vegetable fat processed to be solid and hold its shape well. The path to shortening begins by converting the unsaturated fats in vegetable oil into more saturated fat, a process called hydrogenation: hydrogen is added to the fat molecules using high temperature and a catalyst such as nickel, so that liquid vegetable oil becomes a stable solid that does not go rancid easily and keeps longer.
A by-product of hydrogenation is trans fat, which the body has almost no system for handling, and which countless studies have confirmed to be a major risk factor for heart and vascular disease and chronic inflammation in the body. Although some producers nowadays have switched to restructuring fats by interesterification, which greatly reduces trans fat, it is still an intervention in the natural structure of the fat – on top of the RBD process we have already discussed – and its long-term effects on the body remain a question that many real-food nutritionists raise.
The word «shortening» comes from the verb shorten, "to make shorter." In the context of baking, it means inhibiting the gluten strands in the dough from developing long and elastic as they naturally would. When flour is mixed with water, its two proteins, glutenin and gliadin, bond into gluten, which is elastic and chewy, right for breads that need a dense, stretchy, springy structure.
But when we add shortening – white butter, oil, or a semi-solid fat – it coats the strands of dough so the gluten proteins cannot bond fully. The gluten strands are «shortened»: instead of stretching as in bread, the crumb becomes crumbly, tender, melt-in-the-mouth, or even crisp, as in cookies, pies, or thin fragrant roti. A texture bakers adore, but one the body may not be so pleased about.
Because the thing to watch is trans fat: many brands whose production falls short may write «no trans fat» or «low trans» on the label, but in reality, if trans fat is below 0.5 grams per serving, the producer may legally declare it as 0. Eat several servings in a row and it is no different from opening the door for trans fat to sneak into your body unnoticed.
Beyond that, it is the same old story we already understand about vegetable oils: plenty of omega-6, plus chemical processing that is fragile under heat, leading to oxidation and on to inflammatory diseases.
Perhaps it is time to open our minds and return to the simplicity of fats from nature. Something that looks too clean, too white, and too shelf-stable may not be what nature intended to be inside us.
P.S. For anyone wondering how white butter differs from margarine: shortening (white butter) is 100% vegetable fat with no water, pressed into a block and highly heat-tolerant, made so the dough «doesn't stretch» and the crumb turns out short and crisp. Margarine blends fat with water, salt, and emulsifiers so it spreads smoothly like butter, but contains about 15–20% water.
#pirateketo #กูต้องรู้มั๊ย #ม้วนหางสิลูก #siamstr
-
@ 5f078e90:b2bacaa3
2025-04-26 10:51:39
Panda story 1
Initially posted on Hive, story is between 300 and 500 characters. Should become a Nostr kind 30023. Image has markdown.
In a misty bamboo forest, a red panda named Rolo discovered a glowing berry. Curious, he nibbled it and began to float! Drifting over treetops, he saw his friends below, waving. Rolo somersaulted through clouds, giggling as wind tickled his fur. The berry's magic faded at dusk, landing him softly by a stream. His pals cheered his tale, and Rolo dreamed of more adventures, his heart light as the breeze. (349 characters)
Originally posted on Hive at https://hive.blog/@hostr/panda-story-1
Cross-posted using Hostr, version 0.0.1
-
@ a95c6243:d345522c
2025-03-04 09:40:50
The «elites» are already conducting large-scale pilot projects for a future they want and we don't. So writes OffGuardian in an update on the topic of the «EU wallet for digital identity». The portal points out that the actors involved are not exactly squeamish and make no secret of their intentions. Transition News has reported on this several times, most recently here and here.
With the EU Digital Identity Wallet (EUDI wallet), a single government-issued app is planned that contains your medical data, employment data, travel data, education data, vaccination data, tax data, financial data, and (potentially) copies of your signature, fingerprints, facial scans, voice samples, and DNA. That is how OffGuardian sums up the impressive list of possible areas of use.
Documents such as the identity card or driver's license can also be stored there in electronic form. By 2026, all EU member states are obliged to provide their citizens with functioning and freely available digital «wallets».
People would use this app, the portal says, to make payments, apply for loans, pay their taxes, pick up their prescriptions, cross international borders, start companies, book doctor's appointments, apply for jobs, and even sign digital contracts online.
All this data would be stored on their mobile phone and shared with the governments of nineteen countries (plus Ukraine) as well as over 140 other public and private partners. From Deutsche Bank to the Ukrainian ministry for digital progress to Samsung Europe. Companies and authorities would access this data on the backend to carry out «automated background checks».
The Federation of German Consumer Organisations (VZBV) had expressed concerns that such an app carries «risks for the protection of privacy and data», the portal reports. The only answer to that: «Right – that's exactly what it's for!»
All of this is not a hypothesis, OffGuardian stresses. It is, rather, «Potential». By that, an EU project is meant in which dozens of public and private institutions are cooperating «to define a unified vision of digital identity for the citizens of European countries». This is just one of the large-scale pilot projects testing prototypes and use cases for the EUDI wallet. There are at least three more.
The ball of digital ID systems was set rolling by the Covid «pandemic» via the «vaccination passports». Since then the topic has lost momentum. But the closer we get to the full introduction of the EUid, the more propaganda of the «why we need a digital wallet» kind we can expect in the mainstream media, OffGuardian predicts. Perhaps we should already be on the lookout for the next big «reason», the next «catastrophic catalytic event». Presumably there are already plans for why people will suddenly need a digital ID wallet.
In any case, developments are steadily moving in exactly this direction. Jordan, for example, has announced that it will introduce digital biometric ID for voter verification at the next elections. The aim is «to end the paper war and ensure that the whole chain is digitalized by the next parliamentary elections», it is said. It is foreseeable that some eligible voters will «fall by the wayside», as happened in the case of Albania.
Meanwhile, the British would gladly trade their privacy for efficiency, claims Tony Blair. The ex-premier recently pushed once again for digital identities and facial recognition. Blair is the founder of a think tank for global change, a devotee of globalist technocracy and «modern infrastructure».
Finally, OffGuardian warns against the illusion that Trump and Musk will spare US citizens «this mess». The Department of Government Efficiency will pounce on digital identity. After all, what could be «more efficient» than a single app used for everything? The only difference is that the US version may be private rather than public – if there is even a real difference there anymore.
[Cover image: screenshot OffGuardian]
This article first appeared on Transition News.
-
@ 044da344:073a8a0e
2025-04-26 10:21:11
"Oh my, that's already four years ago today," Dietrich Brüggemann sighed on X on Tuesday. And: "For my part, I would do it again." Just under 1,400 hearts and a good 300 retweets. Something, on the one hand. On the other hand, the campaign #allesdichtmachen seems to have vanished from collective memory. There is a page on Rumble documenting all 52 videos. Twelve followers and a few clicks. 66, for example, for the wonderful Kathrin Osterode and her idea of carrying the incidence figures into family life and, if it came to that, giving the children away.
Perhaps it is a few more by now, should you click through to bring back that late April evening of 2021 and the happiness that seemed within reach. I can still see myself sitting on the couch, ready for bed, when the link arrived. I couldn't stop. Celebrities, at last. And so many of them, and so good. What became of it, I packed into a book with friends and colleagues exactly one year later – another attempt to preserve an event that changed the public sphere and with it the country, an attempt that trickled away into a niche just like the Rumble page.
I fear that on the fifth anniversary, too, nobody will want to remember #allesdichtmachen, apart of course from Dietrich Brüggemann and a few diehards like me. Media normally love anniversaries, especially round ones. Remember? Ten years ago today? In each of us lives a nostalgic who wants to look back, take stock, find anchor points in the stream of time. The newsrooms know this. They also see it, because they now measure everything we do with their articles. The blinking screens in the opinion factories say: anniversaries always work.
My thesis: #allesdichtmachen breaks this rule, although the campaign brings everything journalism looks for. Celebrity, conflict, and drama with all the trimmings. Passion, tears, and, yes, a historical dimension too. If our grandchildren are still allowed to write cultural histories, they will not be able to leave out Brüggemann & Co. Where else do you find that – an art campaign that changes a country? After those five days in April 2021, everyone knew how the forces in the country were distributed. The word Diskussionskultur (culture of debate) was struck from the Duden. And every survivor of the anti-Axel-Springer demos could see that all the dreams of the sixty-eighters had come true. The Bild newspaper no longer has any say. Put more academically: the relations of definitional power have shifted – away from the paper with the big letters, and thus from milieus without academic degrees or office jobs, toward the leading media of people who depend on the state in some way and therefore have time to tweet themselves a reality.
In order, then. April 22, 2021, a Thursday. Fifteen minutes before midnight, #allesdichtmachen appears in the online edition of the Bild newspaper. Quote: "With irony, wit, and sarcasm, Germany's best-known actresses and actors question the federal government's corona policy and criticize the local culture of debate."
At that point the 53 videos have been online only a few hours, but at least on the "main website of the campaign" they can no longer be accessed. "Apparently hacked," writes the Bild newspaper, and points to YouTube. There are also positive reactions (for instance from the virologist Jonas Schmidt-Chanasit, who is said to have spoken of a "masterpiece") as well as a preview of what would then dominate the leading media: "Some users on Twitter and Facebook are trying to push the campaign into the corona-denier corner. Yet none of the actresses and actors even remotely denies the existence of the coronavirus."
Today we know: Bild set a topic here, but not the tone. Put differently: what still seems valid on Thursday evening is no longer true on Friday. "When you've slept through your own shitstorm," tweets Manuel Rubey the next morning, an actor from Austria who in his video demands that "the theaters, the museums, the cinemas, the cabaret stages never be reopened at all." A week later, Rubey explained his tweet in the Viennese Standard. Right after publication, before going to bed, he "read a few more comments" and had "the feeling that it was being understood as it was meant." The day after: "a Kafkaesque nightmare. Colleagues apologized privately for having deleted their positive comments after all."
That was not the Bild newspaper's doing. The editors stuck to their line and on day five (Monday) offered Dietrich Brüggemann a video stage for a kind of closing word on the debate (length: over twelve minutes), without denouncing the director. Before that, voices appear here that were heard nowhere else – such as Peter-Michael Diestel, the last GDR interior minister, who sees the "culture of debate damaged," or a PR agent who "advised her clients" not to "stand in the storm."
Fire came from all barrels – on Twitter and in the other leading media. The tenor: the criticism is unjustified and harmful. The participants were accused of being "cynical" and "spiteful," of dividing society without contributing anything "constructive," and of thinking only of themselves and "their own situation." Prejudices against art and artists were activated and character assassinations staged. "For me this is art from the ivory tower of the privileged, an elitist whimpering," the actress Pegah Ferydoni told the Süddeutsche Zeitung. Michael Hanfeld attested in the FAZ that the acting professionals had "recited their texts embarrassingly." In Die Zeit the word "dreadful" was used, and a Spiegel video column even spoke of "detergent advertising."
In the Bild newspaper, by contrast, headlines and commentary left no doubt where the editors' sympathies lay. "Film academy president lays into colleagues" stands above the report that Ulrich Matthes had criticized the campaign. Kicker: "'Cynical', 'completely naive and bonkers'". In the photo, Matthes looks arrogant and aloof – like a mutt snapping at everyone. "I am an #allesdichtmachen fan," writes Bild veteran Franz-Josef Wagner above his column on April 25.
More than two dozen articles over that long weekend, most of them in favor. Ralf Schuler, then still head of the parliamentary desk there and a heavyweight in every respect, spoke up twice. "Great cinema!" he says on April 23. The next day, Schuler no longer understands his country: "53 top artists take up the corona mood in the country in videos: contact bans and curfews, alarmism, denunciation, economic hardship, and feelings of powerlessness. The answer: hate, shitstorm, and one SPD politician even muses publicly about professional bans for the participating actors. Within hours the first withdraw their videos in fright, others distance themselves, have to issue public justifications. Seriously?" Schuler continues: "It is the task of art and satire to aim where it hurts, to pick up moods and break them open, to ignore words of power and not leave the virus the last word. Even if part of the approval comes from a shrill, odd, or politically unsavory side. That this even has to be mentioned already describes the problem: a politics that considers its actions beyond alternative, ultimate, and solely true, and suspects critics of wanting to bring death upon Germany."
At least the varnish was finally off this democracy. The campaign #allesdichtmachen was an object lesson. Rally around the flag, whenever those on top command it. Let us march into battle. Yesterday against a virus, today against the Russians, and tomorrow against the whole world – or at least against everyone who asks questions, has doubts, doesn't shout hurrah. Domestic peace? Onto the rubbish heap of history. We should celebrate this anniversary, again and again.
Image sources: screenshots by Daria Gordeeva. Title: Dietrich Brüggemann, text: Kathrin Osterode
-
@ a95c6243:d345522c
2025-03-01 10:39:35
Constant lies and insinuations, permanently false solicitude \ can be building blocks of emotional manipulation. With the purpose \ of exercising power and control over another person. \ Apotheken Umschau
Something must have happened: «gaslighting» is currently a topic in many media. Today, after quite a while, I stumbled over this keyword again. It was in an article by Norbert Häring about manipulations by the German Weather Service (DWD). In this case it concerned Thursday's press release about the «much too warm» winter of 2024/25.
Häring accuses the agency of lying brazenly and leaving things out in order to keep climate fear alive. What the reader does not learn from the DWD, he says, is that this winter was colder than the three preceding ones and colder than the average of the last ten years. Instead, the false impression is conveyed that it just keeps getting warmer without restraint.
So whoever felt the winter now ending was rather cold must have something wrong with their perception. That, at least, is what the DWD wants to talk us into, says the business journalist. And that brings us to the topic of gaslighting.
Gaslighting is the name for a form of psychological manipulation by which victims are disoriented and deeply unsettled, their own perception being declared false. The process leads to anxiety and a distortion of reality, and destroys self-confidence. The term comes from the British play «Gas Light» of 1938, in which a man drives his wife insane with cruel psychological tricks.
For gaslighting to work, the victim must trust the perpetrator. Such psychological terror is therefore often described in private or family settings, as well as in the workplace. Yet the principles are also perfectly suited to manipulating the masses. Supposed authorities such as doctors and scientists, or «the caring state» and institutions like the UN or the WHO, surely mean us no harm. State media, fact-checkers, and various NGOs have likewise been declared «trustworthy sources». That has had its effect.
Why the topic of gaslighting is apparently so popular right now, I cannot say. But strikingly many articles on it have appeared in recent days and weeks, and not only by psychologists. The Frankfurter Rundschau has published several at once, and lawyers are evidently as interested in it as pharmacists.
The Apotheken Umschau even drew attention to «medical gaslighting». This is spoken of when physicians do not take symptoms seriously, or when a health problem is «blithely played down» or brushed aside by the treating doctor. Does that somehow sound familiar? The term is misleading, however, since it implies a manipulative intent that is «not guaranteed».
Speaking of gaslighting: the still-incumbent German federal government announced today that the goal is «to continue [sic!] working together to achieve a just and lasting peace for Ukraine». Ukraine, where «the war of aggression in violation of international law marked its third anniversary on Monday», is defending its country and «our shared values».
Notice anything? The understanding of democracy may indeed be similarly sad in both countries by now. Regarding peace efforts, however, my perception is a different one. That must be down to my memory.
This article first appeared on Transition News.
-
@ a95c6243:d345522c
2025-02-21 19:32:23
Europe – the whole thing is a wonderful idea, \ but so was communism. \ Loriot
«Europe is finished», one could quip, and it would not even be very daring. We would not be alone in such an assessment, for the voices in this direction are multiplying. French president Emmanuel Macron warned as early as last year that «our Europe could die». He presumably had other dangers in mind than Hungarian prime minister Viktor Orbán, who now predicted a «swift end of the EU». The result, however, could be the same.
Beyond surface-level areas such as the economy, energy, and security, the real problem is the obscure mixture of surrendered sovereignty and swollen arrogance with which European politicians of various stripes tend to present themselves. And the icing on the cake is the crumbling legitimacy of political institutions, given that the voices of large parts of the population have for years been excluded in manifold ways.
«OurDemocracy» is in a bad way. That its mandates are growing ever weaker is of course noticed by our «leadership personnel» too. Accordingly, the measures for disciplining, surveilling, and manipulating citizens are becoming ever more desperate. In parallel, Macron, Scholz, and a few others in Paris are once more puffing themselves up mightily on defense and «war-readiness».
At the moment, it is also essential to prevent covidiot-hunting and conspiracy-ideology excesses from spilling over from the USA to Europe. We can't use a «MEGA» (Make Europe Great Again) here. Terrible news is coming from the United States. For example, one of the sharpest critics of the corona measures was recently appointed health minister. He is now pushing for a reassessment of the mRNA «vaccines», which could well lead to a withdrawal of their approvals.
Against the European version of «defending democracy», US vice president J. D. Vance set out his own understanding at the Munich Security Conference: «strengthening democracy by allowing our citizens to speak their minds». Shutting down media, annulling elections, or excluding people from the political process protects nothing. Rather, it is the surefire way to destroy democracy.
In Switzerland his words went down considerably better than in most European NATO countries. Federal president Karin Keller-Sutter praised the speech and interpreted it as a «plea for direct democracy». Possibly a foreign-policy about-face toward integral neutrality is emerging here, says my colleague Daniel Funk. That, at last, would be some good news.
From the original idea of a European union with closer relations between states, meant to avoid conflicts and improve the well-being of citizens, we have strayed miles away. Today's corrupt alliance under technocratic leadership resembles a self-service shop with very limited access. The EU elections last summer changed as little about that as the Bundestag election this coming Sunday will.
This article first appeared on Transition News.
-
@ a95c6243:d345522c
2025-02-19 09:23:17
The «moral world order» – a kind of astrology. Friedrich Nietzsche
The meeting of the BRICS states at their summit in the Russian city of Kazan was certainly not just any political event. Host Vladimir Putin «held court», say some; China and Russia celebrated their vision of a multipolar world order, write others.
In any case, the presence of more than 30 delegations from all over the world shows that there can hardly be any talk of a geostrategic isolation of Russia. Beyond that, both the arrival of UN Secretary-General António Guterres and the reports and denials concerning the accession efforts of NATO member Turkey caused something of a stir.
Amid geopolitical and economic upheavals, the new alliance is displaying growing self-confidence. Interesting plans are being forged on joint financial policy. Greater independence from the US-dominated financial order is an important goal here.
At the BRICS Business Forum in Moscow, a few days before the summit, a sustainable system for financial settlement and payment services ranked among the priority topics. During that meeting, the Russian sovereign wealth fund entered into a partnership with the data-center operator BitRiver to build Bitcoin mining facilities for the BRICS countries.
The initiative could be a step toward establishing Bitcoin and other cryptocurrencies as alternatives to traditional financial systems. The project could lead to the BRICS states settling global trade in Bitcoin. Against the backdrop of the discussions about a «BRICS currency», this would be an alternative to the originally envisaged basket of local currencies and to gold-backed currencies, as well as a possible complement to the payment system BRICS Pay.
So does Bitcoin serve de-dollarization? Or is it now at risk of becoming a pawn in geopolitical power games? Given the global entanglements, it is often hard to see through «what is a show and what is secretly steered in the background by other puppet masters». Certainly, structures like Bitcoin can also be used in ways that serve the powerful. But its fundamental property as a decentralized, censorship-resistant peer-to-peer payment network cannot, in the end, be taken away from it.
If the ECB or the IMF have their way, the agenda at the moment seems to be the fight against cryptocurrencies rather than their instrumentalization. Jürgen Schaaf, Senior Manager at the European Central Bank, has at any rate called for Bitcoin to be «eliminated». The International Monetary Fund recently urged El Salvador, which introduced Bitcoin as legal tender in 2021, to take restrictive measures against the cryptocurrency.
Whether the BRICS states have anything libertarian in mind when they bring cryptocurrencies into play may, however, also be doubted. In the final document, the summit participants expressly commit themselves to the UN, its programs and its «Agenda 2030». Ernst Wolff calls this «a declaration of bankruptcy by corrupt politicians who submit one hundred percent to the digital-financial complex».
This article first appeared on Transition News.
-
@ a95c6243:d345522c
2025-02-15 19:05:38
This year's Munich Security Conference revolves above all around Ukraine. The protagonists, to begin with, are the Americans. President Trump shocked the Europeans shortly beforehand with a phone call to his counterpart Vladimir Putin, while Vice President Vance caused stony faces and indignation with his speech on democracy and freedom of expression.
The Europeans' efforts toward peace in Ukraine are, to put it mildly, limited. Greater attention goes to military support, the cultivation of enemy images and escalation. German Chancellor Scholz reacted to the announced negotiations on a possible peace for Ukraine with a demand for even higher «defense spending». The incumbent foreign minister Baerbock had likewise made things clear before the Munich conference:
«Peace will come only through strength. (...) With corona we saw what Europe is capable of. Once again it takes investments commensurate with the historic turning point before us.»
The arms industry, in any case, is delighted by rising military spending worldwide. The wars in Ukraine and Gaza are contributing to record revenues. Now «the prospect of years of large-scale rearmament in Europe beckons», even if the Ukraine war were to end, one hears from financial circles. As a consequence, «the share price of Rheinmetall, Germany's flagship arms group, apparently knows no limits anymore in its climb». «Peace promises» like the current one, by contrast, had led to sharp price losses in the past.
For some people, weapons of war and other armaments are commodities like any other, at least from the perspective of investors or managers. Here too there are startups, and one speaks of things like innovative approaches, highly efficient production facilities, scalable production techniques and lower unit costs.
We currently read of mass production and expanded manufacturing capacities for war equipment. The motor of such dynamism and growth is the rearmament that is by now being demanded incessantly. In parallel, the population is being verbally attuned and drilled for war-readiness.
The armaments and AI startup Helsing recently announced «decentralized mass production for the Ukraine war». With this expansion, the Munich-based company is positioning itself as one of the world's leading manufacturers of combat drones. The next «milestone» is already lined up: it wants to build a satellite fleet in space for monitoring battlefields and troop movements.
Also from Munich comes ARX Robotics, a company described as a DefenseTech startup. It recently opened the largest European production facility for autonomous defense systems in the region, ramping up its output of military robots. This expansion also serves the delivery of the «largest fleet of unmanned ground systems of Western design» to Ukraine.
Armaments are booming and appear to be a market of the future. The manufacturers and marketers emphasize that their activities and products are meant to increase Europe's defense capability. Their strategies are even supposed to «contribute to the protection of democratic structures».
This article first appeared on Transition News.
-
@ c631e267:c2b78d3e
2025-02-07 19:42:11
Only if we approach one another do we have the chance \ to overcome our mutual resentments! \ Dr. med. dent. Jens Knipphals
In Wolfsburg, a discussion round between critics of the corona policy, Mayor Dennis Weilmann and representatives of the city administration was recently supposed to take place. The dentist and long-standing critic of the measures, Jens Knipphals, had secured and publicized this invitation to the town hall. His motivation:
«I want to overcome the division of society. For that, a comprehensive public reappraisal of the corona crisis is necessary.»
Knipphals had demanded answers from the local politicians before, for example at public citizens' question sessions. For the expected meeting in the town hall he formulated questions such as: Why were the critics' substantive arguments ignored? Why was nothing done to counter their marginalization, discrediting and dehumanization? In what form do the council and administration in Wolfsburg personally assume responsibility for the considerable consequences of the political corona crisis?
The appointment, however, did not take place – the mayor canceled it shortly beforehand. Knipphals subsequently called Weilmann a repeat offender, since the head of the city had already invited people to a round table on the matter in 2022, which then never happened. Speaking to Multipolar, the dentist explained that Weilmann apparently wants to prevent a public reappraisal by all means. He himself is «by now absolutely disillusioned», and the only solution is for those responsible to go.
Coming to terms with the plandemic begins with each of us individually, but is ultimately a task for society as a whole, writes Peter Frey, who also covers the «Wolfsburg case» on his blog. That task, though, is considerably bigger than many believe. Happily, the public information space has grown in the meantime, despite the continuing brazen disinformation campaigns of the established mass media.
Frey recalls that Dennis Weilmann shares responsibility for grave restrictions of fundamental rights, such as the 2G rules introduced in Wolfsburg's city center in 2021. It would be naive to assume that a functionary acts solely in the citizens' interest. As the car city's former head of the department for economy, digitalization and culture, Weilmann is well acquainted, for example, with how subsidies are tied to political objectives.
Wolfsburg became a model project of the Federal Ministry of the Interior (BMI) at the time and was a finalist in the Bitkom competition «Digitale Stadt». Thus, well before the plandemic, the project «Smart City Wolfsburg» was able to get underway, assigning the city «a pioneering role for comprehensive networking and data collection», says Frey. The United Nations then sold such «intelligent» surveillance and control measures as salvation in times of need, just as Forbes magazine did in April 2020:
«Smart cities can help us combat the coronavirus pandemic. In a growing number of countries, that is exactly what smart cities are doing. Governments and local authorities are using smart-city technologies, sensors and data to trace the contacts of people infected with the coronavirus. At the same time, the smart cities are also helping to determine whether social-distancing rules are being observed.»
Obviously, there are many aspects to consider and to scrutinize when it comes to the clarification and reappraisal of the so-called «corona pandemic» and the measures decreed. Frustration and disillusionment are absolutely understandable in view of the realities. Precisely for that reason, initiatives like that of Jens Knipphals are so admirable and so important – as is one of his core theses: «We must approach one another; there is no way around it».
This article first appeared on Transition News.
-
@ 91bea5cd:1df4451c
2025-04-26 10:16:21
The Brazilian Legal Context and Consent
In the Brazilian legal system, the consent of the injured party can, in certain circumstances, remove the unlawfulness of an act that would otherwise constitute a crime (such as minor bodily injury, provided for in Art. 129 of the Penal Code). Consent has clear limits, however: it is not valid for inalienable legal goods, such as life, and its effectiveness is questionable in cases of serious or very serious bodily injury.
The practice of consensual BDSM sits in a complex zone. In theory, if both partners are adults, legally capable, and have freely and informedly consented to the acts practiced, and these do not result in permanent serious injury or a non-consented risk of death, there is no crime. The challenge lies in proving that consent, especially if one of the parties later denies it or alleges coercion.
The Maria da Penha Law (Law No. 11.340/2006)
The Maria da Penha Law is a fundamental milestone in the protection of women against domestic and family violence. It establishes mechanisms to curb and prevent such violence, defining its forms (physical, psychological, sexual, patrimonial and moral) and providing for urgent protective measures.
Although essential, applying the law in BDSM contexts can be delicate. An allegation of violence by the woman, even where the injuries or situations stem from consensual practices, tends to receive priority attention from the authorities, given the presumption of vulnerability the law establishes. This can create a scenario in which the male partner faces significant difficulties in demonstrating the consensual nature of the acts, especially if no robust evidence was put in place beforehand.
Other risks:
Serious or very serious bodily injury (Art. 129, §§ 1 and 2, Penal Code) cannot be justified by consent and may give rise to criminal prosecution.
Crimes against sexual dignity (Arts. 213 ff. of the Penal Code) are prosecuted unconditionally as public offenses and do not depend on the victim's complaint for investigation and indictment.
Risks of False Accusations and Future Allegations of Coercion
The risks for BDSM practitioners, especially for the partner who takes the dominant role or who inflicts pain/restraint (frequently, but not exclusively, the man), can arise on several fronts:
- External Accusations: Neighbors, family members or friends unaware of the consensual nature of the relationship may interpret sounds, marks or behaviors as signs of abuse and report them to the authorities.
- Future Allegations by the Partner: In the event of a bitter breakup, revenge, regret or a change of perspective, the partner may reinterpret past practices as abuse and seek redress or retaliation through a criminal complaint. The allegation may be that consent never existed or that it was vitiated.
- Allegation of Coercion: One of the hardest claims to refute is the allegation that consent was obtained through coercion (physical, moral, psychological or economic). The partner may allege, for example, that she felt pressured, intimidated or dependent, and that her "yes" was not genuine. Proving the absence of coercion after the fact is extremely difficult.
- Male Naivety and Vulnerability: Many men, trusting the consensual dynamic and their partner, may neglect the need for precautions. The belief that "this would never happen to me", or ignorance of the legal implications and of the procedural weight of an accusation under the Maria da Penha Law, can leave them vulnerable. The presence of physical marks, even consensual ones, can be used as evidence of aggression, inverting the burden of proof in practice, if not in legal theory.
Prevention and Mitigation Strategies
There is no foolproof method for avoiding the risk of a false accusation entirely, but several measures can be adopted to build a record of consent and reduce vulnerabilities:
- Explicit and Continuous Communication: The basis of any safe BDSM practice is constant communication. Negotiating limits, desires, safewords and expectations before, during and after scenes is crucial. Keeping records of these negotiations (e-mails, messages, shared diaries) can be useful.
- Documentation of Consent:
  - Relationship/Scene Contracts: Although the legal validity of «BDSM contracts» is debatable in Brazil (they cannot set aside public-order norms), they serve as strong evidence of the parties' intent, of the detailed negotiation of limits and of informed consent. They should be clear, dated, signed and, ideally, notarized (to prove the date and the authenticity of the signatures).
  - Audiovisual Records: Recording (with explicit consent to the recording) the discussions about consent and limits before scenes can be powerful evidence. Recording the scenes themselves is more complex, owing to privacy concerns and potential misuse, but may be considered in specific cases, always with mutually documented consent to the recording. Important: the recording must be made with the other party's knowledge, so as not to constitute a violation of privacy (Art. 5, X, of the Federal Constitution and Art. 20 of the Civil Code).
  - Witnesses: In some BDSM community settings, the presence of trusted third parties during negotiations or even scenes can serve as testimony, although this may alter the couple's intimate dynamic.
- Clear Establishment of Limits and Safewords: Defining and rigorously respecting limits (what is allowed, what is forbidden) and safewords is fundamental. Disregarding a safeword ends the consent for that act.
- Continuous Assessment of Consent: Consent is not a blank check; it must be enthusiastic, ongoing and revocable at any moment. Checking in on the partner's well-being during the scene is essential.
- Discretion and Care with Physical Evidence: Being discreet about the nature of the relationship can prevent misunderstandings from outside. After scenes that leave marks, it is prudent for both partners to be aware and in agreement, perhaps documenting the marks with dated photos and a note on the consensuality of the practice that produced them.
- Preventive Legal Advice: Consulting a lawyer who specializes in family and criminal law, and who is sensitive to alternative relationship dynamics, can provide tailored guidance on the best ways to document consent and on the specific legal risks.
Important Observations
- No documentation replaces the need for real, free, informed and continuous consent.
- Brazilian law protects «physical integrity» and «human dignity». Practices that result in serious injury, or that violate dignity without consent (or with vitiated consent), will be illegal regardless of any prior agreement.
- In the event of an accusation, robust documentation of consent does not guarantee acquittal, but it significantly strengthens the defense, helping to demonstrate the consensual nature of the relationship and of the practices.
- The allegation of future coercion is particularly hard to forestall with documents alone. A consistent history of open communication (WhatsApp/Telegram/e-mails), mutual respect and the absence of dependence or excessive control in the relationship can help frame the dynamic as non-coercive.
- Care with Visible Marks and Serious Injuries: Practices that result in severe bruising or injuries can be read as aggression, even when consented to. Avoiding excess protects not only physical integrity but also forestalls future legal challenges.
What Vitiated Consent Means
In law, consent is vitiated when a person agrees to something but their will is not free or full – that is, the consent exists formally but is defective for some reason.
The Brazilian Civil Code (Arts. 138 to 165) defines several forms of defect of consent. The main ones are:
Error: The person is mistaken about what they are consenting to. (E.g.: the person believes they will take part in light play but is in fact exposed to heavy practices.)
Fraud (dolo): The person is deliberately deceived into accepting something. (E.g.: someone lies about what will happen during the practice.)
Coercion: The person is forced or threatened into consenting. (E.g.: "If you don't accept, I'll leave you" – strong emotional pressure can be regarded as coercion.)
State of danger or lesion: The person accepts something in a situation of extreme need or through abuse of their vulnerability. (E.g.: someone in a very fragile emotional state is induced to accept practices they would normally refuse.)
In the context of BDSM this is even more delicate: even if the person "signed" a contract or said "yes", if they later allege that their consent was given out of fear, deception or psychological pressure, the consent can be deemed vitiated – and therefore legally invalid.
This has two serious implications:
- The crime remains a crime: if there is a defect, the consent is disregarded and the practice can be treated as an ordinary offense (bodily injury, rape, torture, etc.).
- The proof of consent must be solid: it must show that the person was informed, lucid, free and under no kind of coercion.
In short, vitiated consent is consent given formally but in a deceived, forced or pressured manner, rendering it useless for legal purposes.
Conclusion
Couples who practice consensual BDSM in Brazil navigate terrain that demands not only mutual trust and exceptional communication but also a keen awareness of the legal complexities and of the risk of misinterpretation or bad-faith accusations. Although BDSM is a legitimate expression of human sexuality, practicing it in Brazil calls for redoubled responsibility. Keeping clear proof of consent, maintaining open communication and acting with prudence are effective ways to guard against false allegations and to preserve the freedom and safety of everyone involved. While controversial laws such as the Maria da Penha Law are vital for protection against real violence, BDSM practitioners – and men in this context in particular – should take a proactive and prudent stance to mitigate the risks inherent in the potential misinterpretation or instrumentalization of these practices and laws, ensuring that the expression of their consensuality is safeguarded as far as possible.
Important: In Brazil, even with all of this, the Public Prosecutor's Office can press charges for crimes such as serious bodily injury, rape or torture regardless of consent. Prudence in one's practices is therefore fundamental.
Legal Notice: This article is for informational purposes only and does not constitute legal advice. Laws and their interpretation can change, and every situation is unique. Seek guidance from a qualified lawyer to discuss specific cases.
If you enjoyed this article, make a contribution; if you have a point relevant to the article, leave a comment.