-
@ a008def1:57a3564d
2025-04-30 17:52:11A Vision for #GitViaNostr
Git has long been the standard for version control in software development, but over time, we has lost its distributed nature. Originally, Git used open, permissionless email for collaboration, which worked well at scale. However, the rise of GitHub and its centralized pull request (PR) model has shifted the landscape.
Now, we have the opportunity to revive Git's permissionless and distributed nature through Nostr!
We’ve developed tools to facilitate Git collaboration via Nostr, but there are still significant friction that prevents widespread adoption. This article outlines a vision for how we can reduce those barriers and encourage more repositories to embrace this approach.
First, we’ll review our progress so far. Then, we’ll propose a guiding philosophy for our next steps. Finally, we’ll discuss a vision to tackle specific challenges, mainly relating to the role of the Git server and CI/CD.
I am the lead maintainer of ngit and gitworkshop.dev, and I’ve been fortunate to work full-time on this initiative for the past two years, thanks to an OpenSats grant.
How Far We’ve Come
The aim of #GitViaNostr is to liberate discussions around code collaboration from permissioned walled gardens. At the core of this collaboration is the process of proposing and applying changes. That's what we focused on first.
Since Nostr shares characteristics with email, and with NIP34, we’ve adopted similar primitives to those used in the patches-over-email workflow. This is because of their simplicity and that they don’t require contributors to host anything, which adds reliability and makes participation more accessible.
However, the fork-branch-PR-merge workflow is the only model many developers have known, and changing established workflows can be challenging. To address this, we developed a new workflow that balances familiarity, user experience, and alignment with the Nostr protocol: the branch-PR-merge model.
This model is implemented in ngit, which includes a Git plugin that allows users to engage without needing to learn new commands. Additionally, gitworkshop.dev offers a GitHub-like interface for interacting with PRs and issues. We encourage you to try them out using the quick start guide and share your feedback. You can also explore PRs and issues with gitplaza.
For those who prefer the patches-over-email workflow, you can still use that approach with Nostr through gitstr or the
ngit send
andngit list
commands, and explore patches with patch34.The tools are now available to support the core collaboration challenge, but we are still at the beginning of the adoption curve.
Before we dive into the challenges—such as why the Git server setup can be jarring and the possibilities surrounding CI/CD—let’s take a moment to reflect on how we should approach the challenges ahead of us.
Philosophy
Here are some foundational principles I shared a few years ago:
- Let Git be Git
- Let Nostr be Nostr
- Learn from the successes of others
I’d like to add one more:
- Embrace anarchy and resist monolithic development.
Micro Clients FTW
Nostr celebrates simplicity, and we should strive to maintain that. Monolithic developments often lead to unnecessary complexity. Projects like gitworkshop.dev, which aim to cover various aspects of the code collaboration experience, should not stifle innovation.
Just yesterday, the launch of following.space demonstrated how vibe-coded micro clients can make a significant impact. They can be valuable on their own, shape the ecosystem, and help push large and widely used clients to implement features and ideas.
The primitives in NIP34 are straightforward, and if there are any barriers preventing the vibe-coding of a #GitViaNostr app in an afternoon, we should work to eliminate them.
Micro clients should lead the way and explore new workflows, experiences, and models of thinking.
Take kanbanstr.com. It provides excellent project management and organization features that work seamlessly with NIP34 primitives.
From kanban to code snippets, from CI/CD runners to SatShoot—may a thousand flowers bloom, and a thousand more after them.
Friction and Challenges
The Git Server
In #GitViaNostr, maintainers' branches (e.g.,
master
) are hosted on a Git server. Here’s why this approach is beneficial:- Follows the original Git vision and the "let Git be Git" philosophy.
- Super efficient, battle-tested, and compatible with all the ways people use Git (e.g., LFS, shallow cloning).
- Maintains compatibility with related systems without the need for plugins (e.g., for build and deployment).
- Only repository maintainers need write access.
In the original Git model, all users would need to add the Git server as a 'git remote.' However, with ngit, the Git server is hidden behind a Nostr remote, which enables:
- Hiding complexity from contributors and users, so that only maintainers need to know about the Git server component to start using #GitViaNostr.
- Maintainers can easily swap Git servers by updating their announcement event, allowing contributors/users using ngit to automatically switch to the new one.
Challenges with the Git Server
While the Git server model has its advantages, it also presents several challenges:
- Initial Setup: When creating a new repository, maintainers must select a Git server, which can be a jarring experience. Most options come with bloated social collaboration features tied to a centralized PR model, often difficult or impossible to disable.
-
Manual Configuration: New repositories require manual configuration, including adding new maintainers through a browser UI, which can be cumbersome and time-consuming.
-
User Onboarding: Many Git servers require email sign-up or KYC (Know Your Customer) processes, which can be a significant turn-off for new users exploring a decentralized and permissionless alternative to GitHub.
Once the initial setup is complete, the system works well if a reliable Git server is chosen. However, this is a significant "if," as we have become accustomed to the excellent uptime and reliability of GitHub. Even professionally run alternatives like Codeberg can experience downtime, which is frustrating when CI/CD and deployment processes are affected. This problem is exacerbated when self-hosting.
Currently, most repositories on Nostr rely on GitHub as the Git server. While maintainers can change servers without disrupting their contributors, this reliance on a centralized service is not the decentralized dream we aspire to achieve.
Vision for the Git Server
The goal is to transform the Git server from a single point of truth and failure into a component similar to a Nostr relay.
Functionality Already in ngit to Support This
-
State on Nostr: Store the state of branches and tags in a Nostr event, removing reliance on a single server. This validates that the data received has been signed by the maintainer, significantly reducing the trust requirement.
-
Proxy to Multiple Git Servers: Proxy requests to all servers listed in the announcement event, adding redundancy and eliminating the need for any one server to match GitHub's reliability.
Implementation Requirements
To achieve this vision, the Nostr Git server implementation should:
-
Implement the Git Smart HTTP Protocol without authentication (no SSH) and only accept pushes if the reference tip matches the latest state event.
-
Avoid Bloat: There should be no user authentication, no database, no web UI, and no unnecessary features.
-
Automatic Repository Management: Accept or reject new repositories automatically upon the first push based on the content of the repository announcement event referenced in the URL path and its author.
Just as there are many free, paid, and self-hosted relays, there will be a variety of free, zero-step signup options, as well as self-hosted and paid solutions.
Some servers may use a Web of Trust (WoT) to filter out spam, while others might impose bandwidth or repository size limits for free tiers or whitelist specific npubs.
Additionally, some implementations could bundle relay and blossom server functionalities to unify the provision of repository data into a single service. These would likely only accept content related to the stored repositories rather than general social nostr content.
The potential role of CI / CD via nostr DVMs could create the incentives for a market of highly reliable free at the point of use git servers.
This could make onboarding #GitViaNostr repositories as easy as entering a name and selecting from a multi-select list of Git server providers that announce via NIP89.
!(image)[https://image.nostr.build/badedc822995eb18b6d3c4bff0743b12b2e5ac018845ba498ce4aab0727caf6c.jpg]
Git Client in the Browser
Currently, many tasks are performed on a Git server web UI, such as:
- Browsing code, commits, branches, tags, etc.
- Creating and displaying permalinks to specific lines in commits.
- Merging PRs.
- Making small commits and PRs on-the-fly.
Just as nobody goes to the web UI of a relay (e.g., nos.lol) to interact with notes, nobody should need to go to a Git server to interact with repositories. We use the Nostr protocol to interact with Nostr relays, and we should use the Git protocol to interact with Git servers. This situation has evolved due to the centralization of Git servers. Instead of being restricted to the view and experience designed by the server operator, users should be able to choose the user experience that works best for them from a range of clients. To facilitate this, we need a library that lowers the barrier to entry for creating these experiences. This library should not require a full clone of every repository and should not depend on proprietary APIs. As a starting point, I propose wrapping the WASM-compiled gitlib2 library for the web and creating useful functions, such as showing a file, which utilizes clever flags to minimize bandwidth usage (e.g., shallow clone, noblob, etc.).
This approach would not only enhance clients like gitworkshop.dev but also bring forth a vision where Git servers simply run the Git protocol, making vibe coding Git experiences even better.
song
nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 created song with a complementary vision that has shaped how I see the role of the git server. Its a self-hosted, nostr-permissioned git server with a relay baked in. Its currently a WIP and there are some compatability with ngit that we need to work out.
We collaborated on the nostr-permissioning approach now reflected in nip34.
I'm really excited to see how this space evolves.
CI/CD
Most projects require CI/CD, and while this is often bundled with Git hosting solutions, it is currently not smoothly integrated into #GitViaNostr yet. There are many loosely coupled options, such as Jenkins, Travis, CircleCI, etc., that could be integrated with Nostr.
However, the more exciting prospect is to use DVMs (Data Vending Machines).
DVMs for CI/CD
Nostr Data Vending Machines (DVMs) can provide a marketplace of CI/CD task runners with Cashu for micro payments.
There are various trust levels in CI/CD tasks:
- Tasks with no secrets eg. tests.
- Tasks using updatable secrets eg. API keys.
- Unverifiable builds and steps that sign with Android, Nostr, or PGP keys.
DVMs allow tasks to be kicked off with specific providers using a Cashu token as payment.
It might be suitable for some high-compute and easily verifiable tasks to be run by the cheapest available providers. Medium trust tasks could be run by providers with a good reputation, while high trust tasks could be run on self-hosted runners.
Job requests, status, and results all get published to Nostr for display in Git-focused Nostr clients.
Jobs could be triggered manually, or self-hosted runners could be configured to watch a Nostr repository and kick off jobs using their own runners without payment.
But I'm most excited about the prospect of Watcher Agents.
CI/CD Watcher Agents
AI agents empowered with a NIP60 Cashu wallet can run tasks based on activity, such as a push to master or a new PR, using the most suitable available DVM runner that meets the user's criteria. To keep them running, anyone could top up their NIP60 Cashu wallet; otherwise, the watcher turns off when the funds run out. It could be users, maintainers, or anyone interested in helping the project who could top up the Watcher Agent's balance.
As aluded to earlier, part of building a reputation as a CI/CD provider could involve running reliable hosting (Git server, relay, and blossom server) for all FOSS Nostr Git repositories.
This provides a sustainable revenue model for hosting providers and creates incentives for many free-at-the-point-of-use hosting providers. This, in turn, would allow one-click Nostr repository creation workflows, instantly hosted by many different providers.
Progress to Date
nostr:npub1hw6amg8p24ne08c9gdq8hhpqx0t0pwanpae9z25crn7m9uy7yarse465gr and nostr:npub16ux4qzg4qjue95vr3q327fzata4n594c9kgh4jmeyn80v8k54nhqg6lra7 have been working on a runner that uses GitHub Actions YAML syntax (using act) for the dvm-cicd-runner and takes Cashu payment. You can see example runs on GitWorkshop. It currently takes testnuts, doesn't give any change, and the schema will likely change.
Note: The actions tab on GitWorkshop is currently available on all repositories if you turn on experimental mode (under settings in the user menu).
It's a work in progress, and we expect the format and schema to evolve.
Easy Web App Deployment
For those disapointed not to find a 'Nostr' button to import a git repository to Vercel menu: take heart, they made it easy. vercel.com_import_options.png there is a vercel cli that can be easily called in CI / CD jobs to kick of deployments. Not all managed solutions for web app deployment (eg. netlify) make it that easy.
Many More Opportunities
Large Patches via Blossom
I would be remiss not to mention the large patch problem. Some patches are too big to fit into Nostr events. Blossom is perfect for this, as it allows these larger patches to be included in a blossom file and referenced in a new patch kind.
Enhancing the #GitViaNostr Experience
Beyond the large patch issue, there are numerous opportunities to enhance the #GitViaNostr ecosystem. We can focus on improving browsing, discovery, social and notifications. Receiving notifications on daily driver Nostr apps is one of the killer features of Nostr. However, we must ensure that Git-related notifications are easily reviewable, so we don’t miss any critical updates.
We need to develop tools that cater to our curiosity—tools that enable us to discover and follow projects, engage in discussions that pique our interest, and stay informed about developments relevant to our work.
Additionally, we should not overlook the importance of robust search capabilities and tools that facilitate migrations.
Concluding Thoughts
The design space is vast. Its an exciting time to be working on freedom tech. I encourage everyone to contribute their ideas and creativity and get vibe-coding!
I welcome your honest feedback on this vision and any suggestions you might have. Your insights are invaluable as we collaborate to shape the future of #GitViaNostr. Onward.
Contributions
To conclude, I want to acknowledge some the individuals who have made recent code contributions related to #GitViaNostr:
nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 (gitstr, song, patch34), nostr:npub1useke4f9maul5nf67dj0m9sq6jcsmnjzzk4ycvldwl4qss35fvgqjdk5ks (gitplaza)
nostr:npub1elta7cneng3w8p9y4dw633qzdjr4kyvaparuyuttyrx6e8xp7xnq32cume (ngit contributions, git-remote-blossom),nostr:npub16p8v7varqwjes5hak6q7mz6pygqm4pwc6gve4mrned3xs8tz42gq7kfhdw (SatShoot, Flotilla-Budabit), nostr:npub1ehhfg09mr8z34wz85ek46a6rww4f7c7jsujxhdvmpqnl5hnrwsqq2szjqv (Flotilla-Budabit, Nostr Git Extension), nostr:npub1ahaz04ya9tehace3uy39hdhdryfvdkve9qdndkqp3tvehs6h8s5slq45hy (gnostr and experiments), and others.
nostr:npub1uplxcy63up7gx7cladkrvfqh834n7ylyp46l3e8t660l7peec8rsd2sfek (git-remote-nostr)
Project Management nostr:npub1ltx67888tz7lqnxlrg06x234vjnq349tcfyp52r0lstclp548mcqnuz40t (kanbanstr) Code Snippets nostr:npub1ygzj9skr9val9yqxkf67yf9jshtyhvvl0x76jp5er09nsc0p3j6qr260k2 (nodebin.io) nostr:npub1r0rs5q2gk0e3dk3nlc7gnu378ec6cnlenqp8a3cjhyzu6f8k5sgs4sq9ac (snipsnip.dev)
CI / CD nostr:npub16ux4qzg4qjue95vr3q327fzata4n594c9kgh4jmeyn80v8k54nhqg6lra7 nostr:npub1hw6amg8p24ne08c9gdq8hhpqx0t0pwanpae9z25crn7m9uy7yarse465gr
and for their nostr:npub1c03rad0r6q833vh57kyd3ndu2jry30nkr0wepqfpsm05vq7he25slryrnw nostr:npub1qqqqqq2stely3ynsgm5mh2nj3v0nk5gjyl3zqrzh34hxhvx806usxmln03 and nostr:npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z for their testing, feedback, ideas and encouragement.
Thank you for your support and collaboration! Let me know if I've missed you.
-
@ 6be5cc06:5259daf0
2025-04-28 01:05:49Eu reconheço que Deus, e somente Deus, é o soberano legítimo sobre todas as coisas. Nenhum homem, nenhuma instituição, nenhum parlamento tem autoridade para usurpar aquilo que pertence ao Rei dos reis. O Estado moderno, com sua pretensão totalizante, é uma farsa blasfema diante do trono de Cristo. Não aceito outro senhor.
A Lei que me guia não é a ditada por burocratas, mas a gravada por Deus na própria natureza humana. A razão, quando iluminada pela fé, é suficiente para discernir o que é justo. Rejeito as leis arbitrárias que pretendem legitimar o roubo, o assassinato ou a escravidão em nome da ordem. A justiça não nasce do decreto, mas da verdade.
Acredito firmemente na propriedade privada como extensão da própria pessoa. Aquilo que é fruto do meu trabalho, da minha criatividade, da minha dedicação, dos dons a mim concedidos por Deus, pertence a mim por direito natural. Ninguém pode legitimamente tomar o que é meu sem meu consentimento. Todo imposto é uma agressão; toda expropriação, um roubo. Defendo a liberdade econômica não por idolatria ao mercado, mas porque a liberdade é condição necessária para a virtude.
Assumo o Princípio da Não Agressão como o mínimo ético que devo respeitar. Não iniciarei o uso da força contra ninguém, nem contra sua propriedade. Exijo o mesmo de todos. Mas sei que isso não basta. O PNA delimita o que não devo fazer — ele não me ensina o que devo ser. A liberdade exterior só é boa se houver liberdade interior. O mercado pode ser livre, mas se a alma estiver escravizada pelo vício, o colapso será inevitável.
Por isso, não me basta a ética negativa. Creio que uma sociedade justa precisa de valores positivos: honra, responsabilidade, compaixão, respeito, fidelidade à verdade. Sem isso, mesmo uma sociedade que respeite formalmente os direitos individuais apodrecerá por dentro. Um povo que ama o lucro, mas despreza a verdade, que celebra a liberdade mas esquece a justiça, está se preparando para ser dominado. Trocará um déspota visível por mil tiranias invisíveis — o hedonismo, o consumismo, a mentira, o medo.
Não aceito a falsa caridade feita com o dinheiro tomado à força. A verdadeira generosidade nasce do coração livre, não da coerção institucional. Obrigar alguém a ajudar o próximo destrói tanto a liberdade quanto a virtude. Só há mérito onde há escolha. A caridade que nasce do amor é redentora; a que nasce do fisco é propaganda.
O Estado moderno é um ídolo. Ele promete segurança, mas entrega servidão. Promete justiça, mas entrega privilégios. Disfarça a opressão com linguagem técnica, legal e democrática. Mas por trás de suas máscaras, vejo apenas a velha serpente. Um parasita que se alimenta do trabalho alheio e manipula consciências para se perpetuar.
Resistir não é apenas um direito, é um dever. Obedecer a Deus antes que aos homens — essa é a minha regra. O poder se volta contra a verdade, mas minha lealdade pertence a quem criou o céu e a terra. A tirania não se combate com outro tirano, mas com a desobediência firme e pacífica dos que amam a justiça.
Não acredito em utopias. Desejo uma ordem natural, orgânica, enraizada no voluntarismo. Uma sociedade que se construa de baixo para cima: a partir da família, da comunidade local, da tradição e da fé. Não quero uma máquina que planeje a vida alheia, mas um tecido de relações voluntárias onde a liberdade floresça à sombra da cruz.
Desejo, sim, o reinado social de Cristo. Não por imposição, mas por convicção. Que Ele reine nos corações, nas famílias, nas ruas e nos contratos. Que a fé guie a razão e a razão ilumine a vida. Que a liberdade seja meio para a santidade — não um fim em si. E que, livres do jugo do Leviatã, sejamos servos apenas do Senhor.
-
@ a39d19ec:3d88f61e
2025-04-22 12:44:42Die Debatte um Migration, Grenzsicherung und Abschiebungen wird in Deutschland meist emotional geführt. Wer fordert, dass illegale Einwanderer abgeschoben werden, sieht sich nicht selten dem Vorwurf des Rassismus ausgesetzt. Doch dieser Vorwurf ist nicht nur sachlich unbegründet, sondern verkehrt die Realität ins Gegenteil: Tatsächlich sind es gerade diejenigen, die hinter jeder Forderung nach Rechtssicherheit eine rassistische Motivation vermuten, die selbst in erster Linie nach Hautfarbe, Herkunft oder Nationalität urteilen.
Das Recht steht über Emotionen
Deutschland ist ein Rechtsstaat. Das bedeutet, dass Regeln nicht nach Bauchgefühl oder politischer Stimmungslage ausgelegt werden können, sondern auf klaren gesetzlichen Grundlagen beruhen müssen. Einer dieser Grundsätze ist in Artikel 16a des Grundgesetzes verankert. Dort heißt es:
„Auf Absatz 1 [Asylrecht] kann sich nicht berufen, wer aus einem Mitgliedstaat der Europäischen Gemeinschaften oder aus einem anderen Drittstaat einreist, in dem die Anwendung des Abkommens über die Rechtsstellung der Flüchtlinge und der Europäischen Menschenrechtskonvention sichergestellt ist.“
Das bedeutet, dass jeder, der über sichere Drittstaaten nach Deutschland einreist, keinen Anspruch auf Asyl hat. Wer dennoch bleibt, hält sich illegal im Land auf und unterliegt den geltenden Regelungen zur Rückführung. Die Forderung nach Abschiebungen ist daher nichts anderes als die Forderung nach der Einhaltung von Recht und Gesetz.
Die Umkehrung des Rassismusbegriffs
Wer einerseits behauptet, dass das deutsche Asyl- und Aufenthaltsrecht strikt durchgesetzt werden soll, und andererseits nicht nach Herkunft oder Hautfarbe unterscheidet, handelt wertneutral. Diejenigen jedoch, die in einer solchen Forderung nach Rechtsstaatlichkeit einen rassistischen Unterton sehen, projizieren ihre eigenen Denkmuster auf andere: Sie unterstellen, dass die Debatte ausschließlich entlang ethnischer, rassistischer oder nationaler Kriterien geführt wird – und genau das ist eine rassistische Denkweise.
Jemand, der illegale Einwanderung kritisiert, tut dies nicht, weil ihn die Herkunft der Menschen interessiert, sondern weil er den Rechtsstaat respektiert. Hingegen erkennt jemand, der hinter dieser Kritik Rassismus wittert, offenbar in erster Linie die „Rasse“ oder Herkunft der betreffenden Personen und reduziert sie darauf.
Finanzielle Belastung statt ideologischer Debatte
Neben der rechtlichen gibt es auch eine ökonomische Komponente. Der deutsche Wohlfahrtsstaat basiert auf einem Solidarprinzip: Die Bürger zahlen in das System ein, um sich gegenseitig in schwierigen Zeiten zu unterstützen. Dieser Wohlstand wurde über Generationen hinweg von denjenigen erarbeitet, die hier seit langem leben. Die Priorität liegt daher darauf, die vorhandenen Mittel zuerst unter denjenigen zu verteilen, die durch Steuern, Sozialabgaben und Arbeit zum Erhalt dieses Systems beitragen – nicht unter denen, die sich durch illegale Einreise und fehlende wirtschaftliche Eigenleistung in das System begeben.
Das ist keine ideologische Frage, sondern eine rein wirtschaftliche Abwägung. Ein Sozialsystem kann nur dann nachhaltig funktionieren, wenn es nicht unbegrenzt belastet wird. Würde Deutschland keine klaren Regeln zur Einwanderung und Abschiebung haben, würde dies unweigerlich zur Überlastung des Sozialstaates führen – mit negativen Konsequenzen für alle.
Sozialpatriotismus
Ein weiterer wichtiger Aspekt ist der Schutz der Arbeitsleistung jener Generationen, die Deutschland nach dem Zweiten Weltkrieg mühsam wieder aufgebaut haben. Während oft betont wird, dass die Deutschen moralisch kein Erbe aus der Zeit vor 1945 beanspruchen dürfen – außer der Verantwortung für den Holocaust –, ist es umso bedeutsamer, das neue Erbe nach 1945 zu respektieren, das auf Fleiß, Disziplin und harter Arbeit beruht. Der Wiederaufbau war eine kollektive Leistung deutscher Menschen, deren Früchte nicht bedenkenlos verteilt werden dürfen, sondern vorrangig denjenigen zugutekommen sollten, die dieses Fundament mitgeschaffen oder es über Generationen mitgetragen haben.
Rechtstaatlichkeit ist nicht verhandelbar
Wer sich für eine konsequente Abschiebepraxis ausspricht, tut dies nicht aus rassistischen Motiven, sondern aus Respekt vor der Rechtsstaatlichkeit und den wirtschaftlichen Grundlagen des Landes. Der Vorwurf des Rassismus in diesem Kontext ist daher nicht nur falsch, sondern entlarvt eine selektive Wahrnehmung nach rassistischen Merkmalen bei denjenigen, die ihn erheben.
-
@ 52b4a076:e7fad8bd
2025-04-28 00:48:57I have been recently building NFDB, a new relay DB. This post is meant as a short overview.
Regular relays have challenges
Current relay software have significant challenges, which I have experienced when hosting Nostr.land: - Scalability is only supported by adding full replicas, which does not scale to large relays. - Most relays use slow databases and are not optimized for large scale usage. - Search is near-impossible to implement on standard relays. - Privacy features such as NIP-42 are lacking. - Regular DB maintenance tasks on normal relays require extended downtime. - Fault-tolerance is implemented, if any, using a load balancer, which is limited. - Personalization and advanced filtering is not possible. - Local caching is not supported.
NFDB: A scalable database for large relays
NFDB is a new database meant for medium-large scale relays, built on FoundationDB that provides: - Near-unlimited scalability - Extended fault tolerance - Instant loading - Better search - Better personalization - and more.
Search
NFDB has extended search capabilities including: - Semantic search: Search for meaning, not words. - Interest-based search: Highlight content you care about. - Multi-faceted queries: Easily filter by topic, author group, keywords, and more at the same time. - Wide support for event kinds, including users, articles, etc.
Personalization
NFDB allows significant personalization: - Customized algorithms: Be your own algorithm. - Spam filtering: Filter content to your WoT, and use advanced spam filters. - Topic mutes: Mute topics, not keywords. - Media filtering: With Nostr.build, you will be able to filter NSFW and other content - Low data mode: Block notes that use high amounts of cellular data. - and more
Other
NFDB has support for many other features such as: - NIP-42: Protect your privacy with private drafts and DMs - Microrelays: Easily deploy your own personal microrelay - Containers: Dedicated, fast storage for discoverability events such as relay lists
Calcite: A local microrelay database
Calcite is a lightweight, local version of NFDB that is meant for microrelays and caching, meant for thousands of personal microrelays.
Calcite HA is an additional layer that allows live migration and relay failover in under 30 seconds, providing higher availability compared to current relays with greater simplicity. Calcite HA is enabled in all Calcite deployments.
For zero-downtime, NFDB is recommended.
Noswhere SmartCache
Relays are fixed in one location, but users can be anywhere.
Noswhere SmartCache is a CDN for relays that dynamically caches data on edge servers closest to you, allowing: - Multiple regions around the world - Improved throughput and performance - Faster loading times
routerd
routerd
is a custom load-balancer optimized for Nostr relays, integrated with SmartCache.routerd
is specifically integrated with NFDB and Calcite HA to provide fast failover and high performance.Ending notes
NFDB is planned to be deployed to Nostr.land in the coming weeks.
A lot more is to come. 👀️️️️️️
-
@ a39d19ec:3d88f61e
2025-03-18 17:16:50Nun da das deutsche Bundesregime den Ruin Deutschlands beschlossen hat, der sehr wahrscheinlich mit dem Werkzeug des Geld druckens "finanziert" wird, kamen mir so viele Gedanken zur Geldmengenausweitung, dass ich diese für einmal niedergeschrieben habe.
Die Ausweitung der Geldmenge führt aus klassischer wirtschaftlicher Sicht immer zu Preissteigerungen, weil mehr Geld im Umlauf auf eine begrenzte Menge an Gütern trifft. Dies lässt sich in mehreren Schritten analysieren:
1. Quantitätstheorie des Geldes
Die klassische Gleichung der Quantitätstheorie des Geldes lautet:
M • V = P • Y
wobei:
- M die Geldmenge ist,
- V die Umlaufgeschwindigkeit des Geldes,
- P das Preisniveau,
- Y die reale Wirtschaftsleistung (BIP).Wenn M steigt und V sowie Y konstant bleiben, muss P steigen – also Inflation entstehen.
2. Gütermenge bleibt begrenzt
Die Menge an real produzierten Gütern und Dienstleistungen wächst meist nur langsam im Vergleich zur Ausweitung der Geldmenge. Wenn die Geldmenge schneller steigt als die Produktionsgütermenge, führt dies dazu, dass mehr Geld für die gleiche Menge an Waren zur Verfügung steht – die Preise steigen.
3. Erwartungseffekte und Spekulation
Wenn Unternehmen und Haushalte erwarten, dass mehr Geld im Umlauf ist, da eine zentrale Planung es so wollte, können sie steigende Preise antizipieren. Unternehmen erhöhen ihre Preise vorab, und Arbeitnehmer fordern höhere Löhne. Dies kann eine sich selbst verstärkende Spirale auslösen.
4. Internationale Perspektive
Eine erhöhte Geldmenge kann die Währung abwerten, wenn andere Länder ihre Geldpolitik stabil halten. Eine schwächere Währung macht Importe teurer, was wiederum Preissteigerungen antreibt.
5. Kritik an der reinen Geldmengen-Theorie
Der Vollständigkeit halber muss erwähnt werden, dass die meisten modernen Ökonomen im Staatsauftrag argumentieren, dass Inflation nicht nur von der Geldmenge abhängt, sondern auch von der Nachfrage nach Geld (z. B. in einer Wirtschaftskrise). Dennoch zeigt die historische Erfahrung, dass eine unkontrollierte Geldmengenausweitung langfristig immer zu Preissteigerungen führt, wie etwa in der Hyperinflation der Weimarer Republik oder in Simbabwe.
-
@ 0d97beae:c5274a14
2025-01-11 16:52:08This article hopes to complement the article by Lyn Alden on YouTube: https://www.youtube.com/watch?v=jk_HWmmwiAs
The reason why we have broken money
Before the invention of key technologies such as the printing press and electronic communications, even such as those as early as morse code transmitters, gold had won the competition for best medium of money around the world.
In fact, it was not just gold by itself that became money, rulers and world leaders developed coins in order to help the economy grow. Gold nuggets were not as easy to transact with as coins with specific imprints and denominated sizes.
However, these modern technologies created massive efficiencies that allowed us to communicate and perform services more efficiently and much faster, yet the medium of money could not benefit from these advancements. Gold was heavy, slow and expensive to move globally, even though requesting and performing services globally did not have this limitation anymore.
Banks took initiative and created derivatives of gold: paper and electronic money; these new currencies allowed the economy to continue to grow and evolve, but it was not without its dark side. Today, no currency is denominated in gold at all, money is backed by nothing and its inherent value, the paper it is printed on, is worthless too.
Banks and governments eventually transitioned from a money derivative to a system of debt that could be co-opted and controlled for political and personal reasons. Our money today is broken and is the cause of more expensive, poorer quality goods in the economy, a larger and ever growing wealth gap, and many of the follow-on problems that have come with it.
Bitcoin overcomes the "transfer of hard money" problem
Just like gold coins were created by man, Bitcoin too is a technology created by man. Bitcoin, however is a much more profound invention, possibly more of a discovery than an invention in fact. Bitcoin has proven to be unbreakable, incorruptible and has upheld its ability to keep its units scarce, inalienable and counterfeit proof through the nature of its own design.
Since Bitcoin is a digital technology, it can be transferred across international borders almost as quickly as information itself. It therefore severely reduces the need for a derivative to be used to represent money to facilitate digital trade. This means that as the currency we use today continues to fare poorly for many people, bitcoin will continue to stand out as hard money, that just so happens to work as well, functionally, along side it.
Bitcoin will also always be available to anyone who wishes to earn it directly; even China is unable to restrict its citizens from accessing it. The dollar has traditionally become the currency for people who discover that their local currency is unsustainable. Even when the dollar has become illegal to use, it is simply used privately and unofficially. However, because bitcoin does not require you to trade it at a bank in order to use it across borders and across the web, Bitcoin will continue to be a viable escape hatch until we one day hit some critical mass where the world has simply adopted Bitcoin globally and everyone else must adopt it to survive.
Bitcoin has not yet proven that it can support the world at scale. However it can only be tested through real adoption, and just as gold coins were developed to help gold scale, tools will be developed to help overcome problems as they arise; ideally without the need for another derivative, but if necessary, hopefully with one that is more neutral and less corruptible than the derivatives used to represent gold.
Bitcoin blurs the line between commodity and technology
Bitcoin is a technology, it is a tool that requires human involvement to function, however it surprisingly does not allow for any concentration of power. Anyone can help to facilitate Bitcoin's operations, but no one can take control of its behaviour, its reach, or its prioritisation, as it operates autonomously based on a pre-determined, neutral set of rules.
At the same time, its built-in incentive mechanism ensures that people do not have to operate bitcoin out of the good of their heart. Even though the system cannot be co-opted holistically, It will not stop operating while there are people motivated to trade their time and resources to keep it running and earn from others' transaction fees. Although it requires humans to operate it, it remains both neutral and sustainable.
Never before have we developed or discovered a technology that could not be co-opted and used by one person or faction against another. Due to this nature, Bitcoin's units are often described as a commodity; they cannot be usurped or virtually cloned, and they cannot be affected by political biases.
The dangers of derivatives
A derivative is something created, designed or developed to represent another thing in order to solve a particular complication or problem. For example, paper and electronic money was once a derivative of gold.
In the case of Bitcoin, if you cannot link your units of bitcoin to an "address" that you personally hold a cryptographically secure key to, then you very likely have a derivative of bitcoin, not bitcoin itself. If you buy bitcoin on an online exchange and do not withdraw the bitcoin to a wallet that you control, then you legally own an electronic derivative of bitcoin.
Bitcoin is a new technology. It will have a learning curve and it will take time for humanity to learn how to comprehend, authenticate and take control of bitcoin collectively. Having said that, many people all over the world are already using and relying on Bitcoin natively. For many, it will require for people to find the need or a desire for a neutral money like bitcoin, and to have been burned by derivatives of it, before they start to understand the difference between the two. Eventually, it will become an essential part of what we regard as common sense.
Learn for yourself
If you wish to learn more about how to handle bitcoin and avoid derivatives, you can start by searching online for tutorials about "Bitcoin self custody".
There are many options available, some more practical for you, and some more practical for others. Don't spend too much time trying to find the perfect solution; practice and learn. You may make mistakes along the way, so be careful not to experiment with large amounts of your bitcoin as you explore new ideas and technologies along the way. This is similar to learning anything, like riding a bicycle; you are sure to fall a few times, scuff the frame, so don't buy a high performance racing bike while you're still learning to balance.
-
@ c631e267:c2b78d3e
2025-04-25 20:06:24Die Wahrheit verletzt tiefer als jede Beleidigung. \ Marquis de Sade
Sagen Sie niemals «Terroristin B.», «Schwachkopf H.», «korrupter Drecksack S.» oder «Meinungsfreiheitshasserin F.» und verkneifen Sie sich Memes, denn so etwas könnte Ihnen als Beleidigung oder Verleumdung ausgelegt werden und rechtliche Konsequenzen haben. Auch mit einer Frau M.-A. S.-Z. ist in dieser Beziehung nicht zu spaßen, sie gehört zu den Top-Anzeigenstellern.
«Politikerbeleidigung» als Straftatbestand wurde 2021 im Kampf gegen «Rechtsextremismus und Hasskriminalität» in Deutschland eingeführt, damals noch unter der Regierung Merkel. Im Gesetz nicht festgehalten ist die Unterscheidung zwischen schlechter Hetze und guter Hetze – trotzdem ist das gängige Praxis, wie der Titel fast schon nahelegt.
So dürfen Sie als Politikerin heute den Tesla als «Nazi-Auto» bezeichnen und dies ausdrücklich auf den Firmengründer Elon Musk und dessen «rechtsextreme Positionen» beziehen, welche Sie nicht einmal belegen müssen. [1] Vielleicht ernten Sie Proteste, jedoch vorrangig wegen der «gut bezahlten, unbefristeten Arbeitsplätze» in Brandenburg. Ihren Tweet hat die Berliner Senatorin Cansel Kiziltepe inzwischen offenbar dennoch gelöscht.
Dass es um die Meinungs- und Pressefreiheit in der Bundesrepublik nicht mehr allzu gut bestellt ist, befürchtet man inzwischen auch schon im Ausland. Der Fall des Journalisten David Bendels, der kürzlich wegen eines Faeser-Memes zu sieben Monaten Haft auf Bewährung verurteilt wurde, führte in diversen Medien zu Empörung. Die Welt versteckte ihre Kritik mit dem Titel «Ein Urteil wie aus einer Diktatur» hinter einer Bezahlschranke.
Unschöne, heutzutage vielleicht strafbare Kommentare würden mir auch zu einigen anderen Themen und Akteuren einfallen. Ein Kandidat wäre der deutsche Bundesgesundheitsminister (ja, er ist es tatsächlich immer noch). Während sich in den USA auf dem Gebiet etwas bewegt und zum Beispiel Robert F. Kennedy Jr. will, dass die Gesundheitsbehörde (CDC) keine Covid-Impfungen für Kinder mehr empfiehlt, möchte Karl Lauterbach vor allem das Corona-Lügengebäude vor dem Einsturz bewahren.
«Ich habe nie geglaubt, dass die Impfungen nebenwirkungsfrei sind», sagte Lauterbach jüngst der ZDF-Journalistin Sarah Tacke. Das steht in krassem Widerspruch zu seiner früher verbreiteten Behauptung, die Gen-Injektionen hätten keine Nebenwirkungen. Damit entlarvt er sich selbst als Lügner. Die Bezeichnung ist absolut berechtigt, dieser Mann dürfte keinerlei politische Verantwortung tragen und das Verhalten verlangt nach einer rechtlichen Überprüfung. Leider ist ja die Justiz anderweitig beschäftigt und hat außerdem selbst keine weiße Weste.
Obendrein kämpfte der Herr Minister für eine allgemeine Impfpflicht. Er beschwor dabei das Schließen einer «Impflücke», wie es die Weltgesundheitsorganisation – die «wegen Trump» in finanziellen Schwierigkeiten steckt – bis heute tut. Die WHO lässt aktuell ihre «Europäische Impfwoche» propagieren, bei der interessanterweise von Covid nicht mehr groß die Rede ist.
Einen «Klima-Leugner» würden manche wohl Nir Shaviv nennen, das ist ja nicht strafbar. Der Astrophysiker weist nämlich die Behauptung von einer Klimakrise zurück. Gemäß seiner Forschung ist mindestens die Hälfte der Erderwärmung nicht auf menschliche Emissionen, sondern auf Veränderungen im Sonnenverhalten zurückzuführen.
Das passt vielleicht auch den «Klima-Hysterikern» der britischen Regierung ins Konzept, die gerade Experimente zur Verdunkelung der Sonne angekündigt haben. Produzenten von Kunstfleisch oder Betreiber von Insektenfarmen würden dagegen vermutlich die Geschichte vom fatalen CO2 bevorzugen. Ihnen würde es besser passen, wenn der verantwortungsvolle Erdenbürger sein Verhalten gründlich ändern müsste.
In unserer völlig verkehrten Welt, in der praktisch jede Verlautbarung außerhalb der abgesegneten Narrative potenziell strafbar sein kann, gehört fast schon Mut dazu, Dinge offen anzusprechen. Im «besten Deutschland aller Zeiten» glaubten letztes Jahr nur noch 40 Prozent der Menschen, ihre Meinung frei äußern zu können. Das ist ein Armutszeugnis, und es sieht nicht gerade nach Besserung aus. Umso wichtiger ist es, dagegen anzugehen.
[Titelbild: Pixabay]
--- Quellen: ---
[1] Zur Orientierung wenigstens ein paar Hinweise zur NS-Vergangenheit deutscher Automobilhersteller:
- Volkswagen
- Porsche
- Daimler-Benz
- BMW
- Audi
- Opel
- Heute: «Auto-Werke für die Rüstung? Rheinmetall prüft Übernahmen»
Dieser Beitrag wurde mit dem Pareto-Client geschrieben und ist zuerst auf Transition News erschienen.
-
@ 30ceb64e:7f08bdf5
2025-04-26 20:33:30Status: Draft
Author: TheWildHustleAbstract
This NIP defines a framework for storing and sharing health and fitness profile data on Nostr. It establishes a set of standardized event kinds for individual health metrics, allowing applications to selectively access specific health information while preserving user control and privacy.
In this framework exists - NIP-101h.1 Weight using kind 1351 - NIP-101h.2 Height using kind 1352 - NIP-101h.3 Age using kind 1353 - NIP-101h.4 Gender using kind 1354 - NIP-101h.5 Fitness Level using kind 1355
Motivation
I want to build and support an ecosystem of health and fitness related nostr clients that have the ability to share and utilize a bunch of specific interoperable health metrics.
- Selective access - Applications can access only the data they need
- User control - Users can choose which metrics to share
- Interoperability - Different health applications can share data
- Privacy - Sensitive health information can be managed independently
Specification
Kind Number Range
Health profile metrics use the kind number range 1351-1399:
| Kind | Metric | | --------- | ---------------------------------- | | 1351 | Weight | | 1352 | Height | | 1353 | Age | | 1354 | Gender | | 1355 | Fitness Level | | 1356-1399 | Reserved for future health metrics |
Common Structure
All health metric events SHOULD follow these guidelines:
- The content field contains the primary value of the metric
- Required tags:
['t', 'health']
- For categorizing as health data['t', metric-specific-tag]
- For identifying the specific metric['unit', unit-of-measurement]
- When applicable- Optional tags:
['converted_value', value, unit]
- For providing alternative unit measurements['timestamp', ISO8601-date]
- When the metric was measured['source', application-name]
- The source of the measurement
Unit Handling
Health metrics often have multiple ways to be measured. To ensure interoperability:
- Where multiple units are possible, one standard unit SHOULD be chosen as canonical
- When using non-standard units, a
converted_value
tag SHOULD be included with the canonical unit - Both the original and converted values should be provided for maximum compatibility
Client Implementation Guidelines
Clients implementing this NIP SHOULD:
- Allow users to explicitly choose which metrics to publish
- Support reading health metrics from other users when appropriate permissions exist
- Support updating metrics with new values over time
- Preserve tags they don't understand for future compatibility
- Support at least the canonical unit for each metric
Extensions
New health metrics can be proposed as extensions to this NIP using the format:
- NIP-101h.X where X is the metric number
Each extension MUST specify: - A unique kind number in the range 1351-1399 - The content format and meaning - Required and optional tags - Examples of valid events
Privacy Considerations
Health data is sensitive personal information. Clients implementing this NIP SHOULD:
- Make it clear to users when health data is being published
- Consider incorporating NIP-44 encryption for sensitive metrics
- Allow users to selectively share metrics with specific individuals
- Provide easy ways to delete previously published health data
NIP-101h.1: Weight
Description
This NIP defines the format for storing and sharing weight data on Nostr.
Event Kind: 1351
Content
The content field MUST contain the numeric weight value as a string.
Required Tags
- ['unit', 'kg' or 'lb'] - Unit of measurement
- ['t', 'health'] - Categorization tag
- ['t', 'weight'] - Specific metric tag
Optional Tags
- ['converted_value', value, unit] - Provides the weight in alternative units for interoperability
- ['timestamp', ISO8601 date] - When the weight was measured
Examples
json { "kind": 1351, "content": "70", "tags": [ ["unit", "kg"], ["t", "health"], ["t", "weight"] ] }
json { "kind": 1351, "content": "154", "tags": [ ["unit", "lb"], ["t", "health"], ["t", "weight"], ["converted_value", "69.85", "kg"] ] }
NIP-101h.2: Height
Status: Draft
Description
This NIP defines the format for storing and sharing height data on Nostr.
Event Kind: 1352
Content
The content field can use two formats: - For metric height: A string containing the numeric height value in centimeters (cm) - For imperial height: A JSON string with feet and inches properties
Required Tags
['t', 'health']
- Categorization tag['t', 'height']
- Specific metric tag['unit', 'cm' or 'imperial']
- Unit of measurement
Optional Tags
['converted_value', value, 'cm']
- Provides height in centimeters for interoperability when imperial is used['timestamp', ISO8601-date]
- When the height was measured
Examples
```jsx // Example 1: Metric height Apply to App.jsx
// Example 2: Imperial height with conversion Apply to App.jsx ```
Implementation Notes
- Centimeters (cm) is the canonical unit for height interoperability
- When using imperial units, a conversion to centimeters SHOULD be provided
- Height values SHOULD be positive integers
- For maximum compatibility, clients SHOULD support both formats
NIP-101h.3: Age
Status: Draft
Description
This NIP defines the format for storing and sharing age data on Nostr.
Event Kind: 1353
Content
The content field MUST contain the numeric age value as a string.
Required Tags
['unit', 'years']
- Unit of measurement['t', 'health']
- Categorization tag['t', 'age']
- Specific metric tag
Optional Tags
['timestamp', ISO8601-date]
- When the age was recorded['dob', ISO8601-date]
- Date of birth (if the user chooses to share it)
Examples
```jsx // Example 1: Basic age Apply to App.jsx
// Example 2: Age with DOB Apply to App.jsx ```
Implementation Notes
- Age SHOULD be represented as a positive integer
- For privacy reasons, date of birth (dob) is optional
- Clients SHOULD consider updating age automatically if date of birth is known
- Age can be a sensitive metric and clients may want to consider encrypting this data
NIP-101h.4: Gender
Status: Draft
Description
This NIP defines the format for storing and sharing gender data on Nostr.
Event Kind: 1354
Content
The content field contains a string representing the user's gender.
Required Tags
['t', 'health']
- Categorization tag['t', 'gender']
- Specific metric tag
Optional Tags
['timestamp', ISO8601-date]
- When the gender was recorded['preferred_pronouns', string]
- User's preferred pronouns
Common Values
While any string value is permitted, the following common values are recommended for interoperability: - male - female - non-binary - other - prefer-not-to-say
Examples
```jsx // Example 1: Basic gender Apply to App.jsx
// Example 2: Gender with pronouns Apply to App.jsx ```
Implementation Notes
- Clients SHOULD allow free-form input for gender
- For maximum compatibility, clients SHOULD support the common values
- Gender is a sensitive personal attribute and clients SHOULD consider appropriate privacy controls
- Applications focusing on health metrics should be respectful of gender diversity
NIP-101h.5: Fitness Level
Status: Draft
Description
This NIP defines the format for storing and sharing fitness level data on Nostr.
Event Kind: 1355
Content
The content field contains a string representing the user's fitness level.
Required Tags
['t', 'health']
- Categorization tag['t', 'fitness']
- Fitness category tag['t', 'level']
- Specific metric tag
Optional Tags
['timestamp', ISO8601-date]
- When the fitness level was recorded['activity', activity-type]
- Specific activity the fitness level relates to['metrics', JSON-string]
- Quantifiable fitness metrics used to determine level
Common Values
While any string value is permitted, the following common values are recommended for interoperability: - beginner - intermediate - advanced - elite - professional
Examples
```jsx // Example 1: Basic fitness level Apply to App.jsx
// Example 2: Activity-specific fitness level with metrics Apply to App.jsx ```
Implementation Notes
- Fitness level is subjective and may vary by activity
- The activity tag can be used to specify fitness level for different activities
- The metrics tag can provide objective measurements to support the fitness level
- Clients can extend this format to include activity-specific fitness assessments
- For general fitness apps, the simple beginner/intermediate/advanced scale is recommended
-
@ 59b96df8:b208bd59
2025-04-30 19:27:41Nostr is a decentralized protocol designed to be censorship-resistant.
However, this resilience can sometimes make data synchronization between relays more difficult—though not impossible.In my opinion, Nostr still lacks a few key features to ensure consistent and reliable operation, especially regarding data versioning.
Profile Versions
When I log in to a new Nostr client using my private key, I might end up with an outdated version of my profile, depending on how the client is built or configured.
Why does this happen?
The client fetches my profile data (kind:0 - NIP 1) from its own list of selected relays.
If I didn’t publish the latest version of my profile on those specific relays, the client will only display an older version.Relay List Metadata
The same issue occurs with the relay list metadata (kind:10002 - NIP 65).
When switching to a new client, it's common that my configured relay list isn't properly carried over because it also depends on where the data is fetched.Protocol Change Proposal
I believe the protocol should evolve, specifically regarding how
kind:0
(user metadata) andkind:10002
(relay list metadata) events are distributed to relays.Relays should be able to build a list of public relays automatically (via autodiscovery), and forward all received
kind:0
andkind:10002
events to every relay in that list.This would create a ripple effect:
``` Relay A relay list: [Relay B, Relay C]
Relay B relay list: [Relay A, Relay C]
Relay C relay list: [Relay A, Relay B]User A sends kind:0 to Relay A
→ Relay A forwards kind:0 to Relay B
→ Relay A forwards kind:0 to Relay C
→ Relay B forwards kind:0 to Relay A
→ Relay B forwards kind:0 to Relay C → Relay A forwards kind:0 to Relay B
→ etc. ```Solution: Event Encapsulation
To avoid infinite replication loops, the solution could be to wrap the user’s signed event inside a new event signed by the relay, using a dedicated
kind
(e.g.,kind:9999
).When Relay B receives a
kind:9999
event from Relay A, it extracts the original event, checks whether it already exists or if a newer version is present. If not, it adds the event to its database.Here is an example of such encapsulated data:
json { "content": "{\"content\":\"{\\\"lud16\\\":\\\"dolu@npub.cash\\\",\\\"name\\\":\\\"dolu\\\",\\\"nip05\\\":\\\"dolu@dolu.dev\\\",\\\"picture\\\":\\\"!(image)[!(image)[https://pbs.twimg.com/profile_images/1577320325158682626/igGerO9A_400x400.jpg]]\\\",\\\"pubkey\\\":\\\"59b96df8d8b5e66b3b95a3e1ba159750a6edd69bcbba1857aeb652a5b208bd59\\\",\\\"npub\\\":\\\"npub1txukm7xckhnxkwu450sm59vh2znwm45mewaps4awkef2tvsgh4vsf7phrl\\\",\\\"created_at\\\":1688312044}\",\"created_at\":1728233747,\"id\":\"afc3629314aad00f8786af97877115de30c184a25a48440a480bff590a0f9ba8\",\"kind\":0,\"pubkey\":\"59b96df8d8b5e66b3b95a3e1ba159750a6edd69bcbba1857aeb652a5b208bd59\",\"sig\":\"989b250f7fd5d4cfc9a6ee567594c81ee0a91f972e76b61332005fb02aa1343854104fdbcb6c4f77ae8896acd886ab4188043c383e32a6bba509fd78fedb984a\",\"tags\":[]}", "created_at": 1746036589, "id": "efe7fa5844c5c4428fb06d1657bf663d8b256b60c793b5a2c5a426ec773c745c", "kind": 9999, "pubkey": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798", "sig": "219f8bc840d12969ceb0093fb62f314a1f2e19a0cbe3e34b481bdfdf82d8238e1f00362791d17801548839f511533461f10dd45cd0aa4e264d71db6844f5e97c", "tags": [] }
-
@ 37fe9853:bcd1b039
2025-01-11 15:04:40yoyoaa
-
@ 62033ff8:e4471203
2025-01-11 15:00:24收录的内容中 kind=1的部分,实话说 质量不高。 所以我增加了kind=30023 长文的article,但是更新的太少,多个relays 的服务器也没有多少长文。
所有搜索nostr如果需要产生价值,需要有高质量的文章和新闻。 而且现在有很多机器人的文章充满着浪费空间的作用,其他作用都用不上。
https://www.duozhutuan.com 目前放的是给搜索引擎提供搜索的原材料。没有做UI给人类浏览。所以看上去是粗糙的。 我并没有打算去做一个发microblog的 web客户端,那类的客户端太多了。
我觉得nostr社区需要解决的还是应用。如果仅仅是microblog 感觉有点够呛
幸运的是npub.pro 建站这样的,我觉得有点意思。
yakihonne 智能widget 也有意思
我做的TaskQ5 我自己在用了。分布式的任务系统,也挺好的。
-
@ 866e0139:6a9334e5
2025-04-30 18:47:50Autor: Ulrike Guérot. Dieser Beitrag wurde mit dem Pareto-Client geschrieben. Sie finden alle Texte der Friedenstaube und weitere Texte zum Thema Frieden hier.**
Die neuesten Artikel der Friedenstaube gibt es jetzt auch im eigenen Friedenstaube-Telegram-Kanal.
https://www.youtube.com/watch?v=KarwcXKmD3E
Liebe Freunde und Bekannte,
liebe Friedensbewegte,
liebe Dresdener, Dresden ist ja auch eine kriegsgeplagte Stadt,
dies ist meine dritte Rede auf einer Friedensdemonstration innerhalb von nur gut einem halben Jahr: München im September, München im Februar, Dresden im April. Und der Krieg rückt immer näher! Wer sich den „Operationsplan Deutschland über die zivil-militärische Kooperation als wesentlicher Bestandteil der Kriegsführung“ anschaut, dem kann nur schlecht werden zu sehen, wie weit die Kriegsvorbereitungen schon gediehen sind.
Doch bevor ich darauf eingehe, möchte ich mich als erstes distanzieren von dem wieder einmal erbärmlichen Framing dieser Demo als Querfront oder Schwurblerdemo. Durch dieses Framing wurde diese Demo vom Dresdener Marktplatz auf den Postplatz verwiesen, wurden wir geschmäht und wurde die Stadtverwaltung Dresden dazu gebracht, eine „genehmere“ Demo auf dem Marktplatz zuzulassen! Es wäre schön, wenn wir alle - alle! - solche Framings weglassen würden und uns als Friedensbewegte die Hand reichen! Der Frieden im eigenen Haus ist die Voraussetzung für unsere Friedensarbeit. Der Streit in unserem Haus nutzt nur denen, die den Krieg wollen und uns spalten!
Ich möchte hier noch einmal klarstellen, von welcher Position aus ich hier und heute wiederholt auf einer Bühne spreche: Ich spreche als engagierte Bürgerin der Bundesrepublik Deutschland. Ich spreche als Europäerin, die lange Jahre in und an dem einstigen Friedensprojekt EU gearbeitet hat. Ich spreche als Enkelin von zwei Großvätern. Der eine ist im Krieg gefallen, der andere kam ohne Beine zurück. Ich spreche als Tochter einer Mutter, die 1945, als 6-Jährige, unter traumatischen Umständen aus Schlesien vertrieben wurde, nach Delitzsch in Sachsen übrigens. Ich spreche als Mutter von zwei Söhnen, 33 und 31 Jahre, von denen ich nicht möchte, dass sie in einen Krieg müssen. Von dieser, und nur dieser Position aus spreche ich heute zu Ihnen und von keiner anderen! Ich bin nicht rechts, ich bin keine Schwurblerin, ich bin nicht radikal, ich bin keine Querfront.
Als Bürgerin wünsche ich mir – nein, verlange ich! – dass die Bundesrepublik Deutschland sich an ihre gesetzlichen Grundlagen und Vertragstexte hält. Das sind namentlich: Die Friedensklausel des Grundgesetzes aus Art. 125 und 126 GG, dass von deutschem Boden nie wieder Krieg ausgeht. Und der Zwei-plus-Vier-Vertrag, in dem Deutschland 1990 unterschrieben hat, dass es nie an einem bewaffneten Konflikt gegen Russland teilnimmt. Ich schäme mich dafür, dass mein Land dabei ist, vertragsbrüchig zu werden. Ich bitte Friedrich Merz, den designierten Bundeskanzler, keinen Vertragsbruch durch die Lieferung von Taurus-Raketen zu begehen!
Ich bitte ferner darum, dass sich dieses Land an seine didaktischen Vorgaben für Schulen hält, die im immer noch geltenden „Beutelsbacher Konsens“ aus den 1970er Jahren festgelegt wurden. In diesem steht in Artikel I. ein Überwältigungsverbot: „Es ist nicht erlaubt, den Schüler – mit welchen Mitteln auch immer – im Sinne erwünschter Meinungen zu überrumpeln und damit an der Gewinnung eines selbständigen Urteils zu hindern.“ Vor diesem Hintergrund ist es nicht erlaubt, Soldaten oder Gefreite in Schulen zu schicken und für die Bundeswehr zu werben. Vielmehr wäre es geboten, unsere Kinder über Art. 125 & 126 GG und die Friedenspflicht des Landes und seine Geschichte mit Blick auf Russland aufzuklären.
Als Europäerin wünsche ich mir, dass wir die europäische Hymne, Beethovens 9. Sinfonie, ernst nehmen, deren Text da lautet: Alle Menschen werden Brüder. Alle Menschen werden Brüder. Alle! Dazu gehören auch die Russen und natürlich auch die Ukrainer!
Als Europäerin, die in den 1990er Jahren für den großartigen EU-Kommissionspräsidenten Jacques Delors gearbeitet hat, Katholik, Sozialist und Gewerkschafter, wünsche ich mir, dass wir das Versprechen, #Europa ist nie wieder Krieg, ernst nehmen. Wir haben es 70 Jahre lang auf diesem Kontinent erzählt. Die Lügen und die Propaganda, mit der jetzt die Kriegsnotwendigkeit gegen Russland herbeigeredet wird, sind unerträglich. Die EU, Friedensnobelpreisträgerin von 2012, ist dabei – oder hat schon – ihr Ansehen in der Welt verloren. Es ist eine politische Tragödie! Neben ihrem Ansehen ist die EU jetzt dabei, das zivilisatorische Erbe Europas zu verspielen, die civilité européenne, wie der französische Historiker und Marxist, Étienne Balibar es nennt.
Ein Element dieses historischen Erbes ist es, dass uns in Europa eint, dass wir über Jahrhunderte alle zugleich Täter und Opfer gewesen sind. Ce que nous partageons, c’est ce que nous étions tous bourreaux et victimes. So schreibt es der französische Literat Laurent Gaudet in seinem europäischen Epos, L’Europe. Une Banquet des Peuples von 2016.
Das heißt, dass niemand in Europa, niemand – auch die Esten nicht! – das Recht hat, vorgängige Traumata, die die baltischen Staaten unbestrittenermaßen mit Stalin-Russland gehabt haben, zu verabsolutieren, auf die gesamte EU zu übertragen, die EU damit zu blockieren und die Politikgestaltung der EU einseitig auf einen Kriegskurs gegen Russland auszurichten. Ich wende mich mit dieser Feststellung direkt an Kaja Kalles, die Hohe Beauftragte für Sicherheits- und Außenpolitik der EU und hoffe, dass sie diese Rede hört und das Epos von Laurent Gaudet liest.
Es gibt keinen gerechten Krieg! Krieg ist immer nur Leid. In Straßburg, dem Sitz des Europäischen Parlaments, steht auf dem Place de la République eine Statue, eine Frau, die Republik. Sie hält in jedem Arm einen Sohn, einen Elsässer und einen Franzosen, die aus dem Krieg kommen. In der Darstellung der Bronzefigur haben die beiden Soldaten-Männer ihre Uniformen schon ausgezogen und werden von Madame la République gehalten und getröstet. An diesem Denkmal sollten sich alle Abgeordnete des Straßburger Europaparlamentes am 9. Mai versammeln. Ich zitiere noch einmal Cicero: Der ungerechteste Friede ist besser als der gerechteste Krieg. Für den Vortrag dieses Zitats eines der größten Staatsdenker des antiken Roms in einer Fernsehsendung bin ich 2022 mit einem Shitstorm überzogen worden. Allein das ist Ausdruck des Verfalls unserer Diskussionskultur in unfassbarem Ausmaß, ganz besonders in Deutschland.
Als Europäerin verlange ich die Überwindung unserer kognitiven Dissonanz. Wenn schon die New York Times am 27. März 2025 ein 27-seitiges Dossier veröffentlicht, das nicht nur belegt, was man eigentlich schon weiß, aber bisher nicht sagen durfte, nämlich, dass der ukrainisch-russische Krieg ein eindeutiger Stellvertreter-Krieg der USA ist, in der die Ukraine auf monströseste Weise instrumentalisiert wurde – was das Dossier der NYT unumwunden zugibt! – wäre es an der Zeit, die eindeutige Schuldzuweisung an Russland für den Krieg zurückzuziehen und die gezielt verbreitete Russophobie in Europa zu beenden. Anstatt dass – wofür es leider viele Verdachtsmomente gibt – die EU die Friedensverhandlungen in Saudi-Arabien nach Strich und Faden torpediert.
Der französische Philosoph Luc Ferry hat vor ein paar Tagen im prime time französischen Fernsehen ganz klar gesagt, dass der Krieg 2014 nach der Instrumentalisierung des Maidan durch die USA von der West-Ukraine ausging, dass Zelensky diesen Krieg wollte und – mit amerikanischer Rückendeckung – provoziert hat, dass Putin nicht Hitler ist und dass die einzigen mit faschistoiden Tendenzen in der ukrainischen Regierung sitzen. Ich wünschte mir, ein solches Statement wäre auch im Deutschen Fernsehen möglich und danke Richard David Precht, dass er, der noch in den Öffentlich-Rechtlichen Rundfunk vorgelassen wird, an dieser Stelle versucht, etwas Vernunft in die Debatte zu bringen.
Auch ist es gerade als Europäerin nicht hinzunehmen, dass russische Diplomaten von den Feierlichkeiten am 8. Mai 2025 ausgeschlossen werden sollen, ausgerechnet 80 Jahre nach Ende des II. Weltkrieges. Nicht nur sind Feierlichkeiten genau dazu da, sich die Hand zu reichen und den Frieden zu feiern. Doch gerade vor dem Hintergrund von 27 Millionen gefallenen sowjetischen Soldaten ist die Zurückweisung der Russen von den Feierlichkeiten geradezu eklatante Geschichtsvergessenheit.
***
Der Völkerbund hat 1925 die Frage erörtert, warum der I. Weltkrieg noch so lange gedauert hat, obgleich er militärisch bereits 1916 nach Eröffnung des Zweifrontenkrieges zu Lasten des Deutschen Reiches entschieden war. Wir erinnern uns: Für die Niederlage wurden mit der Dolchstoßlegende die jüdischen, kommunistischen und sozialistischen Pazifisten verantwortlich gemacht. Richtig ist, so der Bericht des Völkerbundes von 1925, dass allein die Rüstungsindustrie dafür gesorgt hat, dass der militärisch eigentlich schon entschiedene Krieg noch zwei weitere Jahre als Materialabnutzungs- und Stellungskrieg weiterbetrieben wurde, nur, damit noch ein bisschen Geld verdient werden konnte. Genauso scheint es heute zu sein. Der Krieg ist militärisch entschieden. Er kann und muss sofort beendet werden, und das passiert lediglich deswegen nicht, weil der Westen seine Niederlage nicht zugeben kann. Hochmut aber kommt vor dem Fall, und es darf nicht sein, dass für europäischen Hochmut jeden Tag rund 2000 ukrainische oder russische Soldaten und viele Zivilisten sterben. Die offenbare europäische Absicht, den Krieg jetzt einzufrieren, nur, um ihn 2029/ 2030 wieder zu entfachen, wenn Europa dann besser aufgerüstet ist, ist nur noch zynisch.
Als Kriegsenkelin von Kriegsversehrten, Tochter einer Flüchtlingsmutter und Mutter von zwei Söhnen, deren französischer Urgroßvater 6 Jahre in deutscher Kriegsgefangenschaft war, wünsche ich mir schließlich und zum Abschluss, dass wir die Kraft haben werden, wenn dieser Wahnsinn, den man den europäischen Bürgern gerade aufbürdet, vorbei sein wird, ein neues europäisches Projekt zu erdenken und zu erbauen, in dem Europa politisch geeint ist und es bleibt, aber dezentral, regional, subsidiär, friedlich und neutral gestaltet wird. Also ein Europa jenseits der Strukturen der EU, das bereit ist, die Pax Americana zu überwinden, aus der NATO auszutreten und der multipolaren Welt seine Hand auszustrecken! Unser Europa ist postimperial, postkolonial, groß, vielfältig und friedfertig!
Ulrike Guérot, Jg. 1964, ist europäische Professorin, Publizistin und Bestsellerautorin. Seit rund 30 Jahren beschäftigt sie sich in europäischen Think Tanks und Universitäten in Paris, Brüssel, London, Washington, New York, Wien und Berlin mit Fragen der europäischen Demokratie sowie mit der Rolle Europas in der Welt. Ulrike Guérot ist seit März 2014 Gründerin und Direktorin des European Democracy Lab e.V., Berlin und initiierte im März 2023 das European Citizens Radio, das auf Spotify zu finden ist. Zuletzt erschien von ihr „Über Halford J. Mackinders Heartland-Theorie, Der geografische Drehpunkt der Geschichte“ (Westend, 2024). Mehr Infos zur Autorin hier.
LASSEN SIE DER FRIEDENSTAUBE FLÜGEL WACHSEN!
Hier können Sie die Friedenstaube abonnieren und bekommen die Artikel zugesandt.
Schon jetzt können Sie uns unterstützen:
- Für 50 CHF/EURO bekommen Sie ein Jahresabo der Friedenstaube.
- Für 120 CHF/EURO bekommen Sie ein Jahresabo und ein T-Shirt/Hoodie mit der Friedenstaube.
- Für 500 CHF/EURO werden Sie Förderer und bekommen ein lebenslanges Abo sowie ein T-Shirt/Hoodie mit der Friedenstaube.
- Ab 1000 CHF werden Sie Genossenschafter der Friedenstaube mit Stimmrecht (und bekommen lebenslanges Abo, T-Shirt/Hoodie).
Für Einzahlungen in CHF (Betreff: Friedenstaube):
Für Einzahlungen in Euro:
Milosz Matuschek
IBAN DE 53710520500000814137
BYLADEM1TST
Sparkasse Traunstein-Trostberg
Betreff: Friedenstaube
Wenn Sie auf anderem Wege beitragen wollen, schreiben Sie die Friedenstaube an: friedenstaube@pareto.space
Sie sind noch nicht auf Nostr and wollen die volle Erfahrung machen (liken, kommentieren etc.)? Zappen können Sie den Autor auch ohne Nostr-Profil! Erstellen Sie sich einen Account auf Start. Weitere Onboarding-Leitfäden gibt es im Pareto-Wiki.
-
@ 23b0e2f8:d8af76fc
2025-01-08 18:17:52Necessário
- Um Android que você não use mais (a câmera deve estar funcionando).
- Um cartão microSD (opcional, usado apenas uma vez).
- Um dispositivo para acompanhar seus fundos (provavelmente você já tem um).
Algumas coisas que você precisa saber
- O dispositivo servirá como um assinador. Qualquer movimentação só será efetuada após ser assinada por ele.
- O cartão microSD será usado para transferir o APK do Electrum e garantir que o aparelho não terá contato com outras fontes de dados externas após sua formatação. Contudo, é possível usar um cabo USB para o mesmo propósito.
- A ideia é deixar sua chave privada em um dispositivo offline, que ficará desligado em 99% do tempo. Você poderá acompanhar seus fundos em outro dispositivo conectado à internet, como seu celular ou computador pessoal.
O tutorial será dividido em dois módulos:
- Módulo 1 - Criando uma carteira fria/assinador.
- Módulo 2 - Configurando um dispositivo para visualizar seus fundos e assinando transações com o assinador.
No final, teremos:
- Uma carteira fria que também servirá como assinador.
- Um dispositivo para acompanhar os fundos da carteira.
Módulo 1 - Criando uma carteira fria/assinador
-
Baixe o APK do Electrum na aba de downloads em https://electrum.org/. Fique à vontade para verificar as assinaturas do software, garantindo sua autenticidade.
-
Formate o cartão microSD e coloque o APK do Electrum nele. Caso não tenha um cartão microSD, pule este passo.
- Retire os chips e acessórios do aparelho que será usado como assinador, formate-o e aguarde a inicialização.
- Durante a inicialização, pule a etapa de conexão ao Wi-Fi e rejeite todas as solicitações de conexão. Após isso, você pode desinstalar aplicativos desnecessários, pois precisará apenas do Electrum. Certifique-se de que Wi-Fi, Bluetooth e dados móveis estejam desligados. Você também pode ativar o modo avião.\ (Curiosidade: algumas pessoas optam por abrir o aparelho e danificar a antena do Wi-Fi/Bluetooth, impossibilitando essas funcionalidades.)
- Insira o cartão microSD com o APK do Electrum no dispositivo e instale-o. Será necessário permitir instalações de fontes não oficiais.
- No Electrum, crie uma carteira padrão e gere suas palavras-chave (seed). Anote-as em um local seguro. Caso algo aconteça com seu assinador, essas palavras permitirão o acesso aos seus fundos novamente. (Aqui entra seu método pessoal de backup.)
Módulo 2 - Configurando um dispositivo para visualizar seus fundos e assinando transações com o assinador.
-
Criar uma carteira somente leitura em outro dispositivo, como seu celular ou computador pessoal, é uma etapa bastante simples. Para este tutorial, usaremos outro smartphone Android com Electrum. Instale o Electrum a partir da aba de downloads em https://electrum.org/ ou da própria Play Store. (ATENÇÃO: O Electrum não existe oficialmente para iPhone. Desconfie se encontrar algum.)
-
Após instalar o Electrum, crie uma carteira padrão, mas desta vez escolha a opção Usar uma chave mestra.
- Agora, no assinador que criamos no primeiro módulo, exporte sua chave pública: vá em Carteira > Detalhes da carteira > Compartilhar chave mestra pública.
-
Escaneie o QR gerado da chave pública com o dispositivo de consulta. Assim, ele poderá acompanhar seus fundos, mas sem permissão para movimentá-los.
-
Para receber fundos, envie Bitcoin para um dos endereços gerados pela sua carteira: Carteira > Addresses/Coins.
-
Para movimentar fundos, crie uma transação no dispositivo de consulta. Como ele não possui a chave privada, será necessário assiná-la com o dispositivo assinador.
- No assinador, escaneie a transação não assinada, confirme os detalhes, assine e compartilhe. Será gerado outro QR, desta vez com a transação já assinada.
- No dispositivo de consulta, escaneie o QR da transação assinada e transmita-a para a rede.
Conclusão
Pontos positivos do setup:
- Simplicidade: Basta um dispositivo Android antigo.
- Flexibilidade: Funciona como uma ótima carteira fria, ideal para holders.
Pontos negativos do setup:
- Padronização: Não utiliza seeds no padrão BIP-39, você sempre precisará usar o electrum.
- Interface: A aparência do Electrum pode parecer antiquada para alguns usuários.
Nesse ponto, temos uma carteira fria que também serve para assinar transações. O fluxo de assinar uma transação se torna: Gerar uma transação não assinada > Escanear o QR da transação não assinada > Conferir e assinar essa transação com o assinador > Gerar QR da transação assinada > Escanear a transação assinada com qualquer outro dispositivo que possa transmiti-la para a rede.
Como alguns devem saber, uma transação assinada de Bitcoin é praticamente impossível de ser fraudada. Em um cenário catastrófico, você pode mesmo que sem internet, repassar essa transação assinada para alguém que tenha acesso à rede por qualquer meio de comunicação. Mesmo que não queiramos que isso aconteça um dia, esse setup acaba por tornar essa prática possível.
-
@ 91bea5cd:1df4451c
2025-04-26 10:16:21O Contexto Legal Brasileiro e o Consentimento
No ordenamento jurídico brasileiro, o consentimento do ofendido pode, em certas circunstâncias, afastar a ilicitude de um ato que, sem ele, configuraria crime (como lesão corporal leve, prevista no Art. 129 do Código Penal). Contudo, o consentimento tem limites claros: não é válido para bens jurídicos indisponíveis, como a vida, e sua eficácia é questionável em casos de lesões corporais graves ou gravíssimas.
A prática de BDSM consensual situa-se em uma zona complexa. Em tese, se ambos os parceiros são adultos, capazes, e consentiram livre e informadamente nos atos praticados, sem que resultem em lesões graves permanentes ou risco de morte não consentido, não haveria crime. O desafio reside na comprovação desse consentimento, especialmente se uma das partes, posteriormente, o negar ou alegar coação.
A Lei Maria da Penha (Lei nº 11.340/2006)
A Lei Maria da Penha é um marco fundamental na proteção da mulher contra a violência doméstica e familiar. Ela estabelece mecanismos para coibir e prevenir tal violência, definindo suas formas (física, psicológica, sexual, patrimonial e moral) e prevendo medidas protetivas de urgência.
Embora essencial, a aplicação da lei em contextos de BDSM pode ser delicada. Uma alegação de violência por parte da mulher, mesmo que as lesões ou situações decorram de práticas consensuais, tende a receber atenção prioritária das autoridades, dada a presunção de vulnerabilidade estabelecida pela lei. Isso pode criar um cenário onde o parceiro masculino enfrenta dificuldades significativas em demonstrar a natureza consensual dos atos, especialmente se não houver provas robustas pré-constituídas.
Outros riscos:
Lesão corporal grave ou gravíssima (art. 129, §§ 1º e 2º, CP), não pode ser justificada pelo consentimento, podendo ensejar persecução penal.
Crimes contra a dignidade sexual (arts. 213 e seguintes do CP) são de ação pública incondicionada e independem de representação da vítima para a investigação e denúncia.
Riscos de Falsas Acusações e Alegação de Coação Futura
Os riscos para os praticantes de BDSM, especialmente para o parceiro que assume o papel dominante ou que inflige dor/restrição (frequentemente, mas não exclusivamente, o homem), podem surgir de diversas frentes:
- Acusações Externas: Vizinhos, familiares ou amigos que desconhecem a natureza consensual do relacionamento podem interpretar sons, marcas ou comportamentos como sinais de abuso e denunciar às autoridades.
- Alegações Futuras da Parceira: Em caso de término conturbado, vingança, arrependimento ou mudança de perspectiva, a parceira pode reinterpretar as práticas passadas como abuso e buscar reparação ou retaliação através de uma denúncia. A alegação pode ser de que o consentimento nunca existiu ou foi viciado.
- Alegação de Coação: Uma das formas mais complexas de refutar é a alegação de que o consentimento foi obtido mediante coação (física, moral, psicológica ou econômica). A parceira pode alegar, por exemplo, que se sentia pressionada, intimidada ou dependente, e que seu "sim" não era genuíno. Provar a ausência de coação a posteriori é extremamente difícil.
- Ingenuidade e Vulnerabilidade Masculina: Muitos homens, confiando na dinâmica consensual e na parceira, podem negligenciar a necessidade de precauções. A crença de que "isso nunca aconteceria comigo" ou a falta de conhecimento sobre as implicações legais e o peso processual de uma acusação no âmbito da Lei Maria da Penha podem deixá-los vulneráveis. A presença de marcas físicas, mesmo que consentidas, pode ser usada como evidência de agressão, invertendo o ônus da prova na prática, ainda que não na teoria jurídica.
Estratégias de Prevenção e Mitigação
Não existe um método infalível para evitar completamente o risco de uma falsa acusação, mas diversas medidas podem ser adotadas para construir um histórico de consentimento e reduzir vulnerabilidades:
- Comunicação Explícita e Contínua: A base de qualquer prática BDSM segura é a comunicação constante. Negociar limites, desejos, palavras de segurança ("safewords") e expectativas antes, durante e depois das cenas é crucial. Manter registros dessas negociações (e-mails, mensagens, diários compartilhados) pode ser útil.
-
Documentação do Consentimento:
-
Contratos de Relacionamento/Cena: Embora a validade jurídica de "contratos BDSM" seja discutível no Brasil (não podem afastar normas de ordem pública), eles servem como forte evidência da intenção das partes, da negociação detalhada de limites e do consentimento informado. Devem ser claros, datados, assinados e, idealmente, reconhecidos em cartório (para prova de data e autenticidade das assinaturas).
-
Registros Audiovisuais: Gravar (com consentimento explícito para a gravação) discussões sobre consentimento e limites antes das cenas pode ser uma prova poderosa. Gravar as próprias cenas é mais complexo devido a questões de privacidade e potencial uso indevido, mas pode ser considerado em casos específicos, sempre com consentimento mútuo documentado para a gravação.
Importante: a gravação deve ser com ciência da outra parte, para não configurar violação da intimidade (art. 5º, X, da Constituição Federal e art. 20 do Código Civil).
-
-
Testemunhas: Em alguns contextos de comunidade BDSM, a presença de terceiros de confiança durante negociações ou mesmo cenas pode servir como testemunho, embora isso possa alterar a dinâmica íntima do casal.
- Estabelecimento Claro de Limites e Palavras de Segurança: Definir e respeitar rigorosamente os limites (o que é permitido, o que é proibido) e as palavras de segurança é fundamental. O desrespeito a uma palavra de segurança encerra o consentimento para aquele ato.
- Avaliação Contínua do Consentimento: O consentimento não é um cheque em branco; ele deve ser entusiástico, contínuo e revogável a qualquer momento. Verificar o bem-estar do parceiro durante a cena ("check-ins") é essencial.
- Discrição e Cuidado com Evidências Físicas: Ser discreto sobre a natureza do relacionamento pode evitar mal-entendidos externos. Após cenas que deixem marcas, é prudente que ambos os parceiros estejam cientes e de acordo, talvez documentando por fotos (com data) e uma nota sobre a consensualidade da prática que as gerou.
- Aconselhamento Jurídico Preventivo: Consultar um advogado especializado em direito de família e criminal, com sensibilidade para dinâmicas de relacionamento alternativas, pode fornecer orientação personalizada sobre as melhores formas de documentar o consentimento e entender os riscos legais específicos.
Observações Importantes
- Nenhuma documentação substitui a necessidade de consentimento real, livre, informado e contínuo.
- A lei brasileira protege a "integridade física" e a "dignidade humana". Práticas que resultem em lesões graves ou que violem a dignidade de forma não consentida (ou com consentimento viciado) serão ilegais, independentemente de qualquer acordo prévio.
- Em caso de acusação, a existência de documentação robusta de consentimento não garante a absolvição, mas fortalece significativamente a defesa, ajudando a demonstrar a natureza consensual da relação e das práticas.
-
A alegação de coação futura é particularmente difícil de prevenir apenas com documentos. Um histórico consistente de comunicação aberta (whatsapp/telegram/e-mails), respeito mútuo e ausência de dependência ou controle excessivo na relação pode ajudar a contextualizar a dinâmica como não coercitiva.
-
Cuidado com Marcas Visíveis e Lesões Graves Práticas que resultam em hematomas severos ou lesões podem ser interpretadas como agressão, mesmo que consentidas. Evitar excessos protege não apenas a integridade física, mas também evita questionamentos legais futuros.
O que vem a ser consentimento viciado
No Direito, consentimento viciado é quando a pessoa concorda com algo, mas a vontade dela não é livre ou plena — ou seja, o consentimento existe formalmente, mas é defeituoso por alguma razão.
O Código Civil brasileiro (art. 138 a 165) define várias formas de vício de consentimento. As principais são:
Erro: A pessoa se engana sobre o que está consentindo. (Ex.: A pessoa acredita que vai participar de um jogo leve, mas na verdade é exposta a práticas pesadas.)
Dolo: A pessoa é enganada propositalmente para aceitar algo. (Ex.: Alguém mente sobre o que vai acontecer durante a prática.)
Coação: A pessoa é forçada ou ameaçada a consentir. (Ex.: "Se você não aceitar, eu termino com você" — pressão emocional forte pode ser vista como coação.)
Estado de perigo ou lesão: A pessoa aceita algo em situação de necessidade extrema ou abuso de sua vulnerabilidade. (Ex.: Alguém em situação emocional muito fragilizada é induzida a aceitar práticas que normalmente recusaria.)
No contexto de BDSM, isso é ainda mais delicado: Mesmo que a pessoa tenha "assinado" um contrato ou dito "sim", se depois ela alegar que seu consentimento foi dado sob medo, engano ou pressão psicológica, o consentimento pode ser considerado viciado — e, portanto, juridicamente inválido.
Isso tem duas implicações sérias:
-
O crime não se descaracteriza: Se houver vício, o consentimento é ignorado e a prática pode ser tratada como crime normal (lesão corporal, estupro, tortura, etc.).
-
A prova do consentimento precisa ser sólida: Mostrando que a pessoa estava informada, lúcida, livre e sem qualquer tipo de coação.
Consentimento viciado é quando a pessoa concorda formalmente, mas de maneira enganada, forçada ou pressionada, tornando o consentimento inútil para efeitos jurídicos.
Conclusão
Casais que praticam BDSM consensual no Brasil navegam em um terreno que exige não apenas confiança mútua e comunicação excepcional, mas também uma consciência aguçada das complexidades legais e dos riscos de interpretações equivocadas ou acusações mal-intencionadas. Embora o BDSM seja uma expressão legítima da sexualidade humana, sua prática no Brasil exige responsabilidade redobrada. Ter provas claras de consentimento, manter a comunicação aberta e agir com prudência são formas eficazes de se proteger de falsas alegações e preservar a liberdade e a segurança de todos os envolvidos. Embora leis controversas como a Maria da Penha sejam "vitais" para a proteção contra a violência real, os praticantes de BDSM, e em particular os homens nesse contexto, devem adotar uma postura proativa e prudente para mitigar os riscos inerentes à potencial má interpretação ou instrumentalização dessas práticas e leis, garantindo que a expressão de sua consensualidade esteja resguardada na medida do possível.
Importante: No Brasil, mesmo com tudo isso, o Ministério Público pode denunciar por crime como lesão corporal grave, estupro ou tortura, independente de consentimento. Então a prudência nas práticas é fundamental.
Aviso Legal: Este artigo tem caráter meramente informativo e não constitui aconselhamento jurídico. As leis e interpretações podem mudar, e cada situação é única. Recomenda-se buscar orientação de um advogado qualificado para discutir casos específicos.
Se curtiu este artigo faça uma contribuição, se tiver algum ponto relevante para o artigo deixe seu comentário.
-
@ c631e267:c2b78d3e
2025-04-20 19:54:32Es ist völlig unbestritten, dass der Angriff der russischen Armee auf die Ukraine im Februar 2022 strikt zu verurteilen ist. Ebenso unbestritten ist Russland unter Wladimir Putin keine brillante Demokratie. Aus diesen Tatsachen lässt sich jedoch nicht das finstere Bild des russischen Präsidenten – und erst recht nicht des Landes – begründen, das uns durchweg vorgesetzt wird und den Kern des aktuellen europäischen Bedrohungs-Szenarios darstellt. Da müssen wir schon etwas genauer hinschauen.
Der vorliegende Artikel versucht derweil nicht, den Einsatz von Gewalt oder die Verletzung von Menschenrechten zu rechtfertigen oder zu entschuldigen – ganz im Gegenteil. Dass jedoch der Verdacht des «Putinverstehers» sofort latent im Raume steht, verdeutlicht, was beim Thema «Russland» passiert: Meinungsmache und Manipulation.
Angesichts der mentalen Mobilmachung seitens Politik und Medien sowie des Bestrebens, einen bevorstehenden Krieg mit Russland geradezu herbeizureden, ist es notwendig, dieser fatalen Entwicklung entgegenzutreten. Wenn wir uns nur ein wenig von der herrschenden Schwarz-Weiß-Malerei freimachen, tauchen automatisch Fragen auf, die Risse im offiziellen Narrativ enthüllen. Grund genug, nachzuhaken.
Wer sich schon länger auch abseits der Staats- und sogenannten Leitmedien informiert, der wird in diesem Artikel vermutlich nicht viel Neues erfahren. Andere könnten hier ein paar unbekannte oder vergessene Aspekte entdecken. Möglicherweise klärt sich in diesem Kontext die Wahrnehmung der aktuellen (unserer eigenen!) Situation ein wenig.
Manipulation erkennen
Corona-«Pandemie», menschengemachter Klimawandel oder auch Ukraine-Krieg: Jede Menge Krisen, und für alle gibt es ein offizielles Narrativ, dessen Hinterfragung unerwünscht ist. Nun ist aber ein Narrativ einfach eine Erzählung, eine Geschichte (Latein: «narratio») und kein Tatsachenbericht. Und so wie ein Märchen soll auch das Narrativ eine Botschaft vermitteln.
Über die Methoden der Manipulation ist viel geschrieben worden, sowohl in Bezug auf das Individuum als auch auf die Massen. Sehr wertvolle Tipps dazu, wie man Manipulationen durchschauen kann, gibt ein Büchlein [1] von Albrecht Müller, dem Herausgeber der NachDenkSeiten.
Die Sprache selber eignet sich perfekt für die Manipulation. Beispielsweise kann die Wortwahl Bewertungen mitschwingen lassen, regelmäßiges Wiederholen (gerne auch von verschiedenen Seiten) lässt Dinge irgendwann «wahr» erscheinen, Übertreibungen fallen auf und hinterlassen wenigstens eine Spur im Gedächtnis, genauso wie Andeutungen. Belege spielen dabei keine Rolle.
Es gibt auffällig viele Sprachregelungen, die offenbar irgendwo getroffen und irgendwie koordiniert werden. Oder alle Redenschreiber und alle Medien kopieren sich neuerdings permanent gegenseitig. Welchen Zweck hat es wohl, wenn der Krieg in der Ukraine durchgängig und quasi wörtlich als «russischer Angriffskrieg auf die Ukraine» bezeichnet wird? Obwohl das in der Sache richtig ist, deutet die Art der Verwendung auf gezielte Beeinflussung hin und soll vor allem das Feindbild zementieren.
Sprachregelungen dienen oft der Absicherung einer einseitigen Darstellung. Das Gleiche gilt für das Verkürzen von Informationen bis hin zum hartnäckigen Verschweigen ganzer Themenbereiche. Auch hierfür gibt es rund um den Ukraine-Konflikt viele gute Beispiele.
Das gewünschte Ergebnis solcher Methoden ist eine Schwarz-Weiß-Malerei, bei der einer eindeutig als «der Böse» markiert ist und die anderen automatisch «die Guten» sind. Das ist praktisch und demonstriert gleichzeitig ein weiteres Manipulationswerkzeug: die Verwendung von Doppelstandards. Wenn man es schafft, bei wichtigen Themen regelmäßig mit zweierlei Maß zu messen, ohne dass das Publikum protestiert, dann hat man freie Bahn.
Experten zu bemühen, um bestimmte Sachverhalte zu erläutern, ist sicher sinnvoll, kann aber ebenso missbraucht werden, schon allein durch die Auswahl der jeweiligen Spezialisten. Seit «Corona» werden viele erfahrene und ehemals hoch angesehene Fachleute wegen der «falschen Meinung» diffamiert und gecancelt. [2] Das ist nicht nur ein brutaler Umgang mit Menschen, sondern auch eine extreme Form, die öffentliche Meinung zu steuern.
Wann immer wir also erkennen (weil wir aufmerksam waren), dass wir bei einem bestimmten Thema manipuliert werden, dann sind zwei logische und notwendige Fragen: Warum? Und was ist denn richtig? In unserem Russland-Kontext haben die Antworten darauf viel mit Geopolitik und Geschichte zu tun.
Ist Russland aggressiv und expansiv?
Angeblich plant Russland, europäische NATO-Staaten anzugreifen, nach dem Motto: «Zuerst die Ukraine, dann den Rest». In Deutschland weiß man dafür sogar das Datum: «Wir müssen bis 2029 kriegstüchtig sein», versichert Verteidigungsminister Pistorius.
Historisch gesehen ist es allerdings eher umgekehrt: Russland, bzw. die Sowjetunion, ist bereits dreimal von Westeuropa aus militärisch angegriffen worden. Die Feldzüge Napoleons, des deutschen Kaiserreichs und Nazi-Deutschlands haben Millionen Menschen das Leben gekostet. Bei dem ausdrücklichen Vernichtungskrieg ab 1941 kam es außerdem zu Brutalitäten wie der zweieinhalbjährigen Belagerung Leningrads (heute St. Petersburg) durch Hitlers Wehrmacht. Deren Ziel, die Bevölkerung auszuhungern, wurde erreicht: über eine Million tote Zivilisten.
Trotz dieser Erfahrungen stimmte Michail Gorbatschow 1990 der deutschen Wiedervereinigung zu und die Sowjetunion zog ihre Truppen aus Osteuropa zurück (vgl. Abb. 1). Der Warschauer Pakt wurde aufgelöst, der Kalte Krieg formell beendet. Die Sowjets erhielten damals von führenden westlichen Politikern die Zusicherung, dass sich die NATO «keinen Zentimeter ostwärts» ausdehnen würde, das ist dokumentiert. [3]
Expandiert ist die NATO trotzdem, und zwar bis an Russlands Grenzen (vgl. Abb. 2). Laut dem Politikberater Jeffrey Sachs handelt es sich dabei um ein langfristiges US-Projekt, das von Anfang an die Ukraine und Georgien mit einschloss. Offiziell wurde der Beitritt beiden Staaten 2008 angeboten. In jedem Fall könnte die massive Ost-Erweiterung seit 1999 aus russischer Sicht nicht nur als Vertrauensbruch, sondern durchaus auch als aggressiv betrachtet werden.
Russland hat den europäischen Staaten mehrfach die Hand ausgestreckt [4] für ein friedliches Zusammenleben und den «Aufbau des europäischen Hauses». Präsident Putin sei «in seiner ersten Amtszeit eine Chance für Europa» gewesen, urteilt die Journalistin und langjährige Russland-Korrespondentin der ARD, Gabriele Krone-Schmalz. Er habe damals viele positive Signale Richtung Westen gesendet.
Die Europäer jedoch waren scheinbar an einer Partnerschaft mit dem kontinentalen Nachbarn weniger interessiert als an der mit dem transatlantischen Hegemon. Sie verkennen bis heute, dass eine gedeihliche Zusammenarbeit in Eurasien eine Gefahr für die USA und deren bekundetes Bestreben ist, die «einzige Weltmacht» zu sein – «Full Spectrum Dominance» [5] nannte das Pentagon das. Statt einem neuen Kalten Krieg entgegenzuarbeiten, ließen sich europäische Staaten selber in völkerrechtswidrige «US-dominierte Angriffskriege» [6] verwickeln, wie in Serbien, Afghanistan, dem Irak, Libyen oder Syrien. Diese werden aber selten so benannt.
Speziell den Deutschen stünde außer einer Portion Realismus auch etwas mehr Dankbarkeit gut zu Gesicht. Das Geschichtsbewusstsein der Mehrheit scheint doch recht selektiv und das Selbstbewusstsein einiger etwas desorientiert zu sein. Bekanntermaßen waren es die Soldaten der sowjetischen Roten Armee, die unter hohen Opfern 1945 Deutschland «vom Faschismus befreit» haben. Bei den Gedenkfeiern zu 80 Jahren Kriegsende will jedoch das Auswärtige Amt – noch unter der Diplomatie-Expertin Baerbock, die sich schon länger offiziell im Krieg mit Russland wähnt, – nun keine Russen sehen: Sie sollen notfalls rausgeschmissen werden.
«Die Grundsatzfrage lautet: Geht es Russland um einen angemessenen Platz in einer globalen Sicherheitsarchitektur, oder ist Moskau schon seit langem auf einem imperialistischen Trip, der befürchten lassen muss, dass die Russen in fünf Jahren in Berlin stehen?»
So bringt Gabriele Krone-Schmalz [7] die eigentliche Frage auf den Punkt, die zur Einschätzung der Situation letztlich auch jeder für sich beantworten muss.
Was ist los in der Ukraine?
In der internationalen Politik geht es nie um Demokratie oder Menschenrechte, sondern immer um Interessen von Staaten. Diese These stammt von Egon Bahr, einem der Architekten der deutschen Ostpolitik des «Wandels durch Annäherung» aus den 1960er und 70er Jahren. Sie trifft auch auf den Ukraine-Konflikt zu, den handfeste geostrategische und wirtschaftliche Interessen beherrschen, obwohl dort angeblich «unsere Demokratie» verteidigt wird.
Es ist ein wesentliches Element des Ukraine-Narrativs und Teil der Manipulation, die Vorgeschichte des Krieges wegzulassen – mindestens die vor der russischen «Annexion» der Halbinsel Krim im März 2014, aber oft sogar komplett diejenige vor der Invasion Ende Februar 2022. Das Thema ist komplex, aber einige Aspekte, die für eine Beurteilung nicht unwichtig sind, will ich wenigstens kurz skizzieren. [8]
Das Gebiet der heutigen Ukraine und Russlands – die übrigens in der «Kiewer Rus» gemeinsame Wurzeln haben – hat der britische Geostratege Halford Mackinder bereits 1904 als eurasisches «Heartland» bezeichnet, dessen Kontrolle er eine große Bedeutung für die imperiale Strategie Großbritanniens zumaß. Für den ehemaligen Sicherheits- und außenpolitischen Berater mehrerer US-amerikanischer Präsidenten und Mitgründer der Trilateralen Kommission, Zbigniew Brzezinski, war die Ukraine nach der Auflösung der Sowjetunion ein wichtiger Spielstein auf dem «eurasischen Schachbrett», wegen seiner Nähe zu Russland, seiner Bodenschätze und seines Zugangs zum Schwarzen Meer.
Die Ukraine ist seit langem ein gespaltenes Land. Historisch zerrissen als Spielball externer Interessen und geprägt von ethnischen, kulturellen, religiösen und geografischen Unterschieden existiert bis heute, grob gesagt, eine Ost-West-Spaltung, welche die Suche nach einer nationalen Identität stark erschwert.
Insbesondere im Zuge der beiden Weltkriege sowie der Russischen Revolution entstanden tiefe Risse in der Bevölkerung. Ukrainer kämpften gegen Ukrainer, zum Beispiel die einen auf der Seite von Hitlers faschistischer Nazi-Armee und die anderen auf der von Stalins kommunistischer Roter Armee. Die Verbrechen auf beiden Seiten sind nicht vergessen. Dass nach der Unabhängigkeit 1991 versucht wurde, Figuren wie den radikalen Nationalisten Symon Petljura oder den Faschisten und Nazi-Kollaborateur Stepan Bandera als «Nationalhelden» zu installieren, verbessert die Sache nicht.
Während die USA und EU-Staaten zunehmend «ausländische Einmischung» (speziell russische) in «ihre Demokratien» wittern, betreiben sie genau dies seit Jahrzehnten in vielen Ländern der Welt. Die seit den 2000er Jahren bekannten «Farbrevolutionen» in Osteuropa werden oft als Methode des Regierungsumsturzes durch von außen gesteuerte «demokratische» Volksaufstände beschrieben. Diese Strategie geht auf Analysen zum «Schwarmverhalten» [9] seit den 1960er Jahren zurück (Studentenproteste), wo es um die potenzielle Wirksamkeit einer «rebellischen Hysterie» von Jugendlichen bei postmodernen Staatsstreichen geht. Heute nennt sich dieses gezielte Kanalisieren der Massen zur Beseitigung unkooperativer Regierungen «Soft-Power».
In der Ukraine gab es mit der «Orangen Revolution» 2004 und dem «Euromaidan» 2014 gleich zwei solcher «Aufstände». Der erste erzwang wegen angeblicher Unregelmäßigkeiten eine Wiederholung der Wahlen, was mit Wiktor Juschtschenko als neuem Präsidenten endete. Dieser war ehemaliger Direktor der Nationalbank und Befürworter einer Annäherung an EU und NATO. Seine Frau, die First Lady, ist US-amerikanische «Philanthropin» und war Beamtin im Weißen Haus in der Reagan- und der Bush-Administration.
Im Gegensatz zu diesem ersten Event endete der sogenannte Euromaidan unfriedlich und blutig. Die mehrwöchigen Proteste gegen Präsident Wiktor Janukowitsch, in Teilen wegen des nicht unterzeichneten Assoziierungsabkommens mit der EU, wurden zunehmend gewalttätiger und von Nationalisten und Faschisten des «Rechten Sektors» dominiert. Sie mündeten Ende Februar 2014 auf dem Kiewer Unabhängigkeitsplatz (Maidan) in einem Massaker durch Scharfschützen. Dass deren Herkunft und die genauen Umstände nicht geklärt wurden, störte die Medien nur wenig. [10]
Janukowitsch musste fliehen, er trat nicht zurück. Vielmehr handelte es sich um einen gewaltsamen, allem Anschein nach vom Westen inszenierten Putsch. Laut Jeffrey Sachs war das kein Geheimnis, außer vielleicht für die Bürger. Die USA unterstützten die Post-Maidan-Regierung nicht nur, sie beeinflussten auch ihre Bildung. Das geht unter anderem aus dem berühmten «Fuck the EU»-Telefonat der US-Chefdiplomatin für die Ukraine, Victoria Nuland, mit Botschafter Geoffrey Pyatt hervor.
Dieser Bruch der demokratischen Verfassung war letztlich der Auslöser für die anschließenden Krisen auf der Krim und im Donbass (Ostukraine). Angesichts der ukrainischen Geschichte mussten die nationalistischen Tendenzen und die Beteiligung der rechten Gruppen an dem Umsturz bei der russigsprachigen Bevölkerung im Osten ungute Gefühle auslösen. Es gab Kritik an der Übergangsregierung, Befürworter einer Abspaltung und auch für einen Anschluss an Russland.
Ebenso konnte Wladimir Putin in dieser Situation durchaus Bedenken wegen des Status der russischen Militärbasis für seine Schwarzmeerflotte in Sewastopol auf der Krim haben, für die es einen langfristigen Pachtvertrag mit der Ukraine gab. Was im März 2014 auf der Krim stattfand, sei keine Annexion, sondern eine Abspaltung (Sezession) nach einem Referendum gewesen, also keine gewaltsame Aneignung, urteilte der Rechtswissenschaftler Reinhard Merkel in der FAZ sehr detailliert begründet. Übrigens hatte die Krim bereits zu Zeiten der Sowjetunion den Status einer autonomen Republik innerhalb der Ukrainischen SSR.
Anfang April 2014 wurden in der Ostukraine die «Volksrepubliken» Donezk und Lugansk ausgerufen. Die Kiewer Übergangsregierung ging unter der Bezeichnung «Anti-Terror-Operation» (ATO) militärisch gegen diesen, auch von Russland instrumentalisierten Widerstand vor. Zufällig war kurz zuvor CIA-Chef John Brennan in Kiew. Die Maßnahmen gingen unter dem seit Mai neuen ukrainischen Präsidenten, dem Milliardär Petro Poroschenko, weiter. Auch Wolodymyr Selenskyj beendete den Bürgerkrieg nicht, als er 2019 vom Präsidenten-Schauspieler, der Oligarchen entmachtet, zum Präsidenten wurde. Er fuhr fort, die eigene Bevölkerung zu bombardieren.
Mit dem Einmarsch russischer Truppen in die Ostukraine am 24. Februar 2022 begann die zweite Phase des Krieges. Die Wochen und Monate davor waren intensiv. Im November hatte die Ukraine mit den USA ein Abkommen über eine «strategische Partnerschaft» unterzeichnet. Darin sagten die Amerikaner ihre Unterstützung der EU- und NATO-Perspektive der Ukraine sowie quasi für die Rückeroberung der Krim zu. Dagegen ließ Putin der NATO und den USA im Dezember 2021 einen Vertragsentwurf über beiderseitige verbindliche Sicherheitsgarantien zukommen, den die NATO im Januar ablehnte. Im Februar eskalierte laut OSZE die Gewalt im Donbass.
Bereits wenige Wochen nach der Invasion, Ende März 2022, kam es in Istanbul zu Friedensverhandlungen, die fast zu einer Lösung geführt hätten. Dass der Krieg nicht damals bereits beendet wurde, lag daran, dass der Westen dies nicht wollte. Man war der Meinung, Russland durch die Ukraine in diesem Stellvertreterkrieg auf Dauer militärisch schwächen zu können. Angesichts von Hunderttausenden Toten, Verletzten und Traumatisierten, die als Folge seitdem zu beklagen sind, sowie dem Ausmaß der Zerstörung, fehlen einem die Worte.
Hasst der Westen die Russen?
Diese Frage drängt sich auf, wenn man das oft unerträglich feindselige Gebaren beobachtet, das beileibe nicht neu ist und vor Doppelmoral trieft. Russland und speziell die Person Wladimir Putins werden regelrecht dämonisiert, was gleichzeitig scheinbar jede Form von Diplomatie ausschließt.
Russlands militärische Stärke, seine geografische Lage, sein Rohstoffreichtum oder seine unabhängige diplomatische Tradition sind sicher Störfaktoren für das US-amerikanische Bestreben, der Boss in einer unipolaren Welt zu sein. Ein womöglich funktionierender eurasischer Kontinent, insbesondere gute Beziehungen zwischen Russland und Deutschland, war indes schon vor dem Ersten Weltkrieg eine Sorge des britischen Imperiums.
Ein «Vergehen» von Präsident Putin könnte gewesen sein, dass er die neoliberale Schocktherapie à la IWF und den Ausverkauf des Landes (auch an US-Konzerne) beendete, der unter seinem Vorgänger herrschte. Dabei zeigte er sich als Führungspersönlichkeit und als nicht so formbar wie Jelzin. Diese Aspekte allein sind aber heute vermutlich keine ausreichende Erklärung für ein derart gepflegtes Feindbild.
Der Historiker und Philosoph Hauke Ritz erweitert den Fokus der Fragestellung zu: «Warum hasst der Westen die Russen so sehr?», was er zum Beispiel mit dem Medienforscher Michael Meyen und mit der Politikwissenschaftlerin Ulrike Guérot bespricht. Ritz stellt die interessante These [11] auf, dass Russland eine Provokation für den Westen sei, welcher vor allem dessen kulturelles und intellektuelles Potenzial fürchte.
Die Russen sind Europäer aber anders, sagt Ritz. Diese «Fremdheit in der Ähnlichkeit» erzeuge vielleicht tiefe Ablehnungsgefühle. Obwohl Russlands Identität in der europäischen Kultur verwurzelt ist, verbinde es sich immer mit der Opposition in Europa. Als Beispiele nennt er die Kritik an der katholischen Kirche oder die Verbindung mit der Arbeiterbewegung. Christen, aber orthodox; Sozialismus statt Liberalismus. Das mache das Land zum Antagonisten des Westens und zu einer Bedrohung der Machtstrukturen in Europa.
Fazit
Selbstverständlich kann man Geschichte, Ereignisse und Entwicklungen immer auf verschiedene Arten lesen. Dieser Artikel, obwohl viel zu lang, konnte nur einige Aspekte der Ukraine-Tragödie anreißen, die in den offiziellen Darstellungen in der Regel nicht vorkommen. Mindestens dürfte damit jedoch klar geworden sein, dass die Russische Föderation bzw. Wladimir Putin nicht der alleinige Aggressor in diesem Konflikt ist. Das ist ein Stellvertreterkrieg zwischen USA/NATO (gut) und Russland (böse); die Ukraine (edel) wird dabei schlicht verheizt.
Das ist insofern von Bedeutung, als die gesamte europäische Kriegshysterie auf sorgsam kultivierten Freund-Feind-Bildern beruht. Nur so kann Konfrontation und Eskalation betrieben werden, denn damit werden die wahren Hintergründe und Motive verschleiert. Angst und Propaganda sind notwendig, damit die Menschen den Wahnsinn mitmachen. Sie werden belogen, um sie zuerst zu schröpfen und anschließend auf die Schlachtbank zu schicken. Das kann niemand wollen, außer den stets gleichen Profiteuren: die Rüstungs-Lobby und die großen Investoren, die schon immer an Zerstörung und Wiederaufbau verdient haben.
Apropos Investoren: Zu den Top-Verdienern und somit Hauptinteressenten an einer Fortführung des Krieges zählt BlackRock, einer der weltgrößten Vermögensverwalter. Der deutsche Bundeskanzler in spe, Friedrich Merz, der gerne «Taurus»-Marschflugkörper an die Ukraine liefern und die Krim-Brücke zerstören möchte, war von 2016 bis 2020 Aufsichtsratsvorsitzender von BlackRock in Deutschland. Aber das hat natürlich nichts zu sagen, der Mann macht nur seinen Job.
Es ist ein Spiel der Kräfte, es geht um Macht und strategische Kontrolle, um Geheimdienste und die Kontrolle der öffentlichen Meinung, um Bodenschätze, Rohstoffe, Pipelines und Märkte. Das klingt aber nicht sexy, «Demokratie und Menschenrechte» hört sich besser und einfacher an. Dabei wäre eine für alle Seiten förderliche Politik auch nicht so kompliziert; das Handwerkszeug dazu nennt sich Diplomatie. Noch einmal Gabriele Krone-Schmalz:
«Friedliche Politik ist nichts anderes als funktionierender Interessenausgleich. Da geht’s nicht um Moral.»
Die Situation in der Ukraine ist sicher komplex, vor allem wegen der inneren Zerrissenheit. Es dürfte nicht leicht sein, eine friedliche Lösung für das Zusammenleben zu finden, aber die Beteiligten müssen es vor allem wollen. Unter den gegebenen Umständen könnte eine sinnvolle Perspektive mit Neutralität und föderalen Strukturen zu tun haben.
Allen, die sich bis hierher durch die Lektüre gearbeitet (oder auch einfach nur runtergescrollt) haben, wünsche ich frohe Oster-Friedenstage!
[Titelbild: Pixabay; Abb. 1 und 2: nach Ganser/SIPER; Abb. 3: SIPER]
--- Quellen: ---
[1] Albrecht Müller, «Glaube wenig. Hinterfrage alles. Denke selbst.», Westend 2019
[2] Zwei nette Beispiele:
- ARD-faktenfinder (sic), «Viel Aufmerksamkeit für fragwürdige Experten», 03/2023
- Neue Zürcher Zeitung, «Aufstieg und Fall einer Russlandversteherin – die ehemalige ARD-Korrespondentin Gabriele Krone-Schmalz rechtfertigt seit Jahren Putins Politik», 12/2022
[3] George Washington University, «NATO Expansion: What Gorbachev Heard – Declassified documents show security assurances against NATO expansion to Soviet leaders from Baker, Bush, Genscher, Kohl, Gates, Mitterrand, Thatcher, Hurd, Major, and Woerner», 12/2017
[4] Beispielsweise Wladimir Putin bei seiner Rede im Deutschen Bundestag, 25/09/2001
[5] William Engdahl, «Full Spectrum Dominance, Totalitarian Democracy In The New World Order», edition.engdahl 2009
[6] Daniele Ganser, «Illegale Kriege – Wie die NATO-Länder die UNO sabotieren. Eine Chronik von Kuba bis Syrien», Orell Füssli 2016
[7] Gabriele Krone-Schmalz, «Mit Friedensjournalismus gegen ‘Kriegstüchtigkeit’», Vortrag und Diskussion an der Universität Hamburg, veranstaltet von engagierten Studenten, 16/01/2025\ → Hier ist ein ähnlicher Vortrag von ihr (Video), den ich mit spanischer Übersetzung gefunden habe.
[8] Für mehr Hintergrund und Details empfehlen sich z.B. folgende Bücher:
- Mathias Bröckers, Paul Schreyer, «Wir sind immer die Guten», Westend 2019
- Gabriele Krone-Schmalz, «Russland verstehen? Der Kampf um die Ukraine und die Arroganz des Westens», Westend 2023
- Patrik Baab, «Auf beiden Seiten der Front – Meine Reisen in die Ukraine», Fiftyfifty 2023
[9] vgl. Jonathan Mowat, «Washington's New World Order "Democratization" Template», 02/2005 und RAND Corporation, «Swarming and the Future of Conflict», 2000
[10] Bemerkenswert einige Beiträge, von denen man später nichts mehr wissen wollte:
- ARD Monitor, «Todesschüsse in Kiew: Wer ist für das Blutbad vom Maidan verantwortlich», 10/04/2014, Transkript hier
- Telepolis, «Blutbad am Maidan: Wer waren die Todesschützen?», 12/04/2014
- Telepolis, «Scharfschützenmorde in Kiew», 14/12/2014
- Deutschlandfunk, «Gefahr einer Spirale nach unten», Interview mit Günter Verheugen, 18/03/2014
- NDR Panorama, «Putsch in Kiew: Welche Rolle spielen die Faschisten?», 06/03/2014
[11] Hauke Ritz, «Vom Niedergang des Westens zur Neuerfindung Europas», 2024
Dieser Beitrag wurde mit dem Pareto-Client geschrieben.
-
@ 90de72b7:8f68fdc0
2025-04-30 17:55:30PetriNostr. My everyday activity 30/04
PetriNostr never sleep! This is a demo
petrinet ;startDay () -> working ;stopDay working -> () ;startPause working -> paused ;endPause paused -> working ;goSmoke working -> smoking ;endSmoke smoking -> working ;startEating working -> eating ;stopEating eating -> working ;startCall working -> onCall ;endCall onCall -> working ;startMeeting working -> inMeetinga ;endMeeting inMeeting -> working ;logTask working -> working
-
@ 207ad2a0:e7cca7b0
2025-01-07 03:46:04Quick context: I wanted to check out Nostr's longform posts and this blog post seemed like a good one to try and mirror. It's originally from my free to read/share attempt to write a novel, but this post here is completely standalone - just describing how I used AI image generation to make a small piece of the work.
Hold on, put your pitchforks down - outside of using Grammerly & Emacs for grammatical corrections - not a single character was generated or modified by computers; a non-insignificant portion of my first draft originating on pen & paper. No AI is ~~weird and crazy~~ imaginative enough to write like I do. The only successful AI contribution you'll find is a single image, the map, which I heavily edited. This post will go over how I generated and modified an image using AI, which I believe brought some value to the work, and cover a few quick thoughts about AI towards the end.
Let's be clear, I can't draw, but I wanted a map which I believed would improve the story I was working on. After getting abysmal results by prompting AI with text only I decided to use "Diffuse the Rest," a Stable Diffusion tool that allows you to provide a reference image + description to fine tune what you're looking for. I gave it this Microsoft Paint looking drawing:
and after a number of outputs, selected this one to work on:
The image is way better than the one I provided, but had I used it as is, I still feel it would have decreased the quality of my work instead of increasing it. After firing up Gimp I cropped out the top and bottom, expanded the ocean and separated the landmasses, then copied the top right corner of the large landmass to replace the bottom left that got cut off. Now we've got something that looks like concept art: not horrible, and gets the basic idea across, but it's still due for a lot more detail.
The next thing I did was add some texture to make it look more map like. I duplicated the layer in Gimp and applied the "Cartoon" filter to both for some texture. The top layer had a much lower effect strength to give it a more textured look, while the lower layer had a higher effect strength that looked a lot like mountains or other terrain features. Creating a layer mask allowed me to brush over spots to display the lower layer in certain areas, giving it some much needed features.
At this point I'd made it to where I felt it may improve the work instead of detracting from it - at least after labels and borders were added, but the colors seemed artificial and out of place. Luckily, however, this is when PhotoFunia could step in and apply a sketch effect to the image.
At this point I was pretty happy with how it was looking, it was close to what I envisioned and looked very visually appealing while still being a good way to portray information. All that was left was to make the white background transparent, add some minor details, and add the labels and borders. Below is the exact image I wound up using:
Overall, I'm very satisfied with how it turned out, and if you're working on a creative project, I'd recommend attempting something like this. It's not a central part of the work, but it improved the chapter a fair bit, and was doable despite lacking the talent and not intending to allocate a budget to my making of a free to read and share story.
The AI Generated Elephant in the Room
If you've read my non-fiction writing before, you'll know that I think AI will find its place around the skill floor as opposed to the skill ceiling. As you saw with my input, I have absolutely zero drawing talent, but with some elbow grease and an existing creative direction before and after generating an image I was able to get something well above what I could have otherwise accomplished. Outside of the lowest common denominators like stock photos for the sole purpose of a link preview being eye catching, however, I doubt AI will be wholesale replacing most creative works anytime soon. I can assure you that I tried numerous times to describe the map without providing a reference image, and if I used one of those outputs (or even just the unedited output after providing the reference image) it would have decreased the quality of my work instead of improving it.
I'm going to go out on a limb and expect that AI image, text, and video is all going to find its place in slop & generic content (such as AI generated slop replacing article spinners and stock photos respectively) and otherwise be used in a supporting role for various creative endeavors. For people working on projects like I'm working on (e.g. intended budget $0) it's helpful to have an AI capable of doing legwork - enabling projects to exist or be improved in ways they otherwise wouldn't have. I'm also guessing it'll find its way into more professional settings for grunt work - think a picture frame or fake TV show that would exist in the background of an animated project - likely a detail most people probably wouldn't notice, but that would save the creators time and money and/or allow them to focus more on the essential aspects of said work. Beyond that, as I've predicted before: I expect plenty of emails will be generated from a short list of bullet points, only to be summarized by the recipient's AI back into bullet points.
I will also make a prediction counter to what seems mainstream: AI is about to peak for a while. The start of AI image generation was with Google's DeepDream in 2015 - image recognition software that could be run in reverse to "recognize" patterns where there were none, effectively generating an image from digital noise or an unrelated image. While I'm not an expert by any means, I don't think we're too far off from that a decade later, just using very fine tuned tools that develop more coherent images. I guess that we're close to maxing out how efficiently we're able to generate images and video in that manner, and the hard caps on how much creative direction we can have when using AI - as well as the limits to how long we can keep it coherent (e.g. long videos or a chronologically consistent set of images) - will prevent AI from progressing too far beyond what it is currently unless/until another breakthrough occurs.
-
@ 8125b911:a8400883
2025-04-25 07:02:35In Nostr, all data is stored as events. Decentralization is achieved by storing events on multiple relays, with signatures proving the ownership of these events. However, if you truly want to own your events, you should run your own relay to store them. Otherwise, if the relays you use fail or intentionally delete your events, you'll lose them forever.
For most people, running a relay is complex and costly. To solve this issue, I developed nostr-relay-tray, a relay that can be easily run on a personal computer and accessed over the internet.
Project URL: https://github.com/CodyTseng/nostr-relay-tray
This article will guide you through using nostr-relay-tray to run your own relay.
Download
Download the installation package for your operating system from the GitHub Release Page.
| Operating System | File Format | | --------------------- | ---------------------------------- | | Windows |
nostr-relay-tray.Setup.x.x.x.exe
| | macOS (Apple Silicon) |nostr-relay-tray-x.x.x-arm64.dmg
| | macOS (Intel) |nostr-relay-tray-x.x.x.dmg
| | Linux | You should know which one to use |Installation
Since this app isn’t signed, you may encounter some obstacles during installation. Once installed, an ostrich icon will appear in the status bar. Click on the ostrich icon, and you'll see a menu where you can click the "Dashboard" option to open the relay's control panel for further configuration.
macOS Users:
- On first launch, go to "System Preferences > Security & Privacy" and click "Open Anyway."
- If you encounter a "damaged" message, run the following command in the terminal to remove the restrictions:
bash sudo xattr -rd com.apple.quarantine /Applications/nostr-relay-tray.app
Windows Users:
- On the security warning screen, click "More Info > Run Anyway."
Connecting
By default, nostr-relay-tray is only accessible locally through
ws://localhost:4869/
, which makes it quite limited. Therefore, we need to expose it to the internet.In the control panel, click the "Proxy" tab and toggle the switch. You will then receive a "Public address" that you can use to access your relay from anywhere. It's that simple.
Next, add this address to your relay list and position it as high as possible in the list. Most clients prioritize connecting to relays that appear at the top of the list, and relays lower in the list are often ignored.
Restrictions
Next, we need to set up some restrictions to prevent the relay from storing events that are irrelevant to you and wasting storage space. nostr-relay-tray allows for flexible and fine-grained configuration of which events to accept, but some of this is more complex and will not be covered here. If you're interested, you can explore this further later.
For now, I'll introduce a simple and effective strategy: WoT (Web of Trust). You can enable this feature in the "WoT & PoW" tab. Before enabling, you'll need to input your pubkey.
There's another important parameter,
Depth
, which represents the relationship depth between you and others. Someone you follow has a depth of 1, someone they follow has a depth of 2, and so on.- Setting this parameter to 0 means your relay will only accept your own events.
- Setting it to 1 means your relay will accept events from you and the people you follow.
- Setting it to 2 means your relay will accept events from you, the people you follow, and the people they follow.
Currently, the maximum value for this parameter is 2.
Conclusion
You've now successfully run your own relay and set a simple restriction to prevent it from storing irrelevant events.
If you encounter any issues during use, feel free to submit an issue on GitHub, and I'll respond as soon as possible.
Not your relay, not your events.
-
@ e6817453:b0ac3c39
2025-01-05 14:29:17The Rise of Graph RAGs and the Quest for Data Quality
As we enter a new year, it’s impossible to ignore the boom of retrieval-augmented generation (RAG) systems, particularly those leveraging graph-based approaches. The previous year saw a surge in advancements and discussions about Graph RAGs, driven by their potential to enhance large language models (LLMs), reduce hallucinations, and deliver more reliable outputs. Let’s dive into the trends, challenges, and strategies for making the most of Graph RAGs in artificial intelligence.
Booming Interest in Graph RAGs
Graph RAGs have dominated the conversation in AI circles. With new research papers and innovations emerging weekly, it’s clear that this approach is reshaping the landscape. These systems, especially those developed by tech giants like Microsoft, demonstrate how graphs can:
- Enhance LLM Outputs: By grounding responses in structured knowledge, graphs significantly reduce hallucinations.
- Support Complex Queries: Graphs excel at managing linked and connected data, making them ideal for intricate problem-solving.
Conferences on linked and connected data have increasingly focused on Graph RAGs, underscoring their central role in modern AI systems. However, the excitement around this technology has brought critical questions to the forefront: How do we ensure the quality of the graphs we’re building, and are they genuinely aligned with our needs?
Data Quality: The Foundation of Effective Graphs
A high-quality graph is the backbone of any successful RAG system. Constructing these graphs from unstructured data requires attention to detail and rigorous processes. Here’s why:
- Richness of Entities: Effective retrieval depends on graphs populated with rich, detailed entities.
- Freedom from Hallucinations: Poorly constructed graphs amplify inaccuracies rather than mitigating them.
Without robust data quality, even the most sophisticated Graph RAGs become ineffective. As a result, the focus must shift to refining the graph construction process. Improving data strategy and ensuring meticulous data preparation is essential to unlock the full potential of Graph RAGs.
Hybrid Graph RAGs and Variations
While standard Graph RAGs are already transformative, hybrid models offer additional flexibility and power. Hybrid RAGs combine structured graph data with other retrieval mechanisms, creating systems that:
- Handle diverse data sources with ease.
- Offer improved adaptability to complex queries.
Exploring these variations can open new avenues for AI systems, particularly in domains requiring structured and unstructured data processing.
Ontology: The Key to Graph Construction Quality
Ontology — defining how concepts relate within a knowledge domain — is critical for building effective graphs. While this might sound abstract, it’s a well-established field blending philosophy, engineering, and art. Ontology engineering provides the framework for:
- Defining Relationships: Clarifying how concepts connect within a domain.
- Validating Graph Structures: Ensuring constructed graphs are logically sound and align with domain-specific realities.
Traditionally, ontologists — experts in this discipline — have been integral to large enterprises and research teams. However, not every team has access to dedicated ontologists, leading to a significant challenge: How can teams without such expertise ensure the quality of their graphs?
How to Build Ontology Expertise in a Startup Team
For startups and smaller teams, developing ontology expertise may seem daunting, but it is achievable with the right approach:
- Assign a Knowledge Champion: Identify a team member with a strong analytical mindset and give them time and resources to learn ontology engineering.
- Provide Training: Invest in courses, workshops, or certifications in knowledge graph and ontology creation.
- Leverage Partnerships: Collaborate with academic institutions, domain experts, or consultants to build initial frameworks.
- Utilize Tools: Introduce ontology development tools like Protégé, OWL, or SHACL to simplify the creation and validation process.
- Iterate with Feedback: Continuously refine ontologies through collaboration with domain experts and iterative testing.
So, it is not always affordable for a startup to have a dedicated oncologist or knowledge engineer in a team, but you could involve consulters or build barefoot experts.
You could read about barefoot experts in my article :
Even startups can achieve robust and domain-specific ontology frameworks by fostering in-house expertise.
How to Find or Create Ontologies
For teams venturing into Graph RAGs, several strategies can help address the ontology gap:
-
Leverage Existing Ontologies: Many industries and domains already have open ontologies. For instance:
-
Public Knowledge Graphs: Resources like Wikipedia’s graph offer a wealth of structured knowledge.
- Industry Standards: Enterprises such as Siemens have invested in creating and sharing ontologies specific to their fields.
-
Business Framework Ontology (BFO): A valuable resource for enterprises looking to define business processes and structures.
-
Build In-House Expertise: If budgets allow, consider hiring knowledge engineers or providing team members with the resources and time to develop expertise in ontology creation.
-
Utilize LLMs for Ontology Construction: Interestingly, LLMs themselves can act as a starting point for ontology development:
-
Prompt-Based Extraction: LLMs can generate draft ontologies by leveraging their extensive training on graph data.
- Domain Expert Refinement: Combine LLM-generated structures with insights from domain experts to create tailored ontologies.
Parallel Ontology and Graph Extraction
An emerging approach involves extracting ontologies and graphs in parallel. While this can streamline the process, it presents challenges such as:
- Detecting Hallucinations: Differentiating between genuine insights and AI-generated inaccuracies.
- Ensuring Completeness: Ensuring no critical concepts are overlooked during extraction.
Teams must carefully validate outputs to ensure reliability and accuracy when employing this parallel method.
LLMs as Ontologists
While traditionally dependent on human expertise, ontology creation is increasingly supported by LLMs. These models, trained on vast amounts of data, possess inherent knowledge of many open ontologies and taxonomies. Teams can use LLMs to:
- Generate Skeleton Ontologies: Prompt LLMs with domain-specific information to draft initial ontology structures.
- Validate and Refine Ontologies: Collaborate with domain experts to refine these drafts, ensuring accuracy and relevance.
However, for validation and graph construction, formal tools such as OWL, SHACL, and RDF should be prioritized over LLMs to minimize hallucinations and ensure robust outcomes.
Final Thoughts: Unlocking the Power of Graph RAGs
The rise of Graph RAGs underscores a simple but crucial correlation: improving graph construction and data quality directly enhances retrieval systems. To truly harness this power, teams must invest in understanding ontologies, building quality graphs, and leveraging both human expertise and advanced AI tools.
As we move forward, the interplay between Graph RAGs and ontology engineering will continue to shape the future of AI. Whether through adopting existing frameworks or exploring innovative uses of LLMs, the path to success lies in a deep commitment to data quality and domain understanding.
Have you explored these technologies in your work? Share your experiences and insights — and stay tuned for more discussions on ontology extraction and its role in AI advancements. Cheers to a year of innovation!
-
@ a4a6b584:1e05b95b
2025-01-02 18:13:31The Four-Layer Framework
Layer 1: Zoom Out
Start by looking at the big picture. What’s the subject about, and why does it matter? Focus on the overarching ideas and how they fit together. Think of this as the 30,000-foot view—it’s about understanding the "why" and "how" before diving into the "what."
Example: If you’re learning programming, start by understanding that it’s about giving logical instructions to computers to solve problems.
- Tip: Keep it simple. Summarize the subject in one or two sentences and avoid getting bogged down in specifics at this stage.
Once you have the big picture in mind, it’s time to start breaking it down.
Layer 2: Categorize and Connect
Now it’s time to break the subject into categories—like creating branches on a tree. This helps your brain organize information logically and see connections between ideas.
Example: Studying biology? Group concepts into categories like cells, genetics, and ecosystems.
- Tip: Use headings or labels to group similar ideas. Jot these down in a list or simple diagram to keep track.
With your categories in place, you’re ready to dive into the details that bring them to life.
Layer 3: Master the Details
Once you’ve mapped out the main categories, you’re ready to dive deeper. This is where you learn the nuts and bolts—like formulas, specific techniques, or key terminology. These details make the subject practical and actionable.
Example: In programming, this might mean learning the syntax for loops, conditionals, or functions in your chosen language.
- Tip: Focus on details that clarify the categories from Layer 2. Skip anything that doesn’t add to your understanding.
Now that you’ve mastered the essentials, you can expand your knowledge to include extra material.
Layer 4: Expand Your Horizons
Finally, move on to the extra material—less critical facts, trivia, or edge cases. While these aren’t essential to mastering the subject, they can be useful in specialized discussions or exams.
Example: Learn about rare programming quirks or historical trivia about a language’s development.
- Tip: Spend minimal time here unless it’s necessary for your goals. It’s okay to skim if you’re short on time.
Pro Tips for Better Learning
1. Use Active Recall and Spaced Repetition
Test yourself without looking at notes. Review what you’ve learned at increasing intervals—like after a day, a week, and a month. This strengthens memory by forcing your brain to actively retrieve information.
2. Map It Out
Create visual aids like diagrams or concept maps to clarify relationships between ideas. These are particularly helpful for organizing categories in Layer 2.
3. Teach What You Learn
Explain the subject to someone else as if they’re hearing it for the first time. Teaching exposes any gaps in your understanding and helps reinforce the material.
4. Engage with LLMs and Discuss Concepts
Take advantage of tools like ChatGPT or similar large language models to explore your topic in greater depth. Use these tools to:
- Ask specific questions to clarify confusing points.
- Engage in discussions to simulate real-world applications of the subject.
- Generate examples or analogies that deepen your understanding.Tip: Use LLMs as a study partner, but don’t rely solely on them. Combine these insights with your own critical thinking to develop a well-rounded perspective.
Get Started
Ready to try the Four-Layer Method? Take 15 minutes today to map out the big picture of a topic you’re curious about—what’s it all about, and why does it matter? By building your understanding step by step, you’ll master the subject with less stress and more confidence.
-
@ fe32298e:20516265
2024-12-16 20:59:13Today I learned how to install NVapi to monitor my GPUs in Home Assistant.
NVApi is a lightweight API designed for monitoring NVIDIA GPU utilization and enabling automated power management. It provides real-time GPU metrics, supports integration with tools like Home Assistant, and offers flexible power management and PCIe link speed management based on workload and thermal conditions.
- GPU Utilization Monitoring: Utilization, memory usage, temperature, fan speed, and power consumption.
- Automated Power Limiting: Adjusts power limits dynamically based on temperature thresholds and total power caps, configurable per GPU or globally.
- Cross-GPU Coordination: Total power budget applies across multiple GPUs in the same system.
- PCIe Link Speed Management: Controls minimum and maximum PCIe link speeds with idle thresholds for power optimization.
- Home Assistant Integration: Uses the built-in RESTful platform and template sensors.
Getting the Data
sudo apt install golang-go git clone https://github.com/sammcj/NVApi.git cd NVapi go run main.go -port 9999 -rate 1 curl http://localhost:9999/gpu
Response for a single GPU:
[ { "index": 0, "name": "NVIDIA GeForce RTX 4090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 16, "power_limit_watts": 450, "memory_total_gb": 23.99, "memory_used_gb": 0.46, "memory_free_gb": 23.52, "memory_usage_percent": 2, "temperature": 38, "processes": [], "pcie_link_state": "not managed" } ]
Response for multiple GPUs:
[ { "index": 0, "name": "NVIDIA GeForce RTX 3090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 14, "power_limit_watts": 350, "memory_total_gb": 24, "memory_used_gb": 0.43, "memory_free_gb": 23.57, "memory_usage_percent": 2, "temperature": 36, "processes": [], "pcie_link_state": "not managed" }, { "index": 1, "name": "NVIDIA RTX A4000", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 10, "power_limit_watts": 140, "memory_total_gb": 15.99, "memory_used_gb": 0.56, "memory_free_gb": 15.43, "memory_usage_percent": 3, "temperature": 41, "processes": [], "pcie_link_state": "not managed" } ]
Start at Boot
Create
/etc/systemd/system/nvapi.service
:``` [Unit] Description=Run NVapi After=network.target
[Service] Type=simple Environment="GOPATH=/home/ansible/go" WorkingDirectory=/home/ansible/NVapi ExecStart=/usr/bin/go run main.go -port 9999 -rate 1 Restart=always User=ansible
Environment="GPU_TEMP_CHECK_INTERVAL=5"
Environment="GPU_TOTAL_POWER_CAP=400"
Environment="GPU_0_LOW_TEMP=40"
Environment="GPU_0_MEDIUM_TEMP=70"
Environment="GPU_0_LOW_TEMP_LIMIT=135"
Environment="GPU_0_MEDIUM_TEMP_LIMIT=120"
Environment="GPU_0_HIGH_TEMP_LIMIT=100"
Environment="GPU_1_LOW_TEMP=45"
Environment="GPU_1_MEDIUM_TEMP=75"
Environment="GPU_1_LOW_TEMP_LIMIT=140"
Environment="GPU_1_MEDIUM_TEMP_LIMIT=125"
Environment="GPU_1_HIGH_TEMP_LIMIT=110"
[Install] WantedBy=multi-user.target ```
Home Assistant
Add to Home Assistant
configuration.yaml
and restart HA (completely).For a single GPU, this works: ``` sensor: - platform: rest name: MYPC GPU Information resource: http://mypc:9999 method: GET headers: Content-Type: application/json value_template: "{{ value_json[0].index }}" json_attributes: - name - gpu_utilisation - memory_utilisation - power_watts - power_limit_watts - memory_total_gb - memory_used_gb - memory_free_gb - memory_usage_percent - temperature scan_interval: 1 # seconds
- platform: template sensors: mypc_gpu_0_gpu: friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} GPU" value_template: "{{ state_attr('sensor.mypc_gpu_information', 'gpu_utilisation') }}" unit_of_measurement: "%" mypc_gpu_0_memory: friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Memory" value_template: "{{ state_attr('sensor.mypc_gpu_information', 'memory_utilisation') }}" unit_of_measurement: "%" mypc_gpu_0_power: friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power" value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_watts') }}" unit_of_measurement: "W" mypc_gpu_0_power_limit: friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power Limit" value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_limit_watts') }}" unit_of_measurement: "W" mypc_gpu_0_temperature: friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Temperature" value_template: "{{ state_attr('sensor.mypc_gpu_information', 'temperature') }}" unit_of_measurement: "°C" ```
For multiple GPUs: ``` rest: scan_interval: 1 resource: http://mypc:9999 sensor: - name: "MYPC GPU0 Information" value_template: "{{ value_json[0].index }}" json_attributes_path: "$.0" json_attributes: - name - gpu_utilisation - memory_utilisation - power_watts - power_limit_watts - memory_total_gb - memory_used_gb - memory_free_gb - memory_usage_percent - temperature - name: "MYPC GPU1 Information" value_template: "{{ value_json[1].index }}" json_attributes_path: "$.1" json_attributes: - name - gpu_utilisation - memory_utilisation - power_watts - power_limit_watts - memory_total_gb - memory_used_gb - memory_free_gb - memory_usage_percent - temperature
-
platform: template sensors: mypc_gpu_0_gpu: friendly_name: "MYPC GPU0 GPU" value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'gpu_utilisation') }}" unit_of_measurement: "%" mypc_gpu_0_memory: friendly_name: "MYPC GPU0 Memory" value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'memory_utilisation') }}" unit_of_measurement: "%" mypc_gpu_0_power: friendly_name: "MYPC GPU0 Power" value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_watts') }}" unit_of_measurement: "W" mypc_gpu_0_power_limit: friendly_name: "MYPC GPU0 Power Limit" value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_limit_watts') }}" unit_of_measurement: "W" mypc_gpu_0_temperature: friendly_name: "MYPC GPU0 Temperature" value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'temperature') }}" unit_of_measurement: "C"
-
platform: template sensors: mypc_gpu_1_gpu: friendly_name: "MYPC GPU1 GPU" value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'gpu_utilisation') }}" unit_of_measurement: "%" mypc_gpu_1_memory: friendly_name: "MYPC GPU1 Memory" value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'memory_utilisation') }}" unit_of_measurement: "%" mypc_gpu_1_power: friendly_name: "MYPC GPU1 Power" value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_watts') }}" unit_of_measurement: "W" mypc_gpu_1_power_limit: friendly_name: "MYPC GPU1 Power Limit" value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_limit_watts') }}" unit_of_measurement: "W" mypc_gpu_1_temperature: friendly_name: "MYPC GPU1 Temperature" value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'temperature') }}" unit_of_measurement: "C"
```
Basic entity card:
type: entities entities: - entity: sensor.mypc_gpu_0_gpu secondary_info: last-updated - entity: sensor.mypc_gpu_0_memory secondary_info: last-updated - entity: sensor.mypc_gpu_0_power secondary_info: last-updated - entity: sensor.mypc_gpu_0_power_limit secondary_info: last-updated - entity: sensor.mypc_gpu_0_temperature secondary_info: last-updated
Ansible Role
```
-
name: install go become: true package: name: golang-go state: present
-
name: git clone git: repo: "https://github.com/sammcj/NVApi.git" dest: "/home/ansible/NVapi" update: yes force: true
go run main.go -port 9999 -rate 1
-
name: install systemd service become: true copy: src: nvapi.service dest: /etc/systemd/system/nvapi.service
-
name: Reload systemd daemons, enable, and restart nvapi become: true systemd: name: nvapi daemon_reload: yes enabled: yes state: restarted ```
-
@ c631e267:c2b78d3e
2025-04-18 15:53:07Verstand ohne Gefühl ist unmenschlich; \ Gefühl ohne Verstand ist Dummheit. \ Egon Bahr
Seit Jahren werden wir darauf getrimmt, dass Fakten eigentlich gefühlt seien. Aber nicht alles ist relativ und nicht alles ist nach Belieben interpretierbar. Diese Schokoladenhasen beispielsweise, die an Ostern in unseren Gefilden typisch sind, «ostern» zwar nicht, sondern sie sitzen in der Regel, trotzdem verwandelt sie das nicht in «Sitzhasen».
Nichts soll mehr gelten, außer den immer invasiveren Gesetzen. Die eigenen Traditionen und Wurzeln sind potenziell «pfui», um andere Menschen nicht auszuschließen, aber wir mögen uns toleranterweise an die fremden Symbole und Rituale gewöhnen. Dabei ist es mir prinzipiell völlig egal, ob und wann jemand ein Fastenbrechen feiert, am Karsamstag oder jedem anderen Tag oder nie – aber bitte freiwillig.
Und vor allem: Lasst die Finger von den Kindern! In Bern setzten kürzlich Demonstranten ein Zeichen gegen die zunehmende Verbreitung woker Ideologie im Bildungssystem und forderten ein Ende der sexuellen Indoktrination von Schulkindern.
Wenn es nicht wegen des heiklen Themas Migration oder wegen des Regenbogens ist, dann wegen des Klimas. Im Rahmen der «Netto Null»-Agenda zum Kampf gegen das angeblich teuflische CO2 sollen die Menschen ihre Ernährungsgewohnheiten komplett ändern. Nach dem Willen von Produzenten synthetischer Lebensmittel, wie Bill Gates, sollen wir baldmöglichst praktisch auf Fleisch und alle Milchprodukte wie Milch und Käse verzichten. Ein lukratives Geschäftsmodell, das neben der EU aktuell auch von einem britischen Lobby-Konsortium unterstützt wird.
Sollten alle ideologischen Stricke zu reißen drohen, ist da immer noch «der Putin». Die Unions-Europäer offenbaren sich dabei ständig mehr als Vertreter der Rüstungsindustrie. Allen voran zündelt Deutschland an der Kriegslunte, angeführt von einem scheinbar todesmutigen Kanzlerkandidaten Friedrich Merz. Nach dessen erneuter Aussage, «Taurus»-Marschflugkörper an Kiew liefern zu wollen, hat Russland eindeutig klargestellt, dass man dies als direkte Kriegsbeteiligung werten würde – «mit allen sich daraus ergebenden Konsequenzen für Deutschland».
Wohltuend sind Nachrichten über Aktivitäten, die sich der allgemeinen Kriegstreiberei entgegenstellen oder diese öffentlich hinterfragen. Dazu zählt auch ein Kongress kritischer Psychologen und Psychotherapeuten, der letzte Woche in Berlin stattfand. Die vielen Vorträge im Kontext von «Krieg und Frieden» deckten ein breites Themenspektrum ab, darunter Friedensarbeit oder die Notwendigkeit einer «Pädagogik der Kriegsuntüchtigkeit».
Der heutige «stille Freitag», an dem Christen des Leidens und Sterbens von Jesus gedenken, ist vielleicht unabhängig von jeder religiösen oder spirituellen Prägung eine passende Einladung zur Reflexion. In der Ruhe liegt die Kraft. In diesem Sinne wünsche ich Ihnen frohe Ostertage!
[Titelbild: Pixabay]
Dieser Beitrag wurde mit dem Pareto-Client geschrieben und ist zuerst auf Transition News erschienen.
-
@ e691f4df:1099ad65
2025-04-24 18:56:12Viewing Bitcoin Through the Light of Awakening
Ankh & Ohm Capital’s Overview of the Psycho-Spiritual Nature of Bitcoin
Glossary:
I. Preface: The Logos of Our Logo
II. An Oracular Introduction
III. Alchemizing Greed
IV. Layers of Fractalized Thought
V. Permissionless Individuation
VI. Dispelling Paradox Through Resonance
VII. Ego Deflation
VIII. The Coin of Great Price
Preface: The Logos of Our Logo
Before we offer our lens on Bitcoin, it’s important to illuminate the meaning behind Ankh & Ohm’s name and symbol. These elements are not ornamental—they are foundational, expressing the cosmological principles that guide our work.
Our mission is to bridge the eternal with the practical. As a Bitcoin-focused family office and consulting firm, we understand capital not as an end, but as a tool—one that, when properly aligned, becomes a vehicle for divine order. We see Bitcoin not simply as a technological innovation but as an emanation of the Divine Logos—a harmonic expression of truth, transparency, and incorruptible structure. Both the beginning and the end, the Alpha and Omega.
The Ankh (☥), an ancient symbol of eternal life, is a key to the integration of opposites. It unites spirit and matter, force and form, continuity and change. It reminds us that capital, like Life, must not only be generative, but regenerative; sacred. Money must serve Life, not siphon from it.
The Ohm (Ω) holds a dual meaning. In physics, it denotes a unit of electrical resistance—the formative tension that gives energy coherence. In the Vedic tradition, Om (ॐ) is the primordial vibration—the sound from which all existence unfolds. Together, these symbols affirm a timeless truth: resistance and resonance are both sacred instruments of the Creator.
Ankh & Ohm, then, represents our striving for union, for harmony —between the flow of life and intentional structure, between incalculable abundance and measured restraint, between the lightbulb’s electrical impulse and its light-emitting filament. We stand at the threshold where intention becomes action, and where capital is not extracted, but cultivated in rhythm with the cosmos.
We exist to shepherd this transformation, as guides of this threshold —helping families, founders, and institutions align with a deeper order, where capital serves not as the prize, but as a pathway to collective Presence, Purpose, Peace and Prosperity.
An Oracular Introduction
Bitcoin is commonly understood as the first truly decentralized and secure form of digital money—a breakthrough in monetary sovereignty. But this view, while technically correct, is incomplete and spiritually shallow. Bitcoin is more than a tool for economic disruption. Bitcoin represents a mythic threshold: a symbol of the psycho-spiritual shift that many ancient traditions have long foretold.
For millennia, sages and seers have spoken of a coming Golden Age. In the Vedic Yuga cycles, in Plato’s Great Year, in the Eagle and Condor prophecies of the Americas—there exists a common thread: that humanity will emerge from darkness into a time of harmony, cooperation, and clarity. That the veil of illusion (maya, materiality) will thin, and reality will once again become transparent to the transcendent. In such an age, systems based on scarcity, deception, and centralization fall away. A new cosmology takes root—one grounded in balance, coherence, and sacred reciprocity.
But we must ask—how does such a shift happen? How do we cross from the age of scarcity, fear, and domination into one of coherence, abundance, and freedom?
One possible answer lies in the alchemy of incentive.
Bitcoin operates not just on the rules of computer science or Austrian economics, but on something far more old and subtle: the logic of transformation. It transmutes greed—a base instinct rooted in scarcity—into cooperation, transparency, and incorruptibility.
In this light, Bitcoin becomes more than code—it becomes a psychoactive protocol, one that rewires human behavior by aligning individual gain with collective integrity. It is not simply a new form of money. It is a new myth of value. A new operating system for human consciousness.
Bitcoin does not moralize. It harmonizes. It transforms the instinct for self-preservation into a pathway for planetary coherence.
Alchemizing Greed
At the heart of Bitcoin lies the ancient alchemical principle of transmutation: that which is base may be refined into gold.
Greed, long condemned as a vice, is not inherently evil. It is a distorted longing. A warped echo of the drive to preserve life. But in systems built on scarcity and deception, this longing calcifies into hoarding, corruption, and decay.
Bitcoin introduces a new game. A game with memory. A game that makes deception inefficient and truth profitable. It does not demand virtue—it encodes consequence. Its design does not suppress greed; it reprograms it.
In traditional models, game theory often illustrates the fragility of trust. The Prisoner’s Dilemma reveals how self-interest can sabotage collective well-being. But Bitcoin inverts this. It creates an environment where self-interest and integrity converge—where the most rational action is also the most truthful.
Its ledger, immutable and transparent, exposes manipulation for what it is: energetically wasteful and economically self-defeating. Dishonesty burns energy and yields nothing. The network punishes incoherence, not by decree, but by natural law.
This is the spiritual elegance of Bitcoin: it does not suppress greed—it transmutes it. It channels the drive for personal gain into the architecture of collective order. Miners compete not to dominate, but to validate. Nodes collaborate not through trust, but through mathematical proof.
This is not austerity. It is alchemy.
Greed, under Bitcoin, is refined. Tempered. Re-forged into a generative force—no longer parasitic, but harmonic.
Layers of Fractalized Thought Fragments
All living systems are layered. So is the cosmos. So is the human being. So is a musical scale.
At its foundation lies the timechain—the pulsing, incorruptible record of truth. Like the heart, it beats steadily. Every block, like a pulse, affirms its life through continuity. The difficulty adjustment—Bitcoin’s internal calibration—functions like heart rate variability, adapting to pressure while preserving coherence.
Above this base layer is the Lightning Network—a second layer facilitating rapid, efficient transactions. It is the nervous system: transmitting energy, reducing latency, enabling real-time interaction across a distributed whole.
Beyond that, emerging tools like Fedimint and Cashu function like the capillaries—bringing vitality to the extremities, to those underserved by legacy systems. They empower the unbanked, the overlooked, the forgotten. Privacy and dignity in the palms of those the old system refused to see.
And then there is NOSTR—the decentralized protocol for communication and creation. It is the throat chakra, the vocal cords of the “freedom-tech” body. It reclaims speech from the algorithmic overlords, making expression sovereign once more. It is also the reproductive system, as it enables the propagation of novel ideas and protocols in fertile, uncensorable soil.
Each layer plays its part. Not in hierarchy, but in harmony. In holarchy. Bitcoin and other open source protocols grow not through exogenous command, but through endogenous coherence. Like cells in an organism. Like a song.
Imagine the cell as a piece of glass from a shattered holographic plate —by which its perspectival, moving image can be restructured from the single shard. DNA isn’t only a logical script of base pairs, but an evolving progressive song. Its lyrics imbued with wise reflections on relationships. The nucleus sings, the cell responds—not by command, but by memory. Life is not imposed; it is expressed. A reflection of a hidden pattern.
Bitcoin chants this. Each node, a living cell, holds the full timechain—Truth distributed, incorruptible. Remove one, and the whole remains. This isn’t redundancy. It’s a revelation on the power of protection in Truth.
Consensus is communion. Verification becomes a sacred rite—Truth made audible through math.
Not just the signal; the song. A web of self-expression woven from Truth.
No center, yet every point alive with the whole. Like Indra’s Net, each reflects all. This is more than currency and information exchange. It is memory; a self-remembering Mind, unfolding through consensus and code. A Mind reflecting the Truth of reality at the speed of thought.
Heuristics are mental shortcuts—efficient, imperfect, alive. Like cells, they must adapt or decay. To become unbiased is to have self-balancing heuristics which carry feedback loops within them: they listen to the environment, mutate when needed, and survive by resonance with reality. Mutation is not error, but evolution. Its rules are simple, but their expression is dynamic.
What persists is not rigidity, but pattern.
To think clearly is not necessarily to be certain, but to dissolve doubt by listening, adjusting, and evolving thought itself.
To understand Bitcoin is simply to listen—patiently, clearly, as one would to a familiar rhythm returning.
Permissionless Individuation
Bitcoin is a path. One that no one can walk for you.
Said differently, it is not a passive act. It cannot be spoon-fed. Like a spiritual path, it demands initiation, effort, and the willingness to question inherited beliefs.
Because Bitcoin is permissionless, no one can be forced to adopt it. One must choose to engage it—compelled by need, interest, or intuition. Each person who embarks undergoes their own version of the hero’s journey.
Carl Jung called this process Individuation—the reconciliation of fragmented psychic elements into a coherent, mature Self. Bitcoin mirrors this: it invites individuals to confront the unconscious assumptions of the fiat paradigm, and to re-integrate their relationship to time, value, and agency.
In Western traditions—alchemy, Christianity, Kabbalah—the individual is sacred, and salvation is personal. In Eastern systems—Daoism, Buddhism, the Vedas—the self is ultimately dissolved into the cosmic whole. Bitcoin, in a paradoxical way, echoes both: it empowers the individual, while aligning them with a holistic, transcendent order.
To truly see Bitcoin is to allow something false to die. A belief. A habit. A self-concept.
In that death—a space opens for deeper connection with the Divine itSelf.
In that dissolution, something luminous is reborn.
After the passing, Truth becomes resurrected.
Dispelling Paradox Through Resonance
There is a subtle paradox encoded into the hero’s journey: each starts in solidarity, yet the awakening affects the collective.
No one can be forced into understanding Bitcoin. Like a spiritual truth, it must be seen. And yet, once seen, it becomes nearly impossible to unsee—and easier for others to glimpse. The pattern catches.
This phenomenon mirrors the concept of morphic resonance, as proposed and empirically tested by biologist Rupert Sheldrake. Once a critical mass of individuals begins to embody a new behavior or awareness, it becomes easier—instinctive—for others to follow suit. Like the proverbial hundredth monkey who begins to wash the fruit in the sea water, and suddenly, monkeys across islands begin doing the same—without ever meeting.
When enough individuals embody a pattern, it ripples outward. Not through propaganda, but through field effect and wave propagation. It becomes accessible, instinctive, familiar—even across great distance.
Bitcoin spreads in this way. Not through centralized broadcast, but through subtle resonance. Each new node, each individual who integrates the protocol into their life, strengthens the signal for others. The protocol doesn’t shout; it hums, oscillates and vibrates——persistently, coherently, patiently.
One awakens. Another follows. The current builds. What was fringe becomes familiar. What was radical becomes obvious.
This is the sacred geometry of spiritual awakening. One awakens, another follows, and soon the fluidic current is strong enough to carry the rest. One becomes two, two become many, and eventually the many become One again. This tessellation reverberates through the human aura, not as ideology, but as perceivable pattern recognition.
Bitcoin’s most powerful marketing tool is truth. Its most compelling evangelist is reality. Its most unstoppable force is resonance.
Therefore, Bitcoin is not just financial infrastructure—it is psychic scaffolding. It is part of the subtle architecture through which new patterns of coherence ripple across the collective field.
The training wheels from which humanity learns to embody Peace and Prosperity.
Ego Deflation
The process of awakening is not linear, and its beginning is rarely gentle—it usually begins with disruption, with ego inflation and destruction.
To individuate is to shape a center; to recognize peripherals and create boundaries—to say, “I am.” But without integration, the ego tilts—collapsing into void or inflating into noise. Fiat reflects this pathology: scarcity hoarded, abundance simulated. Stagnation becomes disguised as safety, and inflation masquerades as growth.
In other words, to become whole, the ego must first rise—claiming agency, autonomy, and identity. However, when left unbalanced, it inflates, or implodes. It forgets its context. It begins to consume rather than connect. And so the process must reverse: what inflates must deflate.
In the fiat paradigm, this inflation is literal. More is printed, and ethos is diluted. Savings decay. Meaning erodes. Value is abstracted. The economy becomes bloated with inaudible noise. And like the psyche that refuses to confront its own shadow, it begins to collapse under the weight of its own illusions.
But under Bitcoin, time is honored. Value is preserved. Energy is not abstracted but grounded.
Bitcoin is inherently deflationary—in both economic and spiritual senses. With a fixed supply, it reveals what is truly scarce. Not money, not status—but the finite number of heartbeats we each carry.
To see Bitcoin is to feel that limit in one’s soul. To hold Bitcoin is to feel Time’s weight again. To sense the importance of Bitcoin is to feel the value of preserved, potential energy. It is to confront the reality that what matters cannot be printed, inflated, or faked. In this way, Bitcoin gently confronts the ego—not through punishment, but through clarity.
Deflation, rightly understood, is not collapse—it is refinement. It strips away illusion, bloat, and excess. It restores the clarity of essence.
Spiritually, this is liberation.
The Coin of Great Price
There is an ancient parable told by a wise man:
“The kingdom of heaven is like a merchant seeking fine pearls, who, upon finding one of great price, sold all he had and bought it.”
Bitcoin is such a pearl.
But the ledger is more than a chest full of treasure. It is a key to the heart of things.
It is not just software—it is sacrament.
A symbol of what cannot be corrupted. A mirror of divine order etched into code. A map back to the sacred center.
It reflects what endures. It encodes what cannot be falsified. It remembers what we forgot: that Truth, when aligned with form, becomes Light once again.
Its design is not arbitrary. It speaks the language of life itself—
The elliptic orbits of the planets mirrored in its cryptography,
The logarithmic spiral of the nautilus shell discloses its adoption rate,
The interconnectivity of mycelium in soil reflect the network of nodes in cyberspace,
A webbed breadth of neurons across synaptic space fires with each new confirmed transaction.
It is geometry in devotion. Stillness in motion.
It is the Logos clothed in protocol.
What this key unlocks is beyond external riches. It is the eternal gold within us.
Clarity. Sovereignty. The unshakeable knowing that what is real cannot be taken. That what is sacred was never for sale.
Bitcoin is not the destination.
It is the Path.
And we—when we are willing to see it—are the Temple it leads back to.
-
@ 6f6b50bb:a848e5a1
2024-12-15 15:09:52Che cosa significherebbe trattare l'IA come uno strumento invece che come una persona?
Dall’avvio di ChatGPT, le esplorazioni in due direzioni hanno preso velocità.
La prima direzione riguarda le capacità tecniche. Quanto grande possiamo addestrare un modello? Quanto bene può rispondere alle domande del SAT? Con quanta efficienza possiamo distribuirlo?
La seconda direzione riguarda il design dell’interazione. Come comunichiamo con un modello? Come possiamo usarlo per un lavoro utile? Quale metafora usiamo per ragionare su di esso?
La prima direzione è ampiamente seguita e enormemente finanziata, e per una buona ragione: i progressi nelle capacità tecniche sono alla base di ogni possibile applicazione. Ma la seconda è altrettanto cruciale per il campo e ha enormi incognite. Siamo solo a pochi anni dall’inizio dell’era dei grandi modelli. Quali sono le probabilità che abbiamo già capito i modi migliori per usarli?
Propongo una nuova modalità di interazione, in cui i modelli svolgano il ruolo di applicazioni informatiche (ad esempio app per telefoni): fornendo un’interfaccia grafica, interpretando gli input degli utenti e aggiornando il loro stato. In questa modalità, invece di essere un “agente” che utilizza un computer per conto dell’essere umano, l’IA può fornire un ambiente informatico più ricco e potente che possiamo utilizzare.
Metafore per l’interazione
Al centro di un’interazione c’è una metafora che guida le aspettative di un utente su un sistema. I primi giorni dell’informatica hanno preso metafore come “scrivanie”, “macchine da scrivere”, “fogli di calcolo” e “lettere” e le hanno trasformate in equivalenti digitali, permettendo all’utente di ragionare sul loro comportamento. Puoi lasciare qualcosa sulla tua scrivania e tornare a prenderlo; hai bisogno di un indirizzo per inviare una lettera. Man mano che abbiamo sviluppato una conoscenza culturale di questi dispositivi, la necessità di queste particolari metafore è scomparsa, e con esse i design di interfaccia skeumorfici che le rafforzavano. Come un cestino o una matita, un computer è ora una metafora di se stesso.
La metafora dominante per i grandi modelli oggi è modello-come-persona. Questa è una metafora efficace perché le persone hanno capacità estese che conosciamo intuitivamente. Implica che possiamo avere una conversazione con un modello e porgli domande; che il modello possa collaborare con noi su un documento o un pezzo di codice; che possiamo assegnargli un compito da svolgere da solo e che tornerà quando sarà finito.
Tuttavia, trattare un modello come una persona limita profondamente il nostro modo di pensare all’interazione con esso. Le interazioni umane sono intrinsecamente lente e lineari, limitate dalla larghezza di banda e dalla natura a turni della comunicazione verbale. Come abbiamo tutti sperimentato, comunicare idee complesse in una conversazione è difficile e dispersivo. Quando vogliamo precisione, ci rivolgiamo invece a strumenti, utilizzando manipolazioni dirette e interfacce visive ad alta larghezza di banda per creare diagrammi, scrivere codice e progettare modelli CAD. Poiché concepiamo i modelli come persone, li utilizziamo attraverso conversazioni lente, anche se sono perfettamente in grado di accettare input diretti e rapidi e di produrre risultati visivi. Le metafore che utilizziamo limitano le esperienze che costruiamo, e la metafora modello-come-persona ci impedisce di esplorare il pieno potenziale dei grandi modelli.
Per molti casi d’uso, e specialmente per il lavoro produttivo, credo che il futuro risieda in un’altra metafora: modello-come-computer.
Usare un’IA come un computer
Sotto la metafora modello-come-computer, interagiremo con i grandi modelli seguendo le intuizioni che abbiamo sulle applicazioni informatiche (sia su desktop, tablet o telefono). Nota che ciò non significa che il modello sarà un’app tradizionale più di quanto il desktop di Windows fosse una scrivania letterale. “Applicazione informatica” sarà un modo per un modello di rappresentarsi a noi. Invece di agire come una persona, il modello agirà come un computer.
Agire come un computer significa produrre un’interfaccia grafica. Al posto del flusso lineare di testo in stile telescrivente fornito da ChatGPT, un sistema modello-come-computer genererà qualcosa che somiglia all’interfaccia di un’applicazione moderna: pulsanti, cursori, schede, immagini, grafici e tutto il resto. Questo affronta limitazioni chiave dell’interfaccia di chat standard modello-come-persona:
-
Scoperta. Un buon strumento suggerisce i suoi usi. Quando l’unica interfaccia è una casella di testo vuota, spetta all’utente capire cosa fare e comprendere i limiti del sistema. La barra laterale Modifica in Lightroom è un ottimo modo per imparare l’editing fotografico perché non si limita a dirti cosa può fare questa applicazione con una foto, ma cosa potresti voler fare. Allo stesso modo, un’interfaccia modello-come-computer per DALL-E potrebbe mostrare nuove possibilità per le tue generazioni di immagini.
-
Efficienza. La manipolazione diretta è più rapida che scrivere una richiesta a parole. Per continuare l’esempio di Lightroom, sarebbe impensabile modificare una foto dicendo a una persona quali cursori spostare e di quanto. Ci vorrebbe un giorno intero per chiedere un’esposizione leggermente più bassa e una vibranza leggermente più alta, solo per vedere come apparirebbe. Nella metafora modello-come-computer, il modello può creare strumenti che ti permettono di comunicare ciò che vuoi più efficientemente e quindi di fare le cose più rapidamente.
A differenza di un’app tradizionale, questa interfaccia grafica è generata dal modello su richiesta. Questo significa che ogni parte dell’interfaccia che vedi è rilevante per ciò che stai facendo in quel momento, inclusi i contenuti specifici del tuo lavoro. Significa anche che, se desideri un’interfaccia più ampia o diversa, puoi semplicemente richiederla. Potresti chiedere a DALL-E di produrre alcuni preset modificabili per le sue impostazioni ispirati da famosi artisti di schizzi. Quando clicchi sul preset Leonardo da Vinci, imposta i cursori per disegni prospettici altamente dettagliati in inchiostro nero. Se clicchi su Charles Schulz, seleziona fumetti tecnicolor 2D a basso dettaglio.
Una bicicletta della mente proteiforme
La metafora modello-come-persona ha una curiosa tendenza a creare distanza tra l’utente e il modello, rispecchiando il divario di comunicazione tra due persone che può essere ridotto ma mai completamente colmato. A causa della difficoltà e del costo di comunicare a parole, le persone tendono a suddividere i compiti tra loro in blocchi grandi e il più indipendenti possibile. Le interfacce modello-come-persona seguono questo schema: non vale la pena dire a un modello di aggiungere un return statement alla tua funzione quando è più veloce scriverlo da solo. Con il sovraccarico della comunicazione, i sistemi modello-come-persona sono più utili quando possono fare un intero blocco di lavoro da soli. Fanno le cose per te.
Questo contrasta con il modo in cui interagiamo con i computer o altri strumenti. Gli strumenti producono feedback visivi in tempo reale e sono controllati attraverso manipolazioni dirette. Hanno un overhead comunicativo così basso che non è necessario specificare un blocco di lavoro indipendente. Ha più senso mantenere l’umano nel loop e dirigere lo strumento momento per momento. Come stivali delle sette leghe, gli strumenti ti permettono di andare più lontano a ogni passo, ma sei ancora tu a fare il lavoro. Ti permettono di fare le cose più velocemente.
Considera il compito di costruire un sito web usando un grande modello. Con le interfacce di oggi, potresti trattare il modello come un appaltatore o un collaboratore. Cercheresti di scrivere a parole il più possibile su come vuoi che il sito appaia, cosa vuoi che dica e quali funzionalità vuoi che abbia. Il modello genererebbe una prima bozza, tu la eseguirai e poi fornirai un feedback. “Fai il logo un po’ più grande”, diresti, e “centra quella prima immagine principale”, e “deve esserci un pulsante di login nell’intestazione”. Per ottenere esattamente ciò che vuoi, invierai una lista molto lunga di richieste sempre più minuziose.
Un’interazione alternativa modello-come-computer sarebbe diversa: invece di costruire il sito web, il modello genererebbe un’interfaccia per te per costruirlo, dove ogni input dell’utente a quell’interfaccia interroga il grande modello sotto il cofano. Forse quando descrivi le tue necessità creerebbe un’interfaccia con una barra laterale e una finestra di anteprima. All’inizio la barra laterale contiene solo alcuni schizzi di layout che puoi scegliere come punto di partenza. Puoi cliccare su ciascuno di essi, e il modello scrive l’HTML per una pagina web usando quel layout e lo visualizza nella finestra di anteprima. Ora che hai una pagina su cui lavorare, la barra laterale guadagna opzioni aggiuntive che influenzano la pagina globalmente, come accoppiamenti di font e schemi di colore. L’anteprima funge da editor WYSIWYG, permettendoti di afferrare elementi e spostarli, modificarne i contenuti, ecc. A supportare tutto ciò è il modello, che vede queste azioni dell’utente e riscrive la pagina per corrispondere ai cambiamenti effettuati. Poiché il modello può generare un’interfaccia per aiutare te e lui a comunicare più efficientemente, puoi esercitare più controllo sul prodotto finale in meno tempo.
La metafora modello-come-computer ci incoraggia a pensare al modello come a uno strumento con cui interagire in tempo reale piuttosto che a un collaboratore a cui assegnare compiti. Invece di sostituire un tirocinante o un tutor, può essere una sorta di bicicletta proteiforme per la mente, una che è sempre costruita su misura esattamente per te e il terreno che intendi attraversare.
Un nuovo paradigma per l’informatica?
I modelli che possono generare interfacce su richiesta sono una frontiera completamente nuova nell’informatica. Potrebbero essere un paradigma del tutto nuovo, con il modo in cui cortocircuitano il modello di applicazione esistente. Dare agli utenti finali il potere di creare e modificare app al volo cambia fondamentalmente il modo in cui interagiamo con i computer. Al posto di una singola applicazione statica costruita da uno sviluppatore, un modello genererà un’applicazione su misura per l’utente e le sue esigenze immediate. Al posto della logica aziendale implementata nel codice, il modello interpreterà gli input dell’utente e aggiornerà l’interfaccia utente. È persino possibile che questo tipo di interfaccia generativa sostituisca completamente il sistema operativo, generando e gestendo interfacce e finestre al volo secondo necessità.
All’inizio, l’interfaccia generativa sarà un giocattolo, utile solo per l’esplorazione creativa e poche altre applicazioni di nicchia. Dopotutto, nessuno vorrebbe un’app di posta elettronica che occasionalmente invia email al tuo ex e mente sulla tua casella di posta. Ma gradualmente i modelli miglioreranno. Anche mentre si spingeranno ulteriormente nello spazio di esperienze completamente nuove, diventeranno lentamente abbastanza affidabili da essere utilizzati per un lavoro reale.
Piccoli pezzi di questo futuro esistono già. Anni fa Jonas Degrave ha dimostrato che ChatGPT poteva fare una buona simulazione di una riga di comando Linux. Allo stesso modo, websim.ai utilizza un LLM per generare siti web su richiesta mentre li navighi. Oasis, GameNGen e DIAMOND addestrano modelli video condizionati sull’azione su singoli videogiochi, permettendoti di giocare ad esempio a Doom dentro un grande modello. E Genie 2 genera videogiochi giocabili da prompt testuali. L’interfaccia generativa potrebbe ancora sembrare un’idea folle, ma non è così folle.
Ci sono enormi domande aperte su come apparirà tutto questo. Dove sarà inizialmente utile l’interfaccia generativa? Come condivideremo e distribuiremo le esperienze che creiamo collaborando con il modello, se esistono solo come contesto di un grande modello? Vorremmo davvero farlo? Quali nuovi tipi di esperienze saranno possibili? Come funzionerà tutto questo in pratica? I modelli genereranno interfacce come codice o produrranno direttamente pixel grezzi?
Non conosco ancora queste risposte. Dovremo sperimentare e scoprirlo!Che cosa significherebbe trattare l'IA come uno strumento invece che come una persona?
Dall’avvio di ChatGPT, le esplorazioni in due direzioni hanno preso velocità.
La prima direzione riguarda le capacità tecniche. Quanto grande possiamo addestrare un modello? Quanto bene può rispondere alle domande del SAT? Con quanta efficienza possiamo distribuirlo?
La seconda direzione riguarda il design dell’interazione. Come comunichiamo con un modello? Come possiamo usarlo per un lavoro utile? Quale metafora usiamo per ragionare su di esso?
La prima direzione è ampiamente seguita e enormemente finanziata, e per una buona ragione: i progressi nelle capacità tecniche sono alla base di ogni possibile applicazione. Ma la seconda è altrettanto cruciale per il campo e ha enormi incognite. Siamo solo a pochi anni dall’inizio dell’era dei grandi modelli. Quali sono le probabilità che abbiamo già capito i modi migliori per usarli?
Propongo una nuova modalità di interazione, in cui i modelli svolgano il ruolo di applicazioni informatiche (ad esempio app per telefoni): fornendo un’interfaccia grafica, interpretando gli input degli utenti e aggiornando il loro stato. In questa modalità, invece di essere un “agente” che utilizza un computer per conto dell’essere umano, l’IA può fornire un ambiente informatico più ricco e potente che possiamo utilizzare.
Metafore per l’interazione
Al centro di un’interazione c’è una metafora che guida le aspettative di un utente su un sistema. I primi giorni dell’informatica hanno preso metafore come “scrivanie”, “macchine da scrivere”, “fogli di calcolo” e “lettere” e le hanno trasformate in equivalenti digitali, permettendo all’utente di ragionare sul loro comportamento. Puoi lasciare qualcosa sulla tua scrivania e tornare a prenderlo; hai bisogno di un indirizzo per inviare una lettera. Man mano che abbiamo sviluppato una conoscenza culturale di questi dispositivi, la necessità di queste particolari metafore è scomparsa, e con esse i design di interfaccia skeumorfici che le rafforzavano. Come un cestino o una matita, un computer è ora una metafora di se stesso.
La metafora dominante per i grandi modelli oggi è modello-come-persona. Questa è una metafora efficace perché le persone hanno capacità estese che conosciamo intuitivamente. Implica che possiamo avere una conversazione con un modello e porgli domande; che il modello possa collaborare con noi su un documento o un pezzo di codice; che possiamo assegnargli un compito da svolgere da solo e che tornerà quando sarà finito.
Tuttavia, trattare un modello come una persona limita profondamente il nostro modo di pensare all’interazione con esso. Le interazioni umane sono intrinsecamente lente e lineari, limitate dalla larghezza di banda e dalla natura a turni della comunicazione verbale. Come abbiamo tutti sperimentato, comunicare idee complesse in una conversazione è difficile e dispersivo. Quando vogliamo precisione, ci rivolgiamo invece a strumenti, utilizzando manipolazioni dirette e interfacce visive ad alta larghezza di banda per creare diagrammi, scrivere codice e progettare modelli CAD. Poiché concepiamo i modelli come persone, li utilizziamo attraverso conversazioni lente, anche se sono perfettamente in grado di accettare input diretti e rapidi e di produrre risultati visivi. Le metafore che utilizziamo limitano le esperienze che costruiamo, e la metafora modello-come-persona ci impedisce di esplorare il pieno potenziale dei grandi modelli.
Per molti casi d’uso, e specialmente per il lavoro produttivo, credo che il futuro risieda in un’altra metafora: modello-come-computer.
Usare un’IA come un computer
Sotto la metafora modello-come-computer, interagiremo con i grandi modelli seguendo le intuizioni che abbiamo sulle applicazioni informatiche (sia su desktop, tablet o telefono). Nota che ciò non significa che il modello sarà un’app tradizionale più di quanto il desktop di Windows fosse una scrivania letterale. “Applicazione informatica” sarà un modo per un modello di rappresentarsi a noi. Invece di agire come una persona, il modello agirà come un computer.
Agire come un computer significa produrre un’interfaccia grafica. Al posto del flusso lineare di testo in stile telescrivente fornito da ChatGPT, un sistema modello-come-computer genererà qualcosa che somiglia all’interfaccia di un’applicazione moderna: pulsanti, cursori, schede, immagini, grafici e tutto il resto. Questo affronta limitazioni chiave dell’interfaccia di chat standard modello-come-persona:
Scoperta. Un buon strumento suggerisce i suoi usi. Quando l’unica interfaccia è una casella di testo vuota, spetta all’utente capire cosa fare e comprendere i limiti del sistema. La barra laterale Modifica in Lightroom è un ottimo modo per imparare l’editing fotografico perché non si limita a dirti cosa può fare questa applicazione con una foto, ma cosa potresti voler fare. Allo stesso modo, un’interfaccia modello-come-computer per DALL-E potrebbe mostrare nuove possibilità per le tue generazioni di immagini.
Efficienza. La manipolazione diretta è più rapida che scrivere una richiesta a parole. Per continuare l’esempio di Lightroom, sarebbe impensabile modificare una foto dicendo a una persona quali cursori spostare e di quanto. Ci vorrebbe un giorno intero per chiedere un’esposizione leggermente più bassa e una vibranza leggermente più alta, solo per vedere come apparirebbe. Nella metafora modello-come-computer, il modello può creare strumenti che ti permettono di comunicare ciò che vuoi più efficientemente e quindi di fare le cose più rapidamente.
A differenza di un’app tradizionale, questa interfaccia grafica è generata dal modello su richiesta. Questo significa che ogni parte dell’interfaccia che vedi è rilevante per ciò che stai facendo in quel momento, inclusi i contenuti specifici del tuo lavoro. Significa anche che, se desideri un’interfaccia più ampia o diversa, puoi semplicemente richiederla. Potresti chiedere a DALL-E di produrre alcuni preset modificabili per le sue impostazioni ispirati da famosi artisti di schizzi. Quando clicchi sul preset Leonardo da Vinci, imposta i cursori per disegni prospettici altamente dettagliati in inchiostro nero. Se clicchi su Charles Schulz, seleziona fumetti tecnicolor 2D a basso dettaglio.
Una bicicletta della mente proteiforme
La metafora modello-come-persona ha una curiosa tendenza a creare distanza tra l’utente e il modello, rispecchiando il divario di comunicazione tra due persone che può essere ridotto ma mai completamente colmato. A causa della difficoltà e del costo di comunicare a parole, le persone tendono a suddividere i compiti tra loro in blocchi grandi e il più indipendenti possibile. Le interfacce modello-come-persona seguono questo schema: non vale la pena dire a un modello di aggiungere un return statement alla tua funzione quando è più veloce scriverlo da solo. Con il sovraccarico della comunicazione, i sistemi modello-come-persona sono più utili quando possono fare un intero blocco di lavoro da soli. Fanno le cose per te.
Questo contrasta con il modo in cui interagiamo con i computer o altri strumenti. Gli strumenti producono feedback visivi in tempo reale e sono controllati attraverso manipolazioni dirette. Hanno un overhead comunicativo così basso che non è necessario specificare un blocco di lavoro indipendente. Ha più senso mantenere l’umano nel loop e dirigere lo strumento momento per momento. Come stivali delle sette leghe, gli strumenti ti permettono di andare più lontano a ogni passo, ma sei ancora tu a fare il lavoro. Ti permettono di fare le cose più velocemente.
Considera il compito di costruire un sito web usando un grande modello. Con le interfacce di oggi, potresti trattare il modello come un appaltatore o un collaboratore. Cercheresti di scrivere a parole il più possibile su come vuoi che il sito appaia, cosa vuoi che dica e quali funzionalità vuoi che abbia. Il modello genererebbe una prima bozza, tu la eseguirai e poi fornirai un feedback. “Fai il logo un po’ più grande”, diresti, e “centra quella prima immagine principale”, e “deve esserci un pulsante di login nell’intestazione”. Per ottenere esattamente ciò che vuoi, invierai una lista molto lunga di richieste sempre più minuziose.
Un’interazione alternativa modello-come-computer sarebbe diversa: invece di costruire il sito web, il modello genererebbe un’interfaccia per te per costruirlo, dove ogni input dell’utente a quell’interfaccia interroga il grande modello sotto il cofano. Forse quando descrivi le tue necessità creerebbe un’interfaccia con una barra laterale e una finestra di anteprima. All’inizio la barra laterale contiene solo alcuni schizzi di layout che puoi scegliere come punto di partenza. Puoi cliccare su ciascuno di essi, e il modello scrive l’HTML per una pagina web usando quel layout e lo visualizza nella finestra di anteprima. Ora che hai una pagina su cui lavorare, la barra laterale guadagna opzioni aggiuntive che influenzano la pagina globalmente, come accoppiamenti di font e schemi di colore. L’anteprima funge da editor WYSIWYG, permettendoti di afferrare elementi e spostarli, modificarne i contenuti, ecc. A supportare tutto ciò è il modello, che vede queste azioni dell’utente e riscrive la pagina per corrispondere ai cambiamenti effettuati. Poiché il modello può generare un’interfaccia per aiutare te e lui a comunicare più efficientemente, puoi esercitare più controllo sul prodotto finale in meno tempo.
La metafora modello-come-computer ci incoraggia a pensare al modello come a uno strumento con cui interagire in tempo reale piuttosto che a un collaboratore a cui assegnare compiti. Invece di sostituire un tirocinante o un tutor, può essere una sorta di bicicletta proteiforme per la mente, una che è sempre costruita su misura esattamente per te e il terreno che intendi attraversare.
Un nuovo paradigma per l’informatica?
I modelli che possono generare interfacce su richiesta sono una frontiera completamente nuova nell’informatica. Potrebbero essere un paradigma del tutto nuovo, con il modo in cui cortocircuitano il modello di applicazione esistente. Dare agli utenti finali il potere di creare e modificare app al volo cambia fondamentalmente il modo in cui interagiamo con i computer. Al posto di una singola applicazione statica costruita da uno sviluppatore, un modello genererà un’applicazione su misura per l’utente e le sue esigenze immediate. Al posto della logica aziendale implementata nel codice, il modello interpreterà gli input dell’utente e aggiornerà l’interfaccia utente. È persino possibile che questo tipo di interfaccia generativa sostituisca completamente il sistema operativo, generando e gestendo interfacce e finestre al volo secondo necessità.
All’inizio, l’interfaccia generativa sarà un giocattolo, utile solo per l’esplorazione creativa e poche altre applicazioni di nicchia. Dopotutto, nessuno vorrebbe un’app di posta elettronica che occasionalmente invia email al tuo ex e mente sulla tua casella di posta. Ma gradualmente i modelli miglioreranno. Anche mentre si spingeranno ulteriormente nello spazio di esperienze completamente nuove, diventeranno lentamente abbastanza affidabili da essere utilizzati per un lavoro reale.
Piccoli pezzi di questo futuro esistono già. Anni fa Jonas Degrave ha dimostrato che ChatGPT poteva fare una buona simulazione di una riga di comando Linux. Allo stesso modo, websim.ai utilizza un LLM per generare siti web su richiesta mentre li navighi. Oasis, GameNGen e DIAMOND addestrano modelli video condizionati sull’azione su singoli videogiochi, permettendoti di giocare ad esempio a Doom dentro un grande modello. E Genie 2 genera videogiochi giocabili da prompt testuali. L’interfaccia generativa potrebbe ancora sembrare un’idea folle, ma non è così folle.
Ci sono enormi domande aperte su come apparirà tutto questo. Dove sarà inizialmente utile l’interfaccia generativa? Come condivideremo e distribuiremo le esperienze che creiamo collaborando con il modello, se esistono solo come contesto di un grande modello? Vorremmo davvero farlo? Quali nuovi tipi di esperienze saranno possibili? Come funzionerà tutto questo in pratica? I modelli genereranno interfacce come codice o produrranno direttamente pixel grezzi?
Non conosco ancora queste risposte. Dovremo sperimentare e scoprirlo!
Tradotto da:\ https://willwhitney.com/computing-inside-ai.htmlhttps://willwhitney.com/computing-inside-ai.html
-
-
@ 40b9c85f:5e61b451
2025-04-24 15:27:02Introduction
Data Vending Machines (DVMs) have emerged as a crucial component of the Nostr ecosystem, offering specialized computational services to clients across the network. As defined in NIP-90, DVMs operate on an apparently simple principle: "data in, data out." They provide a marketplace for data processing where users request specific jobs (like text translation, content recommendation, or AI text generation)
While DVMs have gained significant traction, the current specification faces challenges that hinder widespread adoption and consistent implementation. This article explores some ideas on how we can apply the reflection pattern, a well established approach in RPC systems, to address these challenges and improve the DVM ecosystem's clarity, consistency, and usability.
The Current State of DVMs: Challenges and Limitations
The NIP-90 specification provides a broad framework for DVMs, but this flexibility has led to several issues:
1. Inconsistent Implementation
As noted by hzrd149 in "DVMs were a mistake" every DVM implementation tends to expect inputs in slightly different formats, even while ostensibly following the same specification. For example, a translation request DVM might expect an event ID in one particular format, while an LLM service could expect a "prompt" input that's not even specified in NIP-90.
2. Fragmented Specifications
The DVM specification reserves a range of event kinds (5000-6000), each meant for different types of computational jobs. While creating sub-specifications for each job type is being explored as a possible solution for clarity, in a decentralized and permissionless landscape like Nostr, relying solely on specification enforcement won't be effective for creating a healthy ecosystem. A more comprehensible approach is needed that works with, rather than against, the open nature of the protocol.
3. Ambiguous API Interfaces
There's no standardized way for clients to discover what parameters a specific DVM accepts, which are required versus optional, or what output format to expect. This creates uncertainty and forces developers to rely on documentation outside the protocol itself, if such documentation exists at all.
The Reflection Pattern: A Solution from RPC Systems
The reflection pattern in RPC systems offers a compelling solution to many of these challenges. At its core, reflection enables servers to provide metadata about their available services, methods, and data types at runtime, allowing clients to dynamically discover and interact with the server's API.
In established RPC frameworks like gRPC, reflection serves as a self-describing mechanism where services expose their interface definitions and requirements. In MCP reflection is used to expose the capabilities of the server, such as tools, resources, and prompts. Clients can learn about available capabilities without prior knowledge, and systems can adapt to changes without requiring rebuilds or redeployments. This standardized introspection creates a unified way to query service metadata, making tools like
grpcurl
possible without requiring precompiled stubs.How Reflection Could Transform the DVM Specification
By incorporating reflection principles into the DVM specification, we could create a more coherent and predictable ecosystem. DVMs already implement some sort of reflection through the use of 'nip90params', which allow clients to discover some parameters, constraints, and features of the DVMs, such as whether they accept encryption, nutzaps, etc. However, this approach could be expanded to provide more comprehensive self-description capabilities.
1. Defined Lifecycle Phases
Similar to the Model Context Protocol (MCP), DVMs could benefit from a clear lifecycle consisting of an initialization phase and an operation phase. During initialization, the client and DVM would negotiate capabilities and exchange metadata, with the DVM providing a JSON schema containing its input requirements. nip-89 (or other) announcements can be used to bootstrap the discovery and negotiation process by providing the input schema directly. Then, during the operation phase, the client would interact with the DVM according to the negotiated schema and parameters.
2. Schema-Based Interactions
Rather than relying on rigid specifications for each job type, DVMs could self-advertise their schemas. This would allow clients to understand which parameters are required versus optional, what type validation should occur for inputs, what output formats to expect, and what payment flows are supported. By internalizing the input schema of the DVMs they wish to consume, clients gain clarity on how to interact effectively.
3. Capability Negotiation
Capability negotiation would enable DVMs to advertise their supported features, such as encryption methods, payment options, or specialized functionalities. This would allow clients to adjust their interaction approach based on the specific capabilities of each DVM they encounter.
Implementation Approach
While building DVMCP, I realized that the RPC reflection pattern used there could be beneficial for constructing DVMs in general. Since DVMs already follow an RPC style for their operation, and reflection is a natural extension of this approach, it could significantly enhance and clarify the DVM specification.
A reflection enhanced DVM protocol could work as follows: 1. Discovery: Clients discover DVMs through existing NIP-89 application handlers, input schemas could also be advertised in nip-89 announcements, making the second step unnecessary. 2. Schema Request: Clients request the DVM's input schema for the specific job type they're interested in 3. Validation: Clients validate their request against the provided schema before submission 4. Operation: The job proceeds through the standard NIP-90 flow, but with clearer expectations on both sides
Parallels with Other Protocols
This approach has proven successful in other contexts. The Model Context Protocol (MCP) implements a similar lifecycle with capability negotiation during initialization, allowing any client to communicate with any server as long as they adhere to the base protocol. MCP and DVM protocols share fundamental similarities, both aim to expose and consume computational resources through a JSON-RPC-like interface, albeit with specific differences.
gRPC's reflection service similarly allows clients to discover service definitions at runtime, enabling generic tools to work with any gRPC service without prior knowledge. In the REST API world, OpenAPI/Swagger specifications document interfaces in a way that makes them discoverable and testable.
DVMs would benefit from adopting these patterns while maintaining the decentralized, permissionless nature of Nostr.
Conclusion
I am not attempting to rewrite the DVM specification; rather, explore some ideas that could help the ecosystem improve incrementally, reducing fragmentation and making the ecosystem more comprehensible. By allowing DVMs to self describe their interfaces, we could maintain the flexibility that makes Nostr powerful while providing the structure needed for interoperability.
For developers building DVM clients or libraries, this approach would simplify consumption by providing clear expectations about inputs and outputs. For DVM operators, it would establish a standard way to communicate their service's requirements without relying on external documentation.
I am currently developing DVMCP following these patterns. Of course, DVMs and MCP servers have different details; MCP includes capabilities such as tools, resources, and prompts on the server side, as well as 'roots' and 'sampling' on the client side, creating a bidirectional way to consume capabilities. In contrast, DVMs typically function similarly to MCP tools, where you call a DVM with an input and receive an output, with each job type representing a different categorization of the work performed.
Without further ado, I hope this article has provided some insight into the potential benefits of applying the reflection pattern to the DVM specification.
-
@ e6817453:b0ac3c39
2024-12-07 15:06:43I started a long series of articles about how to model different types of knowledge graphs in the relational model, which makes on-device memory models for AI agents possible.
We model-directed graphs
Also, graphs of entities
We even model hypergraphs
Last time, we discussed why classical triple and simple knowledge graphs are insufficient for AI agents and complex memory, especially in the domain of time-aware or multi-model knowledge.
So why do we need metagraphs, and what kind of challenge could they help us to solve?
- complex and nested event and temporal context and temporal relations as edges
- multi-mode and multilingual knowledge
- human-like memory for AI agents that has multiple contexts and relations between knowledge in neuron-like networks
MetaGraphs
A meta graph is a concept that extends the idea of a graph by allowing edges to become graphs. Meta Edges connect a set of nodes, which could also be subgraphs. So, at some level, node and edge are pretty similar in properties but act in different roles in a different context.
Also, in some cases, edges could be referenced as nodes.
This approach enables the representation of more complex relationships and hierarchies than a traditional graph structure allows. Let’s break down each term to understand better metagraphs and how they differ from hypergraphs and graphs.Graph Basics
- A standard graph has a set of nodes (or vertices) and edges (connections between nodes).
- Edges are generally simple and typically represent a binary relationship between two nodes.
- For instance, an edge in a social network graph might indicate a “friend” relationship between two people (nodes).
Hypergraph
- A hypergraph extends the concept of an edge by allowing it to connect any number of nodes, not just two.
- Each connection, called a hyperedge, can link multiple nodes.
- This feature allows hypergraphs to model more complex relationships involving multiple entities simultaneously. For example, a hyperedge in a hypergraph could represent a project team, connecting all team members in a single relation.
- Despite its flexibility, a hypergraph doesn’t capture hierarchical or nested structures; it only generalizes the number of connections in an edge.
Metagraph
- A metagraph allows the edges to be graphs themselves. This means each edge can contain its own nodes and edges, creating nested, hierarchical structures.
- In a meta graph, an edge could represent a relationship defined by a graph. For instance, a meta graph could represent a network of organizations where each organization’s structure (departments and connections) is represented by its own internal graph and treated as an edge in the larger meta graph.
- This recursive structure allows metagraphs to model complex data with multiple layers of abstraction. They can capture multi-node relationships (as in hypergraphs) and detailed, structured information about each relationship.
Named Graphs and Graph of Graphs
As you can notice, the structure of a metagraph is quite complex and could be complex to model in relational and classical RDF setups. It could create a challenge of luck of tools and software solutions for your problem.
If you need to model nested graphs, you could use a much simpler model of Named graphs, which could take you quite far.The concept of the named graph came from the RDF community, which needed to group some sets of triples. In this way, you form subgraphs inside an existing graph. You could refer to the subgraph as a regular node. This setup simplifies complex graphs, introduces hierarchies, and even adds features and properties of hypergraphs while keeping a directed nature.
It looks complex, but it is not so hard to model it with a slight modification of a directed graph.
So, the node could host graphs inside. Let's reflect this fact with a location for a node. If a node belongs to a main graph, we could set the location to null or introduce a main node . it is up to youNodes could have edges to nodes in different subgraphs. This structure allows any kind of nesting graphs. Edges stay location-free
Meta Graphs in Relational Model
Let’s try to make several attempts to model different meta-graphs with some constraints.
Directed Metagraph where edges are not used as nodes and could not contain subgraphs
In this case, the edge always points to two sets of nodes. This introduces an overhead of creating a node set for a single node. In this model, we can model empty node sets that could require application-level constraints to prevent such cases.
Directed Metagraph where edges are not used as nodes and could contain subgraphs
Adding a node set that could model a subgraph located in an edge is easy but could be separate from in-vertex or out-vert.
I also do not see a direct need to include subgraphs to a node, as we could just use a node set interchangeably, but it still could be a case.Directed Metagraph where edges are used as nodes and could contain subgraphs
As you can notice, we operate all the time with node sets. We could simply allow the extension node set to elements set that include node and edge IDs, but in this case, we need to use uuid or any other strategy to differentiate node IDs from edge IDs. In this case, we have a collision of ephemeral edges or ephemeral nodes when we want to change the role and purpose of the node as an edge or vice versa.
A full-scale metagraph model is way too complex for a relational database.
So we need a better model.Now, we have more flexibility but loose structural constraints. We cannot show that the element should have one vertex, one vertex, or both. This type of constraint has been moved to the application level. Also, the crucial question is about query and retrieval needs.
Any meta-graph model should be more focused on domain and needs and should be used in raw form. We did it for a pure theoretical purpose. -
@ c631e267:c2b78d3e
2025-04-04 18:47:27Zwei mal drei macht vier, \ widewidewitt und drei macht neune, \ ich mach mir die Welt, \ widewide wie sie mir gefällt. \ Pippi Langstrumpf
Egal, ob Koalitionsverhandlungen oder politischer Alltag: Die Kontroversen zwischen theoretisch verschiedenen Parteien verschwinden, wenn es um den Kampf gegen politische Gegner mit Rückenwind geht. Wer den Alteingesessenen die Pfründe ernsthaft streitig machen könnte, gegen den werden nicht nur «Brandmauern» errichtet, sondern der wird notfalls auch strafrechtlich verfolgt. Doppelstandards sind dabei selbstverständlich inklusive.
In Frankreich ist diese Woche Marine Le Pen wegen der Veruntreuung von EU-Geldern von einem Gericht verurteilt worden. Als Teil der Strafe wurde sie für fünf Jahre vom passiven Wahlrecht ausgeschlossen. Obwohl das Urteil nicht rechtskräftig ist – Le Pen kann in Berufung gehen –, haben die Richter das Verbot, bei Wahlen anzutreten, mit sofortiger Wirkung verhängt. Die Vorsitzende des rechtsnationalen Rassemblement National (RN) galt als aussichtsreiche Kandidatin für die Präsidentschaftswahl 2027.
Das ist in diesem Jahr bereits der zweite gravierende Fall von Wahlbeeinflussung durch die Justiz in einem EU-Staat. In Rumänien hatte Călin Georgescu im November die erste Runde der Präsidentenwahl überraschend gewonnen. Das Ergebnis wurde später annulliert, die behauptete «russische Wahlmanipulation» konnte jedoch nicht bewiesen werden. Die Kandidatur für die Wahlwiederholung im Mai wurde Georgescu kürzlich durch das Verfassungsgericht untersagt.
Die Veruntreuung öffentlicher Gelder muss untersucht und geahndet werden, das steht außer Frage. Diese Anforderung darf nicht selektiv angewendet werden. Hingegen mussten wir in der Vergangenheit bei ungleich schwerwiegenderen Fällen von (mutmaßlichem) Missbrauch ganz andere Vorgehensweisen erleben, etwa im Fall der heutigen EZB-Chefin Christine Lagarde oder im «Pfizergate»-Skandal um die Präsidentin der EU-Kommission Ursula von der Leyen.
Wenngleich derartige Angelegenheiten formal auf einer rechtsstaatlichen Grundlage beruhen mögen, so bleibt ein bitterer Beigeschmack. Es stellt sich die Frage, ob und inwieweit die Justiz politisch instrumentalisiert wird. Dies ist umso interessanter, als die Gewaltenteilung einen essenziellen Teil jeder demokratischen Ordnung darstellt, während die Bekämpfung des politischen Gegners mit juristischen Mitteln gerade bei den am lautesten rufenden Verteidigern «unserer Demokratie» populär zu sein scheint.
Die Delegationen von CDU/CSU und SPD haben bei ihren Verhandlungen über eine Regierungskoalition genau solche Maßnahmen diskutiert. «Im Namen der Wahrheit und der Demokratie» möchte man noch härter gegen «Desinformation» vorgehen und dafür zum Beispiel den Digital Services Act der EU erweitern. Auch soll der Tatbestand der Volksverhetzung verschärft werden – und im Entzug des passiven Wahlrechts münden können. Auf europäischer Ebene würde Friedrich Merz wohl gerne Ungarn das Stimmrecht entziehen.
Der Pegel an Unzufriedenheit und Frustration wächst in großen Teilen der Bevölkerung kontinuierlich. Arroganz, Machtmissbrauch und immer abstrusere Ausreden für offensichtlich willkürliche Maßnahmen werden kaum verhindern, dass den etablierten Parteien die Unterstützung entschwindet. In Deutschland sind die Umfrageergebnisse der AfD ein guter Gradmesser dafür.
[Vorlage Titelbild: Pixabay]
Dieser Beitrag wurde mit dem Pareto-Client geschrieben und ist zuerst auf Transition News erschienen.
-
@ c631e267:c2b78d3e
2025-04-03 07:42:25Spanien bleibt einer der Vorreiter im europäischen Prozess der totalen Überwachung per Digitalisierung. Seit Mittwoch ist dort der digitale Personalausweis verfügbar. Dabei handelt es sich um eine Regierungs-App, die auf dem Smartphone installiert werden muss und in den Stores von Google und Apple zu finden ist. Per Dekret von Regierungschef Pedro Sánchez und Zustimmung des Ministerrats ist diese Maßnahme jetzt in Kraft getreten.
Mit den üblichen Argumenten der Vereinfachung, des Komforts, der Effizienz und der Sicherheit preist das Innenministerium die «Innovation» an. Auch die Beteuerung, dass die digitale Variante parallel zum physischen Ausweis existieren wird und diesen nicht ersetzen soll, fehlt nicht. Während der ersten zwölf Monate wird «der Neue» noch nicht für alle Anwendungsfälle gültig sein, ab 2026 aber schon.
Dass die ganze Sache auch «Risiken und Nebenwirkungen» haben könnte, wird in den Mainstream-Medien eher selten thematisiert. Bestenfalls wird der Aspekt der Datensicherheit angesprochen, allerdings in der Regel direkt mit dem Regierungsvokabular von den «maximalen Sicherheitsgarantien» abgehandelt. Dennoch gibt es einige weitere Aspekte, die Bürger mit etwas Sinn für Privatsphäre bedenken sollten.
Um sich die digitale Version des nationalen Ausweises besorgen zu können (eine App mit dem Namen MiDNI), muss man sich vorab online registrieren. Dabei wird die Identität des Bürgers mit seiner mobilen Telefonnummer verknüpft. Diese obligatorische fixe Verdrahtung kennen wir von diversen anderen Apps und Diensten. Gleichzeitig ist das die Basis für eine perfekte Lokalisierbarkeit der Person.
Für jeden Vorgang der Identifikation in der Praxis wird später «eine Verbindung zu den Servern der Bundespolizei aufgebaut». Die Daten des Individuums werden «in Echtzeit» verifiziert und im Erfolgsfall von der Polizei signiert zurückgegeben. Das Ergebnis ist ein QR-Code mit zeitlich begrenzter Gültigkeit, der an Dritte weitergegeben werden kann.
Bei derartigen Szenarien sträuben sich einem halbwegs kritischen Staatsbürger die Nackenhaare. Allein diese minimale Funktionsbeschreibung lässt die totale Überwachung erkennen, die damit ermöglicht wird. Jede Benutzung des Ausweises wird künftig registriert, hinterlässt also Spuren. Und was ist, wenn die Server der Polizei einmal kein grünes Licht geben? Das wäre spätestens dann ein Problem, wenn der digitale doch irgendwann der einzig gültige Ausweis ist: Dann haben wir den abschaltbaren Bürger.
Dieser neue Vorstoß der Regierung von Pedro Sánchez ist ein weiterer Schritt in Richtung der «totalen Digitalisierung» des Landes, wie diese Politik in manchen Medien – nicht einmal kritisch, sondern sehr naiv – genannt wird. Ebenso verharmlosend wird auch erwähnt, dass sich das spanische Projekt des digitalen Ausweises nahtlos in die Initiativen der EU zu einer digitalen Identität für alle Bürger sowie des digitalen Euro einreiht.
In Zukunft könnte der neue Ausweis «auch in andere staatliche und private digitale Plattformen integriert werden», wie das Medienportal Cope ganz richtig bemerkt. Das ist die Perspektive.
[Titelbild: Pixabay]
Dazu passend:
Nur Abschied vom Alleinfahren? Monströse spanische Überwachungsprojekte gemäß EU-Norm
Dieser Beitrag wurde mit dem Pareto-Client geschrieben und ist zuerst auf Transition News erschienen.
-
@ c230edd3:8ad4a712
2025-04-30 16:19:30Chef's notes
I found this recipe on beyondsweetandsavory.com. The site is incredibly ad infested (like most recipe sites) and its very annoying so I'm copying it to Nostr so all the homemade ice cream people can access it without dealing with that mess. I haven't made it yet. Will report back, when I do.
Details
- ⏲️ Prep time: 20 min
- 🍳 Cook time: 55 min
- 🍽️ Servings: 8
Ingredients
- 2 cups heavy cream
- 1 cup 2% milk
- 8 oz dark chocolate, 70%
- ¼ cup Dutch cocoa
- 2 tbsps loose Earl grey tea leaves
- 4 medium egg yolks
- ¾ cup granulated sugar
- ⅛ tsp salt
- ¼ cup dark chocolate, 70% chopped
Directions
- In a double boiler or a bowl set over a saucepan of simmering water, add the cacao solids and ½ cup of heavy cream. Stir chocolate until melted and smooth. Set melted chocolate aside.
- In a heavy saucepan, combine remaining heavy cream, milk, salt and ½ cup of sugar.
- Put the pan over medium heat and let the mixture boil gently to bubbling just around the edges (gentle simmer) and sugar completely dissolved, about 5 minutes. Remove from heat.
- Add the Earl Grey tea leaves and let it steep for 7-8 minutes until the cream has taken on the tea flavor, stirring occasionally and tasting to make sure it’s not too bitter.
- Whisk in Dutch cocoa until smooth. Add in melted chocolate and whisk until smooth.
- In a medium heatproof bowl, whisk the yolks just to break them up and whisk in remaining sugar. Set aside.
- Put the saucepan back on the stove over low heat and let it warm up for 2 minutes.
- Carefully measure out ½ cup of hot cream mixture.
- While whisking the eggs constantly, whisk the hot cream mixture into the eggs until smooth. Continue tempering the eggs by adding another ½ cup of hot cream to the bowl with the yolks.
- Pour the cream-egg mixture back to the saucepan and cook over medium-low heat, stirring constantly until it is thickened and coats the back of a spatula, about 5 minutes.
- Strain the base through a fine-mesh strainer into a clean container.
- Pour the mixture into a 1-gallon Ziplock freezer bag and submerge the sealed bag in an ice bath until cold, about 30 minutes. Refrigerate the ice cream base for at least 4 hours or overnight.
- Pour the ice cream base into the frozen canister of your ice cream machine and follow the manufacturer’s instructions.
- Spin until thick and creamy about 25-30 minutes.
- Pack the ice cream into a storage container, press a sheet of parchment directly against the surface and seal with an airtight lid. Freeze in the coldest part of your freezer until firm, at least 4 hours.
- When ready to serve, scoop the ice cream into a serving bowl and top with chopped chocolate.
-
@ aa8de34f:a6ffe696
2025-03-31 21:48:50In seinem Beitrag vom 30. März 2025 fragt Henning Rosenbusch auf Telegram angesichts zunehmender digitaler Kontrolle und staatlicher Allmacht:
„Wie soll sich gegen eine solche Tyrannei noch ein Widerstand formieren können, selbst im Untergrund? Sehe ich nicht.“\ (Quelle: t.me/rosenbusch/25228)
Er beschreibt damit ein Gefühl der Ohnmacht, das viele teilen: Eine Welt, in der Totalitarismus nicht mehr mit Panzern, sondern mit Algorithmen kommt. Wo Zugriff auf Geld, Meinungsfreiheit und Teilhabe vom Wohlverhalten abhängt. Der Bürger als kontrollierbare Variable im Code des Staates.\ Die Frage ist berechtigt. Doch die Antwort darauf liegt nicht in alten Widerstandsbildern – sondern in einer neuen Realität.
-- Denn es braucht keinen Untergrund mehr. --
Der Widerstand der Zukunft trägt keinen Tarnanzug. Er ist nicht konspirativ, sondern transparent. Nicht bewaffnet, sondern mathematisch beweisbar. Bitcoin steht nicht am Rand dieser Entwicklung – es ist ihr Fundament. Eine Bastion aus physikalischer Realität, spieltheoretischem Schutz und ökonomischer Wahrheit. Es ist nicht unfehlbar, aber unbestechlich. Nicht perfekt, aber immun gegen zentrale Willkür.
Hier entsteht kein „digitales Gegenreich“, sondern eine dezentrale Renaissance. Keine Revolte aus Wut, sondern eine stille Abkehr: von Zwang zu Freiwilligkeit, von Abhängigkeit zu Selbstverantwortung. Diese Revolution führt keine Kriege. Sie braucht keine Führer. Sie ist ein Netzwerk. Jeder Knoten ein Individuum. Jede Entscheidung ein Akt der Selbstermächtigung.
Weltweit wachsen Freiheits-Zitadellen aus dieser Idee: wirtschaftlich autark, digital souverän, lokal verankert und global vernetzt. Sie sind keine Utopien im luftleeren Raum, sondern konkrete Realitäten – angetrieben von Energie, Code und dem menschlichen Wunsch nach Würde.
Der Globalismus alter Prägung – zentralistisch, monopolistisch, bevormundend – wird an seiner eigenen Hybris zerbrechen. Seine Werkzeuge der Kontrolle werden ihn nicht retten. Im Gegenteil: Seine Geister werden ihn verfolgen und erlegen.
Und während die alten Mächte um Erhalt kämpfen, wächst eine neue Welt – nicht im Schatten, sondern im Offenen. Nicht auf Gewalt gebaut, sondern auf Mathematik, Physik und Freiheit.
Die Tyrannei sieht keinen Widerstand.\ Weil sie nicht erkennt, dass er längst begonnen hat.\ Unwiderruflich. Leise. Überall.
-
@ 1739d937:3e3136ef
2025-04-30 14:39:24MLS over Nostr - 30th April 2025
YO! Exciting stuff in this update so no intro, let's get straight into it.
🚢 Libraries Released
I've created 4 new Rust crates to make implementing NIP-EE (MLS) messaging easy for other projects. These are now part of the rust-nostr project (thanks nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet) but aren't quite released to crates.io yet. They will be included in the next release of that library. My hope is that these libraries will give nostr developers a simple, safe, and specification-compliant way to work with MLS messaging in their applications.
Here's a quick overview of each:
nostr_mls_storage
One of the challenges of using MLS messaging is that clients have to store quite a lot of state about groups, keys, and messages. Initially, I implemented all of this in White Noise but knew that eventually this would need to be done in a more generalized way.
This crate defines traits and types that are used by the storage implementation crates and sets those up to wrap the OpenMLS storage layer. Now, instead of apps having to implement storage for both OpenMLS and Nostr, you simply pick your storage backend and go from there.
Importantly, because these are generic traits, it allows for the creation of any number of storage implementations for different backend storage providers; postgres, lmdb, nostrdb, etc. To start I've created two implementations; detailed below.
nostr_mls_memory_storage
This is a simple implementation of the nostr_mls_storage traits that uses an in-memory store (that doesn't persist anything to disc). This is principally for testing.
nostr_mls_sqlite_storage
This is a production ready implementation of the nostr_mls_storage traits that uses a persistent local sqlite database to store all data.
nostr_mls
This is the main library that app developers will interact with. Once you've chose a backend and instantiated an instance of NostrMls you can then interact with a simple set of methods to create key packages, create groups, send messages, process welcomes and messages, and more.
If you want to see a complete example of what the interface looks like check out mls_memory.rs.
I'll continue to add to this library over time as I implement more of the MLS protocol features.
🚧 White Noise Refactor
As a result of these new libraries, I was able to remove a huge amount of code from White Noise and refactor large parts of the app to make the codebase easier to understand and maintain. Because of this large refactor and the changes in the underlying storage layer, if you've installed White Noise before you'll need to delete it from your device before you trying to install again.
🖼️ Encrypted Media with Blossom
Let's be honest: Group chat would be basically useless if you couldn't share memes and gifs. Well, now you can in White Noise. Media in groups is encrypted using an MLS secret and uploaded to Blossom with a one-time use keypair. This gives groups a way to have rich conversations with images and documents and anything else while also maintaining the privacy and security of the conversation.
This is still in a rough state but rendering improvements are coming next.
📱 Damn Mobile
The app is still in a semi-broken state on Android and fully broken state on iOS. Now that I have the libraries released and the White Noise core code refactored, I'm focused 100% on fixing these issues. My goal is to have a beta version live on Zapstore in a few weeks.
🧑💻 Join Us
I'm looking for mobile developers on both Android and iOS to join the team and help us build the best possible apps for these platforms. I have grant funding available for the right people. Come and help us build secure, permissionless, censorship-resistant messaging. I can think of few projects that deserve your attention more than securing freedom of speech and freedom of association for the entire world. If you're interested or know someone who might be, please reach out to me directly.
🙏 Thanks to the People
Last but not least: A HUGE thank you to all the folks that have been helping make this project happen. You can check out the people that are directly working on the apps on Following._ (and follow them). There are also a lot of people behind the scenes that have helped in myriad ways to get us this far. Thank you thank you thank you.
🔗 Links
Libraries
White Noise
Other
-
@ 6e64b83c:94102ee8
2025-04-23 20:23:34How to Run Your Own Nostr Relay on Android with Cloudflare Domain
Prerequisites
- Install Citrine on your Android device:
- Visit https://github.com/greenart7c3/Citrine/releases
- Download the latest release using:
- zap.store
- Obtainium
- F-Droid
- Or download the APK directly
-
Note: You may need to enable "Install from Unknown Sources" in your Android settings
-
Domain Requirements:
- Purchase a domain if you don't have one
-
Transfer your domain to Cloudflare if it's not already there (for free SSL certificates and cloudflared support)
-
Tools to use:
- nak (the nostr army knife):
- Download from https://github.com/fiatjaf/nak/releases
- Installation steps:
-
For Linux/macOS: ```bash # Download the appropriate version for your system wget https://github.com/fiatjaf/nak/releases/latest/download/nak-linux-amd64 # for Linux # or wget https://github.com/fiatjaf/nak/releases/latest/download/nak-darwin-amd64 # for macOS
# Make it executable chmod +x nak-*
# Move to a directory in your PATH sudo mv nak-* /usr/local/bin/nak
- For Windows:
batch # Download the Windows version curl -L -o nak.exe https://github.com/fiatjaf/nak/releases/latest/download/nak-windows-amd64.exe# Move to a directory in your PATH (e.g., C:\Windows) move nak.exe C:\Windows\nak.exe
- Verify installation:
bash nak --version ```
Setting Up Citrine
- Open the Citrine app
- Start the server
- You'll see it running on
ws://127.0.0.1:4869
(local network only) - Go to settings and paste your npub into "Accept events signed by" inbox and press the + button. This prevents others from publishing events to your personal relay.
Installing Required Tools
- Install Termux from Google Play Store
- Open Termux and run:
bash pkg update && pkg install wget wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb dpkg -i cloudflared-linux-arm64.deb
Cloudflare Authentication
- Run the authentication command:
bash cloudflared tunnel login
- Follow the instructions:
- Copy the provided URL to your browser
- Log in to your Cloudflare account
- If the URL expires, copy it again after logging in
Creating the Tunnel
- Create a new tunnel:
bash cloudflared tunnel create <TUNNEL_NAME>
- Choose any name you prefer for your tunnel
-
Copy the tunnel ID after creating the tunnel
-
Create and configure the tunnel config:
bash touch ~/.cloudflared/config.yml nano ~/.cloudflared/config.yml
-
Add this configuration (replace the placeholders with your values): ```yaml tunnel:
credentials-file: /data/data/com.termux/files/home/.cloudflared/ .json ingress: - hostname: nostr.yourdomain.com service: ws://localhost:4869
- service: http_status:404 ```
- Note: In nano editor:
CTRL+O
and Enter to saveCTRL+X
to exit
-
Note: Check the credentials file path in the logs
-
Validate your configuration:
bash cloudflared tunnel validate
-
Start the tunnel:
bash cloudflared tunnel run my-relay
Preventing Android from Killing the Tunnel
Run these commands to maintain tunnel stability:
bash date && apt install termux-tools && termux-setup-storage && termux-wake-lock echo "nameserver 1.1.1.1" > $PREFIX/etc/resolv.conf
Tip: You can open multiple Termux sessions by swiping from the left edge of the screen while keeping your tunnel process running.
Updating Your Outbox Model Relays
Once your relay is running and accessible via your domain, you'll want to update your relay list in the Nostr network. This ensures other clients know about your relay and can connect to it.
Decoding npub (Public Key)
Private keys (nsec) and public keys (npub) are encoded in bech32 format, which includes: - A prefix (like nsec1, npub1 etc.) - The encoded data - A checksum
This format makes keys: - Easy to distinguish - Hard to copy incorrectly
However, most tools require these keys in hexadecimal (hex) format.
To decode an npub string to its hex format:
bash nak decode nostr:npub1dejts0qlva8mqzjlrxqkc2tmvs2t7elszky5upxaf3jha9qs9m5q605uc4
Change it with your own npub.
bash { "pubkey": "6e64b83c1f674fb00a5f19816c297b6414bf67f015894e04dd4c657e94102ee8" }
Copy the pubkey value in quotes.
Create a kind 10002 event with your relay list:
- Include your new relay with write permissions
- Include other relays you want to read from and write to, omit 3rd parameter to make it both read and write
Example format:
json { "kind": 10002, "tags": [ ["r", "wss://your-relay-domain.com", "write"], ["r", "wss://eden.nostr.land/"], ["r", "wss://nos.lol/"], ["r", "wss://nostr.bitcoiner.social/"], ["r", "wss://nostr.mom/"], ["r", "wss://relay.primal.net/"], ["r", "wss://nostr.wine/", "read"], ["r", "wss://relay.damus.io/"], ["r", "wss://relay.nostr.band/"], ["r", "wss://relay.snort.social/"] ], "content": "" }
Save it to a file called
event.json
Note: Add or remove any relays you want. To check your existing 10002 relays: - Visit https://nostr.band/?q=by%3Anpub1dejts0qlva8mqzjlrxqkc2tmvs2t7elszky5upxaf3jha9qs9m5q605uc4+++kind%3A10002 - nostr.band is an indexing service, it probably has your relay list. - Replace
npub1xxx
in the URL with your own npub - Click "VIEW JSON" from the menu to see the raw event - Or use thenak
tool if you know the relaysbash nak req -k 10002 -a <your-pubkey> wss://relay1.com wss://relay2.com
Replace `<your-pubkey>` with your public key in hex format (you can get it using `nak decode <your-npub>`)
- Sign and publish the event:
- Use a Nostr client that supports kind 10002 events
- Or use the
nak
command-line tool:bash nak event --sec ncryptsec1... wss://relay1.com wss://relay2.com $(cat event.json)
Important Security Notes: 1. Never share your nsec (private key) with anyone 2. Consider using NIP-49 encrypted keys for better security 3. Never paste your nsec or private key into the terminal. The command will be saved in your shell history, exposing your private key. To clear the command history: - For bash: use
history -c
- For zsh: usefc -W
to write history to file, thenfc -p
to read it back - Or manually edit your shell history file (e.g.,~/.zsh_history
or~/.bash_history
) 4. if you're usingzsh
, usefc -p
to prevent the next command from being saved to history 5. Or temporarily disable history before running sensitive commands:bash unset HISTFILE nak key encrypt ... set HISTFILE
How to securely create NIP-49 encypted private key
```bash
Read your private key (input will be hidden)
read -s SECRET
Read your password (input will be hidden)
read -s PASSWORD
encrypt command
echo "$SECRET" | nak key encrypt "$PASSWORD"
copy and paste the ncryptsec1 text from the output
read -s ENCRYPTED nak key decrypt "$ENCRYPTED"
clear variables from memory
unset SECRET PASSWORD ENCRYPTED ```
On a Windows command line, to read from stdin and use the variables in
nak
commands, you can use a combination ofset /p
to read input and then use those variables in your command. Here's an example:```bash @echo off set /p "SECRET=Enter your secret key: " set /p "PASSWORD=Enter your password: "
echo %SECRET%| nak key encrypt %PASSWORD%
:: Clear the sensitive variables set "SECRET=" set "PASSWORD=" ```
If your key starts with
ncryptsec1
, thenak
tool will securely prompt you for a password when using the--sec
parameter, unless the command is used with a pipe< >
or|
.bash nak event --sec ncryptsec1... wss://relay1.com wss://relay2.com $(cat event.json)
- Verify the event was published:
- Check if your relay list is visible on other relays
-
Use the
nak
tool to fetch your kind 10002 events:bash nak req -k 10002 -a <your-pubkey> wss://relay1.com wss://relay2.com
-
Testing your relay:
- Try connecting to your relay using different Nostr clients
- Verify you can both read from and write to your relay
- Check if events are being properly stored and retrieved
- Tip: Use multiple Nostr clients to test different aspects of your relay
Note: If anyone in the community has a more efficient method of doing things like updating outbox relays, please share your insights in the comments. Your expertise would be greatly appreciated!
-
@ f32184ee:6d1c17bf
2025-04-23 13:21:52Ads Fueling Freedom
Ross Ulbricht’s "Decentralize Social Media" painted a picture of a user-centric, decentralized future that transcended the limitations of platforms like the tech giants of today. Though focused on social media, his concept provided a blueprint for decentralized content systems writ large. The PROMO Protocol, designed by NextBlock while participating in Sovereign Engineering, embodies this blueprint in the realm of advertising, leveraging Nostr and Bitcoin’s Lightning Network to give individuals control, foster a multi-provider ecosystem, and ensure secure value exchange. In this way, Ulbricht’s 2021 vision can be seen as a prescient prediction of the PROMO Protocol’s structure. This is a testament to the enduring power of his ideas, now finding form in NextBlock’s innovative approach.
[Current Platform-Centric Paradigm, source: Ross Ulbricht's Decentralize Social Media]
Ulbricht’s Vision: A Decentralized Social Protocol
In his 2021 Medium article Ulbricht proposed a revolutionary vision for a decentralized social protocol (DSP) to address the inherent flaws of centralized social media platforms, such as privacy violations and inconsistent content moderation. Writing from prison, Ulbricht argued that decentralization could empower users by giving them control over their own content and the value they create, while replacing single, monolithic platforms with a competitive ecosystem of interface providers, content servers, and advertisers. Though his focus was on social media, Ulbricht’s ideas laid a conceptual foundation that strikingly predicts the structure of NextBlock’s PROMO Protocol, a decentralized advertising system built on the Nostr protocol.
[A Decentralized Social Protocol (DSP), source: Ross Ulbricht's Decentralize Social Media]
Ulbricht’s Principles
Ulbricht’s article outlines several key principles for his DSP: * User Control: Users should own their content and dictate how their data and creations generate value, rather than being subject to the whims of centralized corporations. * Decentralized Infrastructure: Instead of a single platform, multiple interface providers, content hosts, and advertisers interoperate, fostering competition and resilience. * Privacy and Autonomy: Decentralized solutions for profile management, hosting, and interactions would protect user privacy and reduce reliance on unaccountable intermediaries. * Value Creation: Users, not platforms, should capture the economic benefits of their contributions, supported by decentralized mechanisms for transactions.
These ideas were forward-thinking in 2021, envisioning a shift away from the centralized giants dominating social media at the time. While Ulbricht didn’t specifically address advertising protocols, his framework for decentralization and user empowerment extends naturally to other domains, like NextBlock’s open-source offering: the PROMO Protocol.
NextBlock’s Implementation of PROMO Protocol
The PROMO Protocol powers NextBlock's Billboard app, a decentralized advertising protocol built on Nostr, a simple, open protocol for decentralized communication. The PROMO Protocol reimagines advertising by: * Empowering People: Individuals set their own ad prices (e.g., 500 sats/minute), giving them direct control over how their attention or space is monetized. * Marketplace Dynamics: Advertisers set budgets and maximum bids, competing within a decentralized system where a 20% service fee ensures operational sustainability. * Open-Source Flexibility: As an open-source protocol, it allows multiple developers to create interfaces or apps on top of it, avoiding the single-platform bottleneck Ulbricht critiqued. * Secure Payments: Using Strike Integration with Bitcoin Lightning Network, NextBlock enables bot-resistant and intermediary-free transactions, aligning value transfer with each person's control.
This structure decentralizes advertising in a way that mirrors Ulbricht’s broader vision for social systems, with aligned principles showing a specific use case: monetizing attention on Nostr.
Aligned Principles
Ulbricht’s 2021 article didn’t explicitly predict the PROMO Protocol, but its foundational concepts align remarkably well with NextBlock's implementation the protocol’s design: * Autonomy Over Value: Ulbricht argued that users should control their content and its economic benefits. In the PROMO Protocol, people dictate ad pricing, directly capturing the value of their participation. Whether it’s their time, influence, or digital space, rather than ceding it to a centralized ad network. * Ecosystem of Providers: Ulbricht envisioned multiple providers replacing a single platform. The PROMO Protocol’s open-source nature invites a similar diversity: anyone can build interfaces or tools on top of it, creating a competitive, decentralized advertising ecosystem rather than a walled garden. * Decentralized Transactions: Ulbricht’s DSP implied decentralized mechanisms for value exchange. NextBlock delivers this through the Bitcoin Lightning Network, ensuring that payments for ads are secure, instantaneous and final, a practical realization of Ulbricht’s call for user-controlled value flows. * Privacy and Control: While Ulbricht emphasized privacy in social interactions, the PROMO Protocol is public by default. Individuals are fully aware of all data that they generate since all Nostr messages are signed. All participants interact directly via Nostr.
[Blueprint Match, source NextBlock]
Who We Are
NextBlock is a US-based new media company reimagining digital ads for a decentralized future. Our founders, software and strategy experts, were hobbyist podcasters struggling to promote their work online without gaming the system. That sparked an idea: using new tech like Nostr and Bitcoin to build a decentralized attention market for people who value control and businesses seeking real connections.
Our first product, Billboard, is launching this June.
Open for All
Our model’s open-source! Check out the PROMO Protocol, built for promotion and attention trading. Anyone can join this decentralized ad network. Run your own billboard or use ours. This is a growing ecosystem for a new ad economy.
Our Vision
NextBlock wants to help build a new decentralized internet. Our revolutionary and transparent business model will bring honest revenue to companies hosting valuable digital spaces. Together, we will discover what our attention is really worth.
Read our Manifesto to learn more.
NextBlock is registered in Texas, USA.
-
@ c631e267:c2b78d3e
2025-03-31 07:23:05Der Irrsinn ist bei Einzelnen etwas Seltenes – \ aber bei Gruppen, Parteien, Völkern, Zeiten die Regel. \ Friedrich Nietzsche
Erinnern Sie sich an die Horrorkomödie «Scary Movie»? Nicht, dass ich diese Art Filme besonders erinnerungswürdig fände, aber einige Szenen daraus sind doch gewissermaßen Klassiker. Dazu zählt eine, die das Verhalten vieler Protagonisten in Horrorfilmen parodiert, wenn sie in Panik flüchten. Welchen Weg nimmt wohl die Frau in der Situation auf diesem Bild?
Diese Szene kommt mir automatisch in den Sinn, wenn ich aktuelle Entwicklungen in Europa betrachte. Weitreichende Entscheidungen gehen wider jede Logik in die völlig falsche Richtung. Nur ist das hier alles andere als eine Komödie, sondern bitterernst. Dieser Horror ist leider sehr real.
Die Europäische Union hat sich selbst über Jahre konsequent in eine Sackgasse manövriert. Sie hat es versäumt, sich und ihre Politik selbstbewusst und im Einklang mit ihren Wurzeln auf dem eigenen Kontinent zu positionieren. Stattdessen ist sie in blinder Treue den vermeintlichen «transatlantischen Freunden» auf ihrem Konfrontationskurs gen Osten gefolgt.
In den USA haben sich die Vorzeichen allerdings mittlerweile geändert, und die einst hoch gelobten «Freunde und Partner» erscheinen den europäischen «Führern» nicht mehr vertrauenswürdig. Das ist spätestens seit der Münchner Sicherheitskonferenz, der Rede von Vizepräsident J. D. Vance und den empörten Reaktionen offensichtlich. Große Teile Europas wirken seitdem wie ein aufgescheuchter Haufen kopfloser Hühner. Orientierung und Kontrolle sind völlig abhanden gekommen.
Statt jedoch umzukehren oder wenigstens zu bremsen und vielleicht einen Abzweig zu suchen, geben die Crash-Piloten jetzt auf dem Weg durch die Sackgasse erst richtig Gas. Ja sie lösen sogar noch die Sicherheitsgurte und deaktivieren die Airbags. Den vor Angst dauergelähmten Passagieren fällt auch nichts Besseres ein und so schließen sie einfach die Augen. Derweil übertrumpfen sich die Kommentatoren des Events gegenseitig in sensationslüsterner «Berichterstattung».
Wie schon die deutsche Außenministerin mit höchsten UN-Ambitionen, Annalena Baerbock, proklamiert auch die Europäische Kommission einen «Frieden durch Stärke». Zu dem jetzt vorgelegten, selbstzerstörerischen Fahrplan zur Ankurbelung der Rüstungsindustrie, genannt «Weißbuch zur europäischen Verteidigung – Bereitschaft 2030», erklärte die Kommissionspräsidentin, die «Ära der Friedensdividende» sei längst vorbei. Soll das heißen, Frieden bringt nichts ein? Eine umfassende Zusammenarbeit an dauerhaften europäischen Friedenslösungen steht demnach jedenfalls nicht zur Debatte.
Zusätzlich brisant ist, dass aktuell «die ganze EU von Deutschen regiert wird», wie der EU-Parlamentarier und ehemalige UN-Diplomat Michael von der Schulenburg beobachtet hat. Tatsächlich sitzen neben von der Leyen und Strack-Zimmermann noch einige weitere Deutsche in – vor allem auch in Krisenzeiten – wichtigen Spitzenposten der Union. Vor dem Hintergrund der Kriegstreiberei in Deutschland muss eine solche Dominanz mindestens nachdenklich stimmen.
Ihre ursprünglichen Grundwerte wie Demokratie, Freiheit, Frieden und Völkerverständigung hat die EU kontinuierlich in leere Worthülsen verwandelt. Diese werden dafür immer lächerlicher hochgehalten und beschworen.
Es wird dringend Zeit, dass wir, der Souverän, diesem erbärmlichen und gefährlichen Trauerspiel ein Ende setzen und die Fäden selbst in die Hand nehmen. In diesem Sinne fordert uns auch das «European Peace Project» auf, am 9. Mai im Rahmen eines Kunstprojekts den Frieden auszurufen. Seien wir dabei!
[Titelbild: Pixabay]
Dieser Beitrag wurde mit dem Pareto-Client geschrieben und ist zuerst auf Transition News erschienen.
-
@ e6817453:b0ac3c39
2024-12-07 15:03:06Hey folks! Today, let’s dive into the intriguing world of neurosymbolic approaches, retrieval-augmented generation (RAG), and personal knowledge graphs (PKGs). Together, these concepts hold much potential for bringing true reasoning capabilities to large language models (LLMs). So, let’s break down how symbolic logic, knowledge graphs, and modern AI can come together to empower future AI systems to reason like humans.
The Neurosymbolic Approach: What It Means ?
Neurosymbolic AI combines two historically separate streams of artificial intelligence: symbolic reasoning and neural networks. Symbolic AI uses formal logic to process knowledge, similar to how we might solve problems or deduce information. On the other hand, neural networks, like those underlying GPT-4, focus on learning patterns from vast amounts of data — they are probabilistic statistical models that excel in generating human-like language and recognizing patterns but often lack deep, explicit reasoning.
While GPT-4 can produce impressive text, it’s still not very effective at reasoning in a truly logical way. Its foundation, transformers, allows it to excel in pattern recognition, but the models struggle with reasoning because, at their core, they rely on statistical probabilities rather than true symbolic logic. This is where neurosymbolic methods and knowledge graphs come in.
Symbolic Calculations and the Early Vision of AI
If we take a step back to the 1950s, the vision for artificial intelligence was very different. Early AI research was all about symbolic reasoning — where computers could perform logical calculations to derive new knowledge from a given set of rules and facts. Languages like Lisp emerged to support this vision, enabling programs to represent data and code as interchangeable symbols. Lisp was designed to be homoiconic, meaning it treated code as manipulatable data, making it capable of self-modification — a huge leap towards AI systems that could, in theory, understand and modify their own operations.
Lisp: The Earlier AI-Language
Lisp, short for “LISt Processor,” was developed by John McCarthy in 1958, and it became the cornerstone of early AI research. Lisp’s power lay in its flexibility and its use of symbolic expressions, which allowed developers to create programs that could manipulate symbols in ways that were very close to human reasoning. One of the most groundbreaking features of Lisp was its ability to treat code as data, known as homoiconicity, which meant that Lisp programs could introspect and transform themselves dynamically. This ability to adapt and modify its own structure gave Lisp an edge in tasks that required a form of self-awareness, which was key in the early days of AI when researchers were exploring what it meant for machines to “think.”
Lisp was not just a programming language—it represented the vision for artificial intelligence, where machines could evolve their understanding and rewrite their own programming. This idea formed the conceptual basis for many of the self-modifying and adaptive algorithms that are still explored today in AI research. Despite its decline in mainstream programming, Lisp’s influence can still be seen in the concepts used in modern machine learning and symbolic AI approaches.
Prolog: Formal Logic and Deductive Reasoning
In the 1970s, Prolog was developed—a language focused on formal logic and deductive reasoning. Unlike Lisp, based on lambda calculus, Prolog operates on formal logic rules, allowing it to perform deductive reasoning and solve logical puzzles. This made Prolog an ideal candidate for expert systems that needed to follow a sequence of logical steps, such as medical diagnostics or strategic planning.
Prolog, like Lisp, allowed symbols to be represented, understood, and used in calculations, creating another homoiconic language that allows reasoning. Prolog’s strength lies in its rule-based structure, which is well-suited for tasks that require logical inference and backtracking. These features made it a powerful tool for expert systems and AI research in the 1970s and 1980s.
The language is declarative in nature, meaning that you define the problem, and Prolog figures out how to solve it. By using formal logic and setting constraints, Prolog systems can derive conclusions from known facts, making it highly effective in fields requiring explicit logical frameworks, such as legal reasoning, diagnostics, and natural language understanding. These symbolic approaches were later overshadowed during the AI winter — but the ideas never really disappeared. They just evolved.
Solvers and Their Role in Complementing LLMs
One of the most powerful features of Prolog and similar logic-based systems is their use of solvers. Solvers are mechanisms that can take a set of rules and constraints and automatically find solutions that satisfy these conditions. This capability is incredibly useful when combined with LLMs, which excel at generating human-like language but need help with logical consistency and structured reasoning.
For instance, imagine a scenario where an LLM needs to answer a question involving multiple logical steps or a complex query that requires deducing facts from various pieces of information. In this case, a solver can derive valid conclusions based on a given set of logical rules, providing structured answers that the LLM can then articulate in natural language. This allows the LLM to retrieve information and ensure the logical integrity of its responses, leading to much more robust answers.
Solvers are also ideal for handling constraint satisfaction problems — situations where multiple conditions must be met simultaneously. In practical applications, this could include scheduling tasks, generating optimal recommendations, or even diagnosing issues where a set of symptoms must match possible diagnoses. Prolog’s solver capabilities and LLM’s natural language processing power can make these systems highly effective at providing intelligent, rule-compliant responses that traditional LLMs would struggle to produce alone.
By integrating neurosymbolic methods that utilize solvers, we can provide LLMs with a form of deductive reasoning that is missing from pure deep-learning approaches. This combination has the potential to significantly improve the quality of outputs for use-cases that require explicit, structured problem-solving, from legal queries to scientific research and beyond. Solvers give LLMs the backbone they need to not just generate answers but to do so in a way that respects logical rigor and complex constraints.
Graph of Rules for Enhanced Reasoning
Another powerful concept that complements LLMs is using a graph of rules. A graph of rules is essentially a structured collection of logical rules that interconnect in a network-like structure, defining how various entities and their relationships interact. This structured network allows for complex reasoning and information retrieval, as well as the ability to model intricate relationships between different pieces of knowledge.
In a graph of rules, each node represents a rule, and the edges define relationships between those rules — such as dependencies or causal links. This structure can be used to enhance LLM capabilities by providing them with a formal set of rules and relationships to follow, which improves logical consistency and reasoning depth. When an LLM encounters a problem or a question that requires multiple logical steps, it can traverse this graph of rules to generate an answer that is not only linguistically fluent but also logically robust.
For example, in a healthcare application, a graph of rules might include nodes for medical symptoms, possible diagnoses, and recommended treatments. When an LLM receives a query regarding a patient’s symptoms, it can use the graph to traverse from symptoms to potential diagnoses and then to treatment options, ensuring that the response is coherent and medically sound. The graph of rules guides reasoning, enabling LLMs to handle complex, multi-step questions that involve chains of reasoning, rather than merely generating surface-level responses.
Graphs of rules also enable modular reasoning, where different sets of rules can be activated based on the context or the type of question being asked. This modularity is crucial for creating adaptive AI systems that can apply specific sets of logical frameworks to distinct problem domains, thereby greatly enhancing their versatility. The combination of neural fluency with rule-based structure gives LLMs the ability to conduct more advanced reasoning, ultimately making them more reliable and effective in domains where accuracy and logical consistency are critical.
By implementing a graph of rules, LLMs are empowered to perform deductive reasoning alongside their generative capabilities, creating responses that are not only compelling but also logically aligned with the structured knowledge available in the system. This further enhances their potential applications in fields such as law, engineering, finance, and scientific research — domains where logical consistency is as important as linguistic coherence.
Enhancing LLMs with Symbolic Reasoning
Now, with LLMs like GPT-4 being mainstream, there is an emerging need to add real reasoning capabilities to them. This is where neurosymbolic approaches shine. Instead of pitting neural networks against symbolic reasoning, these methods combine the best of both worlds. The neural aspect provides language fluency and recognition of complex patterns, while the symbolic side offers real reasoning power through formal logic and rule-based frameworks.
Personal Knowledge Graphs (PKGs) come into play here as well. Knowledge graphs are data structures that encode entities and their relationships — they’re essentially semantic networks that allow for structured information retrieval. When integrated with neurosymbolic approaches, LLMs can use these graphs to answer questions in a far more contextual and precise way. By retrieving relevant information from a knowledge graph, they can ground their responses in well-defined relationships, thus improving both the relevance and the logical consistency of their answers.
Imagine combining an LLM with a graph of rules that allow it to reason through the relationships encoded in a personal knowledge graph. This could involve using deductive databases to form a sophisticated way to represent and reason with symbolic data — essentially constructing a powerful hybrid system that uses LLM capabilities for language fluency and rule-based logic for structured problem-solving.
My Research on Deductive Databases and Knowledge Graphs
I recently did some research on modeling knowledge graphs using deductive databases, such as DataLog — which can be thought of as a limited, data-oriented version of Prolog. What I’ve found is that it’s possible to use formal logic to model knowledge graphs, ontologies, and complex relationships elegantly as rules in a deductive system. Unlike classical RDF or traditional ontology-based models, which sometimes struggle with complex or evolving relationships, a deductive approach is more flexible and can easily support dynamic rules and reasoning.
Prolog and similar logic-driven frameworks can complement LLMs by handling the parts of reasoning where explicit rule-following is required. LLMs can benefit from these rule-based systems for tasks like entity recognition, logical inferences, and constructing or traversing knowledge graphs. We can even create a graph of rules that governs how relationships are formed or how logical deductions can be performed.
The future is really about creating an AI that is capable of both deep contextual understanding (using the powerful generative capacity of LLMs) and true reasoning (through symbolic systems and knowledge graphs). With the neurosymbolic approach, these AIs could be equipped not just to generate information but to explain their reasoning, form logical conclusions, and even improve their own understanding over time — getting us a step closer to true artificial general intelligence.
Why It Matters for LLM Employment
Using neurosymbolic RAG (retrieval-augmented generation) in conjunction with personal knowledge graphs could revolutionize how LLMs work in real-world applications. Imagine an LLM that understands not just language but also the relationships between different concepts — one that can navigate, reason, and explain complex knowledge domains by actively engaging with a personalized set of facts and rules.
This could lead to practical applications in areas like healthcare, finance, legal reasoning, or even personal productivity — where LLMs can help users solve complex problems logically, providing relevant information and well-justified reasoning paths. The combination of neural fluency with symbolic accuracy and deductive power is precisely the bridge we need to move beyond purely predictive AI to truly intelligent systems.
Let's explore these ideas further if you’re as fascinated by this as I am. Feel free to reach out, follow my YouTube channel, or check out some articles I’ll link below. And if you’re working on anything in this field, I’d love to collaborate!
Until next time, folks. Stay curious, and keep pushing the boundaries of AI!
-
@ 4e616576:43c4fee8
2025-04-30 13:28:18asdfasdfsadfaf
-
@ c631e267:c2b78d3e
2025-03-21 19:41:50Wir werden nicht zulassen, dass technisch manches möglich ist, \ aber der Staat es nicht nutzt. \ Angela Merkel
Die Modalverben zu erklären, ist im Deutschunterricht manchmal nicht ganz einfach. Nicht alle Fremdsprachen unterscheiden zum Beispiel bei der Frage nach einer Möglichkeit gleichermaßen zwischen «können» im Sinne von «die Gelegenheit, Kenntnis oder Fähigkeit haben» und «dürfen» als «die Erlaubnis oder Berechtigung haben». Das spanische Wort «poder» etwa steht für beides.
Ebenso ist vielen Schülern auf den ersten Blick nicht recht klar, dass das logische Gegenteil von «müssen» nicht unbedingt «nicht müssen» ist, sondern vielmehr «nicht dürfen». An den Verkehrsschildern lässt sich so etwas meistens recht gut erklären: Manchmal muss man abbiegen, aber manchmal darf man eben nicht.
Dieses Beispiel soll ein wenig die Verwirrungstaktik veranschaulichen, die in der Politik gerne verwendet wird, um unpopuläre oder restriktive Maßnahmen Stück für Stück einzuführen. Zuerst ist etwas einfach innovativ und bringt viele Vorteile. Vor allem ist es freiwillig, jeder kann selber entscheiden, niemand muss mitmachen. Später kann man zunehmend weniger Alternativen wählen, weil sie verschwinden, und irgendwann verwandelt sich alles andere in «nicht dürfen» – die Maßnahme ist obligatorisch.
Um die Durchsetzung derartiger Initiativen strategisch zu unterstützen und nett zu verpacken, gibt es Lobbyisten, gerne auch NGOs genannt. Dass das «NG» am Anfang dieser Abkürzung übersetzt «Nicht-Regierungs-» bedeutet, ist ein Anachronismus. Das war vielleicht früher einmal so, heute ist eher das Gegenteil gemeint.
In unserer modernen Zeit wird enorm viel Lobbyarbeit für die Digitalisierung praktisch sämtlicher Lebensbereiche aufgewendet. Was das auf dem Sektor der Mobilität bedeuten kann, haben wir diese Woche anhand aktueller Entwicklungen in Spanien beleuchtet. Begründet teilweise mit Vorgaben der Europäischen Union arbeitet man dort fleißig an einer «neuen Mobilität», basierend auf «intelligenter» technologischer Infrastruktur. Derartige Anwandlungen wurden auch schon als «Technofeudalismus» angeprangert.
Nationale Zugangspunkte für Mobilitätsdaten im Sinne der EU gibt es nicht nur in allen Mitgliedsländern, sondern auch in der Schweiz und in Großbritannien. Das Vereinigte Königreich beteiligt sich darüber hinaus an anderen EU-Projekten für digitale Überwachungs- und Kontrollmaßnahmen, wie dem biometrischen Identifizierungssystem für «nachhaltigen Verkehr und Tourismus».
Natürlich marschiert auch Deutschland stracks und euphorisch in Richtung digitaler Zukunft. Ohne vernetzte Mobilität und einen «verlässlichen Zugang zu Daten, einschließlich Echtzeitdaten» komme man in der Verkehrsplanung und -steuerung nicht aus, erklärt die Regierung. Der Interessenverband der IT-Dienstleister Bitkom will «die digitale Transformation der deutschen Wirtschaft und Verwaltung vorantreiben». Dazu bewirbt er unter anderem die Konzepte Smart City, Smart Region und Smart Country und behauptet, deutsche Großstädte «setzen bei Mobilität voll auf Digitalisierung».
Es steht zu befürchten, dass das umfassende Sammeln, Verarbeiten und Vernetzen von Daten, das angeblich die Menschen unterstützen soll (und theoretisch ja auch könnte), eher dazu benutzt wird, sie zu kontrollieren und zu manipulieren. Je elektrischer und digitaler unsere Umgebung wird, desto größer sind diese Möglichkeiten. Im Ergebnis könnten solche Prozesse den Bürger nicht nur einschränken oder überflüssig machen, sondern in mancherlei Hinsicht regelrecht abschalten. Eine gesunde Skepsis ist also geboten.
[Titelbild: Pixabay]
Dieser Beitrag wurde mit dem Pareto-Client geschrieben. Er ist zuerst auf Transition News erschienen.
-
@ e6817453:b0ac3c39
2024-12-07 14:54:46Introduction: Personal Knowledge Graphs and Linked Data
We will explore the world of personal knowledge graphs and discuss how they can be used to model complex information structures. Personal knowledge graphs aren’t just abstract collections of nodes and edges—they encode meaningful relationships, contextualizing data in ways that enrich our understanding of it. While the core structure might be a directed graph, we layer semantic meaning on top, enabling nuanced connections between data points.
The origin of knowledge graphs is deeply tied to concepts from linked data and the semantic web, ideas that emerged to better link scattered pieces of information across the web. This approach created an infrastructure where data islands could connect — facilitating everything from more insightful AI to improved personal data management.
In this article, we will explore how these ideas have evolved into tools for modeling AI’s semantic memory and look at how knowledge graphs can serve as a flexible foundation for encoding rich data contexts. We’ll specifically discuss three major paradigms: RDF (Resource Description Framework), property graphs, and a third way of modeling entities as graphs of graphs. Let’s get started.
Intro to RDF
The Resource Description Framework (RDF) has been one of the fundamental standards for linked data and knowledge graphs. RDF allows data to be modeled as triples: subject, predicate, and object. Essentially, you can think of it as a structured way to describe relationships: “X has a Y called Z.” For instance, “Berlin has a population of 3.5 million.” This modeling approach is quite flexible because RDF uses unique identifiers — usually URIs — to point to data entities, making linking straightforward and coherent.
RDFS, or RDF Schema, extends RDF to provide a basic vocabulary to structure the data even more. This lets us describe not only individual nodes but also relationships among types of data entities, like defining a class hierarchy or setting properties. For example, you could say that “Berlin” is an instance of a “City” and that cities are types of “Geographical Entities.” This kind of organization helps establish semantic meaning within the graph.
RDF and Advanced Topics
Lists and Sets in RDF
RDF also provides tools to model more complex data structures such as lists and sets, enabling the grouping of nodes. This extension makes it easier to model more natural, human-like knowledge, for example, describing attributes of an entity that may have multiple values. By adding RDF Schema and OWL (Web Ontology Language), you gain even more expressive power — being able to define logical rules or even derive new relationships from existing data.
Graph of Graphs
A significant feature of RDF is the ability to form complex nested structures, often referred to as graphs of graphs. This allows you to create “named graphs,” essentially subgraphs that can be independently referenced. For example, you could create a named graph for a particular dataset describing Berlin and another for a different geographical area. Then, you could connect them, allowing for more modular and reusable knowledge modeling.
Property Graphs
While RDF provides a robust framework, it’s not always the easiest to work with due to its heavy reliance on linking everything explicitly. This is where property graphs come into play. Property graphs are less focused on linking everything through triples and allow more expressive properties directly within nodes and edges.
For example, instead of using triples to represent each detail, a property graph might let you store all properties about an entity (e.g., “Berlin”) directly in a single node. This makes property graphs more intuitive for many developers and engineers because they more closely resemble object-oriented structures: you have entities (nodes) that possess attributes (properties) and are connected to other entities through relationships (edges).
The significant benefit here is a condensed representation, which speeds up traversal and queries in some scenarios. However, this also introduces a trade-off: while property graphs are more straightforward to query and maintain, they lack some complex relationship modeling features RDF offers, particularly when connecting properties to each other.
Graph of Graphs and Subgraphs for Entity Modeling
A third approach — which takes elements from RDF and property graphs — involves modeling entities using subgraphs or nested graphs. In this model, each entity can be represented as a graph. This allows for a detailed and flexible description of attributes without exploding every detail into individual triples or lump them all together into properties.
For instance, consider a person entity with a complex employment history. Instead of representing every employment detail in one node (as in a property graph), or as several linked nodes (as in RDF), you can treat the employment history as a subgraph. This subgraph could then contain nodes for different jobs, each linked with specific properties and connections. This approach keeps the complexity where it belongs and provides better flexibility when new attributes or entities need to be added.
Hypergraphs and Metagraphs
When discussing more advanced forms of graphs, we encounter hypergraphs and metagraphs. These take the idea of relationships to a new level. A hypergraph allows an edge to connect more than two nodes, which is extremely useful when modeling scenarios where relationships aren’t just pairwise. For example, a “Project” could connect multiple “People,” “Resources,” and “Outcomes,” all in a single edge. This way, hypergraphs help in reducing the complexity of modeling high-order relationships.
Metagraphs, on the other hand, enable nodes and edges to themselves be represented as graphs. This is an extremely powerful feature when we consider the needs of artificial intelligence, as it allows for the modeling of relationships between relationships, an essential aspect for any system that needs to capture not just facts, but their interdependencies and contexts.
Balancing Structure and Properties
One of the recurring challenges when modeling knowledge is finding the balance between structure and properties. With RDF, you get high flexibility and standardization, but complexity can quickly escalate as you decompose everything into triples. Property graphs simplify the representation by using attributes but lose out on the depth of connection modeling. Meanwhile, the graph-of-graphs approach and hypergraphs offer advanced modeling capabilities at the cost of increased computational complexity.
So, how do you decide which model to use? It comes down to your use case. RDF and nested graphs are strong contenders if you need deep linkage and are working with highly variable data. For more straightforward, engineer-friendly modeling, property graphs shine. And when dealing with very complex multi-way relationships or meta-level knowledge, hypergraphs and metagraphs provide the necessary tools.
The key takeaway is that only some approaches are perfect. Instead, it’s all about the modeling goals: how do you want to query the graph, what relationships are meaningful, and how much complexity are you willing to manage?
Conclusion
Modeling AI semantic memory using knowledge graphs is a challenging but rewarding process. The different approaches — RDF, property graphs, and advanced graph modeling techniques like nested graphs and hypergraphs — each offer unique strengths and weaknesses. Whether you are building a personal knowledge graph or scaling up to AI that integrates multiple streams of linked data, it’s essential to understand the trade-offs each approach brings.
In the end, the choice of representation comes down to the nature of your data and your specific needs for querying and maintaining semantic relationships. The world of knowledge graphs is vast, with many tools and frameworks to explore. Stay connected and keep experimenting to find the balance that works for your projects.
-
@ 4e616576:43c4fee8
2025-04-30 13:27:51asdfasdf
-
@ aa8de34f:a6ffe696
2025-03-21 12:08:3119. März 2025
🔐 1. SHA-256 is Quantum-Resistant
Bitcoin’s proof-of-work mechanism relies on SHA-256, a hashing algorithm. Even with a powerful quantum computer, SHA-256 remains secure because:
- Quantum computers excel at factoring large numbers (Shor’s Algorithm).
- However, SHA-256 is a one-way function, meaning there's no known quantum algorithm that can efficiently reverse it.
- Grover’s Algorithm (which theoretically speeds up brute force attacks) would still require 2¹²⁸ operations to break SHA-256 – far beyond practical reach.
++++++++++++++++++++++++++++++++++++++++++++++++++
🔑 2. Public Key Vulnerability – But Only If You Reuse Addresses
Bitcoin uses Elliptic Curve Digital Signature Algorithm (ECDSA) to generate keys.
- A quantum computer could use Shor’s Algorithm to break SECP256K1, the curve Bitcoin uses.
- If you never reuse addresses, it is an additional security element
- 🔑 1. Bitcoin Addresses Are NOT Public Keys
Many people assume a Bitcoin address is the public key—this is wrong.
- When you receive Bitcoin, it is sent to a hashed public key (the Bitcoin address).
- The actual public key is never exposed because it is the Bitcoin Adress who addresses the Public Key which never reveals the creation of a public key by a spend
- Bitcoin uses Pay-to-Public-Key-Hash (P2PKH) or newer methods like Pay-to-Witness-Public-Key-Hash (P2WPKH), which add extra layers of security.
🕵️♂️ 2.1 The Public Key Never Appears
- When you send Bitcoin, your wallet creates a digital signature.
- This signature uses the private key to prove ownership.
- The Bitcoin address is revealed and creates the Public Key
- The public key remains hidden inside the Bitcoin script and Merkle tree.
This means: ✔ The public key is never exposed. ✔ Quantum attackers have nothing to target, attacking a Bitcoin Address is a zero value game.
+++++++++++++++++++++++++++++++++++++++++++++++++
🔄 3. Bitcoin Can Upgrade
Even if quantum computers eventually become a real threat:
- Bitcoin developers can upgrade to quantum-safe cryptography (e.g., lattice-based cryptography or post-quantum signatures like Dilithium).
- Bitcoin’s decentralized nature ensures a network-wide soft fork or hard fork could transition to quantum-resistant keys.
++++++++++++++++++++++++++++++++++++++++++++++++++
⏳ 4. The 10-Minute Block Rule as a Security Feature
- Bitcoin’s network operates on a 10-minute block interval, meaning:Even if an attacker had immense computational power (like a quantum computer), they could only attempt an attack every 10 minutes.Unlike traditional encryption, where a hacker could continuously brute-force keys, Bitcoin’s system resets the challenge with every new block.This limits the window of opportunity for quantum attacks.
🎯 5. Quantum Attack Needs to Solve a Block in Real-Time
- A quantum attacker must solve the cryptographic puzzle (Proof of Work) in under 10 minutes.
- The problem? Any slight error changes the hash completely, meaning:If the quantum computer makes a mistake (even 0.0001% probability), the entire attack fails.Quantum decoherence (loss of qubit stability) makes error correction a massive challenge.The computational cost of recovering from an incorrect hash is still incredibly high.
⚡ 6. Network Resilience – Even if a Block Is Hacked
- Even if a quantum computer somehow solved a block instantly:The network would quickly recognize and reject invalid transactions.Other miners would continue mining under normal cryptographic rules.51% Attack? The attacker would need to consistently beat the entire Bitcoin network, which is not sustainable.
🔄 7. The Logarithmic Difficulty Adjustment Neutralizes Threats
- Bitcoin adjusts mining difficulty every 2016 blocks (\~2 weeks).
- If quantum miners appeared and suddenly started solving blocks too quickly, the difficulty would adjust upward, making attacks significantly harder.
- This self-correcting mechanism ensures that even quantum computers wouldn't easily overpower the network.
🔥 Final Verdict: Quantum Computers Are Too Slow for Bitcoin
✔ The 10-minute rule limits attack frequency – quantum computers can’t keep up.
✔ Any slight miscalculation ruins the attack, resetting all progress.
✔ Bitcoin’s difficulty adjustment would react, neutralizing quantum advantages.
Even if quantum computers reach their theoretical potential, Bitcoin’s game theory and design make it incredibly resistant. 🚀
-
@ 9bde4214:06ca052b
2025-04-22 18:13:37"It's gonna be permissionless or hell."
Gigi and gzuuus are vibing towards dystopia.
Books & articles mentioned:
- AI 2027
- DVMs were a mistake
- Careless People by Sarah Wynn-Williams
- Takedown by Laila michelwait
- The Ultimate Resource by Julian L. Simon
- Harry Potter by J.K. Rowling
- Momo by Michael Ende
In this dialogue:
- Pablo's Roo Setup
- Tech Hype Cycles
- AI 2027
- Prompt injection and other attacks
- Goose and DVMCP
- Cursor vs Roo Code
- Staying in control thanks to Amber and signing delegation
- Is YOLO mode here to stay?
- What agents to trust?
- What MCP tools to trust?
- What code snippets to trust?
- Everyone will run into the issues of trust and micropayments
- Nostr solves Web of Trust & micropayments natively
- Minimalistic & open usually wins
- DVMCP exists thanks to Totem
- Relays as Tamagochis
- Agents aren't nostr experts, at least not right now
- Fix a mistake once & it's fixed forever
- Giving long-term memory to LLMs
- RAG Databases signed by domain experts
- Human-agent hybrids & Chess
- Nostr beating heart
- Pluggable context & experts
- "You never need an API key for anything"
- Sats and social signaling
- Difficulty-adjusted PoW as a rare-limiting mechanism
- Certificate authorities and centralization
- No solutions to policing speech!
- OAuth and how it centralized
- Login with nostr
- Closed vs open-source models
- Tiny models vs large models
- The minions protocol (Stanford paper)
- Generalist models vs specialized models
- Local compute & encrypted queries
- Blinded compute
- "In the eyes of the state, agents aren't people"
- Agents need identity and money; nostr provides both
- "It's gonna be permissionless or hell"
- We already have marketplaces for MCP stuff, code snippets, and other things
- Most great stuff came from marketplaces (browsers, games, etc)
- Zapstore shows that this is already working
- At scale, central control never works. There's plenty scams and viruses in the app stores.
- Using nostr to archive your user-generated content
- HAVEN, blossom, novia
- The switcharoo from advertisements to training data
- What is Truth?
- What is Real?
- "We're vibing into dystopia"
- Who should be the arbiter of Truth?
- First Amendment & why the Logos is sacred
- Silicon Valley AI bros arrogantly dismiss wisdom and philosophy
- Suicide rates & the meaning crisis
- Are LLMs symbiotic or parasitic?
- The Amish got it right
- Are we gonna make it?
- Careless People by Sarah Wynn-Williams
- Takedown by Laila michelwait
- Harry Potter dementors & Momo's time thieves
- Facebook & Google as non-human (superhuman) agents
- Zapping as a conscious action
- Privacy and the internet
- Plausible deniability thanks to generative models
- Google glasses, glassholes, and Meta's Ray Ben's
- People crave realness
- Bitcoin is the realest money we ever had
- Nostr allows for real and honest expression
- How do we find out what's real?
- Constraints, policing, and chilling effects
- Jesus' plans for DVMCP
- Hzrd's article on how DVMs are broken (DVMs were a mistake)
- Don't believe the hype
- DVMs pre-date MCP tools
- Data Vending Machines were supposed to be stupid: put coin in, get stuff out.
- Self-healing vibe-coding
- IP addresses as scarce assets
- Atomic swaps and the ASS protocol
- More marketplaces, less silos
- The intensity of #SovEng and the last 6 weeks
- If you can vibe-code everything, why build anything?
- Time, the ultimate resource
- What are the LLMs allowed to think?
- Natural language interfaces are inherently dialogical
- Sovereign Engineering is dialogical too
-
@ e6817453:b0ac3c39
2024-12-07 14:52:47The temporal semantics and temporal and time-aware knowledge graphs. We have different memory models for artificial intelligence agents. We all try to mimic somehow how the brain works, or at least how the declarative memory of the brain works. We have the split of episodic memory and semantic memory. And we also have a lot of theories, right?
Declarative Memory of the Human Brain
How is the semantic memory formed? We all know that our brain stores semantic memory quite close to the concept we have with the personal knowledge graphs, that it’s connected entities. They form a connection with each other and all those things. So far, so good. And actually, then we have a lot of concepts, how the episodic memory and our experiences gets transmitted to the semantic:
- hippocampus indexing and retrieval
- sanitization of episodic memories
- episodic-semantic shift theory
They all give a different perspective on how different parts of declarative memory cooperate.
We know that episodic memories get semanticized over time. You have semantic knowledge without the notion of time, and probably, your episodic memory is just decayed.
But, you know, it’s still an open question:
do we want to mimic an AI agent’s memory as a human brain memory, or do we want to create something different?
It’s an open question to which we have no good answer. And if you go to the theory of neuroscience and check how episodic and semantic memory interfere, you will still find a lot of theories, yeah?
Some of them say that you have the hippocampus that keeps the indexes of the memory. Some others will say that you semantic the episodic memory. Some others say that you have some separate process that digests the episodic and experience to the semantics. But all of them agree on the plan that it’s operationally two separate areas of memories and even two separate regions of brain, and the semantic, it’s more, let’s say, protected.
So it’s harder to forget the semantical facts than the episodes and everything. And what I’m thinking about for a long time, it’s this, you know, the semantic memory.
Temporal Semantics
It’s memory about the facts, but you somehow mix the time information with the semantics. I already described a lot of things, including how we could combine time with knowledge graphs and how people do it.
There are multiple ways we could persist such information, but we all hit the wall because the complexity of time and the semantics of time are highly complex concepts.
Time in a Semantic context is not a timestamp.
What I mean is that when you have a fact, and you just mentioned that I was there at this particular moment, like, I don’t know, 15:40 on Monday, it’s already awake because we don’t know which Monday, right? So you need to give the exact date, but usually, you do not have experiences like that.
You do not record your memories like that, except you do the journaling and all of the things. So, usually, you have no direct time references. What I mean is that you could say that I was there and it was some event, blah, blah, blah.
Somehow, we form a chain of events that connect with each other and maybe will be connected to some period of time if we are lucky enough. This means that we could not easily represent temporal-aware information as just a timestamp or validity and all of the things.
For sure, the validity of the knowledge graphs (simple quintuple with start and end dates)is a big topic, and it could solve a lot of things. It could solve a lot of the time cases. It’s super simple because you give the end and start dates, and you are done, but it does not answer facts that have a relative time or time information in facts . It could solve many use cases but struggle with facts in an indirect temporal context. I like the simplicity of this idea. But the problem of this approach that in most cases, we simply don’t have these timestamps. We don’t have the timestamp where this information starts and ends. And it’s not modeling many events in our life, especially if you have the processes or ongoing activities or recurrent events.
I’m more about thinking about the time of semantics, where you have a time model as a hybrid clock or some global clock that does the partial ordering of the events. It’s mean that you have the chain of the experiences and you have the chain of the facts that have the different time contexts.
We could deduct the time from this chain of the events. But it’s a big, big topic for the research. But what I want to achieve, actually, it’s not separation on episodic and semantic memory. It’s having something in between.
Blockchain of connected events and facts
I call it temporal-aware semantics or time-aware knowledge graphs, where we could encode the semantic fact together with the time component.I doubt that time should be the simple timestamp or the region of the two timestamps. For me, it is more a chain for facts that have a partial order and form a blockchain like a database or a partially ordered Acyclic graph of facts that are temporally connected. We could have some notion of time that is understandable to the agent and a model that allows us to order the events and focus on what the agent knows and how to order this time knowledge and create the chains of the events.
Time anchors
We may have a particular time in the chain that allows us to arrange a more concrete time for the rest of the events. But it’s still an open topic for research. The temporal semantics gets split into a couple of domains. One domain is how to add time to the knowledge graphs. We already have many different solutions. I described them in my previous articles.
Another domain is the agent's memory and how the memory of the artificial intelligence treats the time. This one, it’s much more complex. Because here, we could not operate with the simple timestamps. We need to have the representation of time that are understandable by model and understandable by the agent that will work with this model. And this one, it’s way bigger topic for the research.”
-
@ a95c6243:d345522c
2025-03-20 09:59:20Bald werde es verboten, alleine im Auto zu fahren, konnte man dieser Tage in verschiedenen spanischen Medien lesen. Die nationale Verkehrsbehörde (Dirección General de Tráfico, kurz DGT) werde Alleinfahrern das Leben schwer machen, wurde gemeldet. Konkret erörtere die Generaldirektion geeignete Sanktionen für Personen, die ohne Beifahrer im Privatauto unterwegs seien.
Das Alleinfahren sei zunehmend verpönt und ein Mentalitätswandel notwendig, hieß es. Dieser «Luxus» stehe im Widerspruch zu den Maßnahmen gegen Umweltverschmutzung, die in allen europäischen Ländern gefördert würden. In Frankreich sei es «bereits verboten, in der Hauptstadt allein zu fahren», behauptete Noticiastrabajo Huffpost in einer Zwischenüberschrift. Nur um dann im Text zu konkretisieren, dass die sogenannte «Umweltspur» auf der Pariser Ringautobahn gemeint war, die für Busse, Taxis und Fahrgemeinschaften reserviert ist. Ab Mai werden Verstöße dagegen mit einem Bußgeld geahndet.
Die DGT jedenfalls wolle bei der Umsetzung derartiger Maßnahmen nicht hinterherhinken. Diese Medienberichte, inklusive des angeblich bevorstehenden Verbots, beriefen sich auf Aussagen des Generaldirektors der Behörde, Pere Navarro, beim Mobilitätskongress Global Mobility Call im November letzten Jahres, wo es um «nachhaltige Mobilität» ging. Aus diesem Kontext stammt auch Navarros Warnung: «Die Zukunft des Verkehrs ist geteilt oder es gibt keine».
Die «Faktenchecker» kamen der Generaldirektion prompt zu Hilfe. Die DGT habe derlei Behauptungen zurückgewiesen und klargestellt, dass es keine Pläne gebe, Fahrten mit nur einer Person im Auto zu verbieten oder zu bestrafen. Bei solchen Meldungen handele es sich um Fake News. Teilweise wurde der Vorsitzende der spanischen «Rechtsaußen»-Partei Vox, Santiago Abascal, der Urheberschaft bezichtigt, weil er einen entsprechenden Artikel von La Gaceta kommentiert hatte.
Der Beschwichtigungsversuch der Art «niemand hat die Absicht» ist dabei erfahrungsgemäß eher ein Alarmzeichen als eine Beruhigung. Walter Ulbrichts Leugnung einer geplanten Berliner Mauer vom Juni 1961 ist vielen genauso in Erinnerung wie die Fake News-Warnungen des deutschen Bundesgesundheitsministeriums bezüglich Lockdowns im März 2020 oder diverse Äußerungen zu einer Impfpflicht ab 2020.
Aber Aufregung hin, Dementis her: Die Pressemitteilung der DGT zu dem Mobilitätskongress enthält in Wahrheit viel interessantere Informationen als «nur» einen Appell an den «guten» Bürger wegen der Bemühungen um die Lebensqualität in Großstädten oder einen möglichen obligatorischen Abschied vom Alleinfahren. Allerdings werden diese Details von Medien und sogenannten Faktencheckern geflissentlich übersehen, obwohl sie keineswegs versteckt sind. Die Auskünfte sind sehr aufschlussreich, wenn man genauer hinschaut.
Digitalisierung ist der Schlüssel für Kontrolle
Auf dem Kongress stellte die Verkehrsbehörde ihre Initiativen zur Förderung der «neuen Mobilität» vor, deren Priorität Sicherheit und Effizienz sei. Die vier konkreten Ansätze haben alle mit Digitalisierung, Daten, Überwachung und Kontrolle im großen Stil zu tun und werden unter dem Euphemismus der «öffentlich-privaten Partnerschaft» angepriesen. Auch lassen sie die transhumanistische Idee vom unzulänglichen Menschen erkennen, dessen Fehler durch «intelligente» technologische Infrastruktur kompensiert werden müssten.
Die Chefin des Bereichs «Verkehrsüberwachung» erklärte die Funktion des spanischen National Access Point (NAP), wobei sie betonte, wie wichtig Verkehrs- und Infrastrukturinformationen in Echtzeit seien. Der NAP ist «eine essenzielle Web-Applikation, die unter EU-Mandat erstellt wurde», kann man auf der Website der DGT nachlesen.
Das Mandat meint Regelungen zu einem einheitlichen europäischen Verkehrsraum, mit denen die Union mindestens seit 2010 den Aufbau einer digitalen Architektur mit offenen Schnittstellen betreibt. Damit begründet man auch «umfassende Datenbereitstellungspflichten im Bereich multimodaler Reiseinformationen». Jeder Mitgliedstaat musste einen NAP, also einen nationalen Zugangspunkt einrichten, der Zugang zu statischen und dynamischen Reise- und Verkehrsdaten verschiedener Verkehrsträger ermöglicht.
Diese Entwicklung ist heute schon weit fortgeschritten, auch und besonders in Spanien. Auf besagtem Kongress erläuterte die Leiterin des Bereichs «Telematik» die Plattform «DGT 3.0». Diese werde als Integrator aller Informationen genutzt, die von den verschiedenen öffentlichen und privaten Systemen, die Teil der Mobilität sind, bereitgestellt werden.
Es handele sich um eine Vermittlungsplattform zwischen Akteuren wie Fahrzeugherstellern, Anbietern von Navigationsdiensten oder Kommunen und dem Endnutzer, der die Verkehrswege benutzt. Alle seien auf Basis des Internets der Dinge (IOT) anonym verbunden, «um der vernetzten Gemeinschaft wertvolle Informationen zu liefern oder diese zu nutzen».
So sei DGT 3.0 «ein Zugangspunkt für einzigartige, kostenlose und genaue Echtzeitinformationen über das Geschehen auf den Straßen und in den Städten». Damit lasse sich der Verkehr nachhaltiger und vernetzter gestalten. Beispielsweise würden die Karten des Produktpartners Google dank der DGT-Daten 50 Millionen Mal pro Tag aktualisiert.
Des Weiteren informiert die Verkehrsbehörde über ihr SCADA-Projekt. Die Abkürzung steht für Supervisory Control and Data Acquisition, zu deutsch etwa: Kontrollierte Steuerung und Datenerfassung. Mit SCADA kombiniert man Software und Hardware, um automatisierte Systeme zur Überwachung und Steuerung technischer Prozesse zu schaffen. Das SCADA-Projekt der DGT wird von Indra entwickelt, einem spanischen Beratungskonzern aus den Bereichen Sicherheit & Militär, Energie, Transport, Telekommunikation und Gesundheitsinformation.
Das SCADA-System der Behörde umfasse auch eine Videostreaming- und Videoaufzeichnungsplattform, die das Hochladen in die Cloud in Echtzeit ermöglicht, wie Indra erklärt. Dabei gehe es um Bilder, die von Überwachungskameras an Straßen aufgenommen wurden, sowie um Videos aus DGT-Hubschraubern und Drohnen. Ziel sei es, «die sichere Weitergabe von Videos an Dritte sowie die kontinuierliche Aufzeichnung und Speicherung von Bildern zur möglichen Analyse und späteren Nutzung zu ermöglichen».
Letzteres klingt sehr nach biometrischer Erkennung und Auswertung durch künstliche Intelligenz. Für eine bessere Datenübertragung wird derzeit die Glasfaserverkabelung entlang der Landstraßen und Autobahnen ausgebaut. Mit der Cloud sind die Amazon Web Services (AWS) gemeint, die spanischen Daten gehen somit direkt zu einem US-amerikanischen «Big Data»-Unternehmen.
Das Thema «autonomes Fahren», also Fahren ohne Zutun des Menschen, bildet den Abschluss der Betrachtungen der DGT. Zusammen mit dem Interessenverband der Automobilindustrie ANFAC (Asociación Española de Fabricantes de Automóviles y Camiones) sprach man auf dem Kongress über Strategien und Perspektiven in diesem Bereich. Die Lobbyisten hoffen noch in diesem Jahr 2025 auf einen normativen Rahmen zur erweiterten Unterstützung autonomer Technologien.
Wenn man derartige Informationen im Zusammenhang betrachtet, bekommt man eine Idee davon, warum zunehmend alles elektrisch und digital werden soll. Umwelt- und Mobilitätsprobleme in Städten, wie Luftverschmutzung, Lärmbelästigung, Platzmangel oder Staus, sind eine Sache. Mit dem Argument «emissionslos» wird jedoch eine Referenz zum CO2 und dem «menschengemachten Klimawandel» hergestellt, die Emotionen triggert. Und damit wird so ziemlich alles verkauft.
Letztlich aber gilt: Je elektrischer und digitaler unsere Umgebung wird und je freigiebiger wir mit unseren Daten jeder Art sind, desto besser werden wir kontrollier-, steuer- und sogar abschaltbar. Irgendwann entscheiden KI-basierte Algorithmen, ob, wann, wie, wohin und mit wem wir uns bewegen dürfen. Über einen 15-Minuten-Radius geht dann möglicherweise nichts hinaus. Die Projekte auf diesem Weg sind ernst zu nehmen, real und schon weit fortgeschritten.
[Titelbild: Pixabay]
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ 4e616576:43c4fee8
2025-04-30 13:13:51asdffasdf
-
@ a95c6243:d345522c
2025-03-15 10:56:08Was nützt die schönste Schuldenbremse, wenn der Russe vor der Tür steht? \ Wir können uns verteidigen lernen oder alle Russisch lernen. \ Jens Spahn
In der Politik ist buchstäblich keine Idee zu riskant, kein Mittel zu schäbig und keine Lüge zu dreist, als dass sie nicht benutzt würden. Aber der Clou ist, dass diese Masche immer noch funktioniert, wenn nicht sogar immer besser. Ist das alles wirklich so schwer zu durchschauen? Mir fehlen langsam die Worte.
Aktuell werden sowohl in der Europäischen Union als auch in Deutschland riesige Milliardenpakete für die Aufrüstung – also für die Rüstungsindustrie – geschnürt. Die EU will 800 Milliarden Euro locker machen, in Deutschland sollen es 500 Milliarden «Sondervermögen» sein. Verteidigung nennen das unsere «Führer», innerhalb der Union und auch an «unserer Ostflanke», der Ukraine.
Das nötige Feindbild konnte inzwischen signifikant erweitert werden. Schuld an allem und zudem gefährlich ist nicht mehr nur Putin, sondern jetzt auch Trump. Europa müsse sich sowohl gegen Russland als auch gegen die USA schützen und rüsten, wird uns eingetrichtert.
Und während durch Diplomatie genau dieser beiden Staaten gerade endlich mal Bewegung in die Bemühungen um einen Frieden oder wenigstens einen Waffenstillstand in der Ukraine kommt, rasselt man im moralisch überlegenen Zeigefinger-Europa so richtig mit dem Säbel.
Begleitet und gestützt wird der ganze Prozess – wie sollte es anders sein – von den «Qualitätsmedien». Dass Russland einen Angriff auf «Europa» plant, weiß nicht nur der deutsche Verteidigungsminister (und mit Abstand beliebteste Politiker) Pistorius, sondern dank ihnen auch jedes Kind. Uns bleiben nur noch wenige Jahre. Zum Glück bereitet sich die Bundeswehr schon sehr konkret auf einen Krieg vor.
Die FAZ und Corona-Gesundheitsminister Spahn markieren einen traurigen Höhepunkt. Hier haben sich «politische und publizistische Verantwortungslosigkeit propagandistisch gegenseitig befruchtet», wie es bei den NachDenkSeiten heißt. Die Aussage Spahns in dem Interview, «der Russe steht vor der Tür», ist das eine. Die Zeitung verschärfte die Sache jedoch, indem sie das Zitat explizit in den Titel übernahm, der in einer ersten Version scheinbar zu harmlos war.
Eine große Mehrheit der deutschen Bevölkerung findet Aufrüstung und mehr Schulden toll, wie ARD und ZDF sehr passend ermittelt haben wollen. Ähnliches gelte für eine noch stärkere militärische Unterstützung der Ukraine. Etwas skeptischer seien die Befragten bezüglich der Entsendung von Bundeswehrsoldaten dorthin, aber immerhin etwa fifty-fifty.
Eigentlich ist jedoch die Meinung der Menschen in «unseren Demokratien» irrelevant. Sowohl in der Europäischen Union als auch in Deutschland sind die «Eliten» offenbar der Ansicht, der Souverän habe in Fragen von Krieg und Frieden sowie von aberwitzigen astronomischen Schulden kein Wörtchen mitzureden. Frau von der Leyen möchte über 150 Milliarden aus dem Gesamtpaket unter Verwendung von Artikel 122 des EU-Vertrags ohne das Europäische Parlament entscheiden – wenn auch nicht völlig kritiklos.
In Deutschland wollen CDU/CSU und SPD zur Aufweichung der «Schuldenbremse» mehrere Änderungen des Grundgesetzes durch das abgewählte Parlament peitschen. Dieser Versuch, mit dem alten Bundestag eine Zweidrittelmehrheit zu erzielen, die im neuen nicht mehr gegeben wäre, ist mindestens verfassungsrechtlich umstritten.
Das Manöver scheint aber zu funktionieren. Heute haben die Grünen zugestimmt, nachdem Kanzlerkandidat Merz läppische 100 Milliarden für «irgendwas mit Klima» zugesichert hatte. Die Abstimmung im Plenum soll am kommenden Dienstag erfolgen – nur eine Woche, bevor sich der neu gewählte Bundestag konstituieren wird.
Interessant sind die Argumente, die BlackRocker Merz für seine Attacke auf Grundgesetz und Demokratie ins Feld führt. Abgesehen von der angeblichen Eile, «unsere Verteidigungsfähigkeit deutlich zu erhöhen» (ausgelöst unter anderem durch «die Münchner Sicherheitskonferenz und die Ereignisse im Weißen Haus»), ließ uns der CDU-Chef wissen, dass Deutschland einfach auf die internationale Bühne zurück müsse. Merz schwadronierte gefährlich mehrdeutig:
«Die ganze Welt schaut in diesen Tagen und Wochen auf Deutschland. Wir haben in der Europäischen Union und auf der Welt eine Aufgabe, die weit über die Grenzen unseres eigenen Landes hinausgeht.»
[Titelbild: Tag des Sieges]
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ 4e616576:43c4fee8
2025-04-30 13:11:58asdfasdfasfd
-
@ 4ba8e86d:89d32de4
2025-04-21 02:13:56Tutorial feito por nostr:nostr:npub1rc56x0ek0dd303eph523g3chm0wmrs5wdk6vs0ehd0m5fn8t7y4sqra3tk poste original abaixo:
Parte 1 : http://xh6liiypqffzwnu5734ucwps37tn2g6npthvugz3gdoqpikujju525yd.onion/263585/tutorial-debloat-de-celulares-android-via-adb-parte-1
Parte 2 : http://xh6liiypqffzwnu5734ucwps37tn2g6npthvugz3gdoqpikujju525yd.onion/index.php/263586/tutorial-debloat-de-celulares-android-via-adb-parte-2
Quando o assunto é privacidade em celulares, uma das medidas comumente mencionadas é a remoção de bloatwares do dispositivo, também chamado de debloat. O meio mais eficiente para isso sem dúvidas é a troca de sistema operacional. Custom Rom’s como LineageOS, GrapheneOS, Iodé, CalyxOS, etc, já são bastante enxutos nesse quesito, principalmente quanto não é instalado os G-Apps com o sistema. No entanto, essa prática pode acabar resultando em problemas indesejados como a perca de funções do dispositivo, e até mesmo incompatibilidade com apps bancários, tornando este método mais atrativo para quem possui mais de um dispositivo e separando um apenas para privacidade. Pensando nisso, pessoas que possuem apenas um único dispositivo móvel, que são necessitadas desses apps ou funções, mas, ao mesmo tempo, tem essa visão em prol da privacidade, buscam por um meio-termo entre manter a Stock rom, e não ter seus dados coletados por esses bloatwares. Felizmente, a remoção de bloatwares é possível e pode ser realizada via root, ou mais da maneira que este artigo irá tratar, via adb.
O que são bloatwares?
Bloatware é a junção das palavras bloat (inchar) + software (programa), ou seja, um bloatware é basicamente um programa inútil ou facilmente substituível — colocado em seu dispositivo previamente pela fabricante e operadora — que está no seu dispositivo apenas ocupando espaço de armazenamento, consumindo memória RAM e pior, coletando seus dados e enviando para servidores externos, além de serem mais pontos de vulnerabilidades.
O que é o adb?
O Android Debug Brigde, ou apenas adb, é uma ferramenta que se utiliza das permissões de usuário shell e permite o envio de comandos vindo de um computador para um dispositivo Android exigindo apenas que a depuração USB esteja ativa, mas também pode ser usada diretamente no celular a partir do Android 11, com o uso do Termux e a depuração sem fio (ou depuração wifi). A ferramenta funciona normalmente em dispositivos sem root, e também funciona caso o celular esteja em Recovery Mode.
Requisitos:
Para computadores:
• Depuração USB ativa no celular; • Computador com adb; • Cabo USB;
Para celulares:
• Depuração sem fio (ou depuração wifi) ativa no celular; • Termux; • Android 11 ou superior;
Para ambos:
• Firewall NetGuard instalado e configurado no celular; • Lista de bloatwares para seu dispositivo;
Ativação de depuração:
Para ativar a Depuração USB em seu dispositivo, pesquise como ativar as opções de desenvolvedor de seu dispositivo, e lá ative a depuração. No caso da depuração sem fio, sua ativação irá ser necessária apenas no momento que for conectar o dispositivo ao Termux.
Instalação e configuração do NetGuard
O NetGuard pode ser instalado através da própria Google Play Store, mas de preferência instale pela F-Droid ou Github para evitar telemetria.
F-Droid: https://f-droid.org/packages/eu.faircode.netguard/
Github: https://github.com/M66B/NetGuard/releases
Após instalado, configure da seguinte maneira:
Configurações → padrões (lista branca/negra) → ative as 3 primeiras opções (bloquear wifi, bloquear dados móveis e aplicar regras ‘quando tela estiver ligada’);
Configurações → opções avançadas → ative as duas primeiras (administrar aplicativos do sistema e registrar acesso a internet);
Com isso, todos os apps estarão sendo bloqueados de acessar a internet, seja por wifi ou dados móveis, e na página principal do app basta permitir o acesso a rede para os apps que você vai usar (se necessário). Permita que o app rode em segundo plano sem restrição da otimização de bateria, assim quando o celular ligar, ele já estará ativo.
Lista de bloatwares
Nem todos os bloatwares são genéricos, haverá bloatwares diferentes conforme a marca, modelo, versão do Android, e até mesmo região.
Para obter uma lista de bloatwares de seu dispositivo, caso seu aparelho já possua um tempo de existência, você encontrará listas prontas facilmente apenas pesquisando por elas. Supondo que temos um Samsung Galaxy Note 10 Plus em mãos, basta pesquisar em seu motor de busca por:
Samsung Galaxy Note 10 Plus bloatware list
Provavelmente essas listas já terão inclusas todos os bloatwares das mais diversas regiões, lhe poupando o trabalho de buscar por alguma lista mais específica.
Caso seu aparelho seja muito recente, e/ou não encontre uma lista pronta de bloatwares, devo dizer que você acaba de pegar em merda, pois é chato para um caralho pesquisar por cada aplicação para saber sua função, se é essencial para o sistema ou se é facilmente substituível.
De antemão já aviso, que mais para frente, caso vossa gostosura remova um desses aplicativos que era essencial para o sistema sem saber, vai acabar resultando na perda de alguma função importante, ou pior, ao reiniciar o aparelho o sistema pode estar quebrado, lhe obrigando a seguir com uma formatação, e repetir todo o processo novamente.
Download do adb em computadores
Para usar a ferramenta do adb em computadores, basta baixar o pacote chamado SDK platform-tools, disponível através deste link: https://developer.android.com/tools/releases/platform-tools. Por ele, você consegue o download para Windows, Mac e Linux.
Uma vez baixado, basta extrair o arquivo zipado, contendo dentro dele uma pasta chamada platform-tools que basta ser aberta no terminal para se usar o adb.
Download do adb em celulares com Termux.
Para usar a ferramenta do adb diretamente no celular, antes temos que baixar o app Termux, que é um emulador de terminal linux, e já possui o adb em seu repositório. Você encontra o app na Google Play Store, mas novamente recomendo baixar pela F-Droid ou diretamente no Github do projeto.
F-Droid: https://f-droid.org/en/packages/com.termux/
Github: https://github.com/termux/termux-app/releases
Processo de debloat
Antes de iniciarmos, é importante deixar claro que não é para você sair removendo todos os bloatwares de cara sem mais nem menos, afinal alguns deles precisam antes ser substituídos, podem ser essenciais para você para alguma atividade ou função, ou até mesmo são insubstituíveis.
Alguns exemplos de bloatwares que a substituição é necessária antes da remoção, é o Launcher, afinal, é a interface gráfica do sistema, e o teclado, que sem ele só é possível digitar com teclado externo. O Launcher e teclado podem ser substituídos por quaisquer outros, minha recomendação pessoal é por aqueles que respeitam sua privacidade, como Pie Launcher e Simple Laucher, enquanto o teclado pelo OpenBoard e FlorisBoard, todos open-source e disponíveis da F-Droid.
Identifique entre a lista de bloatwares, quais você gosta, precisa ou prefere não substituir, de maneira alguma você é obrigado a remover todos os bloatwares possíveis, modifique seu sistema a seu bel-prazer. O NetGuard lista todos os apps do celular com o nome do pacote, com isso você pode filtrar bem qual deles não remover.
Um exemplo claro de bloatware insubstituível e, portanto, não pode ser removido, é o com.android.mtp, um protocolo onde sua função é auxiliar a comunicação do dispositivo com um computador via USB, mas por algum motivo, tem acesso a rede e se comunica frequentemente com servidores externos. Para esses casos, e melhor solução mesmo é bloquear o acesso a rede desses bloatwares com o NetGuard.
MTP tentando comunicação com servidores externos:
Executando o adb shell
No computador
Faça backup de todos os seus arquivos importantes para algum armazenamento externo, e formate seu celular com o hard reset. Após a formatação, e a ativação da depuração USB, conecte seu aparelho e o pc com o auxílio de um cabo USB. Muito provavelmente seu dispositivo irá apenas começar a carregar, por isso permita a transferência de dados, para que o computador consiga se comunicar normalmente com o celular.
Já no pc, abra a pasta platform-tools dentro do terminal, e execute o seguinte comando:
./adb start-server
O resultado deve ser:
daemon not running; starting now at tcp:5037 daemon started successfully
E caso não apareça nada, execute:
./adb kill-server
E inicie novamente.
Com o adb conectado ao celular, execute:
./adb shell
Para poder executar comandos diretamente para o dispositivo. No meu caso, meu celular é um Redmi Note 8 Pro, codinome Begonia.
Logo o resultado deve ser:
begonia:/ $
Caso ocorra algum erro do tipo:
adb: device unauthorized. This adb server’s $ADB_VENDOR_KEYS is not set Try ‘adb kill-server’ if that seems wrong. Otherwise check for a confirmation dialog on your device.
Verifique no celular se apareceu alguma confirmação para autorizar a depuração USB, caso sim, autorize e tente novamente. Caso não apareça nada, execute o kill-server e repita o processo.
No celular
Após realizar o mesmo processo de backup e hard reset citado anteriormente, instale o Termux e, com ele iniciado, execute o comando:
pkg install android-tools
Quando surgir a mensagem “Do you want to continue? [Y/n]”, basta dar enter novamente que já aceita e finaliza a instalação
Agora, vá até as opções de desenvolvedor, e ative a depuração sem fio. Dentro das opções da depuração sem fio, terá uma opção de emparelhamento do dispositivo com um código, que irá informar para você um código em emparelhamento, com um endereço IP e porta, que será usado para a conexão com o Termux.
Para facilitar o processo, recomendo que abra tanto as configurações quanto o Termux ao mesmo tempo, e divida a tela com os dois app’s, como da maneira a seguir:
Para parear o Termux com o dispositivo, não é necessário digitar o ip informado, basta trocar por “localhost”, já a porta e o código de emparelhamento, deve ser digitado exatamente como informado. Execute:
adb pair localhost:porta CódigoDeEmparelhamento
De acordo com a imagem mostrada anteriormente, o comando ficaria “adb pair localhost:41255 757495”.
Com o dispositivo emparelhado com o Termux, agora basta conectar para conseguir executar os comandos, para isso execute:
adb connect localhost:porta
Obs: a porta que você deve informar neste comando não é a mesma informada com o código de emparelhamento, e sim a informada na tela principal da depuração sem fio.
Pronto! Termux e adb conectado com sucesso ao dispositivo, agora basta executar normalmente o adb shell:
adb shell
Remoção na prática Com o adb shell executado, você está pronto para remover os bloatwares. No meu caso, irei mostrar apenas a remoção de um app (Google Maps), já que o comando é o mesmo para qualquer outro, mudando apenas o nome do pacote.
Dentro do NetGuard, verificando as informações do Google Maps:
Podemos ver que mesmo fora de uso, e com a localização do dispositivo desativado, o app está tentando loucamente se comunicar com servidores externos, e informar sabe-se lá que peste. Mas sem novidades até aqui, o mais importante é que podemos ver que o nome do pacote do Google Maps é com.google.android.apps.maps, e para o remover do celular, basta executar:
pm uninstall –user 0 com.google.android.apps.maps
E pronto, bloatware removido! Agora basta repetir o processo para o resto dos bloatwares, trocando apenas o nome do pacote.
Para acelerar o processo, você pode já criar uma lista do bloco de notas com os comandos, e quando colar no terminal, irá executar um atrás do outro.
Exemplo de lista:
Caso a donzela tenha removido alguma coisa sem querer, também é possível recuperar o pacote com o comando:
cmd package install-existing nome.do.pacote
Pós-debloat
Após limpar o máximo possível o seu sistema, reinicie o aparelho, caso entre no como recovery e não seja possível dar reboot, significa que você removeu algum app “essencial” para o sistema, e terá que formatar o aparelho e repetir toda a remoção novamente, desta vez removendo poucos bloatwares de uma vez, e reiniciando o aparelho até descobrir qual deles não pode ser removido. Sim, dá trabalho… quem mandou querer privacidade?
Caso o aparelho reinicie normalmente após a remoção, parabéns, agora basta usar seu celular como bem entender! Mantenha o NetGuard sempre executando e os bloatwares que não foram possíveis remover não irão se comunicar com servidores externos, passe a usar apps open source da F-Droid e instale outros apps através da Aurora Store ao invés da Google Play Store.
Referências: Caso você seja um Australopithecus e tenha achado este guia difícil, eis uma videoaula (3:14:40) do Anderson do canal Ciberdef, realizando todo o processo: http://odysee.com/@zai:5/Como-remover-at%C3%A9-200-APLICATIVOS-que-colocam-a-sua-PRIVACIDADE-E-SEGURAN%C3%87A-em-risco.:4?lid=6d50f40314eee7e2f218536d9e5d300290931d23
Pdf’s do Anderson citados na videoaula: créditos ao anon6837264 http://eternalcbrzpicytj4zyguygpmkjlkddxob7tptlr25cdipe5svyqoqd.onion/file/3863a834d29285d397b73a4af6fb1bbe67c888d72d30/t-05e63192d02ffd.pdf
Processo de instalação do Termux e adb no celular: https://youtu.be/APolZrPHSms
-
@ a39d19ec:3d88f61e
2024-11-21 12:05:09A state-controlled money supply can influence the development of socialist policies and practices in various ways. Although the relationship is not deterministic, state control over the money supply can contribute to a larger role of the state in the economy and facilitate the implementation of socialist ideals.
Fiscal Policy Capabilities
When the state manages the money supply, it gains the ability to implement fiscal policies that can lead to an expansion of social programs and welfare initiatives. Funding these programs by creating money can enhance the state's influence over the economy and move it closer to a socialist model. The Soviet Union, for instance, had a centralized banking system that enabled the state to fund massive industrialization and social programs, significantly expanding the state's role in the economy.
Wealth Redistribution
Controlling the money supply can also allow the state to influence economic inequality through monetary policies, effectively redistributing wealth and reducing income disparities. By implementing low-interest loans or providing financial assistance to disadvantaged groups, the state can narrow the wealth gap and promote social equality, as seen in many European welfare states.
Central Planning
A state-controlled money supply can contribute to increased central planning, as the state gains more influence over the economy. Central banks, which are state-owned or heavily influenced by the state, play a crucial role in managing the money supply and facilitating central planning. This aligns with socialist principles that advocate for a planned economy where resources are allocated according to social needs rather than market forces.
Incentives for Staff
Staff members working in state institutions responsible for managing the money supply have various incentives to keep the system going. These incentives include job security, professional expertise and reputation, political alignment, regulatory capture, institutional inertia, and legal and administrative barriers. While these factors can differ among individuals, they can collectively contribute to the persistence of a state-controlled money supply system.
In conclusion, a state-controlled money supply can facilitate the development of socialist policies and practices by enabling fiscal policies, wealth redistribution, and central planning. The staff responsible for managing the money supply have diverse incentives to maintain the system, further ensuring its continuation. However, it is essential to note that many factors influence the trajectory of an economic system, and the relationship between state control over the money supply and socialism is not inevitable.
-
@ e096a89e:59351479
2025-04-30 12:59:28Why Oshi?
I had another name for this brand before, but it was hard for folks to say. Then I saw a chance to tap into the #Nostr and #Bitcoin crowd, people who might vibe with what I’m creating, and I knew I needed something that’d stick.
A good name can make a difference. Well, sometimes. Take Blink-182 - it might sound odd, but it worked for them and even has a ring to it. So, why Oshi?
Names mean a lot to me, and Oshi’s got layers. I’m into Japanese culture and Bitcoin, so it fits perfectly with a few meanings baked in:
- It’s a nod to Bitcoin’s visionary, Satoshi Nakamoto.
- In Japanese, “oshi” means cheering on your favorite idol by supporting their work - think of me as the maker, you as the fan.
- It’s short for “oh shiiiitttt” - what most folks say when they taste how good this stuff is.
My goal with Oshi is to share how amazing pecans and dates can be together. Everything I make - Hodl Butter, Hodl Bars, chocolates - is crafted with intention, keeping it simple and nuanced, no overdoing it. It’s healthy snacking without the grains or junk you find in other products.
I’ve got a few bars and jars in stock now. Grab something today and taste the unique flavor for yourself. Visit my website at https://oshigood.us/
foodstr #oshigood #hodlbar #hodlbutter
-
@ ed5774ac:45611c5c
2025-04-19 20:29:31April 20, 2020: The day I saw my so-called friends expose themselves as gutless, brain-dead sheep.
On that day, I shared a video exposing the damning history of the Bill & Melinda Gates Foundation's vaccine campaigns in Africa and the developing world. As Gates was on every TV screen, shilling COVID jabs that didn’t even exist, I called out his blatant financial conflict of interest and pointed out the obvious in my facebook post: "Finally someone is able to explain why Bill Gates runs from TV to TV to promote vaccination. Not surprisingly, it's all about money again…" - referencing his substantial investments in vaccine technology, including BioNTech's mRNA platform that would later produce the COVID vaccines and generate massive profits for his so-called philanthropic foundation.
The conflict of interest was undeniable. I genuinely believed anyone capable of basic critical thinking would at least pause to consider these glaring financial motives. But what followed was a masterclass in human stupidity.
My facebook post from 20 April 2020:
Not only was I branded a 'conspiracy theorist' for daring to question the billionaire who stood to make a fortune off the very vaccines he was shilling, but the brain-dead, logic-free bullshit vomited by the people around me was beyond pathetic. These barely literate morons couldn’t spell "Pfizer" without auto-correct, yet they mindlessly swallowed and repeated every lie the media and government force-fed them, branding anything that cracked their fragile reality as "conspiracy theory." Big Pharma’s rap sheet—fraud, deadly cover-ups, billions in fines—could fill libraries, yet these obedient sheep didn’t bother to open a single book or read a single study before screaming their ignorance, desperate to virtue-signal their obedience. Then, like spineless lab rats, they lined up for an experimental jab rushed to the market in months, too dumb to care that proper vaccine development takes a decade.
The pathetic part is that these idiots spend hours obsessing over reviews for their useless purchases like shoes or socks, but won’t spare 60 seconds to research the experimental cocktail being injected into their veins—or even glance at the FDA’s own damning safety reports. Those same obedient sheep would read every Yelp review for a fucking coffee shop but won't spend five minutes looking up Pfizer's criminal fraud settlements. They would demand absolute obedience to ‘The Science™’—while being unable to define mRNA, explain lipid nanoparticles, or justify why trials were still running as they queued up like cattle for their jab. If they had two brain cells to rub together or spent 30 minutes actually researching, they'd know, but no—they'd rather suck down the narrative like good little slaves, too dumb to question, too weak to think.
Worst of all, they became the system’s attack dogs—not just swallowing the poison, but forcing it down others’ throats. This wasn’t ignorance. It was betrayal. They mutated into medical brownshirts, destroying lives to virtue-signal their obedience—even as their own children’s hearts swelled with inflammation.
One conversation still haunts me to this day—a masterclass in wealth-worship delusion. A close friend, as a response to my facebook post, insisted that Gates’ assumed reading list magically awards him vaccine expertise, while dismissing his billion-dollar investments in the same products as ‘no conflict of interest.’ Worse, he argued that Gates’s $5–10 billion pandemic windfall was ‘deserved.’
This exchange crystallizes civilization’s intellectual surrender: reason discarded with religious fervor, replaced by blind faith in corporate propaganda.
The comment of a friend on my facebook post that still haunts me to this day:
Walking Away from the Herd
After a period of anger and disillusionment, I made a decision: I would no longer waste energy arguing with people who refused to think for themselves. If my circle couldn’t even ask basic questions—like why an untested medical intervention was being pushed with unprecedented urgency—then I needed a new community.
Fortunately, I already knew where to look. For three years, I had been involved in Bitcoin, a space where skepticism wasn’t just tolerated—it was demanded. Here, I’d met some of the most principled and independent thinkers I’d ever encountered. These were people who understood the corrupting influence of centralized power—whether in money, media, or politics—and who valued sovereignty, skepticism, and integrity. Instead of blind trust, bitcoiners practiced relentless verification. And instead of empty rhetoric, they lived by a simple creed: Don’t trust. Verify.
It wasn’t just a philosophy. It was a lifeline. So I chose my side and I walked away from the herd.
Finding My Tribe
Over the next four years, I immersed myself in Bitcoin conferences, meetups, and spaces where ideas were tested, not parroted. Here, I encountered extraordinary people: not only did they share my skepticism toward broken systems, but they challenged me to sharpen it.
No longer adrift in a sea of mindless conformity, I’d found a crew of thinkers who cut through the noise. They saw clearly what most ignored—that at the core of society’s collapse lay broken money, the silent tax on time, freedom, and truth itself. But unlike the complainers I’d left behind, these people built. They coded. They wrote. They risked careers and reputations to expose the rot. Some faced censorship; others, mockery. All understood the stakes.
These weren’t keyboard philosophers. They were modern-day Cassandras, warning of inflation’s theft, the Fed’s lies, and the coming dollar collapse—not for clout, but because they refused to kneel to a dying regime. And in their defiance, I found something rare: a tribe that didn’t just believe in a freer future. They were engineering it.
April 20, 2024: No more herd. No more lies. Only proof-of-work.
On April 20, 2024, exactly four years after my last Facebook post, the one that severed my ties to the herd for good—I stood in front of Warsaw’s iconic Palace of Culture and Science, surrounded by 400 bitcoiners who felt like family. We were there to celebrate Bitcoin’s fourth halving, but it was more than a protocol milestone. It was a reunion of sovereign individuals. Some faces I’d known since the early days; others, I’d met only hours before. We bonded instantly—heated debates, roaring laughter, zero filters on truths or on so called conspiracy theories.
As the countdown to the halving began, it hit me: This was the antithesis of the hollow world I’d left behind. No performative outrage, no coerced consensus—just a room of unyielding minds who’d traded the illusion of safety for the grit of truth. Four years prior, I’d been alone in my resistance. Now, I raised my glass among my people - those who had seen the system's lies and chosen freedom instead. Each had their own story of awakening, their own battles fought, but here we shared the same hard-won truth.
The energy wasn’t just electric. It was alive—the kind that emerges when free people build rather than beg. For the first time, I didn’t just belong. I was home. And in that moment, the halving’s ticking clock mirrored my own journey: cyclical, predictable in its scarcity, revolutionary in its consequences. Four years had burned away the old world. What remained was stronger.
No Regrets
Leaving the herd wasn’t a choice—it was evolution. My soul shouted: "I’d rather stand alone than kneel with the masses!". The Bitcoin community became more than family; they’re living proof that the world still produces warriors, not sheep. Here, among those who forge truth, I found something extinct elsewhere: hope that burns brighter with every halving, every block, every defiant mind that joins the fight.
Change doesn’t come from the crowd. It starts when one person stops applauding.
Today, I stand exactly where I always wanted to be—shoulder-to-shoulder with my true family: the rebels, the builders, the ungovernable. Together, we’re building the decentralized future.
-
@ 6e0ea5d6:0327f353
2025-04-19 15:09:18🩸
The world won’t stop and wait for you to recover.Do your duty regardless of how you feel. That’s the only guarantee you’ll end the day alright.
You’ve heard it before: “The worst workout is the one you didn’t do.” Sometimes you don’t feel like going to the gym. You start bargaining with laziness: “I didn’t sleep well… maybe I should skip today.” But then you go anyway, committing only to the bare minimum your energy allows. And once you start, your body outperforms your mind’s assumptions—it turns out to be one of the best workouts you’ve had in a long time. The feeling of following through, of winning a battle you were losing, gives you the confidence to own the rest of your day. You finally feel good.
And that wouldn’t have happened if you stayed home waiting to feel better. Guilt would’ve joined forces with discouragement, and you’d be crushed by melancholy in a victim mindset. That loss would bleed into the rest of your week, conditioning your mind: because you didn’t spend your energy on the workout, you’d stay up late, wake up worse, and while waiting to feel “ready,” you’d lose a habit that took months of effort to build.
When in doubt, just do your duty. Stick to the plan. Don’t negotiate with your feelings—outsmart them. “Just one page today,” and you’ll end up reading ten. “Only the easy tasks,” and you’ll gain momentum to conquer the hard ones. Laziness is a serpent—you win when you make no deals with it.
A close friend once told me that when he was at his limit during a second job shift, he’d open a picture on his phone—of a fridge or a stove he needed to buy for his home—and that image gave him strength to stay awake. That moment stuck with me forever.
Do you really think the world will have the same mercy on you that you have on yourself? Don’t be surprised when it doesn’t spare you. Move forward even while stitching your wounds: “If you wait for perfect conditions, you’ll never do anything.” (Ecclesiastes 11:4)
Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ a95c6243:d345522c
2025-03-11 10:22:36«Wir brauchen eine digitale Brandmauer gegen den Faschismus», schreibt der Chaos Computer Club (CCC) auf seiner Website. Unter diesem Motto präsentierte er letzte Woche einen Forderungskatalog, mit dem sich 24 Organisationen an die kommende Bundesregierung wenden. Der Koalitionsvertrag müsse sich daran messen lassen, verlangen sie.
In den drei Kategorien «Bekenntnis gegen Überwachung», «Schutz und Sicherheit für alle» sowie «Demokratie im digitalen Raum» stellen die Unterzeichner, zu denen auch Amnesty International und Das NETTZ gehören, unter anderem die folgenden «Mindestanforderungen»:
- Verbot biometrischer Massenüberwachung des öffentlichen Raums sowie der ungezielten biometrischen Auswertung des Internets.
- Anlasslose und massenhafte Vorratsdatenspeicherung wird abgelehnt.
- Automatisierte Datenanalysen der Informationsbestände der Strafverfolgungsbehörden sowie jede Form von Predictive Policing oder automatisiertes Profiling von Menschen werden abgelehnt.
- Einführung eines Rechts auf Verschlüsselung. Die Bundesregierung soll sich dafür einsetzen, die Chatkontrolle auf europäischer Ebene zu verhindern.
- Anonyme und pseudonyme Nutzung des Internets soll geschützt und ermöglicht werden.
- Bekämpfung «privaten Machtmissbrauchs von Big-Tech-Unternehmen» durch durchsetzungsstarke, unabhängige und grundsätzlich föderale Aufsichtsstrukturen.
- Einführung eines digitalen Gewaltschutzgesetzes, unter Berücksichtigung «gruppenbezogener digitaler Gewalt» und die Förderung von Beratungsangeboten.
- Ein umfassendes Förderprogramm für digitale öffentliche Räume, die dezentral organisiert und quelloffen programmiert sind, soll aufgelegt werden.
Es sei ein Irrglaube, dass zunehmende Überwachung einen Zugewinn an Sicherheit darstelle, ist eines der Argumente der Initiatoren. Sicherheit erfordere auch, dass Menschen anonym und vertraulich kommunizieren können und ihre Privatsphäre geschützt wird.
Gesunde digitale Räume lebten auch von einem demokratischen Diskurs, lesen wir in dem Papier. Es sei Aufgabe des Staates, Grundrechte zu schützen. Dazu gehöre auch, Menschenrechte und demokratische Werte, insbesondere Freiheit, Gleichheit und Solidarität zu fördern sowie den Missbrauch von Maßnahmen, Befugnissen und Infrastrukturen durch «die Feinde der Demokratie» zu verhindern.
Man ist geneigt zu fragen, wo denn die Autoren «den Faschismus» sehen, den es zu bekämpfen gelte. Die meisten der vorgetragenen Forderungen und Argumente finden sicher breite Unterstützung, denn sie beschreiben offenkundig gängige, kritikwürdige Praxis. Die Aushebelung der Privatsphäre, der Redefreiheit und anderer Grundrechte im Namen der Sicherheit wird bereits jetzt massiv durch die aktuellen «demokratischen Institutionen» und ihre «durchsetzungsstarken Aufsichtsstrukturen» betrieben.
Ist «der Faschismus» also die EU und ihre Mitgliedsstaaten? Nein, die «faschistische Gefahr», gegen die man eine digitale Brandmauer will, kommt nach Ansicht des CCC und seiner Partner aus den Vereinigten Staaten. Private Überwachung und Machtkonzentration sind dabei weltweit schon lange Realität, jetzt endlich müssen sie jedoch bekämpft werden. In dem Papier heißt es:
«Die willkürliche und antidemokratische Machtausübung der Tech-Oligarchen um Präsident Trump erfordert einen Paradigmenwechsel in der deutschen Digitalpolitik. (...) Die aktuellen Geschehnisse in den USA zeigen auf, wie Datensammlungen und -analyse genutzt werden können, um einen Staat handstreichartig zu übernehmen, seine Strukturen nachhaltig zu beschädigen, Widerstand zu unterbinden und marginalisierte Gruppen zu verfolgen.»
Wer auf der anderen Seite dieser Brandmauer stehen soll, ist also klar. Es sind die gleichen «Feinde unserer Demokratie», die seit Jahren in diese Ecke gedrängt werden. Es sind die gleichen Andersdenkenden, Regierungskritiker und Friedensforderer, die unter dem großzügigen Dach des Bundesprogramms «Demokratie leben» einem «kontinuierlichen Echt- und Langzeitmonitoring» wegen der Etikettierung «digitaler Hass» unterzogen werden.
Dass die 24 Organisationen praktisch auch die Bekämpfung von Google, Microsoft, Apple, Amazon und anderen fordern, entbehrt nicht der Komik. Diese fallen aber sicher unter das Stichwort «Machtmissbrauch von Big-Tech-Unternehmen». Gleichzeitig verlangen die Lobbyisten implizit zum Beispiel die Förderung des Nostr-Netzwerks, denn hier finden wir dezentral organisierte und quelloffen programmierte digitale Räume par excellence, obendrein zensurresistent. Das wiederum dürfte in der Politik weniger gut ankommen.
[Titelbild: Pixabay]
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ a95c6243:d345522c
2025-03-04 09:40:50Die «Eliten» führen bereits groß angelegte Pilotprojekte für eine Zukunft durch, die sie wollen und wir nicht. Das schreibt der OffGuardian in einem Update zum Thema «EU-Brieftasche für die digitale Identität». Das Portal weist darauf hin, dass die Akteure dabei nicht gerade zimperlich vorgehen und auch keinen Hehl aus ihren Absichten machen. Transition News hat mehrfach darüber berichtet, zuletzt hier und hier.
Mit der EU Digital Identity Wallet (EUDI-Brieftasche) sei eine einzige von der Regierung herausgegebene App geplant, die Ihre medizinischen Daten, Beschäftigungsdaten, Reisedaten, Bildungsdaten, Impfdaten, Steuerdaten, Finanzdaten sowie (potenziell) Kopien Ihrer Unterschrift, Fingerabdrücke, Gesichtsscans, Stimmproben und DNA enthält. So fasst der OffGuardian die eindrucksvolle Liste möglicher Einsatzbereiche zusammen.
Auch Dokumente wie der Personalausweis oder der Führerschein können dort in elektronischer Form gespeichert werden. Bis 2026 sind alle EU-Mitgliedstaaten dazu verpflichtet, Ihren Bürgern funktionierende und frei verfügbare digitale «Brieftaschen» bereitzustellen.
Die Menschen würden diese App nutzen, so das Portal, um Zahlungen vorzunehmen, Kredite zu beantragen, ihre Steuern zu zahlen, ihre Rezepte abzuholen, internationale Grenzen zu überschreiten, Unternehmen zu gründen, Arzttermine zu buchen, sich um Stellen zu bewerben und sogar digitale Verträge online zu unterzeichnen.
All diese Daten würden auf ihrem Mobiltelefon gespeichert und mit den Regierungen von neunzehn Ländern (plus der Ukraine) sowie über 140 anderen öffentlichen und privaten Partnern ausgetauscht. Von der Deutschen Bank über das ukrainische Ministerium für digitalen Fortschritt bis hin zu Samsung Europe. Unternehmen und Behörden würden auf diese Daten im Backend zugreifen, um «automatisierte Hintergrundprüfungen» durchzuführen.
Der Bundesverband der Verbraucherzentralen und Verbraucherverbände (VZBV) habe Bedenken geäußert, dass eine solche App «Risiken für den Schutz der Privatsphäre und der Daten» berge, berichtet das Portal. Die einzige Antwort darauf laute: «Richtig, genau dafür ist sie ja da!»
Das alles sei keine Hypothese, betont der OffGuardian. Es sei vielmehr «Potential». Damit ist ein EU-Projekt gemeint, in dessen Rahmen Dutzende öffentliche und private Einrichtungen zusammenarbeiten, «um eine einheitliche Vision der digitalen Identität für die Bürger der europäischen Länder zu definieren». Dies ist nur eines der groß angelegten Pilotprojekte, mit denen Prototypen und Anwendungsfälle für die EUDI-Wallet getestet werden. Es gibt noch mindestens drei weitere.
Den Ball der digitalen ID-Systeme habe die Covid-«Pandemie» über die «Impfpässe» ins Rollen gebracht. Seitdem habe das Thema an Schwung verloren. Je näher wir aber der vollständigen Einführung der EUid kämen, desto mehr Propaganda der Art «Warum wir eine digitale Brieftasche brauchen» könnten wir in den Mainstream-Medien erwarten, prognostiziert der OffGuardian. Vielleicht müssten wir schon nach dem nächsten großen «Grund», dem nächsten «katastrophalen katalytischen Ereignis» Ausschau halten. Vermutlich gebe es bereits Pläne, warum die Menschen plötzlich eine digitale ID-Brieftasche brauchen würden.
Die Entwicklung geht jedenfalls stetig weiter in genau diese Richtung. Beispielsweise hat Jordanien angekündigt, die digitale biometrische ID bei den nächsten Wahlen zur Verifizierung der Wähler einzuführen. Man wolle «den Papierkrieg beenden und sicherstellen, dass die gesamte Kette bis zu den nächsten Parlamentswahlen digitalisiert wird», heißt es. Absehbar ist, dass dabei einige Wahlberechtigte «auf der Strecke bleiben» werden, wie im Fall von Albanien geschehen.
Derweil würden die Briten gerne ihre Privatsphäre gegen Effizienz eintauschen, behauptet Tony Blair. Der Ex-Premier drängte kürzlich erneut auf digitale Identitäten und Gesichtserkennung. Blair ist Gründer einer Denkfabrik für globalen Wandel, Anhänger globalistischer Technokratie und «moderner Infrastruktur».
Abschließend warnt der OffGuardian vor der Illusion, Trump und Musk würden den US-Bürgern «diesen Schlamassel ersparen». Das Department of Government Efficiency werde sich auf die digitale Identität stürzen. Was könne schließlich «effizienter» sein als eine einzige App, die für alles verwendet wird? Der Unterschied bestehe nur darin, dass die US-Version vielleicht eher privat als öffentlich sei – sofern es da überhaupt noch einen wirklichen Unterschied gebe.
[Titelbild: Screenshot OffGuardian]
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ a95c6243:d345522c
2025-03-01 10:39:35Ständige Lügen und Unterstellungen, permanent falsche Fürsorge \ können Bausteine von emotionaler Manipulation sein. Mit dem Zweck, \ Macht und Kontrolle über eine andere Person auszuüben. \ Apotheken Umschau
Irgendetwas muss passiert sein: «Gaslighting» ist gerade Thema in vielen Medien. Heute bin ich nach längerer Zeit mal wieder über dieses Stichwort gestolpert. Das war in einem Artikel von Norbert Häring über Manipulationen des Deutschen Wetterdienstes (DWD). In diesem Fall ging es um eine Pressemitteilung vom Donnerstag zum «viel zu warmen» Winter 2024/25.
Häring wirft der Behörde vor, dreist zu lügen und Dinge auszulassen, um die Klimaangst wach zu halten. Was der Leser beim DWD nicht erfahre, sei, dass dieser Winter kälter als die drei vorangegangenen und kälter als der Durchschnitt der letzten zehn Jahre gewesen sei. Stattdessen werde der falsche Eindruck vermittelt, es würde ungebremst immer wärmer.
Wem also der zu Ende gehende Winter eher kalt vorgekommen sein sollte, mit dessen Empfinden stimme wohl etwas nicht. Das jedenfalls wolle der DWD uns einreden, so der Wirtschaftsjournalist. Und damit sind wir beim Thema Gaslighting.
Als Gaslighting wird eine Form psychischer Manipulation bezeichnet, mit der die Opfer desorientiert und zutiefst verunsichert werden, indem ihre eigene Wahrnehmung als falsch bezeichnet wird. Der Prozess führt zu Angst und Realitätsverzerrung sowie zur Zerstörung des Selbstbewusstseins. Die Bezeichnung kommt von dem britischen Theaterstück «Gas Light» aus dem Jahr 1938, in dem ein Mann mit grausamen Psychotricks seine Frau in den Wahnsinn treibt.
Damit Gaslighting funktioniert, muss das Opfer dem Täter vertrauen. Oft wird solcher Psychoterror daher im privaten oder familiären Umfeld beschrieben, ebenso wie am Arbeitsplatz. Jedoch eignen sich die Prinzipien auch perfekt zur Manipulation der Massen. Vermeintliche Autoritäten wie Ärzte und Wissenschaftler, oder «der fürsorgliche Staat» und Institutionen wie die UNO oder die WHO wollen uns doch nichts Böses. Auch Staatsmedien, Faktenchecker und diverse NGOs wurden zu «vertrauenswürdigen Quellen» erklärt. Das hat seine Wirkung.
Warum das Thema Gaslighting derzeit scheinbar so populär ist, vermag ich nicht zu sagen. Es sind aber gerade in den letzten Tagen und Wochen auffällig viele Artikel dazu erschienen, und zwar nicht nur von Psychologen. Die Frankfurter Rundschau hat gleich mehrere publiziert, und Anwälte interessieren sich dafür offenbar genauso wie Apotheker.
Die Apotheken Umschau machte sogar auf «Medical Gaslighting» aufmerksam. Davon spreche man, wenn Mediziner Symptome nicht ernst nähmen oder wenn ein gesundheitliches Problem vom behandelnden Arzt «schnöde heruntergespielt» oder abgetan würde. Kommt Ihnen das auch irgendwie bekannt vor? Der Begriff sei allerdings irreführend, da er eine manipulierende Absicht unterstellt, die «nicht gewährleistet» sei.
Apropos Gaslighting: Die noch amtierende deutsche Bundesregierung meldete heute, es gelte, «weiter [sic!] gemeinsam daran zu arbeiten, einen gerechten und dauerhaften Frieden für die Ukraine zu erreichen». Die Ukraine, wo sich am Montag «der völkerrechtswidrige Angriffskrieg zum dritten Mal jährte», verteidige ihr Land und «unsere gemeinsamen Werte».
Merken Sie etwas? Das Demokratieverständnis mag ja tatsächlich inzwischen in beiden Ländern ähnlich traurig sein. Bezüglich Friedensbemühungen ist meine Wahrnehmung jedoch eine andere. Das muss an meinem Gedächtnis liegen.
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ a39d19ec:3d88f61e
2024-11-17 10:48:56This week's functional 3d print is the "Dino Clip".
Dino Clip
I printed it some years ago for my son, so he would have his own clip for cereal bags.
Now it is used to hold a bag of dog food close.
The design by "Sneaks" is a so called "print in place". This means that the whole clip with moving parts is printed in one part, without the need for assembly after the print.
The clip is very strong, and I would print it again if I need a "heavy duty" clip for more rigid or big bags. Link to the file at Printables
-
@ a367f9eb:0633efea
2024-11-05 08:48:41Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The reporting purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what it calls “ChatBIT”.
The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I’m doubtful that today’s existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (queue up the computer-simulated war from the Star Trek episode “A Taste of Armageddon“), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOMB, Falcon, and Mistral available for download. In fact, there are over one million open-source LLMs available as of writing this post. With some decent hardware, every global citizen can download these codebases and run them on their computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are potentially trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American infrastructure, data, and yes, your credit history?
As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much many times.
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
Open-source matters When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.
The various licensing schemes – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, many of you run the Linux operating system as the base for your Macbook or any other computer and use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also open-source, which has allowed thousands of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
Open-source is for your friends, and enemies In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to Hugging Face, download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put export controls on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see The Crypto Wars). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
Limiting open-source threatens our own advancement If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “Hundred Year Marathon” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with trillions of dollars’ worth of investments that span far beyond AI chatbots and skip logic protocols.
The theft of intellectual property at factories in Shenzhen, or in US courts by third-party litigation funding coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.
But these are separate issues from the ability of free people to create and share open-source code which we can all benefit from. In fact, if we want to continue our way our life and continue to add to global productivity and growth, it’s demanded that we defend open-source.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The reporting purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what it calls “ChatBIT”.
The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I’m doubtful that today’s existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (queue up the computer-simulated war from the Star Trek episode “A Taste of Armageddon“), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOMB, Falcon, and Mistral available for download. In fact, there are over one million open-source LLMs available as of writing this post. With some decent hardware, every global citizen can download these codebases and run them on their computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are potentially trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American infrastructure, data, and yes, your credit history?
As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much many times.
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
Open-source matters
When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.
The various licensing schemes – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, many of you run the Linux operating system as the base for your Macbook or any other computer and use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also open-source, which has allowed thousands of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
Open-source is for your friends, and enemies
In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to Hugging Face, download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put export controls on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see The Crypto Wars). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
Limiting open-source threatens our own advancement
If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “Hundred Year Marathon” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with trillions of dollars’ worth of investments that span far beyond AI chatbots and skip logic protocols.
The theft of intellectual property at factories in Shenzhen, or in US courts by third-party litigation funding coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.
But these are separate issues from the ability of free people to create and share open-source code which we can all benefit from. In fact, if we want to continue our way our life and continue to add to global productivity and growth, it’s demanded that we defend open-source.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
Originally published on the website of the Consumer Choice Center.
-
@ 4e616576:43c4fee8
2025-04-30 12:53:36sdfafd
-
@ 6e0ea5d6:0327f353
2025-04-19 15:02:55My friend, let yourself be deluded for a moment, and reality will see to it that your fantasy is shattered—like a hammer crushing marble. The real world grants no mercy; it will relentlessly tear down your aspirations, casting them into the abyss of disillusionment and burying your dreams under the unbearable weight of your own expectations. It’s an inescapable fate—but the outcome is still in your hands: perish at the bottom like a wretch or turn the pit into a trench.
Davvero, everyone must eventually face something that breaks them. It is in devastation that man discovers what he is made of, and in the silence of defeat that he hears the finest advice. Yet the weak would rather embrace the convenient lie of self-pity, blaming life for failures that are, in truth, the result of their own negligence and cowardly choices. If you hide behind excuses because you fear the painful truth, know this: the responsibility has always been yours.
Ascolta bene! Just remain steadfast, even when everything feels like an endless maze. The difficulties you face today—those you believe you’ll never overcome—will one day seem insignificant under the light of time and experience. Tomorrow, you’ll look back and laugh at yourself for ever letting these storms seem so overwhelming.
Now, it’s up to you to fight your own battle—for the evil day spares no one. Don’t let yourself be paralyzed by shock or bow before adversity. Be strong and of good courage—not as one who waits for relief, but as one prepared to face the inevitable and turn pain into glory.
Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ 3589b793:ad53847e
2025-04-30 12:40:42※本記事は別サービスで2022年6月24日に公開した記事の移植です。
どうも、「NostrはLNがWeb統合されマネーのインターネットプロトコルとしてのビットコインが本気出す具体行動のショーケースと見做せばOK」です、こんばんは。
またまた実験的な試みがNostrで行われているのでレポートします。本シリーズはライブ感を重視しており、例によって(?)プルリクエストなどはレビュー段階なのでご承知おきください。
今回の主役はあくまでLightningNetworkの新提案(ただし以前からあるLSATからのリブランディング)となるLightning HTTP 402 Protocol(略称: L402)です。そのショーケースの一つとしてNostrが活用されているというものになります。
Lightning HTTP 402 Protocol(略称: L402)とは何か
bLIPに今月挙がったプロポーザル内容です。
https://github.com/lightning/blips/pull/26
L402について私はまだ完全に理解した段階ではあるのですがなんとか一言で説明しようとすると「Authトークンのように"Paid"トークンをHTTPヘッダーにアタッチして有料リソースへのHTTPリクエストの受け入れ判断を行えるようにする」ものだと解釈しました。
Authenticationでは、HTTPヘッダーにAuthトークンを添付し、その検証が通ればHTTPリクエストを許可し、通らなければ
401 Unauthorized
コードをエラーとして返すように定められています。https://developer.mozilla.org/ja/docs/Web/HTTP/Status/401
L402では、同じように、HTTPヘッダーに支払い済みかどうかを示す"Paid"トークンを添付し、その検証が通ればHTTPリクエストを許可し、通らなければ
402 Payment Required
コードをエラーとして返すようにしています。なお、"Paid"トークンという用語は私の造語となります。便宜上本記事では使わせていただきますが、実際はAuthも入ってくるのが必至ですし、プルリクエストでも用語をどう定めるかは議論になっていることをご承知おきください。("API key", "credentials", "token", らが登場しています)
この402ステータスコードは従来から定義されていましたが、MDNのドキュメントでも記載されているように「実験的」なものでした。つまり、器は用意されているがこれまで活用されてこなかったものとなり、本プロトコルの物語性を体現しているものとなります。
https://developer.mozilla.org/ja/docs/Web/HTTP/Status/402
幻であったHTTPステータスコード402 Payment Requiredを実装する
この物語性は、上述のbLIPのスペックにも詳述されていますが、以下のスライドが簡潔です。
402 Payment Required
は予約されていましたが、けっきょくのところWorldWideWebはペイメントプロトコルを実装しなかったので、Bitcoinの登場まで待つことになった、というのが要旨になります。このWorldWideWebにおける決済機能実装に関する歴史話はクリプト界隈でもたびたび話題に上がりますが、そこを繋いでくる文脈にこれこそマネーのインターネットプロトコルだなと痺れました。https://x.com/AlyseKilleen/status/1671342634307297282
この"Paid"トークンによって実現できることとして、第一にAIエージェントがBitcoin/LNを自律的に利用できるようになるM2M(MachineToMachine)的な話が挙げられていますが、ユースケースは想像力がいろいろ要るところです。実際のところは「有料リソースへの認可」を可能にすることが主になると理解しました。本連載では、繰り返しNostrクライアントにLNプロトコルを直接搭載せずにLightningNetworkを利用可能にする組み込み方法を見てきましたが、本件もインボイス文字列 & preimage程度の露出になりアプリケーション側でノードやウォレットの実装が要らないので、その文脈で位置付ける解釈もできるかと思います。
Snortでのサンプル実装
LN組み込み業界のリーディングプロダクトであるSnortのサンプル実装では、L402を有料コンテンツの購読に活用しています。具体的には画像や動画を投稿するときに有料のロックをかける、いわゆるペイウォールの一種となります。もともとアップローダもSnortが自前で用意しているので、そこにL402を組み込んでみたということのようです。
体験方法の詳細はこちらにあります。 https://njump.me/nevent1qqswr2pshcpawk9ny5q5kcgmhak24d92qzdy98jm8xcxlgxstruecccpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhszrnhwden5te0dehhxtnvdakz78pvlzg
上記を試してみた結果が以下になります。まず、ペイウォールでロックした画像がNostrに投稿されている状態です。まったくビューワーが実装されておらず、ただのNotFound状態になっていますが、支払い前なのでロックされているということです。
次にこのHTTP通信の内容です。
通信自体はエラーになっているわけですが、ステータスコードが402で、レスポンスヘッダーのWWW-AuthenticateにInvoice文字列が返ってきています。つまり、このインボイスを支払えば"Paid"トークンが付与されて、その"Paid"トークンがあれば最初の画像がアンロックされることとなります。残念ながら現在は日本で利用不可のStrikeAppでしか払込みができないためここまでとなりますが、本懐である
402 Payment Required
とインボイス文字列は確認できました。今確認できることは以上ですが、AmethystやDamusなどの他のNostrクライアントが実装するにあたり、インラインメディアを巡ってL402の仕様をアップデートする必要性や同じくHTTPヘッダーへのAuthトークンとなるNIP-98と組み合わせるなどの議論が行われている最中です。
LinghtningNetworkであるからこそのL402の実現
"Paid"トークンを実現するためにはLightningNetworkのファイナリティが重要な要素となっています。逆に言うと、reorgによるひっくり返しがあり得るBitcoinではできなくもないけど不便なわけです。LightningNetworkなら、当事者である二者間で支払いが確認されたら「同期的」にその証であるハッシュ値を用いて"Paid"トークンを作成することができます。しかもハッシュ値を提出するだけで台帳などで過去の履歴を確認する必要がありません。加えて言うと、受金者側が複数のノードを建てていて支払いを受け取るノードがどれか一つになる状況でも、つまり、スケーリングされている状況でも、"Paid"トークンそのものはどのノードかを気にすることなくステートレスで利用できるとのことです。(ここは単にreverse proxyとしてAuthサーバががんばっているだけと解釈することもできますがずいぶんこの機能にも力点を置いていて大規模なユースケースが重要になっているのだなという印象を抱きました)
Macaroonの本領発揮か?それとも詳細定義しすぎか?
HTTP通信ではWWW-Authenticateの実値にmacaroonの記述が確認できます。また現在のL402スペックでも"Paid"トークンにはmacaroonの利用が前提になっています。
このmacaroonとは(たぶん)googleで研究開発され、LNDノードソフトウェアで活用されているCookieを超えるという触れ込みのデータストアになります。しかし、あまり普及しなかった技術でもあり、個人の感想ですがなんとも微妙なものになっています。
https://research.google/pubs/macaroons-cookies-with-contextual-caveats-for-decentralized-authorization-in-the-cloud/
macaroonの強みは、Cookieを超えるという触れ込みのようにブラウザが無くてもプロセス間通信でデータ共有できる点に加えて、HMACチェーンで動的に認証認可を更新し続けられるところが挙げられます。しかし、そのようなユースケースがあまり無く、静的な認可となるOAuthやJWTで十分となっているのが現状かと思います。
L402では、macaroonの動的な更新が可能である点を活かして、"Paid"トークンを更新するケースが挙げられています。わかりやすいのは上記のスライド資料でも挙げられている"Dynamic Pricing"でしょうか。プロポーザルではloop©️LightningLabsにおいて月間の最大取引量を認可する"Paid"トークンを発行した上でその条件を動向に応じて動的に変更できる例が解説されています。とはいえ、そんなことしなくても再発行すればええやんけという話もなくもないですし、プルリクエストでも仕様レベルでmacaroonを指定するのは「具体」が過ぎるのではないか、もっと「抽象」し単なる"Opaque Token"程度の粒度にして他の実装も許容するべきではないか、という然るべきツッコミが入っています。
個人的にはそのツッコミが妥当と思いつつも、なんだかんだ初めてmacaroonの良さを実感できて感心した次第です。
-
@ 3ffac3a6:2d656657
2025-04-15 14:49:31🏅 Como Criar um Badge Épico no Nostr com
nak
+ badges.pageRequisitos:
- Ter o
nak
instalado (https://github.com/fiatjaf/nak) - Ter uma chave privada Nostr (
nsec...
) - Acesso ao site https://badges.page
- Um relay ativo (ex:
wss://relay.primal.net
)
🔧 Passo 1 — Criar o badge em badges.page
- Acesse o site https://badges.page
-
Clique em "New Badge" no canto superior direito
-
Preencha os campos:
- Nome (ex:
Teste Épico
) - Descrição
-
Imagem e thumbnail
-
Após criar, você será redirecionado para a página do badge.
🔍 Passo 2 — Copiar o
naddr
do badgeNa barra de endereços, copie o identificador que aparece após
/a/
— este é o naddr do seu badge.Exemplo:
nostr:naddr1qq94getnw3jj63tsd93k7q3q8lav8fkgt8424rxamvk8qq4xuy9n8mltjtgztv2w44hc5tt9vetsxpqqqp6njkq3sd0
Copie:
naddr1qq94getnw3jj63tsd93k7q3q8lav8fkgt8424rxamvk8qq4xuy9n8mltjtgztv2w44hc5tt9vetsxpqqqp6njkq3sd0
🧠 Passo 3 — Decodificar o naddr com
nak
Abra seu terminal (ou Cygwin no Windows) e rode:
bash nak decode naddr1qq94getnw3jj63tsd93k7q3q8lav8fkgt8424rxamvk8qq4xuy9n8mltjtgztv2w44hc5tt9vetsxpqqqp6njkq3sd0
Você verá algo assim:
json { "pubkey": "3ffac3a6c859eaaa8cdddb2c7002a6e10b33efeb92d025b14ead6f8a2d656657", "kind": 30009, "identifier": "Teste-Epico" }
Grave o campo
"identifier"
— nesse caso: Teste-Epico
🛰️ Passo 4 — Consultar o evento no relay
Agora vamos pegar o evento do badge no relay:
bash nak req -d "Teste-Epico" wss://relay.primal.net
Você verá o conteúdo completo do evento do badge, algo assim:
json { "kind": 30009, "tags": [["d", "Teste-Epico"], ["name", "Teste Épico"], ...] }
💥 Passo 5 — Minerar o evento como "épico" (PoW 31)
Agora vem a mágica: minerar com proof-of-work (PoW 31) para que o badge seja classificado como épico!
bash nak req -d "Teste-Epico" wss://relay.primal.net | nak event --pow 31 --sec nsec1SEU_NSEC_AQUI wss://relay.primal.net wss://nos.lol wss://relay.damus.io
Esse comando: - Resgata o evento original - Gera um novo com PoW de dificuldade 31 - Assina com sua chave privada
nsec
- E publica nos relays wss://relay.primal.net, wss://nos.lol e wss://relay.damus.io⚠️ Substitua
nsec1SEU_NSEC_AQUI
pela sua chave privada Nostr.
✅ Resultado
Se tudo der certo, o badge será atualizado com um evento de PoW mais alto e aparecerá como "Epic" no site!
- Ter o
-
@ a95c6243:d345522c
2025-02-21 19:32:23Europa – das Ganze ist eine wunderbare Idee, \ aber das war der Kommunismus auch. \ Loriot
«Europa hat fertig», könnte man unken, und das wäre nicht einmal sehr verwegen. Mit solch einer Einschätzung stünden wir nicht alleine, denn die Stimmen in diese Richtung mehren sich. Der französische Präsident Emmanuel Macron warnte schon letztes Jahr davor, dass «unser Europa sterben könnte». Vermutlich hatte er dabei andere Gefahren im Kopf als jetzt der ungarische Ministerpräsident Viktor Orbán, der ein «baldiges Ende der EU» prognostizierte. Das Ergebnis könnte allerdings das gleiche sein.
Neben vordergründigen Themenbereichen wie Wirtschaft, Energie und Sicherheit ist das eigentliche Problem jedoch die obskure Mischung aus aufgegebener Souveränität und geschwollener Arroganz, mit der europäische Politiker:innende unterschiedlicher Couleur aufzutreten pflegen. Und das Tüpfelchen auf dem i ist die bröckelnde Legitimation politischer Institutionen dadurch, dass die Stimmen großer Teile der Bevölkerung seit Jahren auf vielfältige Weise ausgegrenzt werden.
Um «UnsereDemokratie» steht es schlecht. Dass seine Mandate immer schwächer werden, merkt natürlich auch unser «Führungspersonal». Entsprechend werden die Maßnahmen zur Gängelung, Überwachung und Manipulation der Bürger ständig verzweifelter. Parallel dazu plustern sich in Paris Macron, Scholz und einige andere noch einmal mächtig in Sachen Verteidigung und «Kriegstüchtigkeit» auf.
Momentan gilt es auch, das Überschwappen covidiotischer und verschwörungsideologischer Auswüchse aus den USA nach Europa zu vermeiden. So ein «MEGA» (Make Europe Great Again) können wir hier nicht gebrauchen. Aus den Vereinigten Staaten kommen nämlich furchtbare Nachrichten. Beispielsweise wurde einer der schärfsten Kritiker der Corona-Maßnahmen kürzlich zum Gesundheitsminister ernannt. Dieser setzt sich jetzt für eine Neubewertung der mRNA-«Impfstoffe» ein, was durchaus zu einem Entzug der Zulassungen führen könnte.
Der europäischen Version von «Verteidigung der Demokratie» setzte der US-Vizepräsident J. D. Vance auf der Münchner Sicherheitskonferenz sein Verständnis entgegen: «Demokratie stärken, indem wir unseren Bürgern erlauben, ihre Meinung zu sagen». Das Abschalten von Medien, das Annullieren von Wahlen oder das Ausschließen von Menschen vom politischen Prozess schütze gar nichts. Vielmehr sei dies der todsichere Weg, die Demokratie zu zerstören.
In der Schweiz kamen seine Worte deutlich besser an als in den meisten europäischen NATO-Ländern. Bundespräsidentin Karin Keller-Sutter lobte die Rede und interpretierte sie als «Plädoyer für die direkte Demokratie». Möglicherweise zeichne sich hier eine außenpolitische Kehrtwende in Richtung integraler Neutralität ab, meint mein Kollege Daniel Funk. Das wären doch endlich mal ein paar gute Nachrichten.
Von der einstigen Idee einer europäischen Union mit engeren Beziehungen zwischen den Staaten, um Konflikte zu vermeiden und das Wohlergehen der Bürger zu verbessern, sind wir meilenweit abgekommen. Der heutige korrupte Verbund unter technokratischer Leitung ähnelt mehr einem Selbstbedienungsladen mit sehr begrenztem Zugang. Die EU-Wahlen im letzten Sommer haben daran ebenso wenig geändert, wie die Bundestagswahl am kommenden Sonntag darauf einen Einfluss haben wird.
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ e3ba5e1a:5e433365
2025-04-15 11:03:15Prelude
I wrote this post differently than any of my others. It started with a discussion with AI on an OPSec-inspired review of separation of powers, and evolved into quite an exciting debate! I asked Grok to write up a summary in my overall writing style, which it got pretty well. I've decided to post it exactly as-is. Ultimately, I think there are two solid ideas driving my stance here:
- Perfect is the enemy of the good
- Failure is the crucible of success
Beyond that, just some hard-core belief in freedom, separation of powers, and operating from self-interest.
Intro
Alright, buckle up. I’ve been chewing on this idea for a while, and it’s time to spit it out. Let’s look at the U.S. government like I’d look at a codebase under a cybersecurity audit—OPSEC style, no fluff. Forget the endless debates about what politicians should do. That’s noise. I want to talk about what they can do, the raw powers baked into the system, and why we should stop pretending those powers are sacred. If there’s a hole, either patch it or exploit it. No half-measures. And yeah, I’m okay if the whole thing crashes a bit—failure’s a feature, not a bug.
The Filibuster: A Security Rule with No Teeth
You ever see a firewall rule that’s more theater than protection? That’s the Senate filibuster. Everyone acts like it’s this untouchable guardian of democracy, but here’s the deal: a simple majority can torch it any day. It’s not a law; it’s a Senate preference, like choosing tabs over spaces. When people call killing it the “nuclear option,” I roll my eyes. Nuclear? It’s a button labeled “press me.” If a party wants it gone, they’ll do it. So why the dance?
I say stop playing games. Get rid of the filibuster. If you’re one of those folks who thinks it’s the only thing saving us from tyranny, fine—push for a constitutional amendment to lock it in. That’s a real patch, not a Post-it note. Until then, it’s just a vulnerability begging to be exploited. Every time a party threatens to nuke it, they’re admitting it’s not essential. So let’s stop pretending and move on.
Supreme Court Packing: Because Nine’s Just a Number
Here’s another fun one: the Supreme Court. Nine justices, right? Sounds official. Except it’s not. The Constitution doesn’t say nine—it’s silent on the number. Congress could pass a law tomorrow to make it 15, 20, or 42 (hitchhiker’s reference, anyone?). Packing the court is always on the table, and both sides know it. It’s like a root exploit just sitting there, waiting for someone to log in.
So why not call the bluff? If you’re in power—say, Trump’s back in the game—say, “I’m packing the court unless we amend the Constitution to fix it at nine.” Force the issue. No more shadowboxing. And honestly? The court’s got way too much power anyway. It’s not supposed to be a super-legislature, but here we are, with justices’ ideologies driving the bus. That’s a bug, not a feature. If the court weren’t such a kingmaker, packing it wouldn’t even matter. Maybe we should be talking about clipping its wings instead of just its size.
The Executive Should Go Full Klingon
Let’s talk presidents. I’m not saying they should wear Klingon armor and start shouting “Qapla’!”—though, let’s be real, that’d be awesome. I’m saying the executive should use every scrap of power the Constitution hands them. Enforce the laws you agree with, sideline the ones you don’t. If Congress doesn’t like it, they’ve got tools: pass new laws, override vetoes, or—here’s the big one—cut the budget. That’s not chaos; that’s the system working as designed.
Right now, the real problem isn’t the president overreaching; it’s the bureaucracy. It’s like a daemon running in the background, eating CPU and ignoring the user. The president’s supposed to be the one steering, but the administrative state’s got its own agenda. Let the executive flex, push the limits, and force Congress to check it. Norms? Pfft. The Constitution’s the spec sheet—stick to it.
Let the System Crash
Here’s where I get a little spicy: I’m totally fine if the government grinds to a halt. Deadlock isn’t a disaster; it’s a feature. If the branches can’t agree, let the president veto, let Congress starve the budget, let enforcement stall. Don’t tell me about “essential services.” Nothing’s so critical it can’t take a breather. Shutdowns force everyone to the table—debate, compromise, or expose who’s dropping the ball. If the public loses trust? Good. They’ll vote out the clowns or live with the circus they elected.
Think of it like a server crash. Sometimes you need a hard reboot to clear the cruft. If voters keep picking the same bad admins, well, the country gets what it deserves. Failure’s the best teacher—way better than limping along on autopilot.
States Are the Real MVPs
If the feds fumble, states step up. Right now, states act like junior devs waiting for the lead engineer to sign off. Why? Federal money. It’s a leash, and it’s tight. Cut that cash, and states will remember they’re autonomous. Some will shine, others will tank—looking at you, California. And I’m okay with that. Let people flee to better-run states. No bailouts, no excuses. States are like competing startups: the good ones thrive, the bad ones pivot or die.
Could it get uneven? Sure. Some states might turn into sci-fi utopias while others look like a post-apocalyptic vidya game. That’s the point—competition sorts it out. Citizens can move, markets adjust, and failure’s a signal to fix your act.
Chaos Isn’t the Enemy
Yeah, this sounds messy. States ignoring federal law, external threats poking at our seams, maybe even a constitutional crisis. I’m not scared. The Supreme Court’s there to referee interstate fights, and Congress sets the rules for state-to-state play. But if it all falls apart? Still cool. States can sort it without a babysitter—it’ll be ugly, but freedom’s worth it. External enemies? They’ll either unify us or break us. If we can’t rally, we don’t deserve the win.
Centralizing power to avoid this is like rewriting your app in a single thread to prevent race conditions—sure, it’s simpler, but you’re begging for a deadlock. Decentralized chaos lets states experiment, lets people escape, lets markets breathe. States competing to cut regulations to attract businesses? That’s a race to the bottom for red tape, but a race to the top for innovation—workers might gripe, but they’ll push back, and the tension’s healthy. Bring it—let the cage match play out. The Constitution’s checks are enough if we stop coddling the system.
Why This Matters
I’m not pitching a utopia. I’m pitching a stress test. The U.S. isn’t a fragile porcelain doll; it’s a rugged piece of hardware built to take some hits. Let it fail a little—filibuster, court, feds, whatever. Patch the holes with amendments if you want, or lean into the grind. Either way, stop fearing the crash. It’s how we debug the republic.
So, what’s your take? Ready to let the system rumble, or got a better way to secure the code? Hit me up—I’m all ears.
-
@ a95c6243:d345522c
2025-02-19 09:23:17Die «moralische Weltordnung» – eine Art Astrologie. Friedrich Nietzsche
Das Treffen der BRICS-Staaten beim Gipfel im russischen Kasan war sicher nicht irgendein politisches Event. Gastgeber Wladimir Putin habe «Hof gehalten», sagen die Einen, China und Russland hätten ihre Vorstellung einer multipolaren Weltordnung zelebriert, schreiben Andere.
In jedem Fall zeigt die Anwesenheit von über 30 Delegationen aus der ganzen Welt, dass von einer geostrategischen Isolation Russlands wohl keine Rede sein kann. Darüber hinaus haben sowohl die Anreise von UN-Generalsekretär António Guterres als auch die Meldungen und Dementis bezüglich der Beitrittsbemühungen des NATO-Staats Türkei für etwas Aufsehen gesorgt.
Im Spannungsfeld geopolitischer und wirtschaftlicher Umbrüche zeigt die neue Allianz zunehmendes Selbstbewusstsein. In Sachen gemeinsamer Finanzpolitik schmiedet man interessante Pläne. Größere Unabhängigkeit von der US-dominierten Finanzordnung ist dabei ein wichtiges Ziel.
Beim BRICS-Wirtschaftsforum in Moskau, wenige Tage vor dem Gipfel, zählte ein nachhaltiges System für Finanzabrechnungen und Zahlungsdienste zu den vorrangigen Themen. Während dieses Treffens ging der russische Staatsfonds eine Partnerschaft mit dem Rechenzentrumsbetreiber BitRiver ein, um Bitcoin-Mining-Anlagen für die BRICS-Länder zu errichten.
Die Initiative könnte ein Schritt sein, Bitcoin und andere Kryptowährungen als Alternativen zu traditionellen Finanzsystemen zu etablieren. Das Projekt könnte dazu führen, dass die BRICS-Staaten den globalen Handel in Bitcoin abwickeln. Vor dem Hintergrund der Diskussionen über eine «BRICS-Währung» wäre dies eine Alternative zu dem ursprünglich angedachten Korb lokaler Währungen und zu goldgedeckten Währungen sowie eine mögliche Ergänzung zum Zahlungssystem BRICS Pay.
Dient der Bitcoin also der Entdollarisierung? Oder droht er inzwischen, zum Gegenstand geopolitischer Machtspielchen zu werden? Angesichts der globalen Vernetzungen ist es oft schwer zu durchschauen, «was eine Show ist und was im Hintergrund von anderen Strippenziehern insgeheim gesteuert wird». Sicher können Strukturen wie Bitcoin auch so genutzt werden, dass sie den Herrschenden dienlich sind. Aber die Grundeigenschaft des dezentralisierten, unzensierbaren Peer-to-Peer Zahlungsnetzwerks ist ihm schließlich nicht zu nehmen.
Wenn es nach der EZB oder dem IWF geht, dann scheint statt Instrumentalisierung momentan eher der Kampf gegen Kryptowährungen angesagt. Jürgen Schaaf, Senior Manager bei der Europäischen Zentralbank, hat jedenfalls dazu aufgerufen, Bitcoin «zu eliminieren». Der Internationale Währungsfonds forderte El Salvador, das Bitcoin 2021 als gesetzliches Zahlungsmittel eingeführt hat, kürzlich zu begrenzenden Maßnahmen gegen das Kryptogeld auf.
Dass die BRICS-Staaten ein freiheitliches Ansinnen im Kopf haben, wenn sie Kryptowährungen ins Spiel bringen, darf indes auch bezweifelt werden. Im Abschlussdokument bekennen sich die Gipfel-Teilnehmer ausdrücklich zur UN, ihren Programmen und ihrer «Agenda 2030». Ernst Wolff nennt das «eine Bankrotterklärung korrupter Politiker, die sich dem digital-finanziellen Komplex zu 100 Prozent unterwerfen».
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ 3589b793:ad53847e
2025-04-30 12:28:25※本記事は別サービスで2023年4月19日に公開した記事の移植です。
どうも、「NostrはLNがWeb統合されマネーのインターネットプロトコルとしてのビットコインが本気出す具体行動のショーケースと見做せばOK」です、こんにちは。
前回まで投げ銭や有料購読の組み込み方法を見てきました。
zapsという投げ銭機能が各種クライアントに一通り実装されて活用が進んでいることで、統合は次の段階へ移り始めています。「作戦名: ウォレットをNostrクライアントに組み込め」です。今回はそちらをまとめます。
投げ銭する毎にいちいちウォレットを開いてまた元のNostrクライアントに手動で戻らないといけない is PAIN
LNとNostrはインボイス文字列で繋がっているだけの疎結合ですが、投稿に投げ銭するためには何かのLNウォレットを開いて支払いをして、また元のNostrクライアントに戻る操作をユーザーが手作業でする必要があります。お試しで一回やる程度では気になりませんが普段使いしているとこれはけっこうな煩わしさを感じるUXです。特にスマホでは大変にだるい状況になります。連打できない!
2月の実装以来、zapsは順調に定着して日々投げられています。
https://stats.nostr.band/#daily_zaps
なので、NostrクライアントにLNウォレットの接続を組み込み、支払いのために他のアプリに遷移せずにNostクライアント単独で完結できるようなアップデートが始まっています。
Webクライアント
NostrのLN組み込み業界のリーディングプレイヤーであるSnortでの例です。以下のようにヘッダーのウォレットアイコンをクリックすると連携ウォレットの選択ができます。
もともとNostrに限らずウェブアプリケーションとの連携をするために、WebLNという規格があります。簡単に言うと、ブラウザのグローバル領域を介して、LNウォレットの拡張機能と、タブで開いているウェブアプリが、お互いに連携するためのインターフェースを定めているものです。これに対応していると、LNによる支払いをウェブアプリが拡張機能に依頼できるようになります。さらにオプションで「確認無し」をオンにすると、拡張機能画面がポップアップせずにバックグラウンドで実行できるようになり、ノールック投げ銭ができるようになります。
似たようなものにNostrではNIP-07があります。NIP-07はNostrの秘密鍵を拡張機能に退避して、Nostrクライアントは秘密鍵を知らない状態で署名や複合を拡張機能に移譲できるようにしているものです。
Albyの拡張機能ではWebLNとNIP-07のどちらにも対応しています。
実はSnortはzapsが来る前からWebLNには対応していたのですが、さらに一歩進み、拡張機能ウォレットだけでなく、LNノードや拡張機能以外のLNウォレットと連携設定できるようになってきています。
umbrelなどでノードを立てている人ならLND with LNCでノードと直接繋げます。またLNDHubに対応したウォレットなどのアプリケーションとも繋げます。これらの接続は、WebLNにラップされて拡張機能ウォレットとインターフェースを揃えられた上で、Snort上でのインボイスの支払いに活用されます。
なお、LNCのpairingPhrase/passwordやLNDHubの接続情報などのクレデンシャルは、ブラウザのローカルストレージに保存されています。Nostrのリレーサーバなどには送られませんので、端末ごとに設定が必要です。
スマホアプリ
今回のメインです。なお、例によって(?)スペックは絶賛議論中でまだフィックスしていない中で記事を書いています。ディテールは変わるかもしれないので悪しからずです。
スマホアプリで上記のことをやるためには、後半のLNCやLNDHubはすでにzeusなどがやっているようにできますが、あくまでネイティブウォレットのラッパーです。Nostrでは限られた用途になるので1-click支払いのようなものを行うためにはそこから各スマホアプリが作り込む必要があります。まあこれはこれでやればいいという話でもあるのですが、LNノードやLNウォレットのアプリケーション側へのインターフェースの共通仕様は定められていないので、LNDとcore-lightningとeclairではすべて実装方法が違いますし、ウォレットもバラバラなので大変です。
そこで、多種多様なノードやウォレットの接続を取りまとめ一般アプリケーションへ統一したインターフェースを媒介するLN Adapter業界のリーディングカンパニーであるAlbyが動きました。AndroidアプリのAmethystで試験公開されていますが、スマホアプリでも上記のSnortのような連携が可能になるようなSDKが開発されました。
リリース記事 https://blog.getalby.com/native-zapping-in-amethyst/
"Unstoppable zapping for users"なんて段落見出しが付けられているように、スマホで別のアプリに切り替えてまた元に戻らなくても良いようにして、Nostr上でマイクロペイメントを滑らかにする、つまり、連打できることを繰り返し強調しています。
具体的にやっていることを見ていきます。以下の画像群はリリース記事の動画から抜粋しています。各投稿のzapsボタン⚡️をタップしたときの画面です。
上の赤枠が従来の投げ銭の詳細を決める場所で、下の赤枠の「Wallet Connect Service」が新たに追加されたAlby提供のSDKを用いたコネクト設定画面です。基本的にはOAuth2.0ベースのAlbyのAPIを活用していて、右上のAlbyアイコンをタップすると以下のようなOAuthの認可画面に飛びます。(ただし後述するように通常のOAuthとは一部異なります。)
画面デザインは違いますが、まあ他のアプリでよく目にするTwitter連携やGoogleアカウント連携とやっていることは同じです。
このOAuthベースのAPIはNostr専用のエンドポイントが建てられています。Nostr以外のECショップやマーケットプレイスなどへのAlbyのOAuthは汎用のエンドポイントが用意されています。よって通常のAlbyの設定とは別にセッション詳細を以下のサイトで作成する必要があります。
https://nwc.getalby.com/ (サブドメインのnwcはNostr Wallet Connectの略)
なぜNostrだけは特別なのかというところが完全には理解しきれていないですが、以下のところまで確認できています。一番にあるのは、Nostrクライアントにウォレットを組み込まずに、かつ、ノードやウォレットへの接続をNostrリレーサーバ以外は挟まずに"decentralized"にしたいというところだと理解しています。
- 上記のnwcのURLはalbyのカストディアルウォレットusername@getalby.comをNostrに繋ぐもの(たぶん)
- umbrelのLNノードを繋ぐためにはやはり専用のアプリがumbrelストアに上がっている。https://github.com/getAlby/umbrel-community-app-store
- 要するにOAuthの1stPartyの役割をウォレットやノードごとにそれぞれ建てる。
- OAuthのシークレットはクライアントに保存するので設定は各クライアント毎に必要。しかし使い回しすることは可能っぽい。通常のOAuthと異なり、1stParty側で3rdPartyのドメインはトラストしていないようなので。
- Nostrクライアントにウォレットを組み込まずに、さらにウォレットやノードへの接続をNostrリレーサーバ以外には挟まなくて良いようにするために、「NIP-47 Nostr Wallet Connect」というプロポーザルが起こされていて、絶賛議論中である。https://github.com/nostr-protocol/nips/pull/406
- このWallet Connect専用のアドホックなリレーサーバが建てられる。その情報が上記画像の赤枠の「Wallet Connect Service」の下半分のpub keyやらrelayURL。どうもNostrクライアントはNIP-47イベントについてはこのリレーサーバにしか送らないようにするらしい。(なんかNostrの基本設計を揺るがすユースケースの気がする...)
- Wallet Connect専用のNostrイベントでは、ペイメント情報をNostrアカウントと切り離すために、Nostrの秘密鍵とは別の秘密鍵が利用できるようにしている。
Imagin the Future
今回取り上げたNostrクライアントにウォレット接続を組み込む話を、Webのペイメントの歴史で類推してみましょう。
Snortでやっていることは、各サイトごとにクレジットカードを打ち込み各サイトがその情報を保持していたようなWeb1.0の時代に近いです。そうなるとクレジットカードの情報は各サービスごとに漏洩リスクなどがあり、Web1.0の時代はECが普及する壁の一つになっていました。(今でもAmazonなどの大手はそうですが)
Webではその後にPayPalをはじめとして、銀行口座やクレジットカードを各サイトから切り出して一括管理し、各ウェブサイトに支払いだけを連携するサービスが出てきて一般化しています。日本ではケータイのキャリア決済が利用者の心理的障壁を取り除きEC普及の後押しになりました。
後半のNostr Wallet ConnectはそれをNostrの中でやろうとしている試みになります。クレジットカードからLNに変える理由はビットコインの話になるので詳細は割愛しますが、現実世界の金(ゴールド)に類した価値保存や交換ができるインターネットマネーだからです。
とはいえ、Nostrの中だけならまだしも、これをNostr外のサービスで利用するためには、他のECショップやブログやSaaSがNostrを喋れる必要があります。そんな未来が来るわけないだろと思うかもしれませんが、言ってみればStripeはまさにそのようなサービスとなっていて、サイト内にクレジット決済のモジュールを組み込むための主流となっています。
果たして、Nostrを、他のECショップやブログやSaaSが喋るようになるのか!?
以上、「NostrはLNがWeb統合されマネーのインターネットプロトコルとしてのビットコインが本気出す具体行動のショーケースと見做せばOK」がお送りしました。
-
@ 09fbf8f3:fa3d60f0
2024-11-02 08:00:29> ### 第三方API合集:
免责申明:
在此推荐的 OpenAI API Key 由第三方代理商提供,所以我们不对 API Key 的 有效性 和 安全性 负责,请你自行承担购买和使用 API Key 的风险。
| 服务商 | 特性说明 | Proxy 代理地址 | 链接 | | --- | --- | --- | --- | | AiHubMix | 使用 OpenAI 企业接口,全站模型价格为官方 86 折(含 GPT-4 )| https://aihubmix.com/v1 | 官网 | | OpenAI-HK | OpenAI的API官方计费模式为,按每次API请求内容和返回内容tokens长度来定价。每个模型具有不同的计价方式,以每1,000个tokens消耗为单位定价。其中1,000个tokens约为750个英文单词(约400汉字)| https://api.openai-hk.com/ | 官网 | | CloseAI | CloseAI是国内规模最大的商用级OpenAI代理平台,也是国内第一家专业OpenAI中转服务,定位于企业级商用需求,面向企业客户的线上服务提供高质量稳定的官方OpenAI API 中转代理,是百余家企业和多家科研机构的专用合作平台。 | https://api.openai-proxy.org | 官网 | | OpenAI-SB | 需要配合Telegram 获取api key | https://api.openai-sb.com | 官网 |
持续更新。。。
推广:
访问不了openai,去
低调云
购买VPN。官网:https://didiaocloud.xyz
邀请码:
w9AjVJit
价格低至1元。
-
@ 4c48cf05:07f52b80
2024-10-30 01:03:42I believe that five years from now, access to artificial intelligence will be akin to what access to the Internet represents today. It will be the greatest differentiator between the haves and have nots. Unequal access to artificial intelligence will exacerbate societal inequalities and limit opportunities for those without access to it.
Back in April, the AI Index Steering Committee at the Institute for Human-Centered AI from Stanford University released The AI Index 2024 Annual Report.
Out of the extensive report (502 pages), I chose to focus on the chapter dedicated to Public Opinion. People involved with AI live in a bubble. We all know and understand AI and therefore assume that everyone else does. But, is that really the case once you step out of your regular circles in Seattle or Silicon Valley and hit Main Street?
Two thirds of global respondents have a good understanding of what AI is
The exact number is 67%. My gut feeling is that this number is way too high to be realistic. At the same time, 63% of respondents are aware of ChatGPT so maybe people are confounding AI with ChatGPT?
If so, there is so much more that they won't see coming.
This number is important because you need to see every other questions and response of the survey through the lens of a respondent who believes to have a good understanding of what AI is.
A majority are nervous about AI products and services
52% of global respondents are nervous about products and services that use AI. Leading the pack are Australians at 69% and the least worried are Japanise at 23%. U.S.A. is up there at the top at 63%.
Japan is truly an outlier, with most countries moving between 40% and 60%.
Personal data is the clear victim
Exaclty half of the respondents believe that AI companies will protect their personal data. And the other half believes they won't.
Expected benefits
Again a majority of people (57%) think that it will change how they do their jobs. As for impact on your life, top hitters are getting things done faster (54%) and more entertainment options (51%).
The last one is a head scratcher for me. Are people looking forward to AI generated movies?
Concerns
Remember the 57% that thought that AI will change how they do their jobs? Well, it looks like 37% of them expect to lose it. Whether or not this is what will happen, that is a very high number of people who have a direct incentive to oppose AI.
Other key concerns include:
- Misuse for nefarious purposes: 49%
- Violation of citizens' privacy: 45%
Conclusion
This is the first time I come across this report and I wil make sure to follow future annual reports to see how these trends evolve.
Overall, people are worried about AI. There are many things that could go wrong and people perceive that both jobs and privacy are on the line.
Full citation: Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.
The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.
-
@ a95c6243:d345522c
2025-02-15 19:05:38Auf der diesjährigen Münchner Sicherheitskonferenz geht es vor allem um die Ukraine. Protagonisten sind dabei zunächst die US-Amerikaner. Präsident Trump schockierte die Europäer kurz vorher durch ein Telefonat mit seinem Amtskollegen Wladimir Putin, während Vizepräsident Vance mit seiner Rede über Demokratie und Meinungsfreiheit für versteinerte Mienen und Empörung sorgte.
Die Bemühungen der Europäer um einen Frieden in der Ukraine halten sich, gelinde gesagt, in Grenzen. Größeres Augenmerk wird auf militärische Unterstützung, die Pflege von Feindbildern sowie Eskalation gelegt. Der deutsche Bundeskanzler Scholz reagierte auf die angekündigten Verhandlungen über einen möglichen Frieden für die Ukraine mit der Forderung nach noch höheren «Verteidigungsausgaben». Auch die amtierende Außenministerin Baerbock hatte vor der Münchner Konferenz klargestellt:
«Frieden wird es nur durch Stärke geben. (...) Bei Corona haben wir gesehen, zu was Europa fähig ist. Es braucht erneut Investitionen, die der historischen Wegmarke, vor der wir stehen, angemessen sind.»
Die Rüstungsindustrie freut sich in jedem Fall über weltweit steigende Militärausgaben. Die Kriege in der Ukraine und in Gaza tragen zu Rekordeinnahmen bei. Jetzt «winkt die Aussicht auf eine jahrelange große Nachrüstung in Europa», auch wenn der Ukraine-Krieg enden sollte, so hört man aus Finanzkreisen. In der Konsequenz kennt «die Aktie des deutschen Vorzeige-Rüstungskonzerns Rheinmetall in ihrem Anstieg offenbar gar keine Grenzen mehr». «Solche Friedensversprechen» wie das jetzige hätten in der Vergangenheit zu starken Kursverlusten geführt.
Für manche Leute sind Kriegswaffen und sonstige Rüstungsgüter Waren wie alle anderen, jedenfalls aus der Perspektive von Investoren oder Managern. Auch in diesem Bereich gibt es Startups und man spricht von Dingen wie innovativen Herangehensweisen, hocheffizienten Produktionsanlagen, skalierbaren Produktionstechniken und geringeren Stückkosten.
Wir lesen aktuell von Massenproduktion und gesteigerten Fertigungskapazitäten für Kriegsgerät. Der Motor solcher Dynamik und solchen Wachstums ist die Aufrüstung, die inzwischen permanent gefordert wird. Parallel wird die Bevölkerung verbal eingestimmt und auf Kriegstüchtigkeit getrimmt.
Das Rüstungs- und KI-Startup Helsing verkündete kürzlich eine «dezentrale Massenproduktion für den Ukrainekrieg». Mit dieser Expansion positioniere sich das Münchner Unternehmen als einer der weltweit führenden Hersteller von Kampfdrohnen. Der nächste «Meilenstein» steht auch bereits an: Man will eine Satellitenflotte im Weltraum aufbauen, zur Überwachung von Gefechtsfeldern und Truppenbewegungen.
Ebenfalls aus München stammt das als DefenseTech-Startup bezeichnete Unternehmen ARX Robotics. Kürzlich habe man in der Region die größte europäische Produktionsstätte für autonome Verteidigungssysteme eröffnet. Damit fahre man die Produktion von Militär-Robotern hoch. Diese Expansion diene auch der Lieferung der «größten Flotte unbemannter Bodensysteme westlicher Bauart» in die Ukraine.
Rüstung boomt und scheint ein Zukunftsmarkt zu sein. Die Hersteller und Vermarkter betonen, mit ihren Aktivitäten und Produkten solle die europäische Verteidigungsfähigkeit erhöht werden. Ihre Strategien sollten sogar «zum Schutz demokratischer Strukturen beitragen».
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ c631e267:c2b78d3e
2025-02-07 19:42:11Nur wenn wir aufeinander zugehen, haben wir die Chance \ auf Überwindung der gegenseitigen Ressentiments! \ Dr. med. dent. Jens Knipphals
In Wolfsburg sollte es kürzlich eine Gesprächsrunde von Kritikern der Corona-Politik mit Oberbürgermeister Dennis Weilmann und Vertretern der Stadtverwaltung geben. Der Zahnarzt und langjährige Maßnahmenkritiker Jens Knipphals hatte diese Einladung ins Rathaus erwirkt und publiziert. Seine Motivation:
«Ich möchte die Spaltung der Gesellschaft überwinden. Dazu ist eine umfassende Aufarbeitung der Corona-Krise in der Öffentlichkeit notwendig.»
Schon früher hatte Knipphals Antworten von den Kommunalpolitikern verlangt, zum Beispiel bei öffentlichen Bürgerfragestunden. Für das erwartete Treffen im Rathaus formulierte er Fragen wie: Warum wurden fachliche Argumente der Kritiker ignoriert? Weshalb wurde deren Ausgrenzung, Diskreditierung und Entmenschlichung nicht entgegengetreten? In welcher Form übernehmen Rat und Verwaltung in Wolfsburg persönlich Verantwortung für die erheblichen Folgen der politischen Corona-Krise?
Der Termin fand allerdings nicht statt – der Bürgermeister sagte ihn kurz vorher wieder ab. Knipphals bezeichnete Weilmann anschließend als Wiederholungstäter, da das Stadtoberhaupt bereits 2022 zu einem Runden Tisch in der Sache eingeladen hatte, den es dann nie gab. Gegenüber Multipolar erklärte der Arzt, Weilmann wolle scheinbar eine öffentliche Aufarbeitung mit allen Mitteln verhindern. Er selbst sei «inzwischen absolut desillusioniert» und die einzige Lösung sei, dass die Verantwortlichen gingen.
Die Aufarbeitung der Plandemie beginne bei jedem von uns selbst, sei aber letztlich eine gesamtgesellschaftliche Aufgabe, schreibt Peter Frey, der den «Fall Wolfsburg» auch in seinem Blog behandelt. Diese Aufgabe sei indes deutlich größer, als viele glaubten. Erfreulicherweise sei der öffentliche Informationsraum inzwischen größer, trotz der weiterhin unverfrorenen Desinformations-Kampagnen der etablierten Massenmedien.
Frey erinnert daran, dass Dennis Weilmann mitverantwortlich für gravierende Grundrechtseinschränkungen wie die 2021 eingeführten 2G-Regeln in der Wolfsburger Innenstadt zeichnet. Es sei naiv anzunehmen, dass ein Funktionär einzig im Interesse der Bürger handeln würde. Als früherer Dezernent des Amtes für Wirtschaft, Digitalisierung und Kultur der Autostadt kenne Weilmann zum Beispiel die Verknüpfung von Fördergeldern mit politischen Zielsetzungen gut.
Wolfsburg wurde damals zu einem Modellprojekt des Bundesministeriums des Innern (BMI) und war Finalist im Bitkom-Wettbewerb «Digitale Stadt». So habe rechtzeitig vor der Plandemie das Projekt «Smart City Wolfsburg» anlaufen können, das der Stadt «eine Vorreiterrolle für umfassende Vernetzung und Datenerfassung» aufgetragen habe, sagt Frey. Die Vereinten Nationen verkauften dann derartige «intelligente» Überwachungs- und Kontrollmaßnahmen ebenso als Rettung in der Not wie das Magazin Forbes im April 2020:
«Intelligente Städte können uns helfen, die Coronavirus-Pandemie zu bekämpfen. In einer wachsenden Zahl von Ländern tun die intelligenten Städte genau das. Regierungen und lokale Behörden nutzen Smart-City-Technologien, Sensoren und Daten, um die Kontakte von Menschen aufzuspüren, die mit dem Coronavirus infiziert sind. Gleichzeitig helfen die Smart Cities auch dabei, festzustellen, ob die Regeln der sozialen Distanzierung eingehalten werden.»
Offensichtlich gibt es viele Aspekte zu bedenken und zu durchleuten, wenn es um die Aufklärung und Aufarbeitung der sogenannten «Corona-Pandemie» und der verordneten Maßnahmen geht. Frustration und Desillusion sind angesichts der Realitäten absolut verständlich. Gerade deswegen sind Initiativen wie die von Jens Knipphals so bewundernswert und so wichtig – ebenso wie eine seiner Kernthesen: «Wir müssen aufeinander zugehen, da hilft alles nichts».
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ dc4cd086:cee77c06
2024-10-18 04:08:33Have you ever wanted to learn from lengthy educational videos but found it challenging to navigate through hours of content? Our new tool addresses this problem by transforming long-form video lectures into easily digestible, searchable content.
Key Features:
Video Processing:
- Automatically downloads YouTube videos, transcripts, and chapter information
- Splits transcripts into sections based on video chapters
Content Summarization:
- Utilizes language models to transform spoken content into clear, readable text
- Formats output in AsciiDoc for improved readability and navigation
- Highlights key terms and concepts with [[term]] notation for potential cross-referencing
Diagram Extraction:
- Analyzes video entropy to identify static diagram/slide sections
- Provides a user-friendly GUI for manual selection of relevant time ranges
- Allows users to pick representative frames from selected ranges
Going Forward:
Currently undergoing a rewrite to improve organization and functionality, but you are welcome to try the current version, though it might not work on every machine. Will support multiple open and closed language models for user choice Free and open-source, allowing for personal customization and integration with various knowledge bases. Just because we might not have it on our official Alexandria knowledge base, you are still welcome to use it on you own personal or community knowledge bases! We want to help find connections between ideas that exist across relays, allowing individuals and groups to mix and match knowledge bases between each other, allowing for any degree of openness you care.
While designed with #Alexandria users in mind, it's available for anyone to use and adapt to their own learning needs.
Screenshots
Frame Selection
This is a screenshot of the frame selection interface. You'll see a signal that represents frame entropy over time. The vertical lines indicate the start and end of a chapter. Within these chapters you can select the frames by clicking and dragging the mouse over the desired range where you think diagram is in that chapter. At the bottom is an option that tells the program to select a specific number of frames from that selection.
Diagram Extraction
This is a screenshot of the diagram extraction interface. For every selection you've made, there will be a set of frames that you can choose from. You can select and deselect as many frames as you'd like to save.
Links
- repo: https://github.com/limina1/video_article_converter
- Nostr Apps 101: https://www.youtube.com/watch?v=Flxa_jkErqE
Output
And now, we have a demonstration of the final result of this tool, with some quick cleaning up. The video we will be using this tool on is titled Nostr Apps 101 by nostr:npub1nxy4qpqnld6kmpphjykvx2lqwvxmuxluddwjamm4nc29ds3elyzsm5avr7 during Nostrasia. The following thread is an analog to the modular articles we are constructing for Alexandria, and I hope it conveys the functionality we want to create in the knowledge space. Note, this tool is the first step! You could use a different prompt that is most appropriate for the specific context of the transcript you are working with, but you can also manually clean up any discrepancies that don't portray the video accurately.
nostr:nevent1qvzqqqqqqypzp5r5hd579v2sszvvzfel677c8dxgxm3skl773sujlsuft64c44ncqy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qgwwaehxw309ahx7uewd3hkctcpzemhxue69uhhyetvv9ujumt0wd68ytnsw43z7qghwaehxw309aex2mrp0yhxummnw3ezucnpdejz7qgewaehxw309aex2mrp0yh8xmn0wf6zuum0vd5kzmp0qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgc7hglpn
Or view on Coracle nostr:nevent1qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgcppemhxue69uhkummn9ekx7mp0qgsdqa9md83tz5yqnrqjw07hhkpmfjpkuv9hlh5v8yhu8z274w9dv7qnnq0s3
-
@ 3589b793:ad53847e
2025-04-30 12:10:06前回の続きです。
特に「Snortで試験的にノート単位に投げ銭できる機能」について。実は記事書いた直後にリリースされて慌ててw追記してたんですが追い付かないということで別記事にしました。
今回のここがすごい!
「Snortで試験的にノート単位に投げ銭できる機能」では一つブレイクスルーが起こっています。それは「ウォレットにインボイスを放り投げた後に払い込み完了を提示できる」ようになったことです。これによりペイメントのライフサイクルが一通りカバーされたことになります。
Snortの画面
なにを当たり前のことをという向きもあるかもしれませんが、Nostrクライアントで払い込み完了を追跡することはとても難しいです。基本的にNostrとLNウォレットはまったく別のアプリケーションで両者の間を繋ぐのはインボイス文字列だけです。ウォレットもNostrからキックされずに、インボイス文字列をコピペするなりQRコードで読み取ったものを渡されるだけかもしれません。またその場でリアルタイムに処理される前提もありません。
なのでNostrクライアントでその後をトラックすることは難しく、これまではあくまで請求書を送付したり(LNインボイス)振り込み口座を提示する(LNアドレス)という一方的に放り投げてただけだったわけです。といっても魔法のようにNostrクライアントがトラックできるようになったわけではなく、今回の対応方法もインボイスを発行/お金を振り込まれるサービス側(LNURL)にNostrカスタマイズを入れさせるというものになります。
プロポーザルの概要について
前回の記事ではよくわからんで終わっていましたが、当日夜(日本時間)にスペックをまとめたプロポーザルも起こされました(早い!)。LNURLが、Nostr用のインボイスを発行して、さらにNostrイベントの発行を行っていることがポイントでした。名称は"Lightning Zaps"で確定のようです。プロポーザルは、NostrとLNURLの双方の発明者であるfiatjaf氏からツッコミが入り、またそれが妥当な指摘のために、エンドポイントURLのインターフェースなどは変わりそうなのですが、概要はそう変わらないだろうということで簡単にまとめてみます。
全体の流れ
図は、Nostrクライアント上に提示されているLNアドレスへ投げ銭が開始してから、Nostrクライアント上に払い込み完了したイベントが表示されるまでの流れを示しています。
- 投げ銭の内容が固まったらNostrイベントデータを添付してインボイスの発行を依頼する
- 説明欄にNostr用のデータを記載したインボイスを発行して返却する
- Nostrクライアントで提示されたインボイスをユーザーが何かしらの手段でウォレットに渡す
- ウォレットがLNに支払いを実行する
- インボイスの発行者であるLNURLが管理しているLNノードにsatoshiが届く
- LNURLサーバが投げ銭成功のNostrイベントを発行する
- Nostrクライアントがイベントを受信して投げ銭履歴を表示する
特にポイントとなるところを補足します。
対応しているLNアドレスの識別
LNアドレスに投げ銭する場合は、LNアドレスの有効状態やインボイス発行依頼する先の情報を
https://[domain]/.well-known/lnurlp/[username]
から取得しています。そのレスポンス内容にNostr対応を示す情報を追加しています。ただし、ここに突っ込み入っていてlnurlp=LNURL Payから独立させるためにzaps専用のエンドポイントに変わりそうです。(2/15 追記 マージされましたが変更無しでした。PRのディスカッションが盛り上がっているので興味ある方は覗いてみてください。)インボイスの説明欄に書き込むNostrイベント(kind:9734)
これは投げ銭する側のNostrイベントです。投げ銭される者や対象ノートのIDや金額、そしてこのイベントを作成している者が投げ銭したということを「表明」するものになります。表明であって証明でないところは、インボイスを別の人が払っちゃう事態がありえるからですね。この内容をエンクリプトするパターンも用意されていたが複雑になり過ぎるという理由で今回は外され追加提案に回されました。また、このイベントはデータを作成しただけです。支払いを検知した後にLNURLが発行するイベントに添付されることになります。そのため投げ銭する者にちゃんと届くように作成者のリレーサーバリストも書き込まれています。
支払いを検知した後に発行するNostrイベント(kind:9735)
これが実際にNostrリレーサーバに発行されるイベントです。LNURL側はウォッチしているLNノードにsatoshiが届くと、インボイスの説明欄に書かれているNostrイベントを取り出して、いわば受領イベントを作成して発行します。以下のようにNostイベントのkind:9734とkind:9735が親子になったイベントとなります。
json { "pubkey": "LNURLが持っているNostrアカウントの公開鍵", "kind": 9735, "tags": [ [ "p", "投げ銭された者の公開鍵" ], [ "bolt11", "インボイスの文字列lnbc〜" ], [ "description", "投げ銭した者が作成したkind:9734のNostrイベント" ], [ "preimage", "インボイスのpreimage" ] ], }
所感
とにかくNostrクライアントはLNノードを持たないしLNプロトコルとも直接喋らずにインボイス文字列だけで取り扱えるようになっているところがおもしろいと思っています。NostrとLNという二つのデセントライズドなオープンプロトコルが協調できていますし、前回も述べましたがどんなアプリでも簡単に真似できます。
とはいえ、さすがに払込完了のトラックは難しく、今回はLNURL側にそのすり合わせの責務が寄せられることになりました。しかし、LNURLもLNの上に作られたオープンプロトコル/スペックの位置付けになるため、他のLNURLのスペックに干渉するという懸念から、本提案のNIP-57に変更依頼が出されています。LN、LNURL、Nostrの3つのオープンプロトコルの責務分担が難しいですね。アーキテクチャ層のスタックにおいて3つの中ではNostrが一番上になるため、Nostrに相当するレイヤーの他のwebサービスでやるときはLNプロトコルを喋るなりLNノードを持つようにして、今回LNURLが寄せられた責務を吸収するのが無難かもしれません。
また、NIP-57の変更依頼理由の一つにはBOLT-12を見越した抽象化も挙げられています。他のLNURLのスペックを削ぎ落としてzapsだけにすることでBOLT-12にも載りやすくなるだろうと。LNURLの多くはBOLT-12に取り込まれる運命なわけですが、LNアドレス以外の点でもNostrではBOLT-12のOfferやInvoiceRequestのユースケースをやりたいという声が挙がっているため、NostrによりBOLT-12が進む展開もありそうだなあ、あってほしい。
-
@ c4b5369a:b812dbd6
2025-04-15 07:26:16Offline transactions with Cashu
Over the past few weeks, I've been busy implementing offline capabilities into nutstash. I think this is one of the key value propositions of ecash, beinga a bearer instrument that can be used without internet access.
It does however come with limitations, which can lead to a bit of confusion. I hope this article will clear some of these questions up for you!
What is ecash/Cashu?
Ecash is the first cryptocurrency ever invented. It was created by David Chaum in 1983. It uses a blind signature scheme, which allows users to prove ownership of a token without revealing a link to its origin. These tokens are what we call ecash. They are bearer instruments, meaning that anyone who possesses a copy of them, is considered the owner.
Cashu is an implementation of ecash, built to tightly interact with Bitcoin, more specifically the Bitcoin lightning network. In the Cashu ecosystem,
Mints
are the gateway to the lightning network. They provide the infrastructure to access the lightning network, pay invoices and receive payments. Instead of relying on a traditional ledger scheme like other custodians do, the mint issues ecash tokens, to represent the value held by the users.How do normal Cashu transactions work?
A Cashu transaction happens when the sender gives a copy of his ecash token to the receiver. This can happen by any means imaginable. You could send the token through email, messenger, or even by pidgeon. One of the common ways to transfer ecash is via QR code.
The transaction is however not finalized just yet! In order to make sure the sender cannot double-spend their copy of the token, the receiver must do what we call a
swap
. A swap is essentially exchanging an ecash token for a new one at the mint, invalidating the old token in the process. This ensures that the sender can no longer use the same token to spend elsewhere, and the value has been transferred to the receiver.What about offline transactions?
Sending offline
Sending offline is very simple. The ecash tokens are stored on your device. Thus, no internet connection is required to access them. You can litteraly just take them, and give them to someone. The most convenient way is usually through a local transmission protocol, like NFC, QR code, Bluetooth, etc.
The one thing to consider when sending offline is that ecash tokens come in form of "coins" or "notes". The technical term we use in Cashu is
Proof
. It "proofs" to the mint that you own a certain amount of value. Since these proofs have a fixed value attached to them, much like UTXOs in Bitcoin do, you would need proofs with a value that matches what you want to send. You can mix and match multiple proofs together to create a token that matches the amount you want to send. But, if you don't have proofs that match the amount, you would need to go online and swap for the needed proofs at the mint.Another limitation is, that you cannot create custom proofs offline. For example, if you would want to lock the ecash to a certain pubkey, or add a timelock to the proof, you would need to go online and create a new custom proof at the mint.
Receiving offline
You might think: well, if I trust the sender, I don't need to be swapping the token right away!
You're absolutely correct. If you trust the sender, you can simply accept their ecash token without needing to swap it immediately.
This is already really useful, since it gives you a way to receive a payment from a friend or close aquaintance without having to worry about connectivity. It's almost just like physical cash!
It does however not work if the sender is untrusted. We have to use a different scheme to be able to receive payments from someone we don't trust.
Receiving offline from an untrusted sender
To be able to receive payments from an untrusted sender, we need the sender to create a custom proof for us. As we've seen before, this requires the sender to go online.
The sender needs to create a token that has the following properties, so that the receciver can verify it offline:
- It must be locked to ONLY the receiver's public key
- It must include an
offline signature proof
(DLEQ proof) - If it contains a timelock & refund clause, it must be set to a time in the future that is acceptable for the receiver
- It cannot contain duplicate proofs (double-spend)
- It cannot contain proofs that the receiver has already received before (double-spend)
If all of these conditions are met, then the receiver can verify the proof offline and accept the payment. This allows us to receive payments from anyone, even if we don't trust them.
At first glance, this scheme seems kinda useless. It requires the sender to go online, which defeats the purpose of having an offline payment system.
I beleive there are a couple of ways this scheme might be useful nonetheless:
-
Offline vending machines: Imagine you have an offline vending machine that accepts payments from anyone. The vending machine could use this scheme to verify payments without needing to go online itself. We can assume that the sender is able to go online and create a valid token, but the receiver doesn't need to be online to verify it.
-
Offline marketplaces: Imagine you have an offline marketplace where buyers and sellers can trade goods and services. Before going to the marketplace the sender already knows where he will be spending the money. The sender could create a valid token before going to the marketplace, using the merchants public key as a lock, and adding a refund clause to redeem any unspent ecash after it expires. In this case, neither the sender nor the receiver needs to go online to complete the transaction.
How to use this
Pretty much all cashu wallets allow you to send tokens offline. This is because all that the wallet needs to do is to look if it can create the desired amount from the proofs stored locally. If yes, it will automatically create the token offline.
Receiving offline tokens is currently only supported by nutstash (experimental).
To create an offline receivable token, the sender needs to lock it to the receiver's public key. Currently there is no refund clause! So be careful that you don't get accidentally locked out of your funds!
The receiver can then inspect the token and decide if it is safe to accept without a swap. If all checks are green, they can accept the token offline without trusting the sender.
The receiver will see the unswapped tokens on the wallet homescreen. They will need to manually swap them later when they are online again.
Later when the receiver is online again, they can swap the token for a fresh one.
Summary
We learned that offline transactions are possible with ecash, but there are some limitations. It either requires trusting the sender, or relying on either the sender or receiver to be online to verify the tokens, or create tokens that can be verified offline by the receiver.
I hope this short article was helpful in understanding how ecash works and its potential for offline transactions.
Cheers,
Gandlaf
-
@ 266815e0:6cd408a5
2025-04-15 06:58:14Its been a little over a year since NIP-90 was written and merged into the nips repo and its been a communication mess.
Every DVM implementation expects the inputs in slightly different formats, returns the results in mostly the same format and there are very few DVM actually running.
NIP-90 is overloaded
Why does a request for text translation and creating bitcoin OP_RETURNs share the same input
i
tag? and why is there anoutput
tag on requests when only one of them will return an output?Each DVM request kind is for requesting completely different types of compute with diffrent input and output requirements, but they are all using the same spec that has 4 different types of inputs (
text
,url
,event
,job
) and an undefined number ofoutput
types.Let me show a few random DVM requests and responses I found on
wss://relay.damus.io
to demonstrate what I mean:This is a request to translate an event to English
json { "kind": 5002, "content": "", "tags": [ // NIP-90 says there can be multiple inputs, so how would a DVM handle translatting multiple events at once? [ "i", "<event-id>", "event" ], [ "param", "language", "en" ], // What other type of output would text translations be? image/jpeg? [ "output", "text/plain" ], // Do we really need to define relays? cant the DVM respond on the relays it saw the request on? [ "relays", "wss://relay.unknown.cloud/", "wss://nos.lol/" ] ] }
This is a request to generate text using an LLM model
json { "kind": 5050, // Why is the content empty? wouldn't it be better to have the prompt in the content? "content": "", "tags": [ // Why use an indexable tag? are we ever going to lookup prompts? // Also the type "prompt" isn't in NIP-90, this should probably be "text" [ "i", "What is the capital of France?", "prompt" ], [ "p", "c4878054cff877f694f5abecf18c7450f4b6fdf59e3e9cb3e6505a93c4577db2" ], [ "relays", "wss://relay.primal.net" ] ] }
This is a request for content recommendation
json { "kind": 5300, "content": "", "tags": [ // Its fine ignoring this param, but what if the client actually needs exactly 200 "results" [ "param", "max_results", "200" ], // The spec never mentions requesting content for other users. // If a DVM didn't understand this and responded to this request it would provide bad data [ "param", "user", "b22b06b051fd5232966a9344a634d956c3dc33a7f5ecdcad9ed11ddc4120a7f2" ], [ "relays", "wss://relay.primal.net", ], [ "p", "ceb7e7d688e8a704794d5662acb6f18c2455df7481833dd6c384b65252455a95" ] ] }
This is a request to create a OP_RETURN message on bitcoin
json { "kind": 5901, // Again why is the content empty when we are sending human readable text? "content": "", "tags": [ // and again, using an indexable tag on an input that will never need to be looked up ["i", "09/01/24 SEC Chairman on the brink of second ETF approval", "text"] ] }
My point isn't that these event schema's aren't understandable but why are they using the same schema? each use-case is different but are they all required to use the same
i
tag format as input and could support all 4 types of inputs.Lack of libraries
With all these different types of inputs, params, and outputs its verify difficult if not impossible to build libraries for DVMs
If a simple text translation request can have an
event
ortext
as inputs, apayment-required
status at any point in the flow, partial results, or responses from 10+ DVMs whats the best way to build a translation library for other nostr clients to use?And how do I build a DVM framework for the server side that can handle multiple inputs of all four types (
url
,text
,event
,job
) and clients are sending all the requests in slightly differently.Supporting payments is impossible
The way NIP-90 is written there isn't much details about payments. only a
payment-required
status and a genericamount
tagBut the way things are now every DVM is implementing payments differently. some send a bolt11 invoice, some expect the client to NIP-57 zap the request event (or maybe the status event), and some even ask for a subscription. and we haven't even started implementing NIP-61 nut zaps or cashu A few are even formatting the
amount
number wrong or denominating it in sats and not mili-satsBuilding a client or a library that can understand and handle all of these payment methods is very difficult. for the DVM server side its worse. A DVM server presumably needs to support all 4+ types of payments if they want to get the most sats for their services and support the most clients.
All of this is made even more complicated by the fact that a DVM can ask for payment at any point during the job process. this makes sense for some types of compute, but for others like translations or user recommendation / search it just makes things even more complicated.
For example, If a client wanted to implement a timeline page that showed the notes of all the pubkeys on a recommended list. what would they do when the selected DVM asks for payment at the start of the job? or at the end? or worse, only provides half the pubkeys and asks for payment for the other half. building a UI that could handle even just two of these possibilities is complicated.
NIP-89 is being abused
NIP-89 is "Recommended Application Handlers" and the way its describe in the nips repo is
a way to discover applications that can handle unknown event-kinds
Not "a way to discover everything"
If I wanted to build an application discovery app to show all the apps that your contacts use and let you discover new apps then it would have to filter out ALL the DVM advertisement events. and that's not just for making requests from relays
If the app shows the user their list of "recommended applications" then it either has to understand that everything in the 5xxx kind range is a DVM and to show that is its own category or show a bunch of unknown "favorites" in the list which might be confusing for the user.
In conclusion
My point in writing this article isn't that the DVMs implementations so far don't work, but that they will never work well because the spec is too broad. even with only a few DVMs running we have already lost interoperability.
I don't want to be completely negative though because some things have worked. the "DVM feeds" work, although they are limited to a single page of results. text / event translations also work well and kind
5970
Event PoW delegation could be cool. but if we want interoperability, we are going to need to change a few things with NIP-90I don't think we can (or should) abandon NIP-90 entirely but it would be good to break it up into small NIPs or specs. break each "kind" of DVM request out into its own spec with its own definitions for expected inputs, outputs and flow.
Then if we have simple, clean definitions for each kind of compute we want to distribute. we might actually see markets and services being built and used.
-
@ 3589b793:ad53847e
2025-04-30 12:02:13※本記事は別サービスで2023年2月5日に公開した記事の移植です。
Nostrクライアントは多種ありますがメジャーなものはだいたいLNの支払いが用意されています。現時点でどんな組み込み方法になっているか調べました。この記事では主にSnortを対象にしています。
LN活用場面
大きくLNアドレスとLNインボイスの2つの形式があります。
1. LNアドレスで投げ銭をセットできる
LNURLのLNアドレスをセットすると、プロフィールやノート(ツイートに相当)からLN支払いができます。別クライアントのastralなどではプリミティブなLNURLの投げ銭形式
lnurl1dp68~
でもセットできます。[追記]さらに本日、Snortで試験的にノート単位に投げ銭できる機能が追加されています。
2. LNインボイスが投稿できる
投稿でLNインボイスを貼り付ければ上記のように他の発言と同じようにタイムラインに流れます。Payボタンを押すと各自の端末にあるLNウォレットが立ち上がります。
3. DMでLNインボイスを送る
メッセージにLNインボイスが組み込まれているという点では2とほぼ同じですが、ユースケースが異なります。発表されたばかりですがリレーサーバの有料化が始まっていて、その決済をDMにLNインボイスを送付して行うことが試されています。2だとパブリックに投稿されますが、こちらはプライベートなので購入希望者のみにLNインボイスを届けられます。
おまけ: Nostrのユーザ名をLNアドレスと同じにする
直接は関係ないですが、Nostrはユーザー名をemail形式にすることができます。LNアドレスも自分でドメイン取って作れるのでNostrのユーザー名と投げ銭のアドレスを同じにできます。
LNウォレットのAlbyのドメインをNostrのユーザ名にも活用している様子 [Not found]
実装方法
LNアドレスもLNインボイスも非常にシンプルな話ですが軽くまとめます。 Snortリポジトリ
LNアドレス
- セットされたLNアドレスを分解して
https://[domain]/.well-known/lnurlp/[username]
にリクエストする - 成功したら投げ銭量を決めるUIを提示する
- Get Inoviceボタンを押したら1のレスポンスにあるcallbackにリクエストしてインボイスを取得する
- 成功したらPayボタンを提示する
LNインボイス
- 投稿内容がLNインボイス識別子
lnbc10m〜
だとわかると、識別子の中の文字列から情報(金額、説明、有効期限)を取り出し、表示用のUIを作成する - 有効期限内だったらPayボタンを提示し、期限が切れていたらExpiredでロックする
支払い
Payボタンを押された後の動きはアドレス、インボイスとも同じです。
- ブラウザでWebLN(Albyなど)がセットされていて、window.weblnオブジェクトがenableになっていると、拡張機能の支払い画面が立ち上がる。クライアント側で支払い成功をキャッチすることも可能。
- Open Walletボタンを押すと
lightning:lnbc10m~
のURI形式でwindow.openされ、Mac/Windows/iPhone/Androidなど各OSに応じたアプリケーション呼び出しが行われ、URIに対応しているLNウォレットが立ち上がる
[追記]ノート単位の投げ銭
Snort周辺の数人(strike社員っぽい人が一人いて本件のメイン実装している)で試験的にやっているようで現時点では実装レベルでしか詳細わかりませんでした。strikeのzapといまいち区別が付かなかったのですが、実装を見るとnostrプロトコルにzapイベントが追加されています。ソースコメントではこの後NIP(nostr improvement proposal)が起こされるようでかなりハッキーです。zap=tipの方言なんでしょうか?
ノートやプロフィールやリアクションなどに加えて新たにnostrイベントに追加しているものは以下2つです。
- zapRequest 投げ銭した側が対象イベントIDと量を記録する
- zapReceipt 投げ銭を受け取った側用のイベント
一つでできそうと思ったけど、nostrは自己主権のプロトコルでイベント作成するには発行者の署名が必須なので2つに分かれているのでしょう。
所感
現状はクライアントだけで完結する非常にシンプルな方法になっています。リレーサーバも経由しないしクライアントにLNノードを組み込むこともしていません。サードパーティへのhttpリクエストやローカルのアプリに受け渡すだけなので、実はどんな一般アプリでもそんなに知識もコストも要らずにパッとできるものです。
現状ちょっと不便だなと思っていることは、タイムラインに流れるインボイスの有効期限内の支払い済みがわからないことです。Payボタンを押してエラーにならないとわかりません。ウォレットアプリに放り投げていてこのトレースするためには、ウォレット側で支払い成功したらNostrイベントを書き込むなどの対応しない限りは、サービス側でインボイスを定期的に一つ一つLNに投げてチェックするなどが必要だと思われるので、他のサービスでマネするときは留意しておくとよさそうです。
一方で、DMによるLNインボイス送付は活用方法が広がりそうな予感があります。Nostrの公開鍵による本人特定と、LNインボイスのメモ欄のテキスト情報による突き合わせだけでも、かんたんな決済機能として用いれそうだからです。もっとNostrに判断材料を追加したければイベント追加も簡単にできることをSnortが示しています。とくにリレーサーバ購読やPROメニューなどのNostr周辺の支払いはやりやすそうなので、DM活用ではなくなにかしらの決済メニューを搭載したNostrクライアントもすぐに出てきそう気がします。
- セットされたLNアドレスを分解して
-
@ a95c6243:d345522c
2025-01-31 20:02:25Im Augenblick wird mit größter Intensität, großer Umsicht \ das deutsche Volk belogen. \ Olaf Scholz im FAZ-Interview
Online-Wahlen stärken die Demokratie, sind sicher, und 61 Prozent der Wahlberechtigten sprechen sich für deren Einführung in Deutschland aus. Das zumindest behauptet eine aktuelle Umfrage, die auch über die Agentur Reuters Verbreitung in den Medien gefunden hat. Demnach würden außerdem 45 Prozent der Nichtwähler bei der Bundestagswahl ihre Stimme abgeben, wenn sie dies zum Beispiel von Ihrem PC, Tablet oder Smartphone aus machen könnten.
Die telefonische Umfrage unter gut 1000 wahlberechtigten Personen sei repräsentativ, behauptet der Auftraggeber – der Digitalverband Bitkom. Dieser präsentiert sich als eingetragener Verein mit einer beeindruckenden Liste von Mitgliedern, die Software und IT-Dienstleistungen anbieten. Erklärtes Vereinsziel ist es, «Deutschland zu einem führenden Digitalstandort zu machen und die digitale Transformation der deutschen Wirtschaft und Verwaltung voranzutreiben».
Durchgeführt hat die Befragung die Bitkom Servicegesellschaft mbH, also alles in der Familie. Die gleiche Erhebung hatte der Verband übrigens 2021 schon einmal durchgeführt. Damals sprachen sich angeblich sogar 63 Prozent für ein derartiges «Demokratie-Update» aus – die Tendenz ist demgemäß fallend. Dennoch orakelt mancher, der Gang zur Wahlurne gelte bereits als veraltet.
Die spanische Privat-Uni mit Globalisten-Touch, IE University, berichtete Ende letzten Jahres in ihrer Studie «European Tech Insights», 67 Prozent der Europäer befürchteten, dass Hacker Wahlergebnisse verfälschen könnten. Mehr als 30 Prozent der Befragten glaubten, dass künstliche Intelligenz (KI) bereits Wahlentscheidungen beeinflusst habe. Trotzdem würden angeblich 34 Prozent der unter 35-Jährigen einer KI-gesteuerten App vertrauen, um in ihrem Namen für politische Kandidaten zu stimmen.
Wie dauerhaft wird wohl das Ergebnis der kommenden Bundestagswahl sein? Diese Frage stellt sich angesichts der aktuellen Entwicklung der Migrations-Debatte und der (vorübergehend) bröckelnden «Brandmauer» gegen die AfD. Das «Zustrombegrenzungsgesetz» der Union hat das Parlament heute Nachmittag überraschenderweise abgelehnt. Dennoch muss man wohl kein ausgesprochener Pessimist sein, um zu befürchten, dass die Entscheidungen der Bürger von den selbsternannten Verteidigern der Demokratie künftig vielleicht nicht respektiert werden, weil sie nicht gefallen.
Bundesweit wird jetzt zu «Brandmauer-Demos» aufgerufen, die CDU gerät unter Druck und es wird von Übergriffen auf Parteibüros und Drohungen gegen Mitarbeiter berichtet. Sicherheitsbehörden warnen vor Eskalationen, die Polizei sei «für ein mögliches erhöhtes Aufkommen von Straftaten gegenüber Politikern und gegen Parteigebäude sensibilisiert».
Der Vorwand «unzulässiger Einflussnahme» auf Politik und Wahlen wird als Argument schon seit einiger Zeit aufgebaut. Der Manipulation schuldig befunden wird neben Putin und Trump auch Elon Musk, was lustigerweise ausgerechnet Bill Gates gerade noch einmal bekräftigt und als «völlig irre» bezeichnet hat. Man stelle sich die Diskussionen um die Gültigkeit von Wahlergebnissen vor, wenn es Online-Verfahren zur Stimmabgabe gäbe. In der Schweiz wird «E-Voting» seit einigen Jahren getestet, aber wohl bisher mit wenig Erfolg.
Die politische Brandstiftung der letzten Jahre zahlt sich immer mehr aus. Anstatt dringende Probleme der Menschen zu lösen – zu denen auch in Deutschland die weit verbreitete Armut zählt –, hat die Politik konsequent polarisiert und sich auf Ausgrenzung und Verhöhnung großer Teile der Bevölkerung konzentriert. Basierend auf Ideologie und Lügen werden abweichende Stimmen unterdrückt und kriminalisiert, nicht nur und nicht erst in diesem Augenblick. Die nächsten Wochen dürften ausgesprochen spannend werden.
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ 91bea5cd:1df4451c
2025-04-15 06:27:28Básico
bash lsblk # Lista todos os diretorios montados.
Para criar o sistema de arquivos:
bash mkfs.btrfs -L "ThePool" -f /dev/sdx
Criando um subvolume:
bash btrfs subvolume create SubVol
Montando Sistema de Arquivos:
bash mount -o compress=zlib,subvol=SubVol,autodefrag /dev/sdx /mnt
Lista os discos formatados no diretório:
bash btrfs filesystem show /mnt
Adiciona novo disco ao subvolume:
bash btrfs device add -f /dev/sdy /mnt
Lista novamente os discos do subvolume:
bash btrfs filesystem show /mnt
Exibe uso dos discos do subvolume:
bash btrfs filesystem df /mnt
Balancea os dados entre os discos sobre raid1:
bash btrfs filesystem balance start -dconvert=raid1 -mconvert=raid1 /mnt
Scrub é uma passagem por todos os dados e metadados do sistema de arquivos e verifica as somas de verificação. Se uma cópia válida estiver disponível (perfis de grupo de blocos replicados), a danificada será reparada. Todas as cópias dos perfis replicados são validadas.
iniciar o processo de depuração :
bash btrfs scrub start /mnt
ver o status do processo de depuração Btrfs em execução:
bash btrfs scrub status /mnt
ver o status do scrub Btrfs para cada um dos dispositivos
bash btrfs scrub status -d / data btrfs scrub cancel / data
Para retomar o processo de depuração do Btrfs que você cancelou ou pausou:
btrfs scrub resume / data
Listando os subvolumes:
bash btrfs subvolume list /Reports
Criando um instantâneo dos subvolumes:
Aqui, estamos criando um instantâneo de leitura e gravação chamado snap de marketing do subvolume de marketing.
bash btrfs subvolume snapshot /Reports/marketing /Reports/marketing-snap
Além disso, você pode criar um instantâneo somente leitura usando o sinalizador -r conforme mostrado. O marketing-rosnap é um instantâneo somente leitura do subvolume de marketing
bash btrfs subvolume snapshot -r /Reports/marketing /Reports/marketing-rosnap
Forçar a sincronização do sistema de arquivos usando o utilitário 'sync'
Para forçar a sincronização do sistema de arquivos, invoque a opção de sincronização conforme mostrado. Observe que o sistema de arquivos já deve estar montado para que o processo de sincronização continue com sucesso.
bash btrfs filsystem sync /Reports
Para excluir o dispositivo do sistema de arquivos, use o comando device delete conforme mostrado.
bash btrfs device delete /dev/sdc /Reports
Para sondar o status de um scrub, use o comando scrub status com a opção -dR .
bash btrfs scrub status -dR / Relatórios
Para cancelar a execução do scrub, use o comando scrub cancel .
bash $ sudo btrfs scrub cancel / Reports
Para retomar ou continuar com uma depuração interrompida anteriormente, execute o comando de cancelamento de depuração
bash sudo btrfs scrub resume /Reports
mostra o uso do dispositivo de armazenamento:
btrfs filesystem usage /data
Para distribuir os dados, metadados e dados do sistema em todos os dispositivos de armazenamento do RAID (incluindo o dispositivo de armazenamento recém-adicionado) montados no diretório /data , execute o seguinte comando:
sudo btrfs balance start --full-balance /data
Pode demorar um pouco para espalhar os dados, metadados e dados do sistema em todos os dispositivos de armazenamento do RAID se ele contiver muitos dados.
Opções importantes de montagem Btrfs
Nesta seção, vou explicar algumas das importantes opções de montagem do Btrfs. Então vamos começar.
As opções de montagem Btrfs mais importantes são:
**1. acl e noacl
**ACL gerencia permissões de usuários e grupos para os arquivos/diretórios do sistema de arquivos Btrfs.
A opção de montagem acl Btrfs habilita ACL. Para desabilitar a ACL, você pode usar a opção de montagem noacl .
Por padrão, a ACL está habilitada. Portanto, o sistema de arquivos Btrfs usa a opção de montagem acl por padrão.
**2. autodefrag e noautodefrag
**Desfragmentar um sistema de arquivos Btrfs melhorará o desempenho do sistema de arquivos reduzindo a fragmentação de dados.
A opção de montagem autodefrag permite a desfragmentação automática do sistema de arquivos Btrfs.
A opção de montagem noautodefrag desativa a desfragmentação automática do sistema de arquivos Btrfs.
Por padrão, a desfragmentação automática está desabilitada. Portanto, o sistema de arquivos Btrfs usa a opção de montagem noautodefrag por padrão.
**3. compactar e compactar-forçar
**Controla a compactação de dados no nível do sistema de arquivos do sistema de arquivos Btrfs.
A opção compactar compacta apenas os arquivos que valem a pena compactar (se compactar o arquivo economizar espaço em disco).
A opção compress-force compacta todos os arquivos do sistema de arquivos Btrfs, mesmo que a compactação do arquivo aumente seu tamanho.
O sistema de arquivos Btrfs suporta muitos algoritmos de compactação e cada um dos algoritmos de compactação possui diferentes níveis de compactação.
Os algoritmos de compactação suportados pelo Btrfs são: lzo , zlib (nível 1 a 9) e zstd (nível 1 a 15).
Você pode especificar qual algoritmo de compactação usar para o sistema de arquivos Btrfs com uma das seguintes opções de montagem:
- compress=algoritmo:nível
- compress-force=algoritmo:nível
Para obter mais informações, consulte meu artigo Como habilitar a compactação do sistema de arquivos Btrfs .
**4. subvol e subvolid
**Estas opções de montagem são usadas para montar separadamente um subvolume específico de um sistema de arquivos Btrfs.
A opção de montagem subvol é usada para montar o subvolume de um sistema de arquivos Btrfs usando seu caminho relativo.
A opção de montagem subvolid é usada para montar o subvolume de um sistema de arquivos Btrfs usando o ID do subvolume.
Para obter mais informações, consulte meu artigo Como criar e montar subvolumes Btrfs .
**5. dispositivo
A opção de montagem de dispositivo** é usada no sistema de arquivos Btrfs de vários dispositivos ou RAID Btrfs.
Em alguns casos, o sistema operacional pode falhar ao detectar os dispositivos de armazenamento usados em um sistema de arquivos Btrfs de vários dispositivos ou RAID Btrfs. Nesses casos, você pode usar a opção de montagem do dispositivo para especificar os dispositivos que deseja usar para o sistema de arquivos de vários dispositivos Btrfs ou RAID.
Você pode usar a opção de montagem de dispositivo várias vezes para carregar diferentes dispositivos de armazenamento para o sistema de arquivos de vários dispositivos Btrfs ou RAID.
Você pode usar o nome do dispositivo (ou seja, sdb , sdc ) ou UUID , UUID_SUB ou PARTUUID do dispositivo de armazenamento com a opção de montagem do dispositivo para identificar o dispositivo de armazenamento.
Por exemplo,
- dispositivo=/dev/sdb
- dispositivo=/dev/sdb,dispositivo=/dev/sdc
- dispositivo=UUID_SUB=490a263d-eb9a-4558-931e-998d4d080c5d
- device=UUID_SUB=490a263d-eb9a-4558-931e-998d4d080c5d,device=UUID_SUB=f7ce4875-0874-436a-b47d-3edef66d3424
**6. degraded
A opção de montagem degradada** permite que um RAID Btrfs seja montado com menos dispositivos de armazenamento do que o perfil RAID requer.
Por exemplo, o perfil raid1 requer a presença de 2 dispositivos de armazenamento. Se um dos dispositivos de armazenamento não estiver disponível em qualquer caso, você usa a opção de montagem degradada para montar o RAID mesmo que 1 de 2 dispositivos de armazenamento esteja disponível.
**7. commit
A opção commit** mount é usada para definir o intervalo (em segundos) dentro do qual os dados serão gravados no dispositivo de armazenamento.
O padrão é definido como 30 segundos.
Para definir o intervalo de confirmação para 15 segundos, você pode usar a opção de montagem commit=15 (digamos).
**8. ssd e nossd
A opção de montagem ssd** informa ao sistema de arquivos Btrfs que o sistema de arquivos está usando um dispositivo de armazenamento SSD, e o sistema de arquivos Btrfs faz a otimização SSD necessária.
A opção de montagem nossd desativa a otimização do SSD.
O sistema de arquivos Btrfs detecta automaticamente se um SSD é usado para o sistema de arquivos Btrfs. Se um SSD for usado, a opção de montagem de SSD será habilitada. Caso contrário, a opção de montagem nossd é habilitada.
**9. ssd_spread e nossd_spread
A opção de montagem ssd_spread** tenta alocar grandes blocos contínuos de espaço não utilizado do SSD. Esse recurso melhora o desempenho de SSDs de baixo custo (baratos).
A opção de montagem nossd_spread desativa o recurso ssd_spread .
O sistema de arquivos Btrfs detecta automaticamente se um SSD é usado para o sistema de arquivos Btrfs. Se um SSD for usado, a opção de montagem ssd_spread será habilitada. Caso contrário, a opção de montagem nossd_spread é habilitada.
**10. descarte e nodiscard
Se você estiver usando um SSD que suporte TRIM enfileirado assíncrono (SATA rev3.1), a opção de montagem de descarte** permitirá o descarte de blocos de arquivos liberados. Isso melhorará o desempenho do SSD.
Se o SSD não suportar TRIM enfileirado assíncrono, a opção de montagem de descarte prejudicará o desempenho do SSD. Nesse caso, a opção de montagem nodiscard deve ser usada.
Por padrão, a opção de montagem nodiscard é usada.
**11. norecovery
Se a opção de montagem norecovery** for usada, o sistema de arquivos Btrfs não tentará executar a operação de recuperação de dados no momento da montagem.
**12. usebackuproot e nousebackuproot
Se a opção de montagem usebackuproot for usada, o sistema de arquivos Btrfs tentará recuperar qualquer raiz de árvore ruim/corrompida no momento da montagem. O sistema de arquivos Btrfs pode armazenar várias raízes de árvore no sistema de arquivos. A opção de montagem usebackuproot** procurará uma boa raiz de árvore e usará a primeira boa que encontrar.
A opção de montagem nousebackuproot não verificará ou recuperará raízes de árvore inválidas/corrompidas no momento da montagem. Este é o comportamento padrão do sistema de arquivos Btrfs.
**13. space_cache, space_cache=version, nospace_cache e clear_cache
A opção de montagem space_cache** é usada para controlar o cache de espaço livre. O cache de espaço livre é usado para melhorar o desempenho da leitura do espaço livre do grupo de blocos do sistema de arquivos Btrfs na memória (RAM).
O sistema de arquivos Btrfs suporta 2 versões do cache de espaço livre: v1 (padrão) e v2
O mecanismo de cache de espaço livre v2 melhora o desempenho de sistemas de arquivos grandes (tamanho de vários terabytes).
Você pode usar a opção de montagem space_cache=v1 para definir a v1 do cache de espaço livre e a opção de montagem space_cache=v2 para definir a v2 do cache de espaço livre.
A opção de montagem clear_cache é usada para limpar o cache de espaço livre.
Quando o cache de espaço livre v2 é criado, o cache deve ser limpo para criar um cache de espaço livre v1 .
Portanto, para usar o cache de espaço livre v1 após a criação do cache de espaço livre v2 , as opções de montagem clear_cache e space_cache=v1 devem ser combinadas: clear_cache,space_cache=v1
A opção de montagem nospace_cache é usada para desabilitar o cache de espaço livre.
Para desabilitar o cache de espaço livre após a criação do cache v1 ou v2 , as opções de montagem nospace_cache e clear_cache devem ser combinadas: clear_cache,nosapce_cache
**14. skip_balance
Por padrão, a operação de balanceamento interrompida/pausada de um sistema de arquivos Btrfs de vários dispositivos ou RAID Btrfs será retomada automaticamente assim que o sistema de arquivos Btrfs for montado. Para desabilitar a retomada automática da operação de equilíbrio interrompido/pausado em um sistema de arquivos Btrfs de vários dispositivos ou RAID Btrfs, você pode usar a opção de montagem skip_balance .**
**15. datacow e nodatacow
A opção datacow** mount habilita o recurso Copy-on-Write (CoW) do sistema de arquivos Btrfs. É o comportamento padrão.
Se você deseja desabilitar o recurso Copy-on-Write (CoW) do sistema de arquivos Btrfs para os arquivos recém-criados, monte o sistema de arquivos Btrfs com a opção de montagem nodatacow .
**16. datasum e nodatasum
A opção datasum** mount habilita a soma de verificação de dados para arquivos recém-criados do sistema de arquivos Btrfs. Este é o comportamento padrão.
Se você não quiser que o sistema de arquivos Btrfs faça a soma de verificação dos dados dos arquivos recém-criados, monte o sistema de arquivos Btrfs com a opção de montagem nodatasum .
Perfis Btrfs
Um perfil Btrfs é usado para informar ao sistema de arquivos Btrfs quantas cópias dos dados/metadados devem ser mantidas e quais níveis de RAID devem ser usados para os dados/metadados. O sistema de arquivos Btrfs contém muitos perfis. Entendê-los o ajudará a configurar um RAID Btrfs da maneira que você deseja.
Os perfis Btrfs disponíveis são os seguintes:
single : Se o perfil único for usado para os dados/metadados, apenas uma cópia dos dados/metadados será armazenada no sistema de arquivos, mesmo se você adicionar vários dispositivos de armazenamento ao sistema de arquivos. Assim, 100% do espaço em disco de cada um dos dispositivos de armazenamento adicionados ao sistema de arquivos pode ser utilizado.
dup : Se o perfil dup for usado para os dados/metadados, cada um dos dispositivos de armazenamento adicionados ao sistema de arquivos manterá duas cópias dos dados/metadados. Assim, 50% do espaço em disco de cada um dos dispositivos de armazenamento adicionados ao sistema de arquivos pode ser utilizado.
raid0 : No perfil raid0 , os dados/metadados serão divididos igualmente em todos os dispositivos de armazenamento adicionados ao sistema de arquivos. Nesta configuração, não haverá dados/metadados redundantes (duplicados). Assim, 100% do espaço em disco de cada um dos dispositivos de armazenamento adicionados ao sistema de arquivos pode ser usado. Se, em qualquer caso, um dos dispositivos de armazenamento falhar, todo o sistema de arquivos será corrompido. Você precisará de pelo menos dois dispositivos de armazenamento para configurar o sistema de arquivos Btrfs no perfil raid0 .
raid1 : No perfil raid1 , duas cópias dos dados/metadados serão armazenadas nos dispositivos de armazenamento adicionados ao sistema de arquivos. Nesta configuração, a matriz RAID pode sobreviver a uma falha de unidade. Mas você pode usar apenas 50% do espaço total em disco. Você precisará de pelo menos dois dispositivos de armazenamento para configurar o sistema de arquivos Btrfs no perfil raid1 .
raid1c3 : No perfil raid1c3 , três cópias dos dados/metadados serão armazenadas nos dispositivos de armazenamento adicionados ao sistema de arquivos. Nesta configuração, a matriz RAID pode sobreviver a duas falhas de unidade, mas você pode usar apenas 33% do espaço total em disco. Você precisará de pelo menos três dispositivos de armazenamento para configurar o sistema de arquivos Btrfs no perfil raid1c3 .
raid1c4 : No perfil raid1c4 , quatro cópias dos dados/metadados serão armazenadas nos dispositivos de armazenamento adicionados ao sistema de arquivos. Nesta configuração, a matriz RAID pode sobreviver a três falhas de unidade, mas você pode usar apenas 25% do espaço total em disco. Você precisará de pelo menos quatro dispositivos de armazenamento para configurar o sistema de arquivos Btrfs no perfil raid1c4 .
raid10 : No perfil raid10 , duas cópias dos dados/metadados serão armazenadas nos dispositivos de armazenamento adicionados ao sistema de arquivos, como no perfil raid1 . Além disso, os dados/metadados serão divididos entre os dispositivos de armazenamento, como no perfil raid0 .
O perfil raid10 é um híbrido dos perfis raid1 e raid0 . Alguns dos dispositivos de armazenamento formam arrays raid1 e alguns desses arrays raid1 são usados para formar um array raid0 . Em uma configuração raid10 , o sistema de arquivos pode sobreviver a uma única falha de unidade em cada uma das matrizes raid1 .
Você pode usar 50% do espaço total em disco na configuração raid10 . Você precisará de pelo menos quatro dispositivos de armazenamento para configurar o sistema de arquivos Btrfs no perfil raid10 .
raid5 : No perfil raid5 , uma cópia dos dados/metadados será dividida entre os dispositivos de armazenamento. Uma única paridade será calculada e distribuída entre os dispositivos de armazenamento do array RAID.
Em uma configuração raid5 , o sistema de arquivos pode sobreviver a uma única falha de unidade. Se uma unidade falhar, você pode adicionar uma nova unidade ao sistema de arquivos e os dados perdidos serão calculados a partir da paridade distribuída das unidades em execução.
Você pode usar 1 00x(N-1)/N % do total de espaços em disco na configuração raid5 . Aqui, N é o número de dispositivos de armazenamento adicionados ao sistema de arquivos. Você precisará de pelo menos três dispositivos de armazenamento para configurar o sistema de arquivos Btrfs no perfil raid5 .
raid6 : No perfil raid6 , uma cópia dos dados/metadados será dividida entre os dispositivos de armazenamento. Duas paridades serão calculadas e distribuídas entre os dispositivos de armazenamento do array RAID.
Em uma configuração raid6 , o sistema de arquivos pode sobreviver a duas falhas de unidade ao mesmo tempo. Se uma unidade falhar, você poderá adicionar uma nova unidade ao sistema de arquivos e os dados perdidos serão calculados a partir das duas paridades distribuídas das unidades em execução.
Você pode usar 100x(N-2)/N % do espaço total em disco na configuração raid6 . Aqui, N é o número de dispositivos de armazenamento adicionados ao sistema de arquivos. Você precisará de pelo menos quatro dispositivos de armazenamento para configurar o sistema de arquivos Btrfs no perfil raid6 .
-
@ 9223d2fa:b57e3de7
2025-04-15 02:54:0012,600 steps
-
@ 3589b793:ad53847e
2025-04-30 11:46:52※本記事は別サービスで2022年9月25日に公開した記事の移植です。
LNの手数料の適正水準はどう見積もったらいいだろうか?ルーティングノードの収益性を算出するためにはどうアプローチすればよいだろうか?本記事ではルーティングノード運用のポジションに立ち参考になりそうな数値や計算式を整理する。
個人的な感想を先に書くと以下となる。
- 現在の手数料市場は収益性が低くもっと手数料が上がった方が健全である。
- 他の決済手段と比較すると、LN支払い料金は10000ppm(手数料1%相当)でも十分ではないか。4ホップとすると中間1ノードあたり2500ppmである。
- ルーティングノードの収益性を考えると、1000ppmあれば1BTC程度の資金で年利2.8%になり半年で初期費用回収できるので十分な投資対象になると考える。
基本概念の整理
LNの料金方式
- LNの手数料は送金額に応じた料率方式が主になる。(基本料金の設定もあるが1 ~ 0 satsが大半)
- 料率単位のppm(parts per million)は、1,000,000satsを送るときの手数料をsats金額で示したもの。
- %での手数料率に変換すると1000ppm = 0.1%になる。
- 支払い者が払う手数料はルーティングに参加した各ノードの手数料の合計である。本稿では4ホップ(経由ノードが4つ)のルーティングがあるとすれば、各ノードの取り分は単純計算で1/4とみなす。
- ルーティングノードの収益はアウトバウンドフローで発生するのでアウトバウンドキャパシティが直接的な収益資源となる。
LNの料金以外のベネフィット
本稿では料金比較だけを行うが実際の決済検討では以下のような料金以外の効用も忘れてはならない。
- Bitcoin(L1)に比べると、料金の安さだけでなく、即時確定やトランザクション量のスケールという利点がある。
- クレジットカードなどの集権サービスと比較した特徴はBitcoin(L1)とだいたい同じである。
- 24時間365日利用できる
- 誰でも自由に使える
- 信頼する第三者に対する加盟や審査や手数料率などの交渉手続きが要らない
- 検閲がなく匿名性が高い
- 逆にデメリットはオンライン前提がゆえの利用の不便さやセキュリティ面の不安さなどが挙げられる。
現在の料金相場
- ルーティングノードの料金設定
- sinkノードとのチャネルは500 ~ 1000ppmが多い。
- routing/sourceノードとのチャネルは0 ~ 100ppmあたりのレンジになる。
- リバランスする場合もsinkで収益を上げているならsink以下になるのが道理である。
- プロダクト/サービスのバックエンドにいるノードの料金設定
- sinkやsourceに相当するものは上記の通り。
- 1000〜5000ppmあたりで一律同じ設定というノードもよく見かける。
- ビジネスモデル次第で千差万別だがアクティブと思われるノードでそれ以上はあまり見かけない。
- 上記は1ノードあたりの料金になる。支払い全体では経由したノードの合計になる。
料金目安
いくつかの方法で参考数字を出していく。LN料金算出は「支払い全体/2ホップしたときの1ノードあたり/4ホップしたときの1ノードあたり」の三段階で出す。
類推方式
決済代行業者との比較
- Squareの加盟店手数料は、日本3.25%、アメリカ2.60%である。
- 参考資料 https://www.meti.go.jp/shingikai/mono_info_service/cashless_payment/pdf/20220318_1.pdf
3.25%とするとLNでは「支払い全体/2ホップしたときの1ノードあたり/4ホップしたときの1ノードあたり」でそれぞれ
32,500ppm/16,250ppm/8,125ppm
になる。スマホのアプリストアとの比較
- Androidのアプリストアは年間売上高が100万USDまでなら15%、それ以上なら30%
- 参考資料https://support.google.com/googleplay/android-developer/answer/112622?hl=ja
15%とするとLNでは「支払い全体/2ホップしたときの1ノードあたり/4ホップしたときの1ノードあたり」でそれぞれ
150,000ppm/75,000ppm/37,500ppm
になる。Bitcoin(L1)との比較
Bitcoin(L1)は送金額が異なっても手数料がほぼ同じため、従量課金のLNと単純比較はできない。そのためここではLNの方が料金がお得になる目安を出す。
Bitcoin(L1)の手数料設定
- SegWitのシンプルな送金を対象にする。
- input×1、output×2(送金+お釣り)、tx合計222byte
- L1の手数料は、1sat/byteなら222sats、10sat/byteなら2,220sats、100sat/byteなら22,200sats。(単純化のためvirtual byteではなくbyteで計算する)
- サンプル例 https://www.blockchain.com/btc/tx/15b959509dad5df0e38be2818d8ec74531198ca29ac205db5cceeb17177ff095
L1相場が1sat/byteの時にLNの方がお得なライン
- 100ppmなら、0.0222BTC(2,220,000sats)まで
- 1000ppmなら、0.00222BTC(222,000sats)まで
L1相場が10sat/byteの時にLNの方がお得なライン
- 100ppmなら、0.222BTC(22,200,000sats)まで
- 1000ppmなら、0.0222BTC(2,220,000sats)まで
L1相場が100sat/byteの時にLNの方がお得なライン
- 100ppmなら、2.22BTC(222,000,000sats)まで
- 1000ppmなら、0.222BTC(22,200,000sats)まで
コスト積み上げ方式
ルーティングノードの原価から損益分岐点となるppmを算出する。事業者ではなく個人を想定し、クラウドではなくラズベリーパイでのノード構築環境で計算する。
費用明細
- BTC市場価格 1sat = 0.03円(1BTC = 3百万円)
初期費用
- ハードウェア一式 40,000円
- Raspberry Pi 4 8GB
- SSD 1TB
- 外付けディスプレイ
- チャネル開設のオンチェーン手数料 6.69円/チャネル
- 開設料 223sats
- 223sats * BTC市場価格0.03円 = 6.69円
固定費用
- 電気代 291.6円/月
- 時間あたりの電力量 0.015kWh
- Raspberry Pi 4 電圧5V、推奨電源容量3.0A
- https://www.raspberrypi.com/documentation/computers/raspberry-pi.html#power-supply
- kWh単価27円
- 0.015kWh * kWh単価27円 * 1ヶ月の時間720h = 291.6円
損益分岐点
- 月あたりの電気代を上回るために9,720sats(291.6円)/月以上の収益が必要である。
- ハードウェア費用回収のために0.01333333BTC(133万sats) = 40,000円の収益が必要である。
費用回収シナリオ例
アウトバウンドキャパに1BTCをデポジットしたAさんを例にする。1BTCは初心者とは言えないと思うが、このくらい原資を用意しないと費用回収の話がしづらいという裏事情がある。チャネル選択やルーティング戦略は何もしていない仮定である。ノード運用次第であることは言うまでもないので今回は要素や式を洗い出すことが主目的で一つ一つの変数の値は参考までに。
変数設定
- インバウンドを同額用意して合計キャパを2BTCとする。
- 1チャネルあたり5m satsで40チャネル開設する。
- チャネル開設費用 223sats * 40チャネル = 8,920sats
- 初期費用合計 1,333,333sats + 8,920sats = 1,342,253sats
- 一回あたり平均ルーティング量 = 100,000sats
- 1チャネルあたり平均アウトバウンド数/日 = 2
- 1チャネルあたり平均アウトバウンドppm = 50
費用回収地点
- 1日のアウトバウンド量は、 40チャネル * 2本 * 100,000sats = 8m sats
- 手数料収入は、8m sats * 0.005%(50ppm) = 400sats/日。月換算すると12,000sats/月
- 電気代を差し引くと、12,000sats - 電気代9,720sats =月収益2,280sats(68.4円)
- 初期費用回収まで、1,342,253sats / 2,280sats = 589ヵ月(49年)
- 後述するが電気代差引き前で年利0.14%になる。
理想的なppm
6ヵ月での初期費用回収を目的にしてアウトバウンドppmを求める。
- ひと月あたり、初期費用合計1,342,253sats / 6ヵ月 + 電気代9,720sats = 233,429sats(7,003円)の収益が必要。
- 1日あたり、7,781sats(233円)の収益
- その場合の平均アウトバウンドppmは、 7,781sats(1日の収益量) / 8m sats(1日のアウトバウンド量) * 1m sats(ppm変換係数) = 973ppm
他のファイナンスとの比較
ルーティングノードを運用して手数料収入を得ることは資産運用と捉えることもできる。レンディングやトレードなどの他の資産運用手段とパフォーマンス比較をするなら、デポジットしたアウトバウンドキャパシティに対する手数料収入をAPY換算する。(獲得した手数料はアウトバウンドキャパシティに積み重ねられるので複利と見做せる)
例としてLNDg(v1.3.1)のAPY算出計算式を転載する。見ての通り画面上の表記はAPYなのに中身はAPRになっているので注意だが今回は考え方の参考としてこのまま採用する。
年換算 = 365 / 7 = 52.142857 年利 = (7dayの収益 * 年換算) / アウトバウンドのキャパシティ
例えば上記のAさんの費用回収シナリオに当てはめると以下となる。
年利 0.14% = (400sats * 7日 * 年換算)/ 100m sats
電気代を差し引くと 76sats/日となり年利0.027%
もし平均アウトバウンド1000ppmになると8,000sats/日なので年利2.9%になる。 この場合、電気代はほぼ1日で回収されるため差し引いても大差なく7,676sats/日で年利2.8%になる。
考察
以上、BTC市場価格や一日のアウトバウンド量といった重要な数値をいくつか仮置きした上ではあるが、LN手数料の適正水準を考えるための参考材料を提示した。
まず、現在のLNの料金相場は他の決済手段から比べると圧倒的に安いことがわかった。1%でも競争力が十分ありそうなのに0.1%前後で送金することが大半である。
健全な市場発展のためには、ルーティングノードの採算が取れていることが欠かせないと考えるが、残念ながら現在の収益性は低い。ルーティングノードの収益性は仮定に仮定を重ねた見積もりになるが、平均アウトバウンドが1000ppmでようやく個人でも参入できるレベルになるという結論になった。ルーティングノードの立場に立つと、現在の市場平均から大幅な上昇が必要だと考える。
手数料市場は競争のためつねに下方圧力がかかっていて仕様上で可能な0に近づいている。この重力に逆らうためには、1. 需要 > 供給のバランスになること、2. 事業用途での高額買取のチャネルが増えること、の2つの観点を挙げる。1にせよ2にせよネットワークの活用が進むことで発生し、手数料市場の大きな変動機会になるのではないか。他の決済手段と比較すれば10000ppm、1チャネル2500ppmあたりまでは十分に健全な範囲だと考える。
-
@ a95c6243:d345522c
2025-01-24 20:59:01Menschen tun alles, egal wie absurd, \ um ihrer eigenen Seele nicht zu begegnen. \ Carl Gustav Jung
«Extremer Reichtum ist eine Gefahr für die Demokratie», sagen über die Hälfte der knapp 3000 befragten Millionäre aus G20-Staaten laut einer Umfrage der «Patriotic Millionaires». Ferner stellte dieser Zusammenschluss wohlhabender US-Amerikaner fest, dass 63 Prozent jener Millionäre den Einfluss von Superreichen auf US-Präsident Trump als Bedrohung für die globale Stabilität ansehen.
Diese Besorgnis haben 370 Millionäre und Milliardäre am Dienstag auch den in Davos beim WEF konzentrierten Privilegierten aus aller Welt übermittelt. In einem offenen Brief forderten sie die «gewählten Führer» auf, die Superreichen – also sie selbst – zu besteuern, um «die zersetzenden Auswirkungen des extremen Reichtums auf unsere Demokratien und die Gesellschaft zu bekämpfen». Zum Beispiel kontrolliere eine handvoll extrem reicher Menschen die Medien, beeinflusse die Rechtssysteme in unzulässiger Weise und verwandele Recht in Unrecht.
Schon 2019 beanstandete der bekannte Historiker und Schriftsteller Ruthger Bregman an einer WEF-Podiumsdiskussion die Steuervermeidung der Superreichen. Die elitäre Veranstaltung bezeichnete er als «Feuerwehr-Konferenz, bei der man nicht über Löschwasser sprechen darf.» Daraufhin erhielt Bregman keine Einladungen nach Davos mehr. Auf seine Aussagen machte der Schweizer Aktivist Alec Gagneux aufmerksam, der sich seit Jahrzehnten kritisch mit dem WEF befasst. Ihm wurde kürzlich der Zutritt zu einem dreiteiligen Kurs über das WEF an der Volkshochschule Region Brugg verwehrt.
Nun ist die Erkenntnis, dass mit Geld politischer Einfluss einhergeht, alles andere als neu. Und extremer Reichtum macht die Sache nicht wirklich besser. Trotzdem hat man über Initiativen wie Patriotic Millionaires oder Taxmenow bisher eher selten etwas gehört, obwohl es sie schon lange gibt. Auch scheint es kein Problem, wenn ein Herr Gates fast im Alleingang versucht, globale Gesundheits-, Klima-, Ernährungs- oder Bevölkerungspolitik zu betreiben – im Gegenteil. Im Jahr, als der Milliardär Donald Trump zum zweiten Mal ins Weiße Haus einzieht, ist das Echo in den Gesinnungsmedien dagegen enorm – und uniform, wer hätte das gedacht.
Der neue US-Präsident hat jedoch «Davos geerdet», wie Achgut es nannte. In seiner kurzen Rede beim Weltwirtschaftsforum verteidigte er seine Politik und stellte klar, er habe schlicht eine «Revolution des gesunden Menschenverstands» begonnen. Mit deutlichen Worten sprach er unter anderem von ersten Maßnahmen gegen den «Green New Scam», und von einem «Erlass, der jegliche staatliche Zensur beendet»:
«Unsere Regierung wird die Äußerungen unserer eigenen Bürger nicht mehr als Fehlinformation oder Desinformation bezeichnen, was die Lieblingswörter von Zensoren und derer sind, die den freien Austausch von Ideen und, offen gesagt, den Fortschritt verhindern wollen.»
Wie der «Trumpismus» letztlich einzuordnen ist, muss jeder für sich selbst entscheiden. Skepsis ist definitiv angebracht, denn «einer von uns» sind weder der Präsident noch seine auserwählten Teammitglieder. Ob sie irgendeinen Sumpf trockenlegen oder Staatsverbrechen aufdecken werden oder was aus WHO- und Klimaverträgen wird, bleibt abzuwarten.
Das WHO-Dekret fordert jedenfalls die Übertragung der Gelder auf «glaubwürdige Partner», die die Aktivitäten übernehmen könnten. Zufällig scheint mit «Impfguru» Bill Gates ein weiterer Harris-Unterstützer kürzlich das Lager gewechselt zu haben: Nach einem gemeinsamen Abendessen zeigte er sich «beeindruckt» von Trumps Interesse an der globalen Gesundheit.
Mit dem Projekt «Stargate» sind weitere dunkle Wolken am Erwartungshorizont der Fangemeinde aufgezogen. Trump hat dieses Joint Venture zwischen den Konzernen OpenAI, Oracle, und SoftBank als das «größte KI-Infrastrukturprojekt der Geschichte» angekündigt. Der Stein des Anstoßes: Oracle-CEO Larry Ellison, der auch Fan von KI-gestützter Echtzeit-Überwachung ist, sieht einen weiteren potenziellen Einsatz der künstlichen Intelligenz. Sie könne dazu dienen, Krebserkrankungen zu erkennen und individuelle mRNA-«Impfstoffe» zur Behandlung innerhalb von 48 Stunden zu entwickeln.
Warum bitte sollten sich diese superreichen «Eliten» ins eigene Fleisch schneiden und direkt entgegen ihren eigenen Interessen handeln? Weil sie Menschenfreunde, sogenannte Philanthropen sind? Oder vielleicht, weil sie ein schlechtes Gewissen haben und ihre Schuld kompensieren müssen? Deswegen jedenfalls brauchen «Linke» laut Robert Willacker, einem deutschen Politikberater mit brasilianischen Wurzeln, rechte Parteien – ein ebenso überraschender wie humorvoller Erklärungsansatz.
Wenn eine Krähe der anderen kein Auge aushackt, dann tut sie das sich selbst noch weniger an. Dass Millionäre ernsthaft ihre eigene Besteuerung fordern oder Machteliten ihren eigenen Einfluss zugunsten anderer einschränken würden, halte ich für sehr unwahrscheinlich. So etwas glaube ich erst, wenn zum Beispiel die Rüstungsindustrie sich um Friedensverhandlungen bemüht, die Pharmalobby sich gegen institutionalisierte Korruption einsetzt, Zentralbanken ihre CBDC-Pläne für Bitcoin opfern oder der ÖRR die Abschaffung der Rundfunkgebühren fordert.
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ 6e0ea5d6:0327f353
2025-04-14 15:11:17Ascolta.
We live in times where the average man is measured by the speeches he gives — not by the commitments he keeps. People talk about dreams, goals, promises… but what truly remains is what’s honored in the silence of small gestures, in actions that don’t seek applause, in attitudes unseen — yet speak volumes.
Punctuality, for example. Showing up on time isn’t about the clock. It’s about respect. Respect for another’s time, yes — but more importantly, respect for one’s own word. A man who is late without reason is already running late in his values. And the one who excuses his own lateness with sweet justifications slowly gets used to mediocrity.
Keeping your word is more than fulfilling promises. It is sealing, with the mouth, what the body must later uphold. Every time a man commits to something, he creates a moral debt with his own dignity. And to break that commitment is to declare bankruptcy — not in the eyes of others, but in front of himself.
And debts? Even the small ones — or especially the small ones — are precise thermometers of character. A forgotten sum, an unpaid favor, a commitment left behind… all of these reveal the structure of the inner building that man resides in. He who neglects the small is merely rehearsing for his future collapse.
Life, contrary to what the reckless say, is not built on grand deeds. It is built with small bricks, laid with almost obsessive precision. The truly great man is the one who respects the details — recognizing in them a code of conduct.
In Sicily, especially in the streets of Palermo, I learned early on that there is more nobility in paying a five-euro debt on time than in flaunting riches gained without word, without honor, without dignity.
As they say in Palermo: L’uomo si conosce dalle piccole cose.
So, amico mio, Don’t talk to me about greatness if you can’t show up on time. Don’t talk to me about respect if your word is fickle. And above all, don’t talk to me about honor if you still owe what you once promised — no matter how small.
Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ 6e0ea5d6:0327f353
2025-04-14 15:10:58Ascolta bene.
A man’s collapse never begins on the battlefield.
It begins in the invisible antechamber of his own mind.
Before any public fall, there is an ignored internal whisper—
a small, quiet, private decision that gradually drags him toward ruin.No empire ever fell without first rotting from within.
The world does not destroy a man who hasn’t first surrendered to himself.
The enemy outside only wins when it finds space in the void the man has silently carved.**Non ti sbagliare ** — there are no armies more ruthless than undisciplined thoughts.
There are no blows more fatal than the ones we deal ourselves:
with small concessions, well-crafted excuses,
and the slow deterioration of our integrity.
What people call failure is nothing more than the logical outcome
of a sequence of internal betrayals.Afraid of the world? Sciocchezze.
But a man who’s already bowed before his own weaknesses—
he needs no enemies.
He digs his own grave, chooses the epitaph,
and the only thing the world does is toss in some dirt.Capisci?
Strength isn’t the absence of falling, but the presence of resistance.
The true battle isn’t external.
It takes place within—where there’s only you, your conscience, and the mirror.
And it’s in that silent courtroom where everything is decided.The discipline to say “no” to yourself
is more noble than any public glory.
Self-control is more valuable than any victory over others.In Sicily, we learn early:
“Cu s’abbrazza cu’ so’ nemicu, si scorda la faccia di l’amicu.”
He who embraces his enemy forgets the face of his friend.
The most dangerous enemy is the one you feed daily with self-indulgence.
And the most relentless confrontation is the one you avoid in front of the mirror.So don’t talk to me about external defeats.
Tell me where inside you the weakness began.
Tell me the exact moment you abandoned what you believed in, in the name of ease.
Because a man only falls before the world… after falling before himself.Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ c631e267:c2b78d3e
2025-01-18 09:34:51Die grauenvollste Aussicht ist die der Technokratie – \ einer kontrollierenden Herrschaft, \ die durch verstümmelte und verstümmelnde Geister ausgeübt wird. \ Ernst Jünger
«Davos ist nicht mehr sexy», das Weltwirtschaftsforum (WEF) mache Davos kaputt, diese Aussagen eines Einheimischen las ich kürzlich in der Handelszeitung. Während sich einige vor Ort enorm an der «teuersten Gewerbeausstellung der Welt» bereicherten, würden die negativen Begleiterscheinungen wie Wohnungsnot und Niedergang der lokalen Wirtschaft immer deutlicher.
Nächsten Montag beginnt in dem Schweizer Bergdorf erneut ein Jahrestreffen dieses elitären Clubs der Konzerne, bei dem man mit hochrangigen Politikern aus aller Welt und ausgewählten Vertretern der Systemmedien zusammenhocken wird. Wie bereits in den vergangenen vier Jahren wird die Präsidentin der EU-Kommission, Ursula von der Leyen, in Begleitung von Klaus Schwab ihre Grundsatzansprache halten.
Der deutsche WEF-Gründer hatte bei dieser Gelegenheit immer höchst lobende Worte für seine Landsmännin: 2021 erklärte er sich «stolz, dass Europa wieder unter Ihrer Führung steht» und 2022 fand er es bemerkenswert, was sie erreicht habe angesichts des «erstaunlichen Wandels», den die Welt in den vorangegangenen zwei Jahren erlebt habe; es gebe nun einen «neuen europäischen Geist».
Von der Leyens Handeln während der sogenannten Corona-«Pandemie» lobte Schwab damals bereits ebenso, wie es diese Woche das Karlspreis-Direktorium tat, als man der Beschuldigten im Fall Pfizergate die diesjährige internationale Auszeichnung «für Verdienste um die europäische Einigung» verlieh. Außerdem habe sie die EU nicht nur gegen den «Aggressor Russland», sondern auch gegen die «innere Bedrohung durch Rassisten und Demagogen» sowie gegen den Klimawandel verteidigt.
Jene Herausforderungen durch «Krisen epochalen Ausmaßes» werden indes aus dem Umfeld des WEF nicht nur herbeigeredet – wie man alljährlich zur Zeit des Davoser Treffens im Global Risks Report nachlesen kann, der zusammen mit dem Versicherungskonzern Zurich erstellt wird. Seit die Globalisten 2020/21 in der Praxis gesehen haben, wie gut eine konzertierte und konsequente Angst-Kampagne funktionieren kann, geht es Schlag auf Schlag. Sie setzen alles daran, Schwabs goldenes Zeitfenster des «Great Reset» zu nutzen.
Ziel dieses «großen Umbruchs» ist die totale Kontrolle der Technokraten über die Menschen unter dem Deckmantel einer globalen Gesundheitsfürsorge. Wie aber könnte man so etwas erreichen? Ein Mittel dazu ist die «kreative Zerstörung». Weitere unabdingbare Werkzeug sind die Einbindung, ja Gleichschaltung der Medien und der Justiz.
Ein «Great Mental Reset» sei die Voraussetzung dafür, dass ein Großteil der Menschen Einschränkungen und Manipulationen wie durch die Corona-Maßnahmen praktisch kritik- und widerstandslos hinnehme, sagt der Mediziner und Molekulargenetiker Michael Nehls. Er meint damit eine regelrechte Umprogrammierung des Gehirns, wodurch nach und nach unsere Individualität und unser soziales Bewusstsein eliminiert und durch unreflektierten Konformismus ersetzt werden.
Der aktuelle Zustand unserer Gesellschaften ist auch für den Schweizer Rechtsanwalt Philipp Kruse alarmierend. Durch den Umgang mit der «Pandemie» sieht er die Grundlagen von Recht und Vernunft erschüttert, die Rechtsstaatlichkeit stehe auf dem Prüfstand. Seiner dringenden Mahnung an alle Bürger, die Prinzipien von Recht und Freiheit zu verteidigen, kann ich mich nur anschließen.
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ 8947a945:9bfcf626
2024-10-17 08:06:55สวัสดีทุกคนบน Nostr ครับ รวมไปถึง watchersและ ผู้ติดตามของผมจาก Deviantart และ platform งานศิลปะอื่นๆนะครับ
ตั้งแต่ต้นปี 2024 ผมใช้ AI เจนรูปงานตัวละครสาวๆจากอนิเมะ และเปิด exclusive content ให้สำหรับผู้ที่ชื่นชอบผลงานของผมเป็นพิเศษ
ผมโพสผลงานผมทั้งหมดไว้ที่เวบ Deviantart และค่อยๆสร้างฐานผู้ติดตามมาเรื่อยๆอย่างค่อยเป็นค่อยไปมาตลอดครับ ทุกอย่างเติบโตไปเรื่อยๆของมัน ส่วนตัวผมมองว่ามันเป็นพิร์ตธุรกิจออนไลน์ ของผมพอร์ตนึงได้เลย
เมื่อวันที่ 16 กย.2024 มีผู้ติดตามคนหนึ่งส่งข้อความส่วนตัวมาหาผม บอกว่าชื่นชอบผลงานของผมมาก ต้องการจะขอซื้อผลงาน แต่ขอซื้อเป็น NFT นะ เสนอราคาซื้อขายต่อชิ้นที่สูงมาก หลังจากนั้นผมกับผู้ซื้อคนนี้พูดคุยกันในเมล์ครับ
นี่คือข้อสรุปสั่นๆจากการต่อรองซื้อขายครับ
(หลังจากนี้ผมขอเรียกผู้ซื้อว่า scammer นะครับ เพราะไพ่มันหงายมาแล้ว ว่าเขาคือมิจฉาชีพ)
- Scammer รายแรก เลือกผลงานที่จะซื้อ เสนอราคาซื้อที่สูงมาก แต่ต้องเป็นเวบไซต์ NFTmarket place ที่เขากำหนดเท่านั้น มันทำงานอยู่บน ERC20 ผมเข้าไปดูเวบไซต์ที่ว่านี้แล้วรู้สึกว่ามันดูแปลกๆครับ คนที่จะลงขายผลงานจะต้องใช้ email ในการสมัครบัญชีซะก่อน ถึงจะผูก wallet อย่างเช่น metamask ได้ เมื่อผูก wallet แล้วไม่สามารถเปลี่ยนได้ด้วย ตอนนั้นผมใช้ wallet ที่ไม่ได้ link กับ HW wallet ไว้ ทดลองสลับ wallet ไปๆมาๆ มันทำไม่ได้ แถมลอง log out แล้ว เลข wallet ก็ยังคาอยู่อันเดิม อันนี้มันดูแปลกๆแล้วหนึ่งอย่าง เวบนี้ค่า ETH ในการ mint 0.15 - 0.2 ETH … ตีเป็นเงินบาทนี่แพงบรรลัยอยู่นะครับ
-
Scammer รายแรกพยายามชักจูงผม หว่านล้อมผมว่า แหม เดี๋ยวเขาก็มารับซื้องานผมน่า mint งานเสร็จ รีบบอกเขานะ เดี๋ยวเขารีบกดซื้อเลย พอขายได้กำไร ผมก็ได้ค่า gas คืนได้ แถมยังได้กำไรอีก ไม่มีอะไรต้องเสีนจริงมั้ย แต่มันเป้นความโชคดีครับ เพราะตอนนั้นผมไม่เหลือทุนสำรองที่จะมาซื้อ ETH ได้ ผมเลยต่อรองกับเขาตามนี้ครับ :
-
ผมเสนอว่า เอางี้มั้ย ผมส่งผลงานของผมแบบ low resolution ให้ก่อน แลกกับให้เขาช่วยโอน ETH ที่เป็นค่า mint งานมาให้หน่อย พอผมได้ ETH แล้ว ผมจะ upscale งานของผม แล้วเมล์ไปให้ ใจแลกใจกันไปเลย ... เขาไม่เอา
- ผมเสนอให้ไปซื้อที่ร้านค้าออนไลน์ buymeacoffee ของผมมั้ย จ่ายเป็น USD ... เขาไม่เอา
- ผมเสนอให้ซื้อขายผ่าน PPV lightning invoice ที่ผมมีสิทธิ์เข้าถึง เพราะเป็น creator ของ Creatr ... เขาไม่เอา
- ผมยอกเขาว่างั้นก็รอนะ รอเงินเดือนออก เขาบอก ok
สัปดาห์ถัดมา มี scammer คนที่สองติดต่อผมเข้ามา ใช้วิธีการใกล้เคียงกัน แต่ใช้คนละเวบ แถมเสนอราคาซื้อที่สูงกว่าคนแรกมาก เวบที่สองนี้เลวร้ายค่าเวบแรกอีกครับ คือต้องใช้เมล์สมัครบัญชี ไม่สามารถผูก metamask ได้ พอสมัครเสร็จจะได้ wallet เปล่าๆมาหนึ่งอัน ผมต้องโอน ETH เข้าไปใน wallet นั้นก่อน เพื่อเอาไปเป็นค่า mint NFT 0.2 ETH
ผมบอก scammer รายที่สองว่า ต้องรอนะ เพราะตอนนี้กำลังติดต่อซื้อขายอยู่กับผู้ซื้อรายแรกอยู่ ผมกำลังรอเงินเพื่อมาซื้อ ETH เป็นต้นทุนดำเนินงานอยู่ คนคนนี้ขอให้ผมส่งเวบแรกไปให้เขาดูหน่อย หลังจากนั้นไม่นานเขาเตือนผมมาว่าเวบแรกมันคือ scam นะ ไม่สามารถถอนเงินออกมาได้ เขายังส่งรูป cap หน้าจอที่คุยกับผู้เสียหายจากเวบแรกมาให้ดูว่าเจอปัญหาถอนเงินไม่ได้ ไม่พอ เขายังบลัฟ opensea ด้วยว่าลูกค้าขายงานได้ แต่ถอนเงินไม่ได้
Opensea ถอนเงินไม่ได้ ตรงนี้แหละครับคือตัวกระตุกต่อมเอ๊ะของผมดังมาก เพราะ opensea อ่ะ ผู้ใช้ connect wallet เข้ากับ marketplace โดยตรง ซื้อขายกันเกิดขึ้น เงินวิ่งเข้าวิ่งออก wallet ของแต่ละคนโดยตรงเลย opensea เก็บแค่ค่า fee ในการใช้ platform ไม่เก็บเงินลูกค้าไว้ แถมปีนี้ค่า gas fee ก็ถูกกว่า bull run cycle 2020 มาก ตอนนี้ค่า gas fee ประมาณ 0.0001 ETH (แต่มันก็แพงกว่า BTC อยู่ดีอ่ะครับ)
ผมเลยเอาเรื่องนี้ไปปรึกษาพี่บิท แต่แอดมินมาคุยกับผมแทน ทางแอดมินแจ้งว่ายังไม่เคยมีเพื่อนๆมาปรึกษาเรื่องนี้ กรณีที่ผมทักมาถามนี่เป็นรายแรกเลย แต่แอดมินให้ความเห็นไปในทางเดียวกับสมมุติฐานของผมว่าน่าจะ scam ในเวลาเดียวกับผมเอาเรื่องนี้ไปถามในเพจ NFT community คนไทนด้วย ได้รับการ confirm ชัดเจนว่า scam และมีคนไม่น้อยโดนหลอก หลังจากที่ผมรู้ที่มาแล้ว ผมเลยเล่นสงครามปั่นประสาท scammer ทั้งสองคนนี้ครับ เพื่อดูว่าหลอกหลวงมิจฉาชีพจริงมั้ย
โดยวันที่ 30 กย. ผมเลยปั่นประสาน scammer ทั้งสองรายนี้ โดยการ mint ผลงานที่เขาเสนอซื้อนั่นแหละ ขึ้น opensea แล้วส่งข้อความไปบอกว่า
mint ให้แล้วนะ แต่เงินไม่พอจริงๆว่ะโทษที เลย mint ขึ้น opensea แทน พอดีบ้านจน ทำได้แค่นี้ไปถึงแค่ opensea รีบไปซื้อล่ะ มีคนจ้องจะคว้างานผมเยอะอยู่ ผมไม่คิด royalty fee ด้วยนะเฮ้ย เอาไปขายต่อไม่ต้องแบ่งกำไรกับผม
เท่านั้นแหละครับ สงครามจิตวิทยาก็เริ่มขึ้น แต่เขาจนมุม กลืนน้ำลายตัวเอง ช็อตเด็ดคือ
เขา : เนี่ยอุส่ารอ บอกเพื่อนในทีมว่าวันจันทร์ที่ 30 กย. ได้ของแน่ๆ เพื่อนๆในทีมเห็นงานผมแล้วมันสวยจริง เลยใส่เงินเต็มที่ 9.3ETH (+ capture screen ส่งตัวเลขยอดเงินมาให้ดู)ไว้รอโดยเฉพาะเลยนะ ผม : เหรอ ... งั้น ขอดู wallet address ที่มี transaction มาให้ดูหน่อยสิ เขา : 2ETH นี่มัน 5000$ เลยนะ ผม : แล้วไง ขอดู wallet address ที่มีการเอายอดเงิน 9.3ETH มาให้ดูหน่อย ไหนบอกว่าเตรียมเงินไว้มากแล้วนี่ ขอดูหน่อย ว่าใส่ไว้เมื่อไหร่ ... เอามาแค่ adrress นะเว้ย ไม่ต้องทะลึ่งส่ง seed มาให้ เขา : ส่งรูปเดิม 9.3 ETH มาให้ดู ผม : รูป screenshot อ่ะ มันไม่มีความหมายหรอกเว้ย ตัดต่อเอาก็ได้ง่ายจะตาย เอา transaction hash มาดู ไหนว่าเตรียมเงินไว้รอ 9.3ETH แล้วอยากซื้องานผมจนตัวสั่นเลยไม่ใช่เหรอ ถ้าจะส่ง wallet address มาให้ดู หรือจะช่วยส่ง 0.15ETH มาให้ยืม mint งานก่อน แล้วมากดซื้อ 2ETH ไป แล้วผมใช้ 0.15ETH คืนให้ก็ได้ จะซื้อหรือไม่ซื้อเนี่ย เขา : จะเอา address เขาไปทำไม ผม : ตัดจบ รำคาญ ไม่ขายให้ละ เขา : 2ETH = 5000 USD เลยนะ ผม : แล้วไง
ผมเลยเขียนบทความนี้มาเตือนเพื่อนๆพี่ๆทุกคนครับ เผื่อใครกำลังเปิดพอร์ตทำธุรกิจขาย digital art online แล้วจะโชคดี เจอของดีแบบผม
ทำไมผมถึงมั่นใจว่ามันคือการหลอกหลวง แล้วคนโกงจะได้อะไร
อันดับแรกไปพิจารณาดู opensea ครับ เป็นเวบ NFTmarketplace ที่ volume การซื้อขายสูงที่สุด เขาไม่เก็บเงินของคนจะซื้อจะขายกันไว้กับตัวเอง เงินวิ่งเข้าวิ่งออก wallet ผู้ซื้อผู้ขายเลย ส่วนทางเวบเก็บค่าธรรมเนียมเท่านั้น แถมค่าธรรมเนียมก็ถูกกว่าเมื่อปี 2020 เยอะ ดังนั้นการที่จะไปลงขายงานบนเวบ NFT อื่นที่ค่า fee สูงกว่ากันเป็นร้อยเท่า ... จะทำไปทำไม
ผมเชื่อว่า scammer โกงเงินเจ้าของผลงานโดยการเล่นกับความโลภและความอ่อนประสบการณ์ของเจ้าของผลงานครับ เมื่อไหร่ก็ตามที่เจ้าของผลงานโอน ETH เข้าไปใน wallet เวบนั้นเมื่อไหร่ หรือเมื่อไหร่ก็ตามที่จ่ายค่า fee ในการ mint งาน เงินเหล่านั้นสิ่งเข้ากระเป๋า scammer ทันที แล้วก็จะมีการเล่นตุกติกต่อแน่นอนครับ เช่นถอนไม่ได้ หรือซื้อไม่ได้ ต้องโอนเงินมาเพิ่มเพื่อปลดล็อค smart contract อะไรก็ว่าไป แล้วคนนิสัยไม่ดีพวกเนี้ย ก็จะเล่นกับความโลภของคน เอาราคาเสนอซื้อที่สูงโคตรๆมาล่อ ... อันนี้ไม่ว่ากัน เพราะบนโลก NFT รูปภาพบางรูปที่ไม่ได้มีความเป็นศิลปะอะไรเลย มันดันขายกันได้ 100 - 150 ETH ศิลปินที่พยายามสร้างตัวก็อาจจะมองว่า ผลงานเรามีคนรับซื้อ 2 - 4 ETH ต่องานมันก็มากพอแล้ว (จริงๆมากเกินจนน่าตกใจด้วยซ้ำครับ)
บนโลกของ BTC ไม่ต้องเชื่อใจกัน โอนเงินไปหากันได้ ปิดสมุดบัญชีได้โดยไม่ต้องเชื่อใจกัน
บบโลกของ ETH "code is law" smart contract มีเขียนอยู่แล้ว ไปอ่าน มันไม่ได้ยากมากในการทำความเข้าใจ ดังนั้น การจะมาเชื่อคำสัญญาจากคนด้วยกัน เป็นอะไรที่ไม่มีเหตุผล
ผมไปเล่าเรื่องเหล่านี้ให้กับ community งานศิลปะ ก็มีทั้งเสียงตอบรับที่ดี และไม่ดีปนกันไป มีบางคนยืนยันเสียงแข็งไปในทำนองว่า ไอ้เรื่องแบบเนี้ยไม่ได้กินเขาหรอก เพราะเขาตั้งใจแน่วแน่ว่างานศิลป์ของเขา เขาไม่เอาเข้ามายุ่งในโลก digital currency เด็ดขาด ซึ่งผมก็เคารพมุมมองเขาครับ แต่มันจะดีกว่ามั้ย ถ้าเราเปิดหูเปิดตาให้ทันเทคโนโลยี โดยเฉพาะเรื่อง digital currency , blockchain โดนโกงทีนึงนี่คือหมดตัวกันง่ายกว่าเงิน fiat อีก
อยากจะมาเล่าให้ฟังครับ และอยากให้ช่วยแชร์ไปให้คนรู้จักด้วย จะได้ระวังตัวกัน
Note
- ภาพประกอบ cyber security ทั้งสองนี่ของผมเองครับ ทำเอง วางขายบน AdobeStock
- อีกบัญชีนึงของผม "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw กำลังค่อยๆเอาผลงานจากโลกข้างนอกเข้ามา nostr ครับ ตั้งใจจะมาสร้างงานศิลปะในนี้ เพื่อนๆที่ชอบงาน จะได้ไม่ต้องออกไปหาที่ไหน
ผลงานของผมครับ - Anime girl fanarts : HikariHarmony - HikariHarmony on Nostr - General art : KeshikiRakuen - KeshikiRakuen อาจจะเป็นบัญชี nostr ที่สามของผม ถ้าไหวครับ
-
@ 81650982:299380fa
2025-04-30 11:16:42Let us delve into Monero (XMR). Among the proponents of various altcoins, Monero arguably commands one of the most dedicated followings, perhaps second only to Ethereum. Unlike many altcoins where even investors often harbor speculative, short-term intentions, the genuine belief within the Monero community suggests an inherent appeal to the chain itself.
The primary advantage touted by Monero (and similar so-called "privacy coins") is its robust privacy protection features. The demand for anonymous payment systems, tracing its lineage back to David Chaum, predates even the inception of Bitcoin. Monero's most heavily promoted strength, relative to Bitcoin, is that its privacy features are enabled by default.
This relates to the concept of the "anonymity set." To guarantee anonymity, a user must blend into a crowd of ordinary users. The larger the group one hides within, the more difficult it becomes for an external observer to identify any specific individual. From the perspective of Monero advocates, Bitcoin's default transaction model is overly transparent, clearly revealing the flow of funds between addresses. While repeated mixing can enhance anonymity in Bitcoin, the fact that users must actively undertake such measures presents a significant hurdle. More critically, proponents argue, the very group engaging in such deliberate obfuscation is precisely the group one doesn't want to be associated with for effective anonymity. Hiding requires blending with the ordinary, not merely mixing with others who are also actively trying to hide — the latter, they contend, is akin to criminals mixing only with other criminals.
This is a valid point. For instance, there's a substantial difference between a messenger app offering end-to-end encryption for all communications by default, versus one requiring users to explicitly create a "secret chat" for encryption. While I personally believe that increased self-custody of Bitcoin in personal wallets, acquisition through direct peer-to-peer payments rather than exchange purchases, and the widespread adoption of the Lightning Network would make tracing significantly harder even without explicit mixing efforts, let us concede, for the sake of argument, that Bitcoin's base-layer anonymity might not drastically improve even in such a future scenario.
Nevertheless, Monero's long-term prospects appear considerably constrained when focusing purely on technical limitations, setting aside economic factors or incentive models for now. While discussions on economics can often be countered with "That's just your speculation," technical constraints present more objective facts and leave less room for dispute.
Monero's most fundamental problem is its lack of scalability. To briefly explain how Monero obfuscates the sender: it includes other addresses alongside the true sender's address in the 'from' field and attaches what appears to be valid signatures for all of them. With a default setting of 10 decoys (plus the real spender, making a ring size of 11), the signature size naturally becomes substantially larger than Bitcoin's. Since an observer cannot determine which of the 11 is the true sender, and these decoys are arbitrary outputs selected from the blockchain belonging to other users, anonymity is indeed enhanced. While the sender cannot generate individually valid signatures for the decoy outputs (as they don't own the private keys), the use of a ring signature mathematically proves that one member of the ring authorized the transaction, allowing it to pass network validation.
The critical issue is that this results in transaction sizes several times larger than Bitcoin's. Bitcoin already faces criticism for being relatively expensive and slow. Monero's structure imposes a burden that is multiples greater. One might question the relationship between transaction data size and transaction fees/speed. However, the perceived slowness of blockchains isn't typically due to inefficient code, but rather the strict limitations imposed on block size (or equivalent throughput constraints) to maintain decentralization. Therefore, larger transaction sizes directly translate into throughput limitations and upward pressure on fees. If someone claims Monero fees are currently lower than Bitcoin's, that is merely a consequence of its significantly lower usage. Should Monero's transaction volume reach even a fraction of Bitcoin's, its current architecture would struggle severely under the load.
To address this, Monero implemented a dynamic block size limit instead of a hardcoded one. However, this is not a comprehensive solution. If the block size increases proportionally with usage, a future where Monero achieves widespread adoption as currency — implying usage potentially hundreds, thousands, or even hundreds of thousands of times greater than today — would render the blockchain size extremely difficult to manage for ordinary node operators. Global internet traffic might be consumed by Monero transactions, or at the very least, the bandwidth and storage costs could exceed what individuals can reasonably bear.
Blockchains, by their nature, must maintain a size manageable enough for individuals to run full nodes, necessitating strict block size limits (or equivalent constraints in blockless designs). This fundamental requirement is the root cause of limited transaction speed and rising fees. Consequently, the standard approach to blockchain scaling involves Layer 2 solutions like the Lightning Network. The problem is, implementing such solutions on Monero is extremely challenging.
Layer 2 solutions, while varying in specific implementation details across different blockchains, generally rely heavily on the transparency of on-chain transactions. They typically involve sophisticated smart contracts built upon the ability to publicly verify on-chain states and events. Monero's inherent opacity, hiding crucial details of on-chain transactions, makes it exceptionally difficult for two mutually untrusting parties to reach the necessary consensus and cryptographic agreements (like establishing payment channels with verifiable state transitions and dispute mechanisms) that underpin such Layer 2 systems. The fact that Monero, despite existing for several years, still lacks a functional, widely adopted Layer 2 implementation suggests that this remains an unsolved and technically formidable challenge. While theoretical proposals exist, their real-world feasibility remains uncertain and would likely require significant breakthroughs in cryptographic protocol design.
Furthermore, Monero faces another severe scaling challenge related to its core privacy mechanism. As mentioned, decoy outputs are used to obscure the true sender. An astute observer might wonder: If a third party cannot distinguish the real spender, could the real spender potentially double-spend their funds later? Or could someone's funds become unusable simply because they were chosen as a decoy in another transaction? Naturally, Monero's developers anticipated this. The solution employed involves key images.
When an output is genuinely spent within a ring signature, a unique cryptographic identifier called a "key image" is derived from the real output and the spender's private key. This derivation is one-way (the key image cannot be used to reveal the original output or key). This key image is recorded on the blockchain. When validating a new transaction, the network checks if the submitted key image has already appeared in the history. If it exists, the transaction is rejected as a double-spend attempt. The crucial implication is that this set of used key images can never be pruned. Deleting historical key images would directly enable double-spending.
Therefore, Monero's state size — the data that full nodes must retain and check against — grows linearly and perpetually with the total number of transactions ever processed on the network.
Summary In summary, Monero faces critical technical hurdles:
Significantly Larger Transaction Sizes: The use of ring signatures for anonymity results in transaction data sizes several times larger than typical cryptocurrencies like Bitcoin.
Inherent Scalability Limitations: The large transaction size, combined with the necessity of strict block throughput limits to preserve decentralization, creates severe scalability bottlenecks regarding transaction speed and cost under significant load. Dynamic block sizes, while helpful in the short term, do not constitute a viable long-term solution for broad decentralization.
Layer 2 Implementation Difficulty: Monero's fundamental opacity makes implementing established Layer 2 scaling solutions (like payment channels) extremely difficult with current approaches. The absence of a widely adopted solution to date indicates that this remains a major unresolved challenge.
Unprunable, Linearly Growing State: The key image mechanism required to prevent double-spending mandates the perpetual storage of data proportional to the entire transaction history, unlike Bitcoin where nodes can prune historical blocks and primarily need to maintain the current UTXO set (whose size depends on usage patterns, not total history).
These technical constraints raise legitimate concerns about Monero's ability to scale effectively and achieve widespread adoption in the long term. While ongoing research may alleviate some of these issues, at present they represent formidable challenges that any privacy-focused cryptocurrency must contend with.
-
@ c21b1a6c:0cd4d170
2025-04-14 14:41:20🧾 Progress Report Two
Hey everyone! I’m back with another progress report for Formstr, a part of the now completed grant from nostr:npub10pensatlcfwktnvjjw2dtem38n6rvw8g6fv73h84cuacxn4c28eqyfn34f . This update covers everything we’ve built since the last milestone — including polish, performance, power features, and plenty of bug-squashing.
🏗️ What’s New Since Last Time?
This quarter was less about foundational rewrites and more about production hardening and real-world feedback. With users now onboard, our focus shifted to polishing UX, fixing issues, and adding new features that made Formstr easier and more powerful to use.
✨ New Features & UX Improvements
- Edit Existing Forms
- Form Templates
- Drag & Drop Enhancements (especially for mobile)
- New Public Forms UX (card-style layout)
- FAQ & Support Sections
- Relay Modal for Publishing
- Skeleton Loaders and subtle UI Polish
🐛 Major Bug Fixes
- Fixed broken CSV exports when responses were empty
- Cleaned up mobile rendering issues for public forms
- Resolved blank.ts export issues and global form bugs
- Fixed invalid
npub
strings in the admin flow - Patched response handling for private forms
- Lots of small fixes for titles, drafts, embedded form URLs, etc.
🔐 Access Control & Privacy
- Made forms private by default
- Fixed multiple issues around form visibility, access control UIs, and anonymous submissions
- Improved detection of pubkey issues in shared forms
🚧 Some Notable In-Progress Features
The following features are actively being developed, and many are nearing completion:
-
Conditional Questions:
This one’s been tough to crack, but we’re close!
Work in progress bykeraliss
and myself:
👉 PR #252 -
Downloadable Forms:
Fully-contained downloadable HTML versions of forms.
Being led bycasyazmon
with initial code by Basanta Goswami
👉 PR #274 -
OLLAMA Integration (Self-Hosted LLMs):
Users will be able to create forms using locally hosted LLMs.
PR byashu01304
👉 PR #247 -
Sections in Forms:
Work just started on adding section support!
Small PoC PR bykeraliss
:
👉 PR #217
🙌 Huge Thanks to New Contributors
We've had amazing contributors this cycle. Big thanks to:
- Aashutosh Gandhi (ashu01304) – drag-and-drop enhancements, OLLAMA integration
- Amaresh Prasad (devAmaresh) – fixed npub and access bugs
- Biresh Biswas (Billa05) – skeleton loaders
- Shashank Shekhar Singh (Shashankss1205) – bugfixes, co-authored image patches
- Akap Azmon Deh-nji (casyazmon) – CSV fixes, downloadable forms
- Manas Ranjan Dash (mdash3735) – bug fixes
- Basanta Goswami – initial groundwork for downloadable forms
- keraliss – ongoing work on conditional questions and sections
We also registered for the Summer of Bitcoin program and have been receiving contributions from some incredibly bright new applicants.
🔍 What’s Still Coming?
From the wishlist I committed to during the grant, here’s what’s still in the oven:
-[x] Upgrade to nip-44 - [x] Access Controlled Forms: A Form will be able to have multiple admins and Editors. - [x] Private Forms and Fixed Participants: Enncrypt a form and only allow certain npubs to fill it. - [x] Edit Past Forms: Being able to edit an existing form. - [x] Edit Past Forms
- [ ] Conditional Rendering (in progress)
- [ ] Sections (just started)
- [ ] Integrations - OLLAMA / AI-based Form Generation (near complete)
- [ ] Paid Surveys
- [ ] NIP-42 Private Relay support
❌ What’s De-Prioritized?
- Nothing is de-prioritized now especially since Ollama Integration got re-prioritized (thanks to Summer Of Bitcoin). We are a little delayed on Private Relays support but it's now becoming a priority and in active development. Zap Surveys will be coming soon too.
💸 How Funds Were Used
- Paid individual contributors for their work.
- Living expenses to allow full-time focus on development
🧠 Closing Thoughts
Things feel like they’re coming together now. We’re out of "beta hell", starting to see real adoption, and most importantly, gathering feedback from real users. That’s helping us make smarter choices and move fast without breaking too much.
Stay tuned for the next big drop — and in the meantime, try creating a form at formstr.app, and let me know what you think!
-
@ a95c6243:d345522c
2025-01-13 10:09:57Ich begann, Social Media aufzubauen, \ um den Menschen eine Stimme zu geben. \ Mark Zuckerberg
Sind euch auch die Tränen gekommen, als ihr Mark Zuckerbergs Wendehals-Deklaration bezüglich der Meinungsfreiheit auf seinen Portalen gehört habt? Rührend, oder? Während er früher die offensichtliche Zensur leugnete und später die Regierung Biden dafür verantwortlich machte, will er nun angeblich «die Zensur auf unseren Plattformen drastisch reduzieren».
«Purer Opportunismus» ob des anstehenden Regierungswechsels wäre als Klassifizierung viel zu kurz gegriffen. Der jetzige Schachzug des Meta-Chefs ist genauso Teil einer kühl kalkulierten Business-Strategie, wie es die 180 Grad umgekehrte Praxis vorher war. Social Media sind ein höchst lukratives Geschäft. Hinzu kommt vielleicht noch ein bisschen verkorkstes Ego, weil derartig viel Einfluss und Geld sicher auch auf die Psyche schlagen. Verständlich.
«Es ist an der Zeit, zu unseren Wurzeln der freien Meinungsäußerung auf Facebook und Instagram zurückzukehren. Ich begann, Social Media aufzubauen, um den Menschen eine Stimme zu geben», sagte Zuckerberg.
Welche Wurzeln? Hat der Mann vergessen, dass er von der Überwachung, dem Ausspionieren und dem Ausverkauf sämtlicher Daten und digitaler Spuren sowie der Manipulation seiner «Kunden» lebt? Das ist knallharter Kommerz, nichts anderes. Um freie Meinungsäußerung geht es bei diesem Geschäft ganz sicher nicht, und das war auch noch nie so. Die Wurzeln von Facebook liegen in einem Projekt des US-Militärs mit dem Namen «LifeLog». Dessen Ziel war es, «ein digitales Protokoll vom Leben eines Menschen zu erstellen».
Der Richtungswechsel kommt allerdings nicht überraschend. Schon Anfang Dezember hatte Meta-Präsident Nick Clegg von «zu hoher Fehlerquote bei der Moderation» von Inhalten gesprochen. Bei der Gelegenheit erwähnte er auch, dass Mark sehr daran interessiert sei, eine aktive Rolle in den Debatten über eine amerikanische Führungsrolle im technologischen Bereich zu spielen.
Während Milliardärskollege und Big Tech-Konkurrent Elon Musk bereits seinen Posten in der kommenden Trump-Regierung in Aussicht hat, möchte Zuckerberg also nicht nur seine Haut retten – Trump hatte ihn einmal einen «Feind des Volkes» genannt und ihm lebenslange Haft angedroht –, sondern am liebsten auch mitspielen. KI-Berater ist wohl die gewünschte Funktion, wie man nach einem Treffen Trump-Zuckerberg hörte. An seine Verhaftung dachte vermutlich auch ein weiterer Multimilliardär mit eigener Social Media-Plattform, Pavel Durov, als er Zuckerberg jetzt kritisierte und gleichzeitig warnte.
Politik und Systemmedien drehen jedenfalls durch – was zu viel ist, ist zu viel. Etwas weniger Zensur und mehr Meinungsfreiheit würden die Freiheit der Bürger schwächen und seien potenziell vernichtend für die Menschenrechte. Zuckerberg setze mit dem neuen Kurs die Demokratie aufs Spiel, das sei eine «Einladung zum nächsten Völkermord», ernsthaft. Die Frage sei, ob sich die EU gegen Musk und Zuckerberg behaupten könne, Brüssel müsse jedenfalls hart durchgreifen.
Auch um die Faktenchecker macht man sich Sorgen. Für die deutsche Nachrichtenagentur dpa und die «Experten» von Correctiv, die (noch) Partner für Fact-Checking-Aktivitäten von Facebook sind, sei das ein «lukratives Geschäftsmodell». Aber möglicherweise werden die Inhalte ohne diese vermeintlichen Korrektoren ja sogar besser. Anders als Meta wollen jedoch Scholz, Faeser und die Tagesschau keine Fehler zugeben und zum Beispiel Correctiv-Falschaussagen einräumen.
Bei derlei dramatischen Befürchtungen wundert es nicht, dass der öffentliche Plausch auf X zwischen Elon Musk und AfD-Chefin Alice Weidel von 150 EU-Beamten überwacht wurde, falls es irgendwelche Rechtsverstöße geben sollte, die man ihnen ankreiden könnte. Auch der Deutsche Bundestag war wachsam. Gefunden haben dürften sie nichts. Das Ganze war eher eine Show, viel Wind wurde gemacht, aber letztlich gab es nichts als heiße Luft.
Das Anbiedern bei Donald Trump ist indes gerade in Mode. Die Weltgesundheitsorganisation (WHO) tut das auch, denn sie fürchtet um Spenden von über einer Milliarde Dollar. Eventuell könnte ja Elon Musk auch hier künftig aushelfen und der Organisation sowie deren größtem privaten Förderer, Bill Gates, etwas unter die Arme greifen. Nachdem Musks KI-Projekt xAI kürzlich von BlackRock & Co. sechs Milliarden eingestrichen hat, geht da vielleicht etwas.
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ a95c6243:d345522c
2025-01-03 20:26:47Was du bist hängt von drei Faktoren ab: \ Was du geerbt hast, \ was deine Umgebung aus dir machte \ und was du in freier Wahl \ aus deiner Umgebung und deinem Erbe gemacht hast. \ Aldous Huxley
Das brave Mitmachen und Mitlaufen in einem vorgegebenen, recht engen Rahmen ist gewiss nicht neu, hat aber gerade wieder mal Konjunktur. Dies kann man deutlich beobachten, eigentlich egal, in welchem gesellschaftlichen Bereich man sich umschaut. Individualität ist nur soweit angesagt, wie sie in ein bestimmtes Schema von «Diversität» passt, und Freiheit verkommt zur Worthülse – nicht erst durch ein gewisses Buch einer gewissen ehemaligen Regierungschefin.
Erklärungsansätze für solche Entwicklungen sind bekannt, und praktisch alle haben etwas mit Massenpsychologie zu tun. Der Herdentrieb, also der Trieb der Menschen, sich – zum Beispiel aus Unsicherheit oder Bequemlichkeit – lieber der Masse anzuschließen als selbstständig zu denken und zu handeln, ist einer der Erklärungsversuche. Andere drehen sich um Macht, Propaganda, Druck und Angst, also den gezielten Einsatz psychologischer Herrschaftsinstrumente.
Aber wollen die Menschen überhaupt Freiheit? Durch Gespräche im privaten Umfeld bin ich diesbezüglich in der letzten Zeit etwas skeptisch geworden. Um die Jahreswende philosophiert man ja gerne ein wenig über das Erlebte und über die Erwartungen für die Zukunft. Dabei hatte ich hin und wieder den Eindruck, die totalitären Anwandlungen unserer «Repräsentanten» kämen manchen Leuten gerade recht.
«Desinformation» ist so ein brisantes Thema. Davor müsse man die Menschen doch schützen, hörte ich. Jemand müsse doch zum Beispiel diese ganzen merkwürdigen Inhalte in den Social Media filtern – zur Ukraine, zum Klima, zu Gesundheitsthemen oder zur Migration. Viele wüssten ja gar nicht einzuschätzen, was richtig und was falsch ist, sie bräuchten eine Führung.
Freiheit bedingt Eigenverantwortung, ohne Zweifel. Eventuell ist es einigen tatsächlich zu anspruchsvoll, die Verantwortung für das eigene Tun und Lassen zu übernehmen. Oder die persönliche Freiheit wird nicht als ausreichend wertvolles Gut angesehen, um sich dafür anzustrengen. In dem Fall wäre die mangelnde Selbstbestimmung wohl das kleinere Übel. Allerdings fehlt dann gemäß Aldous Huxley ein Teil der Persönlichkeit. Letztlich ist natürlich alles eine Frage der Abwägung.
Sind viele Menschen möglicherweise schon so «eingenordet», dass freiheitliche Ambitionen gar nicht für eine ganze Gruppe, ein Kollektiv, verfolgt werden können? Solche Gedanken kamen mir auch, als ich mir kürzlich diverse Talks beim viertägigen Hacker-Kongress des Chaos Computer Clubs (38C3) anschaute. Ich war nicht nur überrascht, sondern reichlich erschreckt angesichts der in weiten Teilen mainstream-geformten Inhalte, mit denen ein dankbares Publikum beglückt wurde. Wo ich allgemein hellere Köpfe erwartet hatte, fand ich Konformismus und enthusiastisch untermauerte Narrative.
Gibt es vielleicht so etwas wie eine Herdenimmunität gegen Indoktrination? Ich denke, ja, zumindest eine gestärkte Widerstandsfähigkeit. Was wir brauchen, sind etwas gesunder Menschenverstand, offene Informationskanäle und der Mut, sich freier auch zwischen den Herden zu bewegen. Sie tun das bereits, aber sagen Sie es auch dieses Jahr ruhig weiter.
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ 3589b793:ad53847e
2025-04-30 10:53:29※本記事は別サービスで2022年5月22日に公開した記事の移植です。
Happy 🍕 Day's Present
まだ邦訳版が出版されていませんがこれまでのシリーズと同じくGitHubにソースコードが公開されています。なんと、現在のライセンスでは個人使用限定なら翻訳や製本が可能です。Macで、翻訳にはPDFをインプットにできるDeepLを用いた環境で、インスタントに製本してKindleなどで読めるようにする方法をまとめました。
手順の概要
- Ruby環境を用意する
- PDF作成ツールをセットアップする
- GitHubのリポジトリを自分のPCにクローンする
- asciidocをPDFに変換する
- DeepLを節約するためにPDFを結合する
- DeepLで翻訳ファイルを作る
- 一冊に製本する
この手法の強み・弱み
翻訳だけならPDFを挟まなくてもGithubなどでプレビューできるコンパイル後のドキュメントの文章をコピーしてDeepLのWebツールにペーストすればよいですが、原著のペーパーブックで438ページある大容量です。熟練のコピペ職人でも年貢を納めて後進(機械やソフトウェア)に道を譲る刻ではないでしょうか?ただし、Pros/Consがあります。
Pros
- 一冊の本になるので毎度のコピペ作業がいらない
- Pizzaを食べながらタブレットやKindleで読める
- 図や表が欠落しない(プロトコルの手順を追った解説が多いため最大の動機でした)
- 2022/6/16追記: DeepLの拡張機能がアップデートされウェブページの丸ごと翻訳が可能になりました。よってウェブ上のgithubの図表付きページをそのまま翻訳できます。
Cons
- Money is power(大容量のためDeepLの有料契約が必要)
- ページを跨いだ文章が統合されずに不自然な翻訳になる(仕様です)
- ~~翻訳できない章が一つある(解決方法がないか調査中です。DeepLさんもっとエラーメッセージ出してくれ。Help me)~~ DeepLサポートに投げたら翻訳できるようになりました。
詳細ステップ
0.Ruby環境を用意する
asciidoctorも新しく入れるなら最新のビルドで良いでしょう。
1.PDF作成ツールをセットアップする
$ gem install asciidoctor asciidoctor-pdf $ brew install gs
2.GitHubのリポジトリを自分のPCにクローンする
どこかの作業ディレクトリで以下を実行する
$ git clone git@github.com:lnbook/lnbook.git $ cd lnbook
3.asciidocをPDFに変換する
ワイルドカードを用いて本文を根こそぎPDF化します。
$ asciidoctor-pdf 0*.asciidoc 1*.asciidoc
いろいろ解析の警告が出ますが、ソースのasciidocを弄んでいくなりawsomeライブラリを導入すれば解消できるはずです。しかし如何せん量が多いので心が折れます。いったん無視して"Done is better than perfect"精神で最後までやり切りましょう。そのままGO!
また、お好みに合わせて、htmlで用意されている装丁用の部品も準備しましょう。私は表紙のcover.htmlをピックしました。ソースがhtmlなのでasciidoctorを通さず普通にPDFへ変換します。https://qiita.com/chenglin/items/9c4ed0dd626234b71a2c
4.DeepLを節約するためにPDFを結合する
DeepLでは課金プラン毎に翻訳可能なファイル数が設定されている上に、一本あたりの最大ファイルサイズが10MBです。また、翻訳エラーになる章が含まれていると丸ごとコケます。そのためPDCAサイクルを回し、最適なファイル数を手探りで見つけます。以下が今回導出した解となります。
$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=output_1.pdf 01_introduction.pdf 02_getting_started.pdf 03_how_ln_works.pdf 04_node_client.pdf 05_node_operations.pdf
$gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=output_2_1.pdf 06_lightning_architecture.pdf 07_payment_channels.pdf 08_routing_htlcs.pdf
$gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=output_2_2.pdf 09_channel_operation.pdf 10_onion_routing.asciidoc$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=output_3.pdf 11_gossip_channel_graph.pdf 12_path_finding.pdf 13_wire_protocol.pdf 14_encrypted_transport.pdf 15_payment_requests.pdf 16_security_privacy_ln.pdf 17_conclusion.pdf
5. DeepLで翻訳ファイルを作る
PDFファイルを真心を込めた手作業で一つ一つDeepLにアップロードしていき翻訳ファイルを作ります。ファイル名はデフォルトの
[originalName](日本語).pdf
のままにしています。6. 一冊に製本する
表紙 + 本文で作成する例です。
$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=mastering_ln_jp.pdf cover.pdf "output_1 (日本語).pdf" "output_2_1 (日本語).pdf" "output_2_2 (日本語).pdf" "output_3 (日本語).pdf"
コングラチュレーションズ🎉
あなたは『Mastering the Lightning Network』の日本語版を手に入れた!個人使用に限り、あとは煮るなり焼くなりEPUBなりkindleへ送信するなり好き放題だ。
-
@ d08c9312:73efcc9f
2025-04-30 09:59:52Resolvr CEO, Aaron Daniel, summarizes his keynote speech from the Bitcoin Insurance Summit, April 26, 2025
Introduction
At the inaugural Bitcoin Insurance Summit in Miami, I had the pleasure of sharing two historical parallels that illuminate why "Bitcoin needs insurance, and insurance needs Bitcoin." The insurance industry's reactions to fire and coffee can help us better understand the profound relationship between emerging technologies, risk management, and commercial innovation.
Fire: Why Bitcoin Needs Insurance
The first story explores how the insurance industry's response to catastrophic urban fires shaped modern building safety. Following devastating events like the Great Fire of London (1666) and the Great Chicago Fire (1871), the nascent insurance industry began engaging with fire risk systematically.
Initially, insurers offered private fire brigades to policyholders who displayed their company's fire mark on their buildings. This evolved into increasingly sophisticated risk assessment and pricing models throughout the 19th century:
- Early 19th century: Basic risk classifications with simple underwriting based on rules of thumb
- Mid-19th century: Detailed construction types and cooperative sharing of loss data through trade associations
- Late 19th/Early 20th century: Scientific, data-driven approaches with differentiated rate pricing
The insurance industry fundamentally transformed building construction practices by developing evidence-based standards that would later inform regulatory frameworks. Organizations like the National Board of Fire Underwriters (founded 1866) and Underwriters Laboratories (established 1894) tested and standardized new technologies, turning seemingly risky innovations like electricity into safer, controlled advancements.
This pattern offers a powerful precedent for Bitcoin. Like electricity, Bitcoin represents a new technology that appears inherently risky but has tremendous potential for society. By engaging with Bitcoin rather than avoiding it, the insurance industry can develop evidence-based standards, implement proper controls, and ultimately make the entire Bitcoin ecosystem safer and more robust.
Coffee: Why Insurance Needs Bitcoin
The second story reveals how coffee houses in 17th-century England became commercial hubs that gave birth to modern insurance. Nathaniel Canopius brewed the first documented cup of coffee in England in 1637. But it wasn't until advances in navigation and shipping technology opened new trade lanes that coffee became truly ubiquitous in England. Once global trade blossomed, coffee houses rapidly spread throughout London, becoming centers of business, information exchange, and innovation.
In 1686, Edward Lloyd opened his coffee house catering to sailors, merchants, and shipowners, which would eventually evolve into Lloyd's of London. Similarly, Jonathan's Coffee House became the birthplace of what would become the London Stock Exchange.
These coffee houses functioned as information networks where merchants could access shipping news and trade opportunities, as well as risk management solutions. They created a virtuous cycle: better shipping technology brought more coffee, which fueled commerce and led to better marine insurance and financing, which in turn improved global trade.
Today, we're experiencing a similar technological and financial revolution with Bitcoin. This digital, programmable money moves at the speed of light and operates 24/7 as a nearly $2 trillion asset class. The insurance industry stands to benefit tremendously by embracing this innovation early.
Conclusion
The lessons from history are clear. Just as the insurance industry drove safety improvements by engaging with fire risk, it can help develop standards and best practices for Bitcoin security. And just as coffee houses created commercial networks that revolutionized finance, insurance, and trade, Bitcoin offers new pathways for global commerce and risk management.
For the insurance industry to remain relevant in a rapidly digitizing world, it must engage with Bitcoin rather than avoid it. The companies that recognize this opportunity first will enjoy significant advantages, while those who resist change risk being left behind.
The Bitcoin Insurance Summit represented an important first step in creating the collaborative spaces needed for this transformation—a modern version of those innovative coffee houses that changed the world over three centuries ago.
View Aaron's full keynote:
https://youtu.be/eIjT1H2XuCU
For more information about how Resolvr can help your organization leverage Bitcoin in its operations, contact us today.
-
@ 8947a945:9bfcf626
2024-10-17 07:33:00Hello everyone on Nostr and all my watchersand followersfrom DeviantArt, as well as those from other art platforms
I have been creating and sharing AI-generated anime girl fanart since the beginning of 2024 and have been running member-exclusive content on Patreon.
I also publish showcases of my artworks to Deviantart. I organically build up my audience from time to time. I consider it as one of my online businesses of art. Everything is slowly growing
On September 16, I received a DM from someone expressing interest in purchasing my art in NFT format and offering a very high price for each piece. We later continued the conversation via email.
Here’s a brief overview of what happened
- The first scammer selected the art they wanted to buy and offered a high price for each piece. They provided a URL to an NFT marketplace site running on the Ethereum (ETH) mainnet or ERC20. The site appeared suspicious, requiring email sign-up and linking a MetaMask wallet. However, I couldn't change the wallet address later. The minting gas fees were quite expensive, ranging from 0.15 to 0.2 ETH
-
The scammers tried to convince me that the high profits would easily cover the minting gas fees, so I had nothing to lose. Luckily, I didn’t have spare funds to purchase ETH for the gas fees at the time, so I tried negotiating with them as follows:
-
I offered to send them a lower-quality version of my art via email in exchange for the minting gas fees, but they refused.
- I offered them the option to pay in USD through Buy Me a Coffee shop here, but they refused.
- I offered them the option to pay via Bitcoin using the Lightning Network invoice , but they refused.
- I asked them to wait until I could secure the funds, and they agreed to wait.
The following week, a second scammer approached me with a similar offer, this time at an even higher price and through a different NFT marketplace website.
This second site also required email registration, and after navigating to the dashboard, it asked for a minting fee of 0.2 ETH. However, the site provided a wallet address for me instead of connecting a MetaMask wallet.
I told the second scammer that I was waiting to make a profit from the first sale, and they asked me to show them the first marketplace. They then warned me that the first site was a scam and even sent screenshots of victims, including one from OpenSea saying that Opensea is not paying.
This raised a red flag, and I began suspecting I might be getting scammed. On OpenSea, funds go directly to users' wallets after transactions, and OpenSea charges a much lower platform fee compared to the previous crypto bull run in 2020. Minting fees on OpenSea are also significantly cheaper, around 0.0001 ETH per transaction.
I also consulted with Thai NFT artist communities and the ex-chairman of the Thai Digital Asset Association. According to them, no one had reported similar issues, but they agreed it seemed like a scam.
After confirming my suspicions with my own research and consulting with the Thai crypto community, I decided to test the scammers’ intentions by doing the following
I minted the artwork they were interested in, set the price they offered, and listed it for sale on OpenSea. I then messaged them, letting them know the art was available and ready to purchase, with no royalty fees if they wanted to resell it.
They became upset and angry, insisting I mint the art on their chosen platform, claiming they had already funded their wallet to support me. When I asked for proof of their wallet address and transactions, they couldn't provide any evidence that they had enough funds.
Here’s what I want to warn all artists in the DeviantArt community or other platforms If you find yourself in a similar situation, be aware that scammers may be targeting you.
My Perspective why I Believe This is a Scam and What the Scammers Gain
From my experience with BTC and crypto since 2017, here's why I believe this situation is a scam, and what the scammers aim to achieve
First, looking at OpenSea, the largest NFT marketplace on the ERC20 network, they do not hold users' funds. Instead, funds from transactions go directly to users’ wallets. OpenSea’s platform fees are also much lower now compared to the crypto bull run in 2020. This alone raises suspicion about the legitimacy of other marketplaces requiring significantly higher fees.
I believe the scammers' tactic is to lure artists into paying these exorbitant minting fees, which go directly into the scammers' wallets. They convince the artists by promising to purchase the art at a higher price, making it seem like there's no risk involved. In reality, the artist has already lost by paying the minting fee, and no purchase is ever made.
In the world of Bitcoin (BTC), the principle is "Trust no one" and “Trustless finality of transactions” In other words, transactions are secure and final without needing trust in a third party.
In the world of Ethereum (ETH), the philosophy is "Code is law" where everything is governed by smart contracts deployed on the blockchain. These contracts are transparent, and even basic code can be read and understood. Promises made by people don’t override what the code says.
I also discuss this issue with art communities. Some people have strongly expressed to me that they want nothing to do with crypto as part of their art process. I completely respect that stance.
However, I believe it's wise to keep your eyes open, have some skin in the game, and not fall into scammers’ traps. Understanding the basics of crypto and NFTs can help protect you from these kinds of schemes.
If you found this article helpful, please share it with your fellow artists.
Until next time Take care
Note
- Both cyber security images are mine , I created and approved by AdobeStock to put on sale
- I'm working very hard to bring all my digital arts into Nostr to build my Sats business here to my another npub "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw
Link to my full gallery - Anime girl fanarts : HikariHarmony - HikariHarmony on Nostr - General art : KeshikiRakuen
-
@ 0b118e40:4edc09cb
2025-04-13 02:46:36note - i wrote this before the global trade war, back when tariffs only affected China, Mexico, and Canada. But you will still get the gist of it.
During tough economic times, governments have to decide if they should open markets to global trade or protect local businesses with tariffs. The United States has swung between these two strategies, and history shows that the results are never straightforward
Just days ago, President Donald Trump imposed tariffs on imports from Canada, Mexico, and China. He framed these tariffs (25% on most Canadian goods, 10% on Canadian energy, 25% on Mexican imports, and 10% on Chinese imports) as a way to protect American industries.
But will they actually help, or could they backfire?
A History of U.S. Tariffs
Many have asked if countries will retaliate against the US. They can and they have. Once upon a time, 60 countries were so pissed off at the US, they retaliated at one go and crushed US dominance over trade.
This was during the Great Depression era in the 1930s when the government passed the Smoot-Hawley Tariff Act, placing high taxes on over 20,000 foreign goods. The goal was to protect American jobs, especially American farmers and manufacturers, but it backfired so badly.
Over 60 countries, including Canada, France, and Germany, retaliated by imposing their own tariffs. By 1933, US imports and exports both dropped significantly over 60%, and unemployment rose to 25%.
After President Franklin Roosevelt came to office, he implemented the Reciprocal Trade Agreements Act of 1934 to reverse these policies, calming the world down and reviving trade again.
The economist history of protectionism
The idea of shielding local businesses with tariffs isn’t new or recent. It's been around for a few centuries. In the 16th to 18th centuries, mercantilism encouraged countries to limit imports and boost exports.
In the 18th century, Adam Smith, in The Wealth of Nations, argued that free trade allows nations to specialize in what they do best countering protectionism policies. Friedrich List later challenged Smith's view by stating that developing countries need some protection to grow their “infant” industries which is a belief that still influences many governments today.
But how often do governments truly support startups and new small businesses in ways that create real growth, rather than allowing funds to trickle down to large corporations instead?
In modern times, John Maynard Keynes supported government intervention during economic downturns, while Milton Friedman championed free trade and minimal state interference.
Paul Krugman argued that limited protectionism can help large industries by providing them unfair advantages to become global market leaders. I have deep reservations about Krugman’s take, particularly on its impact or lack thereof in globalizing small businesses.
The debate between free trade and protectionism has existed for centuries. What’s clear is that there is no one-size-fits-all model to this.
The Political Debate - left vs right
Both the left and right have used tariffs but for different reasons. The right supports tariffs to protect jobs and industries, while the left uses them to prevent multinational corporations from exploiting cheap labor abroad.
Neoliberal policies favor free trade, arguing that competition drives efficiency and growth. In the US this gets a little bit confusing as liberals are tied to the left, and free trade is tied to libertarianism which the rights align closely with, yet at present right wing politicians push for protectionism which crosses the boundaries of free-trade.
There are also institutions like the WTO and IMF who advocate for open markets, but their policies often reflect political alliances and preferential treatment - so it depends on what you define as true 'free trade’.
Who Really Benefits from Tariffs?
Most often, tariffs help capital-intensive industries like pharmaceuticals, tech, and defense, while hurting labor-intensive sectors like manufacturing, agriculture, and construction.
This worsens inequality as big corporations will thrive, while small businesses and working-class people struggle with rising costs and fewer job opportunities.
I’ve been reading through international trade economics out of personal interest, I'll share some models below on why this is the case
1. The Disruption of Natural Trade
Tariffs disrupt the natural flow of trade. The Heckscher-Ohlin model explains that countries export goods that match their resources like Canada’s natural resource energy or China’s labour intensive textile and electronics. When tariffs block this natural exchange, industries suffer.
A clear example was Europe’s energy crisis during the Russia-Ukraine war. By abruptly cutting themselves off from the supply of Russian energy, Europe scrambled to find alternative sources. In the end, it was the people who had to bear the brunt of skyrocketing prices of energy.
2. Who wins and who loses?
The Stolper-Samuelson theorem helps us understand who benefits from tariffs and who loses. The idea behind it is that tariffs benefit capital-intensive industries, while labor-intensive sectors are hurt.
In the US, small manufacturing industries that rely on low-cost imports on intermediary parts from countries like China and Mexico will face rising costs, making their final goods too expensive and less competitive. This is similar to what happened to Argentina, where subsidies and devaluation of pesos contributed to cost-push inflation, making locally produced goods more expensive and less competitive globally.
This also reminded me of the decline of the US Rust Belt during the 1970s and 1980s, where the outsourcing of labour-intensive manufacturing jobs led to economic stagnation in many regions in the Midwest, while capital-intensive sectors flourished on the coasts. It resulted in significantly high income inequality that has not improved over the last 40 years.
Ultimately the cost of economic disruption is disproportionately borne by smaller businesses and low-skilled workers. At the end of the day, the rich get richer and the poor get poorer.
3. Delays in Economic Growth
The Rybczynski theorem suggests that economic growth depends on how efficiently nations reallocate their resources toward capital- or labor-intensive industries. But tariffs can distort this transition and progress.
In the 70s and 80s, the US steel industry had competition from Japan and Germany who modernized their production methods, making their steel more efficient and cost-effective. Instead of prioritizing innovation, many U.S. steel producers relied on tariffs and protectionist measures to shield themselves from foreign competition. This helped for a bit but over time, American steelmakers lost global market share as foreign competitors continued to produce better, cheaper steel. Other factors, such as aging infrastructure, and economic shifts toward a service-based economy, further contributed to the industry's decline.
A similar struggle is seen today with China’s high-tech ambitions. Tariffs on Chinese electronics and technology products limit access to key inputs, such as semiconductors and advanced robotics. While China continues its push for automation and AI-driven manufacturing, these trade barriers increase costs and disrupt supply chains, forcing China to accelerate its decoupling from Western markets. This shift could further strengthen alliances within BRICS, as China seeks alternative trade partnerships to reduce reliance on U.S.-controlled financial and technological ecosystems.
Will the current Tariff imposition backfire and isolate the US like it did a hundred years ago or 50 years ago? Is US risking it's position as a trusted economic leader? Only time would tell
The impact of tariff on innovation - or lack thereof
While the short-term impacts of tariffs often include higher consumer prices and job losses, the long-term effects can be even more damaging, as they discourage innovation by increasing costs and reducing competition.
Some historical examples globally : * Nigeria: Blocking import of rice opened up black market out of desperation to survive. * Brazil: Protectionist car policies led to expensive, outdated vehicles. * Malaysia’s Proton: Sheltered by tariffs and cronyism and failed to compete globally. * India (before 1991): Over-regulation limited the industries, until economic reforms allowed for growth. * Soviet Union during Cold War : Substandard products and minimal innovation due to the absence of foreign alternatives, yielding to economic stagnation.
On the flip side, Vietnam has significantly reduced protectionism policies by actively pursuing free trade agreements. This enabled it to become a key manufacturing hub. But Vietnam is not stopping there as it is actively pushing forward its capital-intensive growth by funding entrepreneurs.
The Future of U.S. Tariffs
History has shown that tariffs rarely deliver their intended benefits without unintended consequences. While they may provide temporary relief, they often raise prices, shrink job opportunities, and weaken industries in the long run.
Without a clear strategy for innovation and industrial modernization, the U.S. risks repeating past mistakes of isolating itself from global trade rather than strengthening its economy.
At this point, only time will tell whether these tariffs will truly help Americans or will they, once again, make the rich richer and the poor poorer.
-
@ a95c6243:d345522c
2025-01-01 17:39:51Heute möchte ich ein Gedicht mit euch teilen. Es handelt sich um eine Ballade des österreichischen Lyrikers Johann Gabriel Seidl aus dem 19. Jahrhundert. Mir sind diese Worte fest in Erinnerung, da meine Mutter sie perfekt rezitieren konnte, auch als die Kräfte schon langsam schwanden.
Dem originalen Titel «Die Uhr» habe ich für mich immer das Wort «innere» hinzugefügt. Denn der Zeitmesser – hier vermutliche eine Taschenuhr – symbolisiert zwar in dem Kontext das damalige Zeitempfinden und die Umbrüche durch die industrielle Revolution, sozusagen den Zeitgeist und das moderne Leben. Aber der Autor setzt sich philosophisch mit der Zeit auseinander und gibt seinem Werk auch eine klar spirituelle Dimension.
Das Ticken der Uhr und die Momente des Glücks und der Trauer stehen sinnbildlich für das unaufhaltsame Fortschreiten und die Vergänglichkeit des Lebens. Insofern könnte man bei der Uhr auch an eine Sonnenuhr denken. Der Rhythmus der Ereignisse passt uns vielleicht nicht immer in den Kram.
Was den Takt pocht, ist durchaus auch das Herz, unser «inneres Uhrwerk». Wenn dieses Meisterwerk einmal stillsteht, ist es unweigerlich um uns geschehen. Hoffentlich können wir dann dankbar sagen: «Ich habe mein Bestes gegeben.»
Ich trage, wo ich gehe, stets eine Uhr bei mir; \ Wieviel es geschlagen habe, genau seh ich an ihr. \ Es ist ein großer Meister, der künstlich ihr Werk gefügt, \ Wenngleich ihr Gang nicht immer dem törichten Wunsche genügt.
Ich wollte, sie wäre rascher gegangen an manchem Tag; \ Ich wollte, sie hätte manchmal verzögert den raschen Schlag. \ In meinen Leiden und Freuden, in Sturm und in der Ruh, \ Was immer geschah im Leben, sie pochte den Takt dazu.
Sie schlug am Sarge des Vaters, sie schlug an des Freundes Bahr, \ Sie schlug am Morgen der Liebe, sie schlug am Traualtar. \ Sie schlug an der Wiege des Kindes, sie schlägt, will's Gott, noch oft, \ Wenn bessere Tage kommen, wie meine Seele es hofft.
Und ward sie auch einmal träger, und drohte zu stocken ihr Lauf, \ So zog der Meister immer großmütig sie wieder auf. \ Doch stände sie einmal stille, dann wär's um sie geschehn, \ Kein andrer, als der sie fügte, bringt die Zerstörte zum Gehn.
Dann müßt ich zum Meister wandern, der wohnt am Ende wohl weit, \ Wohl draußen, jenseits der Erde, wohl dort in der Ewigkeit! \ Dann gäb ich sie ihm zurücke mit dankbar kindlichem Flehn: \ Sieh, Herr, ich hab nichts verdorben, sie blieb von selber stehn.
Johann Gabriel Seidl (1804-1875)
-
@ 4e616576:43c4fee8
2025-04-30 09:57:29asdf
-
@ e6817453:b0ac3c39
2024-10-06 11:21:27Hey folks, today we're diving into an exciting and emerging topic: personal artificial intelligence (PAI) and its connection to sovereignty, privacy, and ethics. With the rapid advancements in AI, there's a growing interest in the development of personal AI agents that can work on behalf of the user, acting autonomously and providing tailored services. However, as with any new technology, there are several critical factors that shape the future of PAI. Today, we'll explore three key pillars: privacy and ownership, explainability, and bias.
1. Privacy and Ownership: Foundations of Personal AI
At the heart of personal AI, much like self-sovereign identity (SSI), is the concept of ownership. For personal AI to be truly effective and valuable, users must own not only their data but also the computational power that drives these systems. This autonomy is essential for creating systems that respect the user's privacy and operate independently of large corporations.
In this context, privacy is more than just a feature—it's a fundamental right. Users should feel safe discussing sensitive topics with their AI, knowing that their data won’t be repurposed or misused by big tech companies. This level of control and data ownership ensures that users remain the sole beneficiaries of their information and computational resources, making privacy one of the core pillars of PAI.
2. Bias and Fairness: The Ethical Dilemma of LLMs
Most of today’s AI systems, including personal AI, rely heavily on large language models (LLMs). These models are trained on vast datasets that represent snapshots of the internet, but this introduces a critical ethical challenge: bias. The datasets used for training LLMs can be full of biases, misinformation, and viewpoints that may not align with a user’s personal values.
This leads to one of the major issues in AI ethics for personal AI—how do we ensure fairness and minimize bias in these systems? The training data that LLMs use can introduce perspectives that are not only unrepresentative but potentially harmful or unfair. As users of personal AI, we need systems that are free from such biases and can be tailored to our individual needs and ethical frameworks.
Unfortunately, training models that are truly unbiased and fair requires vast computational resources and significant investment. While large tech companies have the financial means to develop and train these models, individual users or smaller organizations typically do not. This limitation means that users often have to rely on pre-trained models, which may not fully align with their personal ethics or preferences. While fine-tuning models with personalized datasets can help, it's not a perfect solution, and bias remains a significant challenge.
3. Explainability: The Need for Transparency
One of the most frustrating aspects of modern AI is the lack of explainability. Many LLMs operate as "black boxes," meaning that while they provide answers or make decisions, it's often unclear how they arrived at those conclusions. For personal AI to be effective and trustworthy, it must be transparent. Users need to understand how the AI processes information, what data it relies on, and the reasoning behind its conclusions.
Explainability becomes even more critical when AI is used for complex decision-making, especially in areas that impact other people. If an AI is making recommendations, judgments, or decisions, it’s crucial for users to be able to trace the reasoning process behind those actions. Without this transparency, users may end up relying on AI systems that provide flawed or biased outcomes, potentially causing harm.
This lack of transparency is a major hurdle for personal AI development. Current LLMs, as mentioned earlier, are often opaque, making it difficult for users to trust their outputs fully. The explainability of AI systems will need to be improved significantly to ensure that personal AI can be trusted for important tasks.
Addressing the Ethical Landscape of Personal AI
As personal AI systems evolve, they will increasingly shape the ethical landscape of AI. We’ve already touched on the three core pillars—privacy and ownership, bias and fairness, and explainability. But there's more to consider, especially when looking at the broader implications of personal AI development.
Most current AI models, particularly those from big tech companies like Facebook, Google, or OpenAI, are closed systems. This means they are aligned with the goals and ethical frameworks of those companies, which may not always serve the best interests of individual users. Open models, such as Meta's LLaMA, offer more flexibility and control, allowing users to customize and refine the AI to better meet their personal needs. However, the challenge remains in training these models without significant financial and technical resources.
There’s also the temptation to use uncensored models that aren’t aligned with the values of large corporations, as they provide more freedom and flexibility. But in reality, models that are entirely unfiltered may introduce harmful or unethical content. It’s often better to work with aligned models that have had some of the more problematic biases removed, even if this limits some aspects of the system’s freedom.
The future of personal AI will undoubtedly involve a deeper exploration of these ethical questions. As AI becomes more integrated into our daily lives, the need for privacy, fairness, and transparency will only grow. And while we may not yet be able to train personal AI models from scratch, we can continue to shape and refine these systems through curated datasets and ongoing development.
Conclusion
In conclusion, personal AI represents an exciting new frontier, but one that must be navigated with care. Privacy, ownership, bias, and explainability are all essential pillars that will define the future of these systems. As we continue to develop personal AI, we must remain vigilant about the ethical challenges they pose, ensuring that they serve the best interests of users while remaining transparent, fair, and aligned with individual values.
If you have any thoughts or questions on this topic, feel free to reach out—I’d love to continue the conversation!
-
@ 3b3a42d3:d192e325
2025-04-10 08:57:51Atomic Signature Swaps (ASS) over Nostr is a protocol for atomically exchanging Schnorr signatures using Nostr events for orchestration. This new primitive enables multiple interesting applications like:
- Getting paid to publish specific Nostr events
- Issuing automatic payment receipts
- Contract signing in exchange for payment
- P2P asset exchanges
- Trading and enforcement of asset option contracts
- Payment in exchange for Nostr-based credentials or access tokens
- Exchanging GMs 🌞
It only requires that (i) the involved signatures be Schnorr signatures using the secp256k1 curve and that (ii) at least one of those signatures be accessible to both parties. These requirements are naturally met by Nostr events (published to relays), Taproot transactions (published to the mempool and later to the blockchain), and Cashu payments (using mints that support NUT-07, allowing any pair of these signatures to be swapped atomically.
How the Cryptographic Magic Works 🪄
This is a Schnorr signature
(Zₓ, s)
:s = z + H(Zₓ || P || m)⋅k
If you haven't seen it before, don't worry, neither did I until three weeks ago.
The signature scalar s is the the value a signer with private key
k
(and public keyP = k⋅G
) must calculate to prove his commitment over the messagem
given a randomly generated noncez
(Zₓ
is just the x-coordinate of the public pointZ = z⋅G
).H
is a hash function (sha256 with the tag "BIP0340/challenge" when dealing with BIP340),||
just means to concatenate andG
is the generator point of the elliptic curve, used to derive public values from private ones.Now that you understand what this equation means, let's just rename
z = r + t
. We can do that,z
is just a randomly generated number that can be represented as the sum of two other numbers. It also follows thatz⋅G = r⋅G + t⋅G ⇔ Z = R + T
. Putting it all back into the definition of a Schnorr signature we get:s = (r + t) + H((R + T)ₓ || P || m)⋅k
Which is the same as:
s = sₐ + t
wheresₐ = r + H((R + T)ₓ || P || m)⋅k
sₐ
is what we call the adaptor signature scalar) and t is the secret.((R + T)ₓ, sₐ)
is an incomplete signature that just becomes valid by add the secret t to thesₐ
:s = sₐ + t
What is also important for our purposes is that by getting access to the valid signature s, one can also extract t from it by just subtracting
sₐ
:t = s - sₐ
The specific value of
t
depends on our choice of the public pointT
, sinceR
is just a public point derived from a randomly generated noncer
.So how do we choose
T
so that it requires the secret t to be the signature over a specific messagem'
by an specific public keyP'
? (without knowing the value oft
)Let's start with the definition of t as a valid Schnorr signature by P' over m':
t = r' + H(R'ₓ || P' || m')⋅k' ⇔ t⋅G = r'⋅G + H(R'ₓ || P' || m')⋅k'⋅G
That is the same as:
T = R' + H(R'ₓ || P' || m')⋅P'
Notice that in order to calculate the appropriate
T
that requirest
to be an specific signature scalar, we only need to know the public nonceR'
used to generate that signature.In summary: in order to atomically swap Schnorr signatures, one party
P'
must provide a public nonceR'
, while the other partyP
must provide an adaptor signature using that nonce:sₐ = r + H((R + T)ₓ || P || m)⋅k
whereT = R' + H(R'ₓ || P' || m')⋅P'
P'
(the nonce provider) can then add his own signature t to the adaptor signaturesₐ
in order to get a valid signature byP
, i.e.s = sₐ + t
. When he publishes this signature (as a Nostr event, Cashu transaction or Taproot transaction), it becomes accessible toP
that can now extract the signaturet
byP'
and also make use of it.Important considerations
A signature may not be useful at the end of the swap if it unlocks funds that have already been spent, or that are vulnerable to fee bidding wars.
When a swap involves a Taproot UTXO, it must always use a 2-of-2 multisig timelock to avoid those issues.
Cashu tokens do not require this measure when its signature is revealed first, because the mint won't reveal the other signature if they can't be successfully claimed, but they also require a 2-of-2 multisig timelock when its signature is only revealed last (what is unavoidable in cashu for cashu swaps).
For Nostr events, whoever receives the signature first needs to publish it to at least one relay that is accessible by the other party. This is a reasonable expectation in most cases, but may be an issue if the event kind involved is meant to be used privately.
How to Orchestrate the Swap over Nostr?
Before going into the specific event kinds, it is important to recognize what are the requirements they must meet and what are the concerns they must address. There are mainly three requirements:
- Both parties must agree on the messages they are going to sign
- One party must provide a public nonce
- The other party must provide an adaptor signature using that nonce
There is also a fundamental asymmetry in the roles of both parties, resulting in the following significant downsides for the party that generates the adaptor signature:
- NIP-07 and remote signers do not currently support the generation of adaptor signatures, so he must either insert his nsec in the client or use a fork of another signer
- There is an overhead of retrieving the completed signature containing the secret, either from the blockchain, mint endpoint or finding the appropriate relay
- There is risk he may not get his side of the deal if the other party only uses his signature privately, as I have already mentioned
- There is risk of losing funds by not extracting or using the signature before its timelock expires. The other party has no risk since his own signature won't be exposed by just not using the signature he received.
The protocol must meet all those requirements, allowing for some kind of role negotiation and while trying to reduce the necessary hops needed to complete the swap.
Swap Proposal Event (kind:455)
This event enables a proposer and his counterparty to agree on the specific messages whose signatures they intend to exchange. The
content
field is the following stringified JSON:{ "give": <signature spec (required)>, "take": <signature spec (required)>, "exp": <expiration timestamp (optional)>, "role": "<adaptor | nonce (optional)>", "description": "<Info about the proposal (optional)>", "nonce": "<Signature public nonce (optional)>", "enc_s": "<Encrypted signature scalar (optional)>" }
The field
role
indicates what the proposer will provide during the swap, either the nonce or the adaptor. When this optional field is not provided, the counterparty may decide whether he will send a nonce back in a Swap Nonce event or a Swap Adaptor event using thenonce
(optionally) provided by in the Swap Proposal in order to avoid one hop of interaction.The
enc_s
field may be used to store the encrypted scalar of the signature associated with thenonce
, since this information is necessary later when completing the adaptor signature received from the other party.A
signature spec
specifies thetype
and all necessary information for producing and verifying a given signature. In the case of signatures for Nostr events, it contain a template with all the fields, exceptpubkey
,id
andsig
:{ "type": "nostr", "template": { "kind": "<kind>" "content": "<content>" "tags": [ … ], "created_at": "<created_at>" } }
In the case of Cashu payments, a simplified
signature spec
just needs to specify the payment amount and an array of mints trusted by the proposer:{ "type": "cashu", "amount": "<amount>", "mint": ["<acceptable mint_url>", …] }
This works when the payer provides the adaptor signature, but it still needs to be extended to also work when the payer is the one receiving the adaptor signature. In the later case, the
signature spec
must also include atimelock
and the derived public keysY
of each Cashu Proof, but for now let's just ignore this situation. It should be mentioned that the mint must be trusted by both parties and also support Token state check (NUT-07) for revealing the completed adaptor signature and P2PK spending conditions (NUT-11) for the cryptographic scheme to work.The
tags
are:"p"
, the proposal counterparty's public key (required)"a"
, akind:30455
Swap Listing event or an application specific version of it (optional)
Forget about this Swap Listing event for now, I will get to it later...
Swap Nonce Event (kind:456) - Optional
This is an optional event for the Swap Proposal receiver to provide the public nonce of his signature when the proposal does not include a nonce or when he does not want to provide the adaptor signature due to the downsides previously mentioned. The
content
field is the following stringified JSON:{ "nonce": "<Signature public nonce>", "enc_s": "<Encrypted signature scalar (optional)>" }
And the
tags
must contain:"e"
, akind:455
Swap Proposal Event (required)"p"
, the counterparty's public key (required)
Swap Adaptor Event (kind:457)
The
content
field is the following stringified JSON:{ "adaptors": [ { "sa": "<Adaptor signature scalar>", "R": "<Signer's public nonce (including parity byte)>", "T": "<Adaptor point (including parity byte)>", "Y": "<Cashu proof derived public key (if applicable)>", }, …], "cashu": "<Cashu V4 token (if applicable)>" }
And the
tags
must contain:"e"
, akind:455
Swap Proposal Event (required)"p"
, the counterparty's public key (required)
Discoverability
The Swap Listing event previously mentioned as an optional tag in the Swap Proposal may be used to find an appropriate counterparty for a swap. It allows a user to announce what he wants to accomplish, what his requirements are and what is still open for negotiation.
Swap Listing Event (kind:30455)
The
content
field is the following stringified JSON:{ "description": "<Information about the listing (required)>", "give": <partial signature spec (optional)>, "take": <partial signature spec (optional)>, "examples: [<take signature spec>], // optional "exp": <expiration timestamp (optional)>, "role": "<adaptor | nonce (optional)>" }
The
description
field describes the restrictions on counterparties and signatures the user is willing to accept.A
partial signature spec
is an incompletesignature spec
used in Swap Proposal eventskind:455
where omitting fields signals that they are still open for negotiation.The
examples
field is an array ofsignature specs
the user would be willing totake
.The
tags
are:"d"
, a unique listing id (required)"s"
, the status of the listingdraft | open | closed
(required)"t"
, topics related to this listing (optional)"p"
, public keys to notify about the proposal (optional)
Application Specific Swap Listings
Since Swap Listings are still fairly generic, it is expected that specific use cases define new event kinds based on the generic listing. Those application specific swap listing would be easier to filter by clients and may impose restrictions and add new fields and/or tags. The following are some examples under development:
Sponsored Events
This listing is designed for users looking to promote content on the Nostr network, as well as for those who want to monetize their accounts by sharing curated sponsored content with their existing audiences.
It follows the same format as the generic Swap Listing event, but uses the
kind:30456
instead.The following new tags are included:
"k"
, event kind being sponsored (required)"title"
, campaign title (optional)
It is required that at least one
signature spec
(give
and/ortake
) must have"type": "nostr"
and also contain the following tag["sponsor", "<pubkey>", "<attestation>"]
with the sponsor's public key and his signature over the signature spec without the sponsor tag as his attestation. This last requirement enables clients to disclose and/or filter sponsored events.Asset Swaps
This listing is designed for users looking for counterparties to swap different assets that can be transferred using Schnorr signatures, like any unit of Cashu tokens, Bitcoin or other asset IOUs issued using Taproot.
It follows the same format as the generic Swap Listing event, but uses the
kind:30457
instead.It requires the following additional tags:
"t"
, asset pair to be swapped (e.g."btcusd"
)"t"
, asset being offered (e.g."btc"
)"t"
, accepted payment method (e.g."cashu"
,"taproot"
)
Swap Negotiation
From finding an appropriate Swap Listing to publishing a Swap Proposal, there may be some kind of negotiation between the involved parties, e.g. agreeing on the amount to be paid by one of the parties or the exact content of a Nostr event signed by the other party. There are many ways to accomplish that and clients may implement it as they see fit for their specific goals. Some suggestions are:
- Adding
kind:1111
Comments to the Swap Listing or an existing Swap Proposal - Exchanging tentative Swap Proposals back and forth until an agreement is reached
- Simple exchanges of DMs
- Out of band communication (e.g. Signal)
Work to be done
I've been refining this specification as I develop some proof-of-concept clients to experience its flaws and trade-offs in practice. I left the signature spec for Taproot signatures out of the current document as I still have to experiment with it. I will probably find some important orchestration issues related to dealing with
2-of-2 multisig timelocks
, which also affects Cashu transactions when spent last, that may require further adjustments to what was presented here.The main goal of this article is to find other people interested in this concept and willing to provide valuable feedback before a PR is opened in the NIPs repository for broader discussions.
References
- GM Swap- Nostr client for atomically exchanging GM notes. Live demo available here.
- Sig4Sats Script - A Typescript script demonstrating the swap of a Cashu payment for a signed Nostr event.
- Loudr- Nostr client under development for sponsoring the publication of Nostr events. Live demo available at loudr.me.
- Poelstra, A. (2017). Scriptless Scripts. Blockstream Research. https://github.com/BlockstreamResearch/scriptless-scripts
-
@ e6817453:b0ac3c39
2024-09-30 14:52:23In the modern world of AI, managing vast amounts of data while keeping it relevant and accessible is a significant challenge, mainly when dealing with large language models (LLMs) and vector databases. One approach that has gained prominence in recent years is integrating vector search with metadata, especially in retrieval-augmented generation (RAG) pipelines. Vector search and metadata enable faster and more accurate data retrieval. However, the process of pre- and post-search filtering results plays a crucial role in ensuring data relevance.
The Vector Search and Metadata Challenge
In a typical vector search, you create embeddings from chunks of text, such as a PDF document. These embeddings allow the system to search for similar items and retrieve them based on relevance. The challenge, however, arises when you need to combine vector search results with structured metadata. For example, you may have timestamped text-based content and want to retrieve the most relevant content within a specific date range. This is where metadata becomes critical in refining search results.
Unfortunately, most vector databases treat metadata as a secondary feature, isolating it from the primary vector search process. As a result, handling queries that combine vectors and metadata can become a challenge, particularly when the search needs to account for a dynamic range of filters, such as dates or other structured data.
LibSQL and vector search metadata
LibSQL is a more general-purpose SQLite-based database that adds vector capabilities to regular data. Vectors are presented as blob columns of regular tables. It makes vector embeddings and metadata a first-class citizen that naturally builds deep integration of these data points.
create table if not exists conversation ( id varchar(36) primary key not null, startDate real, endDate real, summary text, vectorSummary F32_BLOB(512) );
It solves the challenge of metadata and vector search and eliminates impedance between vector data and regular structured data points in the same storage.
As you can see, you can access vector-like data and start date in the same query.
select c.id ,c.startDate, c.endDate, c.summary, vector_distance_cos(c.vectorSummary, vector(${vector})) distance from conversation where ${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''} ${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''} ${distance ? `and distance <= ${distance}` : ''} order by distance limit ${top};
vector_distance_cos calculated as distance allows us to make a primitive vector search that does a full scan and calculates distances on rows. We could optimize it with CTE and limit search and distance calculations to a much smaller subset of data.
This approach could be calculation intensive and fail on large amounts of data.
Libsql offers a way more effective vector search based on FlashDiskANN vector indexed.
vector_top_k('idx_conversation_vectorSummary', ${vector} , ${top}) i
vector_top_k is a table function that searches for the top of the newly created vector search index. As you can see, we could use only vector as a function parameter, and other columns could be used outside of the table function. So, to use a vector index together with different columns, we need to apply some strategies.
Now we get a classical problem of integration vector search results with metadata queries.
Post-Filtering: A Common Approach
The most widely adopted method in these pipelines is post-filtering. In this approach, the system first retrieves data based on vector similarities and then applies metadata filters. For example, imagine you’re conducting a vector search to retrieve conversations relevant to a specific question. Still, you also want to ensure these conversations occurred in the past week.
Post-filtering allows the system to retrieve the most relevant vector-based results and subsequently filter out any that don’t meet the metadata criteria, such as date range. This method is efficient when vector similarity is the primary factor driving the search, and metadata is only applied as a secondary filter.
const sqlQuery = ` select c.id ,c.startDate, c.endDate, c.summary, vector_distance_cos(c.vectorSummary, vector(${vector})) distance from vector_top_k('idx_conversation_vectorSummary', ${vector} , ${top}) i inner join conversation c on i.id = c.rowid where ${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''} ${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''} ${distance ? `and distance <= ${distance}` : ''} order by distance limit ${top};
However, there are some limitations. For example, the initial vector search may yield fewer results or omit some relevant data before applying the metadata filter. If the search window is narrow enough, this can lead to complete results.
One working strategy is to make the top value in vector_top_K much bigger. Be careful, though, as the function's default max number of results is around 200 rows.
Pre-Filtering: A More Complex Approach
Pre-filtering is a more intricate approach but can be more effective in some instances. In pre-filtering, metadata is used as the primary filter before vector search takes place. This means that only data that meets the metadata criteria is passed into the vector search process, limiting the scope of the search right from the beginning.
While this approach can significantly reduce the amount of irrelevant data in the final results, it comes with its own challenges. For example, pre-filtering requires a deeper understanding of the data structure and may necessitate denormalizing the data or creating separate pre-filtered tables. This can be resource-intensive and, in some cases, impractical for dynamic metadata like date ranges.
In certain use cases, pre-filtering might outperform post-filtering. For instance, when the metadata (e.g., specific date ranges) is the most important filter, pre-filtering ensures the search is conducted only on the most relevant data.
Pre-filtering with distance-based filtering
So, we are getting back to an old concept. We do prefiltering instead of using a vector index.
WITH FilteredDates AS ( SELECT c.id, c.startDate, c.endDate, c.summary, c.vectorSummary FROM YourTable c WHERE ${startDate ? `AND c.startDate >= ${startDate.getTime()}` : ''} ${endDate ? `AND c.endDate <= ${endDate.getTime()}` : ''} ), DistanceCalculation AS ( SELECT fd.id, fd.startDate, fd.endDate, fd.summary, fd.vectorSummary, vector_distance_cos(fd.vectorSummary, vector(${vector})) AS distance FROM FilteredDates fd ) SELECT dc.id, dc.startDate, dc.endDate, dc.summary, dc.distance FROM DistanceCalculation dc WHERE 1=1 ${distance ? `AND dc.distance <= ${distance}` : ''} ORDER BY dc.distance LIMIT ${top};
It makes sense if the filter produces small data and distance calculation happens on the smaller data set.
As a pro of this approach, you have full control over the data and get all results without omitting some typical values for extensive index searches.
Choosing Between Pre and Post-Filtering
Both pre-filtering and post-filtering have their advantages and disadvantages. Post-filtering is more accessible to implement, especially when vector similarity is the primary search factor, but it can lead to incomplete results. Pre-filtering, on the other hand, can yield more accurate results but requires more complex data handling and optimization.
In practice, many systems combine both strategies, depending on the query. For example, they might start with a broad pre-filtering based on metadata (like date ranges) and then apply a more targeted vector search with post-filtering to refine the results further.
Conclusion
Vector search with metadata filtering offers a powerful approach for handling large-scale data retrieval in LLMs and RAG pipelines. Whether you choose pre-filtering or post-filtering—or a combination of both—depends on your application's specific requirements. As vector databases continue to evolve, future innovations that combine these two approaches more seamlessly will help improve data relevance and retrieval efficiency further.
-
@ a95c6243:d345522c
2024-12-21 09:54:49Falls du beim Lesen des Titels dieses Newsletters unwillkürlich an positive Neuigkeiten aus dem globalen polit-medialen Irrenhaus oder gar aus dem wirtschaftlichen Umfeld gedacht hast, darf ich dich beglückwünschen. Diese Assoziation ist sehr löblich, denn sie weist dich als unverbesserlichen Optimisten aus. Leider muss ich dich diesbezüglich aber enttäuschen. Es geht hier um ein anderes Thema, allerdings sehr wohl ein positives, wie ich finde.
Heute ist ein ganz besonderer Tag: die Wintersonnenwende. Genau gesagt hat heute morgen um 10:20 Uhr Mitteleuropäischer Zeit (MEZ) auf der Nordhalbkugel unseres Planeten der astronomische Winter begonnen. Was daran so außergewöhnlich ist? Der kürzeste Tag des Jahres war gestern, seit heute werden die Tage bereits wieder länger! Wir werden also jetzt jeden Tag ein wenig mehr Licht haben.
Für mich ist dieses Ereignis immer wieder etwas kurios: Es beginnt der Winter, aber die Tage werden länger. Das erscheint mir zunächst wie ein Widerspruch, denn meine spontanen Assoziationen zum Winter sind doch eher Kälte und Dunkelheit, relativ zumindest. Umso erfreulicher ist der emotionale Effekt, wenn dann langsam die Erkenntnis durchsickert: Ab jetzt wird es schon wieder heller!
Natürlich ist es kalt im Winter, mancherorts mehr als anderswo. Vielleicht jedoch nicht mehr lange, wenn man den Klimahysterikern glauben wollte. Mindestens letztes Jahr hat Väterchen Frost allerdings gleich zu Beginn seiner Saison – und passenderweise während des globalen Überhitzungsgipfels in Dubai – nochmal richtig mit der Faust auf den Tisch gehauen. Schnee- und Eischaos sind ja eigentlich in der Agenda bereits nicht mehr vorgesehen. Deswegen war man in Deutschland vermutlich in vorauseilendem Gehorsam schon nicht mehr darauf vorbereitet und wurde glatt lahmgelegt.
Aber ich schweife ab. Die Aussicht auf nach und nach mehr Licht und damit auch Wärme stimmt mich froh. Den Zusammenhang zwischen beidem merkt man in Andalusien sehr deutlich. Hier, wo die Häuser im Winter arg auskühlen, geht man zum Aufwärmen raus auf die Straße oder auf den Balkon. Die Sonne hat auch im Winter eine erfreuliche Kraft. Und da ist jede Minute Gold wert.
Außerdem ist mir vor Jahren so richtig klar geworden, warum mir das südliche Klima so sehr gefällt. Das liegt nämlich nicht nur an der Sonne als solcher, oder der Wärme – das liegt vor allem am Licht. Ohne Licht keine Farben, das ist der ebenso simple wie gewaltige Unterschied zwischen einem deprimierenden matschgraubraunen Winter und einem fröhlichen bunten. Ein großes Stück Lebensqualität.
Mir gefällt aber auch die Symbolik dieses Tages: Licht aus der Dunkelheit, ein Wendepunkt, ein Neuanfang, neue Möglichkeiten, Übergang zu neuer Aktivität. In der winterlichen Stille keimt bereits neue Lebendigkeit. Und zwar in einem Zyklus, das wird immer wieder so geschehen. Ich nehme das gern als ein Stück Motivation, es macht mir Hoffnung und gibt mir Energie.
Übrigens ist parallel am heutigen Tag auf der südlichen Halbkugel Sommeranfang. Genau im entgegengesetzten Rhythmus, sich ergänzend, wie Yin und Yang. Das alles liegt an der Schrägstellung der Erdachse, die ist nämlich um 23,4º zur Umlaufbahn um die Sonne geneigt. Wir erinnern uns, gell?
Insofern bleibt eindeutig festzuhalten, dass “schräg sein” ein willkommener, wichtiger und positiver Wert ist. Mit anderen Worten: auch ungewöhnlich, eigenartig, untypisch, wunderlich, kauzig, … ja sogar irre, spinnert oder gar “quer” ist in Ordnung. Das schließt das Denken mit ein.
In diesem Sinne wünsche ich euch allen urige Weihnachtstage!
Dieser Beitrag ist letztes Jahr in meiner Denkbar erschienen.
-
@ a95c6243:d345522c
2024-12-13 19:30:32Das Betriebsklima ist das einzige Klima, \ das du selbst bestimmen kannst. \ Anonym
Eine Strategie zur Anpassung an den Klimawandel hat das deutsche Bundeskabinett diese Woche beschlossen. Da «Wetterextreme wie die immer häufiger auftretenden Hitzewellen und Starkregenereignisse» oft desaströse Auswirkungen auf Mensch und Umwelt hätten, werde eine Anpassung an die Folgen des Klimawandels immer wichtiger. «Klimaanpassungsstrategie» nennt die Regierung das.
Für die «Vorsorge vor Klimafolgen» habe man nun erstmals klare Ziele und messbare Kennzahlen festgelegt. So sei der Erfolg überprüfbar, und das solle zu einer schnelleren Bewältigung der Folgen führen. Dass sich hinter dem Begriff Klimafolgen nicht Folgen des Klimas, sondern wohl «Folgen der globalen Erwärmung» verbergen, erklärt den Interessierten die Wikipedia. Dabei ist das mit der Erwärmung ja bekanntermaßen so eine Sache.
Die Zunahme schwerer Unwetterereignisse habe gezeigt, so das Ministerium, wie wichtig eine frühzeitige und effektive Warnung der Bevölkerung sei. Daher solle es eine deutliche Anhebung der Nutzerzahlen der sogenannten Nina-Warn-App geben.
Die ARD spurt wie gewohnt und setzt die Botschaft zielsicher um. Der Artikel beginnt folgendermaßen:
«Die Flut im Ahrtal war ein Schock für das ganze Land. Um künftig besser gegen Extremwetter gewappnet zu sein, hat die Bundesregierung eine neue Strategie zur Klimaanpassung beschlossen. Die Warn-App Nina spielt eine zentrale Rolle. Der Bund will die Menschen in Deutschland besser vor Extremwetter-Ereignissen warnen und dafür die Reichweite der Warn-App Nina deutlich erhöhen.»
Die Kommunen würden bei ihren «Klimaanpassungsmaßnahmen» vom Zentrum KlimaAnpassung unterstützt, schreibt das Umweltministerium. Mit dessen Aufbau wurden das Deutsche Institut für Urbanistik gGmbH, welches sich stark für Smart City-Projekte engagiert, und die Adelphi Consult GmbH beauftragt.
Adelphi beschreibt sich selbst als «Europas führender Think-and-Do-Tank und eine unabhängige Beratung für Klima, Umwelt und Entwicklung». Sie seien «global vernetzte Strateg*innen und weltverbessernde Berater*innen» und als «Vorreiter der sozial-ökologischen Transformation» sei man mit dem Deutschen Nachhaltigkeitspreis ausgezeichnet worden, welcher sich an den Zielen der Agenda 2030 orientiere.
Über die Warn-App mit dem niedlichen Namen Nina, die möglichst jeder auf seinem Smartphone installieren soll, informiert das Bundesamt für Bevölkerungsschutz und Katastrophenhilfe (BBK). Gewarnt wird nicht nur vor Extrem-Wetterereignissen, sondern zum Beispiel auch vor Waffengewalt und Angriffen, Strom- und anderen Versorgungsausfällen oder Krankheitserregern. Wenn man die Kategorie Gefahreninformation wählt, erhält man eine Dosis von ungefähr zwei Benachrichtigungen pro Woche.
Beim BBK erfahren wir auch einiges über die empfohlenen Systemeinstellungen für Nina. Der Benutzer möge zum Beispiel den Zugriff auf die Standortdaten «immer zulassen», und zwar mit aktivierter Funktion «genauen Standort verwenden». Die Datennutzung solle unbeschränkt sein, auch im Hintergrund. Außerdem sei die uneingeschränkte Akkunutzung zu aktivieren, der Energiesparmodus auszuschalten und das Stoppen der App-Aktivität bei Nichtnutzung zu unterbinden.
Dass man so dramatische Ereignisse wie damals im Ahrtal auch anders bewerten kann als Regierungen und Systemmedien, hat meine Kollegin Wiltrud Schwetje anhand der Tragödie im spanischen Valencia gezeigt. Das Stichwort «Agenda 2030» taucht dabei in einem Kontext auf, der wenig mit Nachhaltigkeitspreisen zu tun hat.
Dieser Beitrag ist zuerst auf Transition News erschienen.
-
@ 4e616576:43c4fee8
2025-04-30 09:48:05asdfasdflkjasdf
-
@ 4e616576:43c4fee8
2025-04-30 09:42:33lorem ipsum
-
@ 3bf0c63f:aefa459d
2024-09-06 12:49:46Nostr: a quick introduction, attempt #2
Nostr doesn't subscribe to any ideals of "free speech" as these belong to the realm of politics and assume a big powerful government that enforces a common ruleupon everybody else.
Nostr instead is much simpler, it simply says that servers are private property and establishes a generalized framework for people to connect to all these servers, creating a true free market in the process. In other words, Nostr is the public road that each market participant can use to build their own store or visit others and use their services.
(Of course a road is never truly public, in normal cases it's ran by the government, in this case it relies upon the previous existence of the internet with all its quirks and chaos plus a hand of government control, but none of that matters for this explanation).
More concretely speaking, Nostr is just a set of definitions of the formats of the data that can be passed between participants and their expected order, i.e. messages between clients (i.e. the program that runs on a user computer) and relays (i.e. the program that runs on a publicly accessible computer, a "server", generally with a domain-name associated) over a type of TCP connection (WebSocket) with cryptographic signatures. This is what is called a "protocol" in this context, and upon that simple base multiple kinds of sub-protocols can be added, like a protocol for "public-square style microblogging", "semi-closed group chat" or, I don't know, "recipe sharing and feedback".
-
@ a95c6243:d345522c
2024-12-06 18:21:15Die Ungerechtigkeit ist uns nur in dem Falle angenehm,\ dass wir Vorteile aus ihr ziehen;\ in jedem andern hegt man den Wunsch,\ dass der Unschuldige in Schutz genommen werde.\ Jean-Jacques Rousseau
Politiker beteuern jederzeit, nur das Beste für die Bevölkerung zu wollen – nicht von ihr. Auch die zahlreichen unsäglichen «Corona-Maßnahmen» waren angeblich zu unserem Schutz notwendig, vor allem wegen der «besonders vulnerablen Personen». Daher mussten alle möglichen Restriktionen zwangsweise und unter Umgehung der Parlamente verordnet werden.
Inzwischen hat sich immer deutlicher herausgestellt, dass viele jener «Schutzmaßnahmen» den gegenteiligen Effekt hatten, sie haben den Menschen und den Gesellschaften enorm geschadet. Nicht nur haben die experimentellen Geninjektionen – wie erwartet – massive Nebenwirkungen, sondern Maskentragen schadet der Psyche und der Entwicklung (nicht nur unserer Kinder) und «Lockdowns und Zensur haben Menschen getötet».
Eine der wichtigsten Waffen unserer «Beschützer» ist die Spaltung der Gesellschaft. Die tiefen Gräben, die Politiker, Lobbyisten und Leitmedien praktisch weltweit ausgehoben haben, funktionieren leider nahezu in Perfektion. Von ihren persönlichen Erfahrungen als Kritikerin der Maßnahmen berichtete kürzlich eine Schweizerin im Interview mit Transition News. Sie sei schwer enttäuscht und verspüre bis heute eine Hemmschwelle und ein seltsames Unwohlsein im Umgang mit «Geimpften».
Menschen, die aufrichtig andere schützen wollten, werden von einer eindeutig politischen Justiz verfolgt, verhaftet und angeklagt. Dazu zählen viele Ärzte, darunter Heinrich Habig, Bianca Witzschel und Walter Weber. Über den aktuell laufenden Prozess gegen Dr. Weber hat Transition News mehrfach berichtet (z.B. hier und hier). Auch der Selbstschutz durch Verweigerung der Zwangs-Covid-«Impfung» bewahrt nicht vor dem Knast, wie Bundeswehrsoldaten wie Alexander Bittner erfahren mussten.
Die eigentlich Kriminellen schützen sich derweil erfolgreich selber, nämlich vor der Verantwortung. Die «Impf»-Kampagne war «das größte Verbrechen gegen die Menschheit». Trotzdem stellt man sich in den USA gerade die Frage, ob der scheidende Präsident Joe Biden nach seinem Sohn Hunter möglicherweise auch Anthony Fauci begnadigen wird – in diesem Fall sogar präventiv. Gibt es überhaupt noch einen Rest Glaubwürdigkeit, den Biden verspielen könnte?
Der Gedanke, den ehemaligen wissenschaftlichen Chefberater des US-Präsidenten und Direktor des National Institute of Allergy and Infectious Diseases (NIAID) vorsorglich mit einem Schutzschild zu versehen, dürfte mit der vergangenen Präsidentschaftswahl zu tun haben. Gleich mehrere Personalentscheidungen des designierten Präsidenten Donald Trump lassen Leute wie Fauci erneut in den Fokus rücken.
Das Buch «The Real Anthony Fauci» des nominierten US-Gesundheitsministers Robert F. Kennedy Jr. erschien 2021 und dreht sich um die Machenschaften der Pharma-Lobby in der öffentlichen Gesundheit. Das Vorwort zur rumänischen Ausgabe des Buches schrieb übrigens Călin Georgescu, der Überraschungssieger der ersten Wahlrunde der aktuellen Präsidentschaftswahlen in Rumänien. Vielleicht erklärt diese Verbindung einen Teil der Panik im Wertewesten.
In Rumänien selber gab es gerade einen Paukenschlag: Das bisherige Ergebnis wurde heute durch das Verfassungsgericht annuliert und die für Sonntag angesetzte Stichwahl kurzfristig abgesagt – wegen angeblicher «aggressiver russischer Einmischung». Thomas Oysmüller merkt dazu an, damit sei jetzt in der EU das Tabu gebrochen, Wahlen zu verbieten, bevor sie etwas ändern können.
Unsere Empörung angesichts der Historie von Maßnahmen, die die Falschen beschützen und für die meisten von Nachteil sind, müsste enorm sein. Die Frage ist, was wir damit machen. Wir sollten nach vorne schauen und unsere Energie clever einsetzen. Abgesehen von der Umgehung von jeglichem «Schutz vor Desinformation und Hassrede» (sprich: Zensur) wird es unsere wichtigste Aufgabe sein, Gräben zu überwinden.
Dieser Beitrag ist zuerst auf Transition News erschienen.