-
@ ae1008d2:a166d760
2025-04-01 00:29:56
This is part one in a series of long-form pieces laying out my ideas about what, in my opinion, we are entering into: The Roaring '20s 2.0 (working title). I hope you'll join me on this journey.
"History does not repeat itself, but it often rhymes"; - Samuel Clemens, aka Mark Twain. My only class I received an A+ in high school was history, this opened up the opportunity for me to enroll in an AP (college level) history class my senior year. There was an inherent nature for me to study history. Another quote I found to live by; "If we do not study history, we are bound to repeat it", a paraphrased quote by the many great philosphers of old from Edmund Burke, George Santayana and even Winston Churchill, all pulling from the same King Solomon quote; "What has been will be again, what has been done will be done again; there is nothing new under the sun". My curiousity of human actions, psychological and therefore economical behavior, has benefitted me greatly throughout my life and career, at such a young age. Being able to 'see around the curves' ahead I thought was a gift many had, but was sorely mistaken. People are just built different. One, if not my hardest action for me is to share. I just do things; act, often without even thinking about writing down or sharing in anyway shape or form what I just did here with friends, what we just built or how we formed these startups, etc., I've finally made the time, mainly for myself, to share my thoughts and ideas as to where we are at, and what we can do moving forward. It's very easy for us living a sovereign-lifestyle in Bitcoin, Nostr and other P2P, cryptographically-signed sovereign tools and tech-stacks alike, permissionless and self-hostable, to take all these tools for granted. We just live with them. Use them everyday. Do you own property? Do you have to take care of the cattle everyday? To live a sovereign life is tough, but most rewarding. As mentioned above, I'm diving into the details in a several part series as to what the roaring '20's were about, how it got to the point it did, and the inevitable outcome we all know what came to be. How does this possibly repeat itself almost exactly a century later? How does Bitcoin play a role? Are we all really going to be replaced by AI robots (again, history rhymes here)? Time will tell, but I think most of us actually using the tools will also forsee many of these possible outcomes, as it's why we are using many of these tools today. The next parts of this series will be released periodically, maybe once per month, maybe once per quarter. I'll also be releasing these on other platforms like Medium for reach, but Nostr will always be first, most important and prioritized.
I'll leave you with one of my favorite quotes to live by, from one of the greatest traders of all time - especially during the original Roaring '20s era - Jesse Livermore: "Money is made by sitting, not trading."
-
@ 21335073:a244b1ad
2025-03-15 23:00:40
I want to see Nostr succeed. If you can think of a way I can help make that happen, I’m open to it. I’d like your suggestions.
My schedule’s shifting soon, and I could volunteer a few hours a week to a Nostr project. I won’t have more total time, but how I use it will change.
Why help? I care about freedom. Nostr’s one of the most powerful freedom tools I’ve seen in my lifetime. If I believe that, I should act on it.
I don’t care about money or sats. I’m not rich, I don’t have extra cash. That doesn’t drive me—freedom does. I’m volunteering, not asking for pay.
I’m not here for clout. I’ve had enough spotlight in my life; it doesn’t move me. If I wanted clout, I’d be on Twitter dropping basic takes. Clout’s easy. Freedom’s hard. I’d rather help anonymously. No speaking at events—small meetups are cool for the vibe, but big conferences? Not my thing. I’ll never hit a huge Bitcoin conference. It’s just not my scene.
That said, I could be convinced to step up if it’d really boost Nostr—as long as it’s legal and gets results.
In this space, I watch out for social engineering. I’m not here to make friends, just to help. No shade—you all seem great—but I’ve got a full life and awesome friends irl. I don’t need your crew or to be online cool. Connect anonymously if you want; I’d encourage it.
I’m sick of watching other social media alternatives grow while Nostr kinda stalls. I could trash-talk, but I’d rather do something useful.
Skills? I’m good at spotting social media problems and finding possible solutions. I won’t overhype myself—that’s weird—but if you’re responding, you probably see something in me. Perhaps you see something that I don’t see in myself.
If you need help now or later with Nostr projects, reach out. Nostr only—nothing else. Anonymous contact’s fine. Even just a suggestion on how I can pitch in, no project attached, works too. 💜
Creeps or harassers will get blocked, and I’ll nuke my SimpleX code if it becomes a problem.
https://simplex.chat/contact#/?v=2-4&smp=smp%3A%2F%2FSkIkI6EPd2D63F4xFKfHk7I1UGZVNn6k1QWZ5rcyr6w%3D%40smp9.simplex.im%2FbI99B3KuYduH8jDr9ZwyhcSxm2UuR7j0%23%2F%3Fv%3D1-2%26dh%3DMCowBQYDK2VuAyEAS9C-zPzqW41PKySfPCEizcXb1QCus6AyDkTTjfyMIRM%253D%26srv%3Djssqzccmrcws6bhmn77vgmhfjmhwlyr3u7puw4erkyoosywgl67slqqd.onion
-
@ 8d34bd24:414be32b
2025-05-07 21:47:57
I’ve been deeply studying end-times prophecy today. Trying to arrange in my head all of the prophecies, from the initial proto-Gospel in Genesis 3 through the last chapter of Revelation, is hard. That being said, after reading the Bible daily for about 40 years and reading through it each year for about 30 years, I am really starting to see so many links between passages throughout the Bible. It has made my Bible study enthralling. I wish I had time to spend hours and hours every day studying and writing about what I have learned.
I thought it might be handy to share some hints on how I study the Bible. Hopefully this can help some people, although I do tend to think my subscribers tend to be those who love Bible study and are already in the word. People who don’t love the Bible are unlikely to read my long, scripture laden posts. Still, hopefully this will be useful.
Starting the Habit of Bible Reading
The first and foremost thing we all need is to start the habit of daily Bible reading. You can’t worship a God you don’t know about and you can’t obey a God whose commands you don’t know. Every Christian needs to read the whole Bible. This needs to be a priority.
I used to recommend people just start at the beginning, Genesis, and read straight through to Revelation, but I’ve lately changed my mind. So many people will start in Genesis, enjoy Genesis and Exodus, which are basically just stories about creation, judgment in the global flood, and God’s chosen people. They then get to Leviticus, Numbers, and Deuteronomy (the details of the law including the intricate ceremonial law) and they lose momentum in the tedium. I do think every Christian eventually needs to read and know these books, but I think it is OK to skip some or all of them the first time through. They will mean more once you have read the whole Bible. If you are only going to read one, I’d probably read Deuteronomy.
I also know that it can be helpful for some people to mix up their reading. I used to have bookmarks with daily readings, so I read some Old Testament, some Psalms/Proverbs, and some New Testament. There was one other category, but I can’t remember what it was. This way, you get a little of several different types of passages. My bookmarks burnt up when my house burnt down, and when I went searching online for something similar, I found a few comparable reading plans, but not the one I used and really liked. Here are a couple that looked good, but that I haven’t used myself: here. here. here. These plans look good, but don’t have the convenient bookmarks: here. here. For those who like reading online or on your phone (which isn’t me), I found this one. It looks nice, and I’ve just started using it despite the fact that I prefer a Bible I can hold, turn the pages of, and write in. It has a chronological Old Testament passage and a New Testament reading that relates in some way to the Old Testament passage. It also links to some maps that let you see where the places mentioned in the passages are located, and questions to get you to think about what you read. The one downside is that it only lets you attach notes if you create a group. I do really like the idea that you can set up a group to read through the Bible and share your comments and thoughts, but I haven’t tried the feature.
Another thing I’ve found very helpful is a chronological Bible. It is handy having things in the order they happened and the different passages that cover an event (such as from each gospel or 1/2 Samuel vs 1/2 Chronicles or Leviticus vs Deuteronomy, etc.) right by each other. It is handy to see what actually comes before what and the way different writers describe the same event, since different authors include different details. I think reading a chronological Bible has helped me see more links between passages and get a better understanding of the Bible as a whole. I am getting close to finishing my second reading through. I don’t know if one chronological Bible is significantly better than another, but this is the one I am reading right now.
Another tactic I have used, when I started getting bogged down reading through the Bible again and again was to study one book of the Bible in depth. It worked best reading one of the shorter books. I’d read through the book repeatedly for a month, usually in 1-3 days. I’d follow the links in my study Bible to related passages or study where some of the words were used in other parts of the Bible. I’d get so I really knew the book well.
One thing that has helped me with my Bible study is writing in my Bible. The first time I wrote, it felt almost sacrilegious, but it helps me to organize my thoughts. I’ll write what I get out of it, how it relates to another passage, etc. I’ll underline or circle key words or sentences. These are then useful when I read through again and may see something different, but it reminds me of my growth and learning. I’ve actually thought I really need to get a new wide margin Bible to have more room for my notes. I can write really small and have an ultrafine point pen, so I can write even smaller than the print. The problem is my eyes aren’t so good and I now have trouble reading my tiny print. I can’t read my own writing without my reading glasses.
Bible reading starts getting really exciting when you get to know the Bible well enough that you start seeing the links between different passages and different books. Suddenly it opens up a whole new level of understanding. It is like an exciting scavenger hunt finding how all of the ideas in the Bible relate to each other and clarify each other in one whole.
Historically I’ve hated writing. The thought of writing a journal or something sounded like torture, but I have truly found organizing my thoughts in an essay, really helps my understanding of the Scriptures in ways that reading and thinking about it never did. Whether anyone reads my writings or not, I’ll continue writing because it is a blessing to me. I have grown immensely in my understanding of the Bible by writing out a reasoned argument for what I believe the Bible is saying. I’ve also done in depth study and realized that I was not completely right in my understanding and had to adjust my understanding of Scripture.
but sanctify Christ as Lord in your hearts, always being ready to make a defense to everyone who asks you to give an account for the hope that is in you, yet with gentleness and reverence (1 Peter 3:15)
As Christians, we are supposed to be ready to make a defense. Reading, studying, and knowing the Bible is the only true way to be ready. I made a necklace with the first letter in each word in this verse to help me memorize it.
Memorizing God’s word is also well worth the effort. I’ll admit that I would be terrible at following my own advice here, except that I have a special-needs son who is in Awana and needs help memorizing 1-5 verses a week. The only way either of us can pull it off is that I make a song for each 1-3 verse passage that he has to memorize. We then sing them together until we know them. I debated whether to share my songs. They are not well done. The version uploaded is my first rough attempt at each song, and we usually fine-tune them over the week, but I don’t get around to rerecording them. I also have, at best, an OK voice. Still, I decided to share in case these songs can help someone else with their Bible memorization. Hopefully I am not embarrassing myself too much.
Another thing that has helped me is finding Open Bible’s geocoding site. When reading Bible passages, there are frequent references to places that are unfamiliar, either because they are far away or because the ancient names, rather than modern names, are used. This site allows you to see on a map (satellite & modern country formats) where places are located and how they relate to each other. I’ve especially found this useful with end times prophecy because the Bible describes places with their ancient, not modern names.
In addition to my direct Bible study, I also listen daily to sermons and Christian podcasts, and read Christian Substack posts and Christian commentaries. All help my understanding of the Bible. FYI, sermons, podcasts, blogs, and commentaries are a risk if you don’t know the Bible and aren’t being like the Bereans, who searched “… the Scriptures daily to see whether these things were so.” (Acts 17:11) There are so many false or erroneous teachers that you have to be very careful listening to people, and never put the opinions of men above the word of God. Of course, it is possible to learn a great deal from godly teachers. Sadly, even the best Bible teachers seem to have at least one area of error. For example, I love listening to R.C. Sproul’s “Renewing Your Mind” podcast, but his teaching on the first 11 chapters of Genesis is a bit “squishy” (not outright wrong, but not holding firmly enough to the Bible), and I’d say his end-times teaching is flat out wrong. Everything I’ve heard him teach between Genesis 12 and Jude is amazing and very true to the Bible. That is where he spends almost all of his teaching time, so I can highly recommend his podcast. Without a firm foundation in the Bible, it is not possible to recognize false teaching, especially when taught by someone who is very good in most respects.
I hope this is useful to people to help them get into the habit of regular Bible reading and seeing how exciting Bible study can be.
May God give you a hunger for and understanding of His word. May you fill your heart and mind with the word of God so it overflows and is seen by all around you.
Trust Jesus.
-
@ a39d19ec:3d88f61e
2025-04-22 12:44:42
The debate over migration, border security, and deportations in Germany is usually conducted emotionally. Anyone who demands that illegal immigrants be deported is frequently met with the accusation of racism. Yet this accusation is not only factually unfounded, it turns reality on its head: in truth, it is precisely those who suspect a racist motivation behind every demand for legal certainty who themselves judge first and foremost by skin color, origin, or nationality.

The law stands above emotions

Germany is a constitutional state. That means rules cannot be interpreted according to gut feeling or the political mood of the day, but must rest on clear legal foundations. One of these principles is anchored in Article 16a of the Basic Law (Grundgesetz). It states:

"Paragraph 1 [the right of asylum] may not be invoked by anyone who enters from a member state of the European Communities or from another third country in which the application of the Convention Relating to the Status of Refugees and of the European Convention on Human Rights is assured."

This means that anyone who enters Germany via safe third countries has no claim to asylum. Whoever stays nonetheless is residing in the country illegally and is subject to the applicable rules on repatriation. The demand for deportations is therefore nothing other than a demand that law and statute be upheld.

Inverting the concept of racism

Whoever argues that German asylum and residency law should be strictly enforced, while making no distinction by origin or skin color, is acting neutrally. Those, however, who see a racist undertone in such a demand for the rule of law are projecting their own thought patterns onto others: they assume the debate is being conducted solely along ethnic, racial, or national lines – and precisely that is a racist way of thinking.

Someone who criticizes illegal immigration does so not because the origin of the people in question interests him, but because he respects the constitutional state. Conversely, someone who smells racism behind this criticism evidently perceives first and foremost the "race" or origin of the persons concerned, and reduces them to it.

Financial burden instead of ideological debate

Alongside the legal dimension there is also an economic one. The German welfare state rests on a principle of solidarity: citizens pay into the system in order to support one another in difficult times. This prosperity was built up over generations by those who have long lived here. The priority is therefore to distribute the available funds first among those who contribute to sustaining the system through taxes, social contributions, and work – not among those who place themselves in the system through illegal entry and without economic contribution of their own.

This is not an ideological question but a purely economic calculation. A welfare system can only function sustainably if it is not burdened without limit. If Germany had no clear rules on immigration and deportation, this would inevitably lead to the overloading of the welfare state – with negative consequences for everyone.

Social patriotism

Another important aspect is protecting the labor of the generations who laboriously rebuilt Germany after the Second World War. While it is often emphasized that Germans may claim no moral inheritance from the time before 1945 – apart from responsibility for the Holocaust – it is all the more significant to respect the new inheritance built after 1945, which rests on diligence, discipline, and hard work. Reconstruction was a collective achievement of German people, and its fruits must not be distributed carelessly; they should benefit first and foremost those who helped create this foundation or have carried it across generations.

The rule of law is not negotiable

Whoever advocates a consistent deportation policy does so not from racist motives, but out of respect for the rule of law and the economic foundations of the country. The accusation of racism in this context is therefore not only false, but exposes a selective perception along racial criteria on the part of those who raise it.
-
@ 557c650b:b04c6817
2025-05-07 19:46:47
A mental exercise
-
@ a39d19ec:3d88f61e
2025-03-18 17:16:50
Now that the German federal regime has decided on Germany's ruin – which will very likely be "financed" with the tool of money printing – so many thoughts about the expansion of the money supply came to me that, for once, I have written them down.

From a classical economic perspective, expanding the money supply always leads to rising prices, because more money in circulation meets a limited quantity of goods. This can be analyzed in several steps:

1. The quantity theory of money

The classical equation of the quantity theory of money is:

M • V = P • Y

where:

- M is the money supply,
- V is the velocity of money,
- P is the price level,
- Y is real economic output (GDP).

If M rises while V and Y remain constant, P must rise – that is, inflation.
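A minimal sketch of the identity in code, assuming constant velocity and output and purely illustrative numbers:

```
package main

import "fmt"

// priceLevel solves M * V = P * Y for the price level P.
func priceLevel(m, v, y float64) float64 {
	return m * v / y
}

func main() {
	v, y := 1.5, 1000.0 // velocity and real output held constant
	for _, m := range []float64{500, 600, 720} {
		// each 20% expansion of M raises P by the same proportion
		fmt.Printf("M=%6.0f  =>  P=%.3f\n", m, priceLevel(m, v, y))
	}
}
```

Under these assumptions the price level tracks the money supply one-for-one, which is exactly the claim the equation makes when V and Y are held fixed.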
2. The quantity of goods remains limited

The quantity of goods and services actually produced usually grows only slowly compared with the expansion of the money supply. If the money supply grows faster than the quantity of goods produced, more money is available for the same amount of wares – prices rise.

3. Expectation effects and speculation

If companies and households expect more money to be in circulation, because central planning willed it so, they can anticipate rising prices. Companies raise their prices in advance, and employees demand higher wages. This can set off a self-reinforcing spiral.

4. The international perspective

An increased money supply can devalue the currency if other countries keep their monetary policy stable. A weaker currency makes imports more expensive, which in turn drives prices upward.

5. Criticism of the pure money-supply theory

For completeness, it must be mentioned that most modern economists, arguing on behalf of the state, hold that inflation depends not only on the money supply but also on the demand for money (e.g., in an economic crisis). Nevertheless, historical experience shows that an uncontrolled expansion of the money supply always leads to rising prices in the long run, as in the hyperinflation of the Weimar Republic or in Zimbabwe.
-
@ 0d97beae:c5274a14
2025-01-11 16:52:08
This article hopes to complement Lyn Alden's video on YouTube: https://www.youtube.com/watch?v=jk_HWmmwiAs
The reason why we have broken money
Before the invention of key technologies such as the printing press and electronic communications - even ones as early as Morse code transmitters - gold had won the competition for best medium of money around the world.
In fact, it was not just gold by itself that became money; rulers and world leaders developed coins in order to help the economy grow. Gold nuggets were not as easy to transact with as coins with specific imprints and denominated sizes.
However, these modern technologies created massive efficiencies that allowed us to communicate and perform services more efficiently and much faster, yet the medium of money could not benefit from these advancements. Gold was heavy, slow and expensive to move globally, even though requesting and performing services globally did not have this limitation anymore.
Banks took the initiative and created derivatives of gold: paper and electronic money. These new currencies allowed the economy to continue to grow and evolve, but not without a dark side. Today, no currency is denominated in gold at all; money is backed by nothing, and its inherent value - the paper it is printed on - is worthless too.
Banks and governments eventually transitioned from a money derivative to a system of debt that could be co-opted and controlled for political and personal reasons. Our money today is broken and is the cause of more expensive, poorer quality goods in the economy, a larger and ever growing wealth gap, and many of the follow-on problems that have come with it.
Bitcoin overcomes the "transfer of hard money" problem
Just as gold coins were created by man, Bitcoin too is a technology created by man. Bitcoin, however, is a much more profound invention - possibly more of a discovery than an invention, in fact. Bitcoin has proven to be unbreakable and incorruptible, and it has upheld its ability to keep its units scarce, inalienable, and counterfeit-proof through the nature of its own design.
Since Bitcoin is a digital technology, it can be transferred across international borders almost as quickly as information itself. It therefore severely reduces the need for a derivative to be used to represent money to facilitate digital trade. This means that as the currency we use today continues to fare poorly for many people, bitcoin will continue to stand out as hard money, that just so happens to work as well, functionally, along side it.
Bitcoin will also always be available to anyone who wishes to earn it directly; even China is unable to restrict its citizens from accessing it. The dollar has traditionally become the currency for people who discover that their local currency is unsustainable. Even when the dollar has become illegal to use, it is simply used privately and unofficially. However, because bitcoin does not require you to trade it at a bank in order to use it across borders and across the web, Bitcoin will continue to be a viable escape hatch until we one day hit some critical mass where the world has simply adopted Bitcoin globally and everyone else must adopt it to survive.
Bitcoin has not yet proven that it can support the world at scale. However it can only be tested through real adoption, and just as gold coins were developed to help gold scale, tools will be developed to help overcome problems as they arise; ideally without the need for another derivative, but if necessary, hopefully with one that is more neutral and less corruptible than the derivatives used to represent gold.
Bitcoin blurs the line between commodity and technology
Bitcoin is a technology, it is a tool that requires human involvement to function, however it surprisingly does not allow for any concentration of power. Anyone can help to facilitate Bitcoin's operations, but no one can take control of its behaviour, its reach, or its prioritisation, as it operates autonomously based on a pre-determined, neutral set of rules.
At the same time, its built-in incentive mechanism ensures that people do not have to operate Bitcoin out of the goodness of their hearts. Even though the system cannot be co-opted holistically, it will not stop operating while there are people motivated to trade their time and resources to keep it running and to earn from others' transaction fees. Although it requires humans to operate it, it remains both neutral and sustainable.
Never before have we developed or discovered a technology that could not be co-opted and used by one person or faction against another. Due to this nature, Bitcoin's units are often described as a commodity; they cannot be usurped or virtually cloned, and they cannot be affected by political biases.
The dangers of derivatives
A derivative is something created, designed or developed to represent another thing in order to solve a particular complication or problem. For example, paper and electronic money was once a derivative of gold.
In the case of Bitcoin, if you cannot link your units of bitcoin to an "address" that you personally hold a cryptographically secure key to, then you very likely have a derivative of bitcoin, not bitcoin itself. If you buy bitcoin on an online exchange and do not withdraw the bitcoin to a wallet that you control, then you legally own an electronic derivative of bitcoin.
Bitcoin is a new technology. It will have a learning curve, and it will take time for humanity to learn how to comprehend, authenticate, and take control of bitcoin collectively. Having said that, many people all over the world are already using and relying on Bitcoin natively. Many will need to find the need or the desire for a neutral money like bitcoin - and to have been burned by derivatives of it - before they start to understand the difference between the two. Eventually, it will become an essential part of what we regard as common sense.
Learn for yourself
If you wish to learn more about how to handle bitcoin and avoid derivatives, you can start by searching online for tutorials about "Bitcoin self custody".
There are many options available, some more practical for you, and some more practical for others. Don't spend too much time trying to find the perfect solution; practice and learn. You may make mistakes along the way, so be careful not to experiment with large amounts of your bitcoin as you explore new ideas and technologies. This is like learning anything else, such as riding a bicycle: you are sure to fall a few times and scuff the frame, so don't buy a high-performance racing bike while you're still learning to balance.
-
@ 37fe9853:bcd1b039
2025-01-11 15:04:40
yoyoaa
-
@ 62033ff8:e4471203
2025-01-11 15:00:24
Of the content I've indexed, the kind=1 portion is, to be honest, not high quality. So I added kind=30023 long-form articles, but those are updated too rarely, and even across multiple relays there aren't many long-form posts.
If Nostr search is to produce value, it needs high-quality articles and news. Right now there are also many bot posts that do nothing but waste space.
https://www.duozhutuan.com currently serves as raw material for search engines to index. There is no UI for human browsing, so it looks rough. I have no plans to build a microblogging web client; there are already too many of those.
I think what the Nostr community still needs to solve is applications. If it's just microblogging, that feels like a long shot.
Fortunately, there are site builders like npub.pro, which I find interesting.
yakihonne's smart widgets are also interesting.
I've been using my own TaskQ5, a distributed task system. It works quite well.
-
@ 23b0e2f8:d8af76fc
2025-01-08 18:17:52
Required

- An Android phone you no longer use (its camera must still work).
- A microSD card (optional, used only once).
- A device for tracking your funds (you probably already have one).

A few things you need to know

- The device will serve as a signer. Any movement of funds will only be carried out after being signed by it.
- The microSD card will be used to transfer the Electrum APK and to guarantee that the device has no contact with external data sources after it is wiped. It is, however, possible to use a USB cable for the same purpose.
- The idea is to keep your private key on an offline device that stays powered off 99% of the time. You can track your funds on another, internet-connected device, such as your phone or personal computer.

The tutorial is divided into two modules:

- Module 1 - Creating a cold wallet/signer.
- Module 2 - Setting up a device to view your funds and signing transactions with the signer.

In the end, we will have:

- A cold wallet that also serves as a signer.
- A device for tracking the wallet's funds.

Module 1 - Creating a cold wallet/signer

- Download the Electrum APK from the downloads tab at https://electrum.org/. Feel free to verify the software's signatures to guarantee its authenticity (a hedged verification sketch follows this list).
- Format the microSD card and copy the Electrum APK onto it. If you don't have a microSD card, skip this step.
- Remove the SIM card and accessories from the device that will serve as the signer, factory-reset it, and wait for it to boot.
- During setup, skip the Wi-Fi connection step and reject all connection requests. After that, you can uninstall unnecessary apps, since you will only need Electrum. Make sure Wi-Fi, Bluetooth, and mobile data are turned off. You can also enable airplane mode. (Fun fact: some people choose to open the device and damage the Wi-Fi/Bluetooth antenna, making those features impossible to use.)
- Insert the microSD card with the Electrum APK into the device and install it. You will need to allow installation from unofficial sources.
- In Electrum, create a standard wallet and generate your seed words. Write them down somewhere safe. If anything happens to your signer, these words will give you access to your funds again. (This is where your personal backup method comes in.)
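As a hedged sketch of what that signature check can look like from a desktop - the file and key names below are assumptions, so take the real ones from electrum.org:

```
# import the Electrum developer's signing key (confirm the key on electrum.org first)
gpg --import ThomasV.asc

# check the APK against its detached .asc signature
gpg --verify Electrum-4.5.8.apk.asc Electrum-4.5.8.apk
```

A "Good signature" result means the APK you are about to sideload is the one the developer published.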
Module 2 - Setting up a device to view your funds and signing transactions with the signer

- Creating a watch-only wallet on another device, such as your phone or personal computer, is quite simple. For this tutorial we will use another Android smartphone running Electrum. Install Electrum from the downloads tab at https://electrum.org/ or from the Play Store itself. (WARNING: Electrum does not officially exist for iPhone. Be suspicious if you find one.)
- After installing Electrum, create a standard wallet, but this time choose the option Use a master key.
- Now, on the signer created in the first module, export your public key: go to Wallet > Wallet details > Share master public key.
- Scan the generated public-key QR code with the watch-only device. It will then be able to track your funds, but without permission to move them.
- To receive funds, send Bitcoin to one of the addresses generated by your wallet: Wallet > Addresses/Coins.
- To move funds, create a transaction on the watch-only device. Since it does not hold the private key, the transaction must be signed by the signer.
- On the signer, scan the unsigned transaction, confirm the details, sign it, and share it. Another QR code will be generated, this time containing the signed transaction.
- On the watch-only device, scan the signed transaction's QR code and broadcast it to the network.
Conclusion

Upsides of this setup:

- Simplicity: all you need is an old Android device.
- Flexibility: it works as a great cold wallet, ideal for holders.

Downsides of this setup:

- Standardization: it does not use BIP-39-standard seeds, so you will always need to use Electrum.
- Interface: Electrum's appearance may seem dated to some users.

At this point, we have a cold wallet that also signs transactions. The signing flow becomes: generate an unsigned transaction > scan the unsigned transaction's QR code > check and sign the transaction on the signer > generate the signed transaction's QR code > scan the signed transaction with any other device that can broadcast it to the network.

As some of you may know, a signed Bitcoin transaction is practically impossible to forge. In a catastrophic scenario, even without internet access, you can pass that signed transaction to someone with access to the network through any means of communication. Even though we hope that day never comes, this setup makes the practice possible.
-
@ 207ad2a0:e7cca7b0
2025-01-07 03:46:04
Quick context: I wanted to check out Nostr's longform posts and this blog post seemed like a good one to try and mirror. It's originally from my free-to-read/share attempt to write a novel, but this post here is completely standalone - it just describes how I used AI image generation to make a small piece of the work.
Hold on, put your pitchforks down - outside of using Grammarly & Emacs for grammatical corrections, not a single character was generated or modified by computers, with a non-insignificant portion of my first draft originating on pen & paper. No AI is ~~weird and crazy~~ imaginative enough to write like I do. The only successful AI contribution you'll find is a single image, the map, which I heavily edited. This post will go over how I generated and modified an image using AI, which I believe brought some value to the work, and cover a few quick thoughts about AI towards the end.
Let's be clear, I can't draw, but I wanted a map which I believed would improve the story I was working on. After getting abysmal results by prompting AI with text only I decided to use "Diffuse the Rest," a Stable Diffusion tool that allows you to provide a reference image + description to fine tune what you're looking for. I gave it this Microsoft Paint looking drawing:
and after a number of outputs, selected this one to work on:
The image is way better than the one I provided, but had I used it as is, I still feel it would have decreased the quality of my work instead of increasing it. After firing up Gimp I cropped out the top and bottom, expanded the ocean and separated the landmasses, then copied the top right corner of the large landmass to replace the bottom left that got cut off. Now we've got something that looks like concept art: not horrible, and gets the basic idea across, but it's still due for a lot more detail.
The next thing I did was add some texture to make it look more map like. I duplicated the layer in Gimp and applied the "Cartoon" filter to both for some texture. The top layer had a much lower effect strength to give it a more textured look, while the lower layer had a higher effect strength that looked a lot like mountains or other terrain features. Creating a layer mask allowed me to brush over spots to display the lower layer in certain areas, giving it some much needed features.
At this point I'd made it to where I felt it may improve the work instead of detracting from it - at least after labels and borders were added, but the colors seemed artificial and out of place. Luckily, however, this is when PhotoFunia could step in and apply a sketch effect to the image.
At this point I was pretty happy with how it was looking, it was close to what I envisioned and looked very visually appealing while still being a good way to portray information. All that was left was to make the white background transparent, add some minor details, and add the labels and borders. Below is the exact image I wound up using:
Overall, I'm very satisfied with how it turned out, and if you're working on a creative project, I'd recommend attempting something like this. It's not a central part of the work, but it improved the chapter a fair bit, and was doable despite lacking the talent and not intending to allocate a budget to my making of a free to read and share story.
The AI Generated Elephant in the Room
If you've read my non-fiction writing before, you'll know that I think AI will find its place around the skill floor as opposed to the skill ceiling. As you saw with my input, I have absolutely zero drawing talent, but with some elbow grease and an existing creative direction before and after generating an image I was able to get something well above what I could have otherwise accomplished. Outside of the lowest common denominators like stock photos for the sole purpose of a link preview being eye catching, however, I doubt AI will be wholesale replacing most creative works anytime soon. I can assure you that I tried numerous times to describe the map without providing a reference image, and if I used one of those outputs (or even just the unedited output after providing the reference image) it would have decreased the quality of my work instead of improving it.
I'm going to go out on a limb and expect that AI image, text, and video is all going to find its place in slop & generic content (such as AI generated slop replacing article spinners and stock photos respectively) and otherwise be used in a supporting role for various creative endeavors. For people working on projects like I'm working on (e.g. intended budget $0) it's helpful to have an AI capable of doing legwork - enabling projects to exist or be improved in ways they otherwise wouldn't have. I'm also guessing it'll find its way into more professional settings for grunt work - think a picture frame or fake TV show that would exist in the background of an animated project - likely a detail most people probably wouldn't notice, but that would save the creators time and money and/or allow them to focus more on the essential aspects of said work. Beyond that, as I've predicted before: I expect plenty of emails will be generated from a short list of bullet points, only to be summarized by the recipient's AI back into bullet points.
I will also make a prediction counter to what seems mainstream: AI is about to peak for a while. The start of AI image generation was with Google's DeepDream in 2015 - image recognition software that could be run in reverse to "recognize" patterns where there were none, effectively generating an image from digital noise or an unrelated image. While I'm not an expert by any means, I don't think we're too far off from that a decade later, just using very fine tuned tools that develop more coherent images. I guess that we're close to maxing out how efficiently we're able to generate images and video in that manner, and the hard caps on how much creative direction we can have when using AI - as well as the limits to how long we can keep it coherent (e.g. long videos or a chronologically consistent set of images) - will prevent AI from progressing too far beyond what it is currently unless/until another breakthrough occurs.
-
@ e6817453:b0ac3c39
2025-01-05 14:29:17The Rise of Graph RAGs and the Quest for Data Quality
As we enter a new year, it’s impossible to ignore the boom of retrieval-augmented generation (RAG) systems, particularly those leveraging graph-based approaches. The previous year saw a surge in advancements and discussions about Graph RAGs, driven by their potential to enhance large language models (LLMs), reduce hallucinations, and deliver more reliable outputs. Let’s dive into the trends, challenges, and strategies for making the most of Graph RAGs in artificial intelligence.
Booming Interest in Graph RAGs
Graph RAGs have dominated the conversation in AI circles. With new research papers and innovations emerging weekly, it’s clear that this approach is reshaping the landscape. These systems, especially those developed by tech giants like Microsoft, demonstrate how graphs can:
- Enhance LLM Outputs: By grounding responses in structured knowledge, graphs significantly reduce hallucinations.
- Support Complex Queries: Graphs excel at managing linked and connected data, making them ideal for intricate problem-solving.
Conferences on linked and connected data have increasingly focused on Graph RAGs, underscoring their central role in modern AI systems. However, the excitement around this technology has brought critical questions to the forefront: How do we ensure the quality of the graphs we’re building, and are they genuinely aligned with our needs?
Data Quality: The Foundation of Effective Graphs
A high-quality graph is the backbone of any successful RAG system. Constructing these graphs from unstructured data requires attention to detail and rigorous processes. Here’s why:
- Richness of Entities: Effective retrieval depends on graphs populated with rich, detailed entities.
- Freedom from Hallucinations: Poorly constructed graphs amplify inaccuracies rather than mitigating them.
Without robust data quality, even the most sophisticated Graph RAGs become ineffective. As a result, the focus must shift to refining the graph construction process. Improving data strategy and ensuring meticulous data preparation is essential to unlock the full potential of Graph RAGs.
Hybrid Graph RAGs and Variations
While standard Graph RAGs are already transformative, hybrid models offer additional flexibility and power. Hybrid RAGs combine structured graph data with other retrieval mechanisms, creating systems that:
- Handle diverse data sources with ease.
- Offer improved adaptability to complex queries.
Exploring these variations can open new avenues for AI systems, particularly in domains requiring structured and unstructured data processing.
Ontology: The Key to Graph Construction Quality
Ontology — defining how concepts relate within a knowledge domain — is critical for building effective graphs. While this might sound abstract, it’s a well-established field blending philosophy, engineering, and art. Ontology engineering provides the framework for:
- Defining Relationships: Clarifying how concepts connect within a domain.
- Validating Graph Structures: Ensuring constructed graphs are logically sound and align with domain-specific realities.
Traditionally, ontologists — experts in this discipline — have been integral to large enterprises and research teams. However, not every team has access to dedicated ontologists, leading to a significant challenge: How can teams without such expertise ensure the quality of their graphs?
How to Build Ontology Expertise in a Startup Team
For startups and smaller teams, developing ontology expertise may seem daunting, but it is achievable with the right approach:
- Assign a Knowledge Champion: Identify a team member with a strong analytical mindset and give them time and resources to learn ontology engineering.
- Provide Training: Invest in courses, workshops, or certifications in knowledge graph and ontology creation.
- Leverage Partnerships: Collaborate with academic institutions, domain experts, or consultants to build initial frameworks.
- Utilize Tools: Introduce ontology development tools like Protégé, OWL, or SHACL to simplify the creation and validation process.
- Iterate with Feedback: Continuously refine ontologies through collaboration with domain experts and iterative testing.
So it is not always affordable for a startup to have a dedicated ontologist or knowledge engineer on the team, but you can involve consultants or develop barefoot experts.
You can read about barefoot experts in my article:
Even startups can achieve robust and domain-specific ontology frameworks by fostering in-house expertise.
How to Find or Create Ontologies
For teams venturing into Graph RAGs, several strategies can help address the ontology gap:
1. Leverage Existing Ontologies: Many industries and domains already have open ontologies. For instance:
   - Public Knowledge Graphs: Resources like Wikipedia’s graph offer a wealth of structured knowledge.
   - Industry Standards: Enterprises such as Siemens have invested in creating and sharing ontologies specific to their fields.
   - Business Framework Ontology (BFO): A valuable resource for enterprises looking to define business processes and structures.
2. Build In-House Expertise: If budgets allow, consider hiring knowledge engineers or providing team members with the resources and time to develop expertise in ontology creation.
3. Utilize LLMs for Ontology Construction: Interestingly, LLMs themselves can act as a starting point for ontology development:
   - Prompt-Based Extraction: LLMs can generate draft ontologies by leveraging their extensive training on graph data.
   - Domain Expert Refinement: Combine LLM-generated structures with insights from domain experts to create tailored ontologies.
Parallel Ontology and Graph Extraction
An emerging approach involves extracting ontologies and graphs in parallel. While this can streamline the process, it presents challenges such as:
- Detecting Hallucinations: Differentiating between genuine insights and AI-generated inaccuracies.
- Ensuring Completeness: Ensuring no critical concepts are overlooked during extraction.
Teams must carefully validate outputs to ensure reliability and accuracy when employing this parallel method.
LLMs as Ontologists
While traditionally dependent on human expertise, ontology creation is increasingly supported by LLMs. These models, trained on vast amounts of data, possess inherent knowledge of many open ontologies and taxonomies. Teams can use LLMs to:
- Generate Skeleton Ontologies: Prompt LLMs with domain-specific information to draft initial ontology structures.
- Validate and Refine Ontologies: Collaborate with domain experts to refine these drafts, ensuring accuracy and relevance.
However, for validation and graph construction, formal tools such as OWL, SHACL, and RDF should be prioritized over LLMs to minimize hallucinations and ensure robust outcomes.
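As a minimal sketch of what such formal validation can look like - the ex: namespace and the Person/Organization domain below are invented for illustration - a SHACL shape written in Turtle might be:

```
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .

# every ex:Person in the graph must work for at least one ex:Organization
ex:PersonShape
    a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [
        sh:path ex:worksFor ;
        sh:class ex:Organization ;
        sh:minCount 1 ;
    ] .
```

A SHACL validator can then flag extracted triples that violate the shape - say, a person with no employer - catching exactly the kind of structural hallucination an LLM-built graph can introduce.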
Final Thoughts: Unlocking the Power of Graph RAGs
The rise of Graph RAGs underscores a simple but crucial correlation: improving graph construction and data quality directly enhances retrieval systems. To truly harness this power, teams must invest in understanding ontologies, building quality graphs, and leveraging both human expertise and advanced AI tools.
As we move forward, the interplay between Graph RAGs and ontology engineering will continue to shape the future of AI. Whether through adopting existing frameworks or exploring innovative uses of LLMs, the path to success lies in a deep commitment to data quality and domain understanding.
Have you explored these technologies in your work? Share your experiences and insights — and stay tuned for more discussions on ontology extraction and its role in AI advancements. Cheers to a year of innovation!
-
@ a4a6b584:1e05b95b
2025-01-02 18:13:31
The Four-Layer Framework
Layer 1: Zoom Out
Start by looking at the big picture. What’s the subject about, and why does it matter? Focus on the overarching ideas and how they fit together. Think of this as the 30,000-foot view—it’s about understanding the "why" and "how" before diving into the "what."
Example: If you’re learning programming, start by understanding that it’s about giving logical instructions to computers to solve problems.
- Tip: Keep it simple. Summarize the subject in one or two sentences and avoid getting bogged down in specifics at this stage.
Once you have the big picture in mind, it’s time to start breaking it down.
Layer 2: Categorize and Connect
Now it’s time to break the subject into categories—like creating branches on a tree. This helps your brain organize information logically and see connections between ideas.
Example: Studying biology? Group concepts into categories like cells, genetics, and ecosystems.
- Tip: Use headings or labels to group similar ideas. Jot these down in a list or simple diagram to keep track.
With your categories in place, you’re ready to dive into the details that bring them to life.
Layer 3: Master the Details
Once you’ve mapped out the main categories, you’re ready to dive deeper. This is where you learn the nuts and bolts—like formulas, specific techniques, or key terminology. These details make the subject practical and actionable.
Example: In programming, this might mean learning the syntax for loops, conditionals, or functions in your chosen language.
- Tip: Focus on details that clarify the categories from Layer 2. Skip anything that doesn’t add to your understanding.
Now that you’ve mastered the essentials, you can expand your knowledge to include extra material.
Layer 4: Expand Your Horizons
Finally, move on to the extra material—less critical facts, trivia, or edge cases. While these aren’t essential to mastering the subject, they can be useful in specialized discussions or exams.
Example: Learn about rare programming quirks or historical trivia about a language’s development.
- Tip: Spend minimal time here unless it’s necessary for your goals. It’s okay to skim if you’re short on time.
Pro Tips for Better Learning
1. Use Active Recall and Spaced Repetition
Test yourself without looking at notes. Review what you’ve learned at increasing intervals—like after a day, a week, and a month. This strengthens memory by forcing your brain to actively retrieve information.
2. Map It Out
Create visual aids like diagrams or concept maps to clarify relationships between ideas. These are particularly helpful for organizing categories in Layer 2.
3. Teach What You Learn
Explain the subject to someone else as if they’re hearing it for the first time. Teaching exposes any gaps in your understanding and helps reinforce the material.
4. Engage with LLMs and Discuss Concepts
Take advantage of tools like ChatGPT or similar large language models to explore your topic in greater depth. Use these tools to:
- Ask specific questions to clarify confusing points.
- Engage in discussions to simulate real-world applications of the subject.
- Generate examples or analogies that deepen your understanding.

Tip: Use LLMs as a study partner, but don’t rely solely on them. Combine these insights with your own critical thinking to develop a well-rounded perspective.
Get Started
Ready to try the Four-Layer Method? Take 15 minutes today to map out the big picture of a topic you’re curious about—what’s it all about, and why does it matter? By building your understanding step by step, you’ll master the subject with less stress and more confidence.
-
@ 04c915da:3dfbecc9
2025-03-13 19:39:28
In much of the world, it is incredibly difficult to access U.S. dollars. Local currencies are often poorly managed and riddled with corruption. Billions of people demand a more reliable alternative. While the dollar has its own issues of corruption and mismanagement, it is widely regarded as superior to the fiat currencies it competes with globally. As a result, Tether has found massive success providing low cost, low friction access to dollars. Tether claims 400 million total users, is on track to add 200 million more this year, processes 8.1 million transactions daily, and facilitates $29 billion in daily transfers. Furthermore, their estimates suggest nearly 40% of users rely on it as a savings tool rather than just a transactional currency.
Tether’s rise has made the company a financial juggernaut. Last year alone, Tether raked in over $13 billion in profit, with a lean team of less than 100 employees. Their business model is elegantly simple: hold U.S. Treasuries and collect the interest. With over $113 billion in Treasuries, Tether has turned a straightforward concept into a profit machine.
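As rough, hedged arithmetic (the ~5% average yield is an assumption, not a reported figure): $113 billion × 5% ≈ $5.7 billion per year in interest income alone, before any returns from other holdings.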
Tether’s success has resulted in many competitors eager to claim a piece of the pie. This has triggered a massive venture capital grift cycle in USD tokens, with countless projects vying to dethrone Tether. Due to Tether’s entrenched network effect, these challengers face an uphill battle with little realistic chance of success. Most educated participants in the space likely recognize this reality but seem content to perpetuate the grift, hoping to cash out by dumping their equity positions on unsuspecting buyers before they realize the reality of the situation.
Historically, Tether’s greatest vulnerability has been U.S. government intervention. For over a decade, the company operated offshore with few allies in the U.S. establishment, making it a major target for regulatory action. That dynamic has shifted recently and Tether has seized the opportunity. By actively courting U.S. government support, Tether has fortified their position. This strategic move will likely cement their status as the dominant USD token for years to come.
While undeniably a great tool for the millions of users that rely on it, Tether is not without flaws. As a centralized, trusted third party, it holds the power to freeze or seize funds at its discretion. Corporate mismanagement or deliberate malpractice could also lead to massive losses at scale. In their goal of mitigating regulatory risk, Tether has deepened ties with law enforcement, mirroring some of the concerns of potential central bank digital currencies. In practice, Tether operates as a corporate CBDC alternative, collaborating with authorities to surveil and seize funds. The company proudly touts partnerships with leading surveillance firms and its own data reveals cooperation in over 1,000 law enforcement cases, with more than $2.5 billion in funds frozen.
The global demand for Tether is undeniable and the company’s profitability reflects its unrivaled success. Tether is owned and operated by bitcoiners and will likely continue to push forward strategic goals that help the movement as a whole. Recent efforts to mitigate the threat of U.S. government enforcement will likely solidify their network effect and stifle meaningful adoption of rival USD tokens or CBDCs. Yet, for all their achievements, Tether is simply a worse form of money than bitcoin. Tether requires trust in a centralized entity, while bitcoin can be saved or spent without permission. Furthermore, Tether is tied to the value of the US Dollar which is designed to lose purchasing power over time, while bitcoin, as a truly scarce asset, is designed to increase in purchasing power with adoption. As people awaken to the risks of Tether’s control, and the benefits bitcoin provides, bitcoin adoption will likely surpass it.
-
@ fe32298e:20516265
2024-12-16 20:59:13
Today I learned how to install NVApi to monitor my GPUs in Home Assistant.
NVApi is a lightweight API designed for monitoring NVIDIA GPU utilization and enabling automated power management. It provides real-time GPU metrics, supports integration with tools like Home Assistant, and offers flexible power management and PCIe link speed management based on workload and thermal conditions.
- GPU Utilization Monitoring: Utilization, memory usage, temperature, fan speed, and power consumption.
- Automated Power Limiting: Adjusts power limits dynamically based on temperature thresholds and total power caps, configurable per GPU or globally.
- Cross-GPU Coordination: Total power budget applies across multiple GPUs in the same system.
- PCIe Link Speed Management: Controls minimum and maximum PCIe link speeds with idle thresholds for power optimization.
- Home Assistant Integration: Uses the built-in RESTful platform and template sensors.
Getting the Data
```
sudo apt install golang-go
git clone https://github.com/sammcj/NVApi.git
cd NVApi
go run main.go -port 9999 -rate 1
curl http://localhost:9999/gpu
```
Response for a single GPU:
[ { "index": 0, "name": "NVIDIA GeForce RTX 4090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 16, "power_limit_watts": 450, "memory_total_gb": 23.99, "memory_used_gb": 0.46, "memory_free_gb": 23.52, "memory_usage_percent": 2, "temperature": 38, "processes": [], "pcie_link_state": "not managed" } ]
Response for multiple GPUs:
[ { "index": 0, "name": "NVIDIA GeForce RTX 3090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 14, "power_limit_watts": 350, "memory_total_gb": 24, "memory_used_gb": 0.43, "memory_free_gb": 23.57, "memory_usage_percent": 2, "temperature": 36, "processes": [], "pcie_link_state": "not managed" }, { "index": 1, "name": "NVIDIA RTX A4000", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 10, "power_limit_watts": 140, "memory_total_gb": 15.99, "memory_used_gb": 0.56, "memory_free_gb": 15.43, "memory_usage_percent": 3, "temperature": 41, "processes": [], "pcie_link_state": "not managed" } ]
Start at Boot
Create `/etc/systemd/system/nvapi.service`:

```
[Unit]
Description=Run NVapi
After=network.target

[Service]
Type=simple
Environment="GOPATH=/home/ansible/go"
WorkingDirectory=/home/ansible/NVapi
ExecStart=/usr/bin/go run main.go -port 9999 -rate 1
Restart=always
User=ansible
Environment="GPU_TEMP_CHECK_INTERVAL=5"
Environment="GPU_TOTAL_POWER_CAP=400"
Environment="GPU_0_LOW_TEMP=40"
Environment="GPU_0_MEDIUM_TEMP=70"
Environment="GPU_0_LOW_TEMP_LIMIT=135"
Environment="GPU_0_MEDIUM_TEMP_LIMIT=120"
Environment="GPU_0_HIGH_TEMP_LIMIT=100"
Environment="GPU_1_LOW_TEMP=45"
Environment="GPU_1_MEDIUM_TEMP=75"
Environment="GPU_1_LOW_TEMP_LIMIT=140"
Environment="GPU_1_MEDIUM_TEMP_LIMIT=125"
Environment="GPU_1_HIGH_TEMP_LIMIT=110"

[Install]
WantedBy=multi-user.target
```
Home Assistant
Add the following to Home Assistant's `configuration.yaml` and restart HA (completely). For a single GPU, this works:

```
sensor:
  - platform: rest
    name: MYPC GPU Information
    resource: http://mypc:9999
    method: GET
    headers:
      Content-Type: application/json
    value_template: "{{ value_json[0].index }}"
    json_attributes:
      - name
      - gpu_utilisation
      - memory_utilisation
      - power_watts
      - power_limit_watts
      - memory_total_gb
      - memory_used_gb
      - memory_free_gb
      - memory_usage_percent
      - temperature
    scan_interval: 1 # seconds

  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'temperature') }}"
        unit_of_measurement: "°C"
```
For multiple GPUs:

```
rest:
  scan_interval: 1
  resource: http://mypc:9999
  sensor:
    - name: "MYPC GPU0 Information"
      value_template: "{{ value_json[0].index }}"
      json_attributes_path: "$.0"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature
    - name: "MYPC GPU1 Information"
      value_template: "{{ value_json[1].index }}"
      json_attributes_path: "$.1"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature

sensor:
  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC GPU0 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC GPU0 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC GPU0 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC GPU0 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC GPU0 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'temperature') }}"
        unit_of_measurement: "C"
  - platform: template
    sensors:
      mypc_gpu_1_gpu:
        friendly_name: "MYPC GPU1 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_memory:
        friendly_name: "MYPC GPU1 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_power:
        friendly_name: "MYPC GPU1 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_power_limit:
        friendly_name: "MYPC GPU1 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_temperature:
        friendly_name: "MYPC GPU1 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'temperature') }}"
        unit_of_measurement: "C"
```
Basic entity card:
```
type: entities
entities:
  - entity: sensor.mypc_gpu_0_gpu
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_memory
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power_limit
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_temperature
    secondary_info: last-updated
```
Ansible Role
```
- name: install go
  become: true
  package:
    name: golang-go
    state: present

- name: git clone
  git:
    repo: "https://github.com/sammcj/NVApi.git"
    dest: "/home/ansible/NVapi"
    update: yes
    force: true

# go run main.go -port 9999 -rate 1

- name: install systemd service
  become: true
  copy:
    src: nvapi.service
    dest: /etc/systemd/system/nvapi.service

- name: Reload systemd daemons, enable, and restart nvapi
  become: true
  systemd:
    name: nvapi
    daemon_reload: yes
    enabled: yes
    state: restarted
```
-
@ 04c915da:3dfbecc9
2025-03-10 23:31:30

Bitcoin has always been rooted in freedom and resistance to authority. I get that many of you are conflicted about the US Government stacking, but by design we cannot stop anyone from using bitcoin. Many have asked me for my thoughts on the matter, so let’s rip it.
Concern
One of the most glaring issues with the strategic bitcoin reserve is its foundation, built on stolen bitcoin. For those of us who value private property this is an obvious betrayal of our core principles. Rather than proof of work, the bitcoin that seeds this reserve has been taken by force. The US Government should return the bitcoin stolen from Bitfinex and the Silk Road.
Using stolen bitcoin for the reserve creates a perverse incentive. If governments see bitcoin as a valuable asset, they will ramp up efforts to confiscate more of it. The precedent is a major concern, and I stand strongly against it, but it should also be noted that governments were already seizing coin before the reserve, so this is not really a change in policy.
Ideally, all seized bitcoin should be burned, by law. This would align incentives properly and make it less likely for the government to actively increase coin seizures. Due to the truly scarce properties of bitcoin, any burned bitcoin helps existing holders through increased purchasing power regardless. This change is unlikely, but those of us in policy circles should push for it anyway. It would be the best-case scenario for American bitcoiners and would create a strong foundation for the next century of American leadership.
Optimism
The entire point of bitcoin is that we can spend or save it without permission. That said, it is a massive benefit to not have one of the strongest governments in human history actively trying to ruin our lives.
Since the beginning, bitcoiners have faced horrible regulatory trends. KYC, surveillance, and legal cases have made using bitcoin and building bitcoin businesses incredibly difficult. It is incredibly important to note that over the past year that trend has reversed for the first time in a decade. A strategic bitcoin reserve is a key driver of this shift. By holding bitcoin, the strongest government in the world has signaled that it is not just a fringe technology but rather truly valuable, legitimate, and worth stacking.
This alignment of incentives changes everything. The US Government stacking proves bitcoin’s worth. The resulting purchasing power appreciation helps all of us who are holding coin and as bitcoin succeeds our government receives direct benefit. A beautiful positive feedback loop.
Realism
We are trending in the right direction. A strategic bitcoin reserve is a sign that the state sees bitcoin as an asset worth embracing rather than destroying. That said, there is a lot of work left to be done. We cannot be lulled into complacency, the time to push forward is now, and we cannot take our foot off the gas. We have a seat at the table for the first time ever. Let's make it worth it.
We must protect the right to free usage of bitcoin and other digital technologies. Freedom in the digital age must be taken and defended, through both technical and political avenues. Multiple privacy focused developers are facing long jail sentences for building tools that protect our freedom. These cases are not just legal battles. They are attacks on the soul of bitcoin. We need to rally behind them, fight for their freedom, and ensure the ethos of bitcoin survives this new era of government interest. The strategic reserve is a step in the right direction, but it is up to us to hold the line and shape the future.
-
@ 6f6b50bb:a848e5a1
2024-12-15 15:09:52

What would it mean to treat AI as a tool instead of a person?

Since the launch of ChatGPT, explorations in two directions have accelerated.

The first direction concerns technical capabilities. How large a model can we train? How well can it answer SAT questions? How efficiently can we serve it?

The second direction concerns interaction design. How do we communicate with a model? How can we use it for useful work? What metaphor do we use to reason about it?

The first direction is widely pursued and enormously funded, and for good reason: progress in technical capabilities underlies every possible application. But the second is just as crucial to the field and has enormous unknowns. We are only a few years into the era of large models. What are the odds that we have already figured out the best ways to use them?

I propose a new mode of interaction, in which models play the role of computer applications (for example, phone apps): providing a graphical interface, interpreting user input, and updating their state. In this mode, instead of being an "agent" that uses a computer on behalf of a human, the AI can provide a richer and more powerful computing environment for us to use.

Metaphors for Interaction

At the center of an interaction is a metaphor that guides a user's expectations of a system. The early days of computing took metaphors like "desktops", "typewriters", "spreadsheets", and "letters" and turned them into digital equivalents, allowing users to reason about their behavior. You can leave something on your desk and come back for it; you need an address to send a letter. As we developed cultural knowledge of these devices, the need for these particular metaphors faded, and with them the skeuomorphic interface designs that reinforced them. Like a trash can or a pencil, a computer is now a metaphor for itself.

The dominant metaphor for large models today is model-as-person. This is an effective metaphor because people have extensive capabilities that we know intuitively. It implies that we can have a conversation with a model and ask it questions; that the model can collaborate with us on a document or a piece of code; that we can give it a task to carry out on its own, and that it will come back when it is finished.

However, treating a model as a person profoundly limits how we think about interacting with it. Human interactions are inherently slow and linear, limited by bandwidth and by the turn-taking nature of verbal communication. As we have all experienced, communicating complex ideas in conversation is difficult and lossy. When we want precision, we turn to tools instead, using direct manipulation and high-bandwidth visual interfaces to make diagrams, write code, and design CAD models. Because we conceive of models as people, we use them through slow conversations, even though they are perfectly capable of accepting fast, direct input and producing visual output. The metaphors we use constrain the experiences we build, and the model-as-person metaphor keeps us from exploring the full potential of large models.

For many use cases, and especially for productive work, I believe the future lies in another metaphor: model-as-computer.

Using an AI as a Computer

Under the model-as-computer metaphor, we will interact with large models following the intuitions we have about computer applications (whether on desktop, tablet, or phone). Note that this does not mean the model will be a traditional app any more than the Windows desktop was a literal desk. "Computer application" will be a way for a model to represent itself to us. Instead of acting like a person, the model will act like a computer.

Acting like a computer means producing a graphical interface. In place of the linear, teletype-style stream of text that ChatGPT provides, a model-as-computer system will generate something resembling the interface of a modern application: buttons, sliders, tabs, images, charts, and all the rest. This addresses key limitations of the standard model-as-person chat interface:

- Discovery. A good tool suggests its uses. When the only interface is an empty text box, it is up to the user to figure out what to do and to understand the system's limits. The Edit sidebar in Lightroom is a great way to learn photo editing because it doesn't just show you what this application can do with a photo, but what you might want to do. Similarly, a model-as-computer interface for DALL-E could surface new possibilities for your image generations.

- Efficiency. Direct manipulation is faster than describing a request in words. To continue the Lightroom example, it would be unthinkable to edit a photo by telling a person which sliders to move and by how much. It would take a whole day to ask for slightly lower exposure and slightly higher vibrance, just to see how it looks. In the model-as-computer metaphor, the model can create tools that let you communicate what you want more efficiently, and therefore get things done faster.

Unlike a traditional app, this graphical interface is generated by the model on demand. This means that every part of the interface you see is relevant to what you are doing at that moment, including the specific contents of your work. It also means that if you would like a broader or different interface, you can simply ask for it. You could ask DALL-E to produce some editable presets for its settings inspired by famous sketch artists. When you click the Leonardo da Vinci preset, it sets the sliders for highly detailed perspective drawings in black ink. If you click Charles Schulz, it selects low-detail 2D technicolor comics.

A Protean Bicycle for the Mind

The model-as-person metaphor has a curious tendency to create distance between the user and the model, mirroring the communication gap between two people, which can be narrowed but never fully closed. Because communicating in words is difficult and costly, people tend to split tasks among themselves into chunks that are large and as independent as possible. Model-as-person interfaces follow this pattern: it is not worth telling a model to add a return statement to your function when it is faster to write it yourself. With that communication overhead, model-as-person systems are most useful when they can do an entire chunk of work on their own. They do things for you.

This contrasts with how we interact with computers or other tools. Tools produce real-time visual feedback and are controlled through direct manipulation. They have such low communication overhead that there is no need to carve out an independent chunk of work. It makes more sense to keep the human in the loop and direct the tool moment by moment. Like seven-league boots, tools let you travel farther with each step, but you are still the one doing the work. They let you do things faster.

Consider the task of building a website using a large model. With today's interfaces, you might treat the model as a contractor or a collaborator. You would try to write out in words as much as possible about how you want the site to look, what you want it to say, and which features you want it to have. The model would generate a first draft, you would run it, and then you would give feedback. "Make the logo a bit bigger," you would say, and "center that first hero image," and "there needs to be a login button in the header." To get exactly what you want, you will send a very long list of ever more minute requests.

An alternative model-as-computer interaction would look different: instead of building the website, the model would generate an interface for you to build it, where every user input to that interface queries the large model under the hood. Perhaps when you describe your needs it would create an interface with a sidebar and a preview window. At first the sidebar contains only a few layout sketches you can choose as a starting point. You can click on each one, and the model writes the HTML for a web page using that layout and displays it in the preview window. Now that you have a page to work with, the sidebar gains additional options that affect the page globally, such as font pairings and color schemes. The preview acts as a WYSIWYG editor, letting you grab elements and move them, edit their contents, and so on. Supporting all of this is the model, which sees these user actions and rewrites the page to match the changes made. Because the model can generate an interface that helps the two of you communicate more efficiently, you can exercise more control over the final product in less time.

The model-as-computer metaphor encourages us to think of the model as a tool to interact with in real time rather than a collaborator to hand tasks to. Instead of replacing an intern or a tutor, it can be a kind of protean bicycle for the mind, one that is always custom-built exactly for you and the terrain you intend to cross.

A New Paradigm for Computing?

Models that can generate interfaces on demand are an entirely new frontier in computing. They may represent a whole new paradigm, given how they short-circuit the existing application model. Giving end users the power to create and modify apps on the fly fundamentally changes how we interact with computers. In place of a single static application built by a developer, a model will generate an application tailored to the user and their immediate needs. In place of business logic implemented in code, the model will interpret the user's input and update the user interface. It is even possible that this kind of generative interface will replace the operating system entirely, generating and managing interfaces and windows on the fly as needed.

At first, generative interfaces will be a toy, useful only for creative exploration and a few other niche applications. After all, nobody would want an email app that occasionally sends messages to your ex and lies about your inbox. But gradually the models will improve. Even as they push further into the space of entirely new experiences, they will slowly become reliable enough to be used for real work.

Small pieces of this future already exist. Years ago, Jonas Degrave showed that ChatGPT could do a convincing simulation of a Linux command line. Similarly, websim.ai uses an LLM to generate websites on demand as you browse them. Oasis, GameNGen, and DIAMOND train action-conditioned video models on individual video games, letting you play, say, Doom inside a large model. And Genie 2 generates playable video games from text prompts. Generative interfaces may still sound like a crazy idea, but they are not so crazy.

There are enormous open questions about what all of this will look like. Where will generative interfaces first be useful? How will we share and distribute the experiences we create in collaboration with the model, if they exist only as the context of a large model? Would we even want to? What new kinds of experiences will become possible? How will all of this work in practice? Will models generate interfaces as code, or render raw pixels directly?

I don't know these answers yet. We will have to experiment and find out!
Translated from: https://willwhitney.com/computing-inside-ai.html
-
@ 6389be64:ef439d32
2025-02-27 21:32:12

GA, plebs. The latest episode of Bitcoin And is out, and, as always, the chicanery is running rampant. Let’s break down the biggest topics I covered, and if you want the full, unfiltered rant, make sure to listen to the episode linked below.
House Democrats’ MEME Act: A Bad Joke?
House Democrats are proposing a bill to ban presidential meme coins, clearly aimed at Trump’s and Melania’s ill-advised token launches. While grifters launching meme coins is bad, this bill is just as ridiculous. If this legislation moves forward, expect a retaliatory strike exposing how politicians like Pelosi and Warren mysteriously amassed their fortunes. Will it pass? Doubtful. But it’s another sign of the government’s obsession with regulating everything except itself.
Senate Banking’s First Digital Asset Hearing: The Real Target Is You
Cynthia Lummis chaired the first digital asset hearing, and—surprise!—it was all about control. The discussion centered on stablecoins, AML, and KYC regulations, with witnesses suggesting Orwellian measures like freezing stablecoin transactions unless pre-approved by authorities. What was barely mentioned? Bitcoin. They want full oversight of stablecoins, which is really about controlling financial freedom. Expect more nonsense targeting self-custody wallets under the guise of stopping “bad actors.”
Bank of America and PayPal Want In on Stablecoins
Bank of America’s CEO openly stated they’ll launch a stablecoin as soon as regulation allows. Meanwhile, PayPal’s CEO paid for a hat using Bitcoin—not their own stablecoin, PYUSD. Why wouldn’t he use his own product? Maybe he knows stablecoins aren’t what they’re hyped up to be. Either way, the legacy financial system is gearing up to flood the market with stablecoins, not because they love crypto, but because it’s a tool to extend U.S. dollar dominance.
MetaPlanet Buys the Dip
Japan’s MetaPlanet issued $13.4M in bonds to buy more Bitcoin, proving once again that institutions see the writing on the wall. Unlike U.S. regulators who obsess over stablecoins, some companies are actually stacking sats.
UK Expands Crypto Seizure Powers
Across the pond, the UK government is pushing legislation to make it easier to seize and destroy crypto linked to criminal activity. While they frame it as going after the bad guys, it’s another move toward centralized control and financial surveillance.
Bitcoin Tools & Tech: Arc, SatoChip, and Nunchuk
Some bullish Bitcoin developments: ARC v0.5 is making Bitcoin’s second layer more efficient, SatoChip now supports Taproot and Nostr, and Nunchuk launched a group wallet with chat, making multisig collaboration easier.
The Bottom Line
The state is coming for financial privacy and control, and stablecoins are their weapon of choice. Bitcoiners need to stay focused, keep their coins in self-custody, and build out parallel systems. Expect more regulatory attacks, but don’t let them distract you—just keep stacking and transacting in ways they can’t control.
🎧 Listen to the full episode here: https://fountain.fm/episode/PYITCo18AJnsEkKLz2Ks
💰 Support the show by boosting sats on Podcasting 2.0! and I will see you on the other side.
-
@ dc4cd086:cee77c06
2025-02-09 03:35:25

Have you ever wanted to learn from lengthy educational videos but found it challenging to navigate through hours of content? Our new tool addresses this problem by transforming long-form video lectures into easily digestible, searchable content.
Key Features:
Video Processing:
- Automatically downloads YouTube videos, transcripts, and chapter information
- Splits transcripts into sections based on video chapters
Content Summarization:
- Utilizes language models to transform spoken content into clear, readable text
- Formats output in AsciiDoc for improved readability and navigation
- Highlights key terms and concepts with [[term]] notation for potential cross-referencing
Diagram Extraction:
- Analyzes video entropy to identify static diagram/slide sections (a rough sketch of this idea follows the list below)
- Provides a user-friendly GUI for manual selection of relevant time ranges
- Allows users to pick representative frames from selected ranges
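To make the entropy idea concrete, here is a rough Python sketch of how static slide sections might be detected. It assumes OpenCV (`cv2`) and NumPy are installed; the function names are illustrative, not the tool's actual API:

```
# Sample frames, compute a per-frame entropy signal, then flag spans where
# the signal barely changes -- likely static slides or diagrams.
import cv2
import numpy as np

def frame_entropy(gray: np.ndarray) -> float:
    """Shannon entropy of a frame's grayscale histogram."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_signal(video_path: str, step: int = 30) -> list[float]:
    """Entropy of every `step`-th frame of the video."""
    cap = cv2.VideoCapture(video_path)
    signal, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            signal.append(frame_entropy(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
        i += 1
    cap.release()
    return signal

def stable_spans(signal: list[float], win: int = 5, tol: float = 0.05):
    """Index ranges where entropy stays within `tol` over a `win`-sample window."""
    spans, start = [], None
    for i in range(max(len(signal) - win, 0)):
        if max(signal[i:i + win]) - min(signal[i:i + win]) < tol:
            start = i if start is None else start
        elif start is not None:
            spans.append((start, i + win - 1))
            start = None
    if start is not None:
        spans.append((start, len(signal) - 1))
    return spans
```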
Going Forward:
The tool is currently undergoing a rewrite to improve its organization and functionality, but you are welcome to try the current version, though it might not work on every machine. It will support multiple open and closed language models, so users can choose. It is free and open-source, allowing for personal customization and integration with various knowledge bases. Even if we don't host something on our official Alexandria knowledge base, you are still welcome to use the tool on your own personal or community knowledge bases! We want to help find connections between ideas that exist across relays, allowing individuals and groups to mix and match knowledge bases with each other, with any degree of openness you care to have.
While designed with #Alexandria users in mind, it's available for anyone to use and adapt to their own learning needs.
Screenshots
Frame Selection
This is a screenshot of the frame selection interface. You'll see a signal that represents frame entropy over time. The vertical lines indicate the start and end of a chapter. Within these chapters, you can select frames by clicking and dragging the mouse over the range where you think a diagram appears in that chapter. At the bottom is an option that tells the program to select a specific number of frames from that selection.
Diagram Extraction
This is a screenshot of the diagram extraction interface. For every selection you've made, there will be a set of frames that you can choose from. You can select and deselect as many frames as you'd like to save.
Links
- repo: https://github.com/limina1/video_article_converter
- Nostr Apps 101: https://www.youtube.com/watch?v=Flxa_jkErqE
Output
And now, we have a demonstration of the final result of this tool, after some quick cleanup. The video we will be using this tool on is titled Nostr Apps 101 by nostr:npub1nxy4qpqnld6kmpphjykvx2lqwvxmuxluddwjamm4nc29ds3elyzsm5avr7 during Nostrasia. The following thread is an analog to the modular articles we are constructing for Alexandria, and I hope it conveys the functionality we want to create in the knowledge space. Note: this tool is the first step! You can use whatever prompt is most appropriate for the specific context of the transcript you are working with, and you can also manually clean up any discrepancies that don't portray the video accurately. You can now view the article on #Alexandria https://next-alexandria.gitcitadel.eu/publication?d=nostr-apps-101
Initially published as chained kind 1's nostr:nevent1qvzqqqqqqypzp5r5hd579v2sszvvzfel677c8dxgxm3skl773sujlsuft64c44ncqy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qgwwaehxw309ahx7uewd3hkctcpzemhxue69uhhyetvv9ujumt0wd68ytnsw43z7qghwaehxw309aex2mrp0yhxummnw3ezucnpdejz7qgewaehxw309aex2mrp0yh8xmn0wf6zuum0vd5kzmp0qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgc7hglpn
Or view on Coracle: https://coracle.social/nevent1qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgcppemhxue69uhkummn9ekx7mp0qgsdqa9md83tz5yqnrqjw07hhkpmfjpkuv9hlh5v8yhu8z274w9dv7qnnq0s3
-
@ e6817453:b0ac3c39
2024-12-07 15:06:43

I started a long series of articles about how to model different types of knowledge graphs in the relational model, which makes on-device memory models for AI agents possible.
We model directed graphs
Also, graphs of entities
We even model hypergraphs
Last time, we discussed why classical triple and simple knowledge graphs are insufficient for AI agents and complex memory, especially in the domain of time-aware or multi-model knowledge.
So why do we need metagraphs, and what kind of challenge could they help us to solve?
- complex and nested events, temporal context, and temporal relations as edges
- multi-mode and multilingual knowledge
- human-like memory for AI agents that has multiple contexts and relations between knowledge in neuron-like networks
MetaGraphs
A meta graph is a concept that extends the idea of a graph by allowing edges to become graphs themselves. Meta edges connect a set of nodes, which may themselves be subgraphs. So, at some level, nodes and edges are quite similar in their properties but act in different roles in different contexts.
Also, in some cases, edges could be referenced as nodes.
This approach enables the representation of more complex relationships and hierarchies than a traditional graph structure allows. Let’s break down each term to better understand metagraphs and how they differ from hypergraphs and graphs.

Graph Basics
- A standard graph has a set of nodes (or vertices) and edges (connections between nodes).
- Edges are generally simple and typically represent a binary relationship between two nodes.
- For instance, an edge in a social network graph might indicate a “friend” relationship between two people (nodes).
Hypergraph
- A hypergraph extends the concept of an edge by allowing it to connect any number of nodes, not just two.
- Each connection, called a hyperedge, can link multiple nodes.
- This feature allows hypergraphs to model more complex relationships involving multiple entities simultaneously. For example, a hyperedge in a hypergraph could represent a project team, connecting all team members in a single relation.
- Despite its flexibility, a hypergraph doesn’t capture hierarchical or nested structures; it only generalizes the number of connections in an edge.
Metagraph
- A metagraph allows the edges to be graphs themselves. This means each edge can contain its own nodes and edges, creating nested, hierarchical structures.
- In a meta graph, an edge could represent a relationship defined by a graph. For instance, a meta graph could represent a network of organizations where each organization’s structure (departments and connections) is represented by its own internal graph and treated as an edge in the larger meta graph.
- This recursive structure allows metagraphs to model complex data with multiple layers of abstraction. They can capture multi-node relationships (as in hypergraphs) and detailed, structured information about each relationship.
Named Graphs and Graph of Graphs
As you can notice, the structure of a metagraph is quite complex and can be hard to model in relational and classical RDF setups. This can create a challenge: a lack of tools and software solutions for your problem.
If you need to model nested graphs, you could use the much simpler model of named graphs, which can take you quite far.

The concept of the named graph came from the RDF community, which needed to group sets of triples. In this way, you form subgraphs inside an existing graph. You can refer to a subgraph as a regular node. This setup simplifies complex graphs, introduces hierarchies, and even adds features and properties of hypergraphs while keeping a directed nature.
It looks complex, but it is not so hard to model with a slight modification of a directed graph.
So, a node can host graphs inside it. Let's reflect this fact with a location for each node. If a node belongs to the main graph, we can set the location to null or introduce a dedicated main node; it is up to you. Nodes can have edges to nodes in different subgraphs. This structure allows any kind of graph nesting. Edges stay location-free.
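As a minimal sketch of this named-graph model, here is how the tables might look in SQLite via Python's standard `sqlite3` module. The table and column names are illustrative, not a prescribed schema:

```
# Nodes carry a nullable `location` pointing at their host node (their
# subgraph); edges are location-free and may cross subgraph boundaries.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (
    id       INTEGER PRIMARY KEY,
    label    TEXT NOT NULL,
    location INTEGER REFERENCES node(id)  -- NULL means the main graph
);
CREATE TABLE edge (
    id     INTEGER PRIMARY KEY,
    source INTEGER NOT NULL REFERENCES node(id),
    target INTEGER NOT NULL REFERENCES node(id),
    label  TEXT
);
""")
# An organization node hosting an internal subgraph of departments:
conn.execute("INSERT INTO node VALUES (1, 'ACME', NULL)")
conn.execute("INSERT INTO node VALUES (2, 'R&D', 1)")
conn.execute("INSERT INTO node VALUES (3, 'Sales', 1)")
# An edge between two nodes that live inside node 1's subgraph:
conn.execute("INSERT INTO edge (source, target, label) VALUES (2, 3, 'works_with')")
```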
Meta Graphs in Relational Model
Let’s make a few attempts to model different metagraphs under various constraints.
Directed Metagraph where edges are not used as nodes and could not contain subgraphs
In this case, the edge always points to two sets of nodes. This introduces the overhead of creating a node set even for a single node. In this model, we can also end up with empty node sets, which may require application-level constraints to prevent.
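Continuing the same SQLite sketch, here is one plausible schema for this variant, again with illustrative names. The `node_set` indirection is what creates the single-node overhead described above:

```
# Every edge points at an in node-set and an out node-set; nothing in the
# schema prevents an empty set, so that constraint lives in the application.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (
    id    INTEGER PRIMARY KEY,
    label TEXT NOT NULL
);
CREATE TABLE node_set (
    id INTEGER PRIMARY KEY
);
CREATE TABLE node_set_member (
    set_id  INTEGER NOT NULL REFERENCES node_set(id),
    node_id INTEGER NOT NULL REFERENCES node(id),
    PRIMARY KEY (set_id, node_id)
);
CREATE TABLE edge (
    id      INTEGER PRIMARY KEY,
    in_set  INTEGER NOT NULL REFERENCES node_set(id),
    out_set INTEGER NOT NULL REFERENCES node_set(id),
    label   TEXT
);
""")
```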
Directed Metagraph where edges are not used as nodes and could contain subgraphs
Adding a node set that models a subgraph located in an edge is easy, but it should be kept separate from the in-vertex and out-vertex sets.

I also do not see a direct need to include subgraphs in a node, as we could just use a node set interchangeably, but it could still be a valid case.

Directed Metagraph where edges are used as nodes and could contain subgraphs
As you can notice, we operate with node sets all the time. We could simply extend the node set into an element set that includes both node and edge IDs, but in this case we need to use UUIDs or some other strategy to differentiate node IDs from edge IDs. Otherwise, we get a collision of ephemeral edges or ephemeral nodes when we want to change the role and purpose of a node to an edge, or vice versa.
A full-scale metagraph model is way too complex for a relational database.
So we need a better model.

Now, we have more flexibility but lose structural constraints. We cannot express that an element should have one in-vertex, one out-vertex, or both. This type of constraint has been moved to the application level. Also, the crucial question is about query and retrieval needs.
Any metagraph model should be focused on the domain and its query needs rather than used in its raw form. We built the models above for purely theoretical purposes.

-
@ e6817453:b0ac3c39
2024-12-07 15:03:06

Hey folks! Today, let’s dive into the intriguing world of neurosymbolic approaches, retrieval-augmented generation (RAG), and personal knowledge graphs (PKGs). Together, these concepts hold much potential for bringing true reasoning capabilities to large language models (LLMs). So, let’s break down how symbolic logic, knowledge graphs, and modern AI can come together to empower future AI systems to reason like humans.
The Neurosymbolic Approach: What It Means
Neurosymbolic AI combines two historically separate streams of artificial intelligence: symbolic reasoning and neural networks. Symbolic AI uses formal logic to process knowledge, similar to how we might solve problems or deduce information. On the other hand, neural networks, like those underlying GPT-4, focus on learning patterns from vast amounts of data — they are probabilistic statistical models that excel in generating human-like language and recognizing patterns but often lack deep, explicit reasoning.
While GPT-4 can produce impressive text, it’s still not very effective at reasoning in a truly logical way. Its foundation, transformers, allows it to excel in pattern recognition, but the models struggle with reasoning because, at their core, they rely on statistical probabilities rather than true symbolic logic. This is where neurosymbolic methods and knowledge graphs come in.
Symbolic Calculations and the Early Vision of AI
If we take a step back to the 1950s, the vision for artificial intelligence was very different. Early AI research was all about symbolic reasoning — where computers could perform logical calculations to derive new knowledge from a given set of rules and facts. Languages like Lisp emerged to support this vision, enabling programs to represent data and code as interchangeable symbols. Lisp was designed to be homoiconic, meaning it treated code as manipulatable data, making it capable of self-modification — a huge leap towards AI systems that could, in theory, understand and modify their own operations.
Lisp: The Early AI Language
Lisp, short for “LISt Processor,” was developed by John McCarthy in 1958, and it became the cornerstone of early AI research. Lisp’s power lay in its flexibility and its use of symbolic expressions, which allowed developers to create programs that could manipulate symbols in ways that were very close to human reasoning. One of the most groundbreaking features of Lisp was its ability to treat code as data, known as homoiconicity, which meant that Lisp programs could introspect and transform themselves dynamically. This ability to adapt and modify its own structure gave Lisp an edge in tasks that required a form of self-awareness, which was key in the early days of AI when researchers were exploring what it meant for machines to “think.”
Lisp was not just a programming language—it represented the vision for artificial intelligence, where machines could evolve their understanding and rewrite their own programming. This idea formed the conceptual basis for many of the self-modifying and adaptive algorithms that are still explored today in AI research. Despite its decline in mainstream programming, Lisp’s influence can still be seen in the concepts used in modern machine learning and symbolic AI approaches.
Prolog: Formal Logic and Deductive Reasoning
In the 1970s, Prolog was developed—a language focused on formal logic and deductive reasoning. Unlike Lisp, based on lambda calculus, Prolog operates on formal logic rules, allowing it to perform deductive reasoning and solve logical puzzles. This made Prolog an ideal candidate for expert systems that needed to follow a sequence of logical steps, such as medical diagnostics or strategic planning.
Prolog, like Lisp, allowed symbols to be represented, understood, and used in calculations, creating another homoiconic language that allows reasoning. Prolog’s strength lies in its rule-based structure, which is well-suited for tasks that require logical inference and backtracking. These features made it a powerful tool for expert systems and AI research in the 1970s and 1980s.
The language is declarative in nature, meaning that you define the problem, and Prolog figures out how to solve it. By using formal logic and setting constraints, Prolog systems can derive conclusions from known facts, making it highly effective in fields requiring explicit logical frameworks, such as legal reasoning, diagnostics, and natural language understanding. These symbolic approaches were later overshadowed during the AI winter — but the ideas never really disappeared. They just evolved.
Solvers and Their Role in Complementing LLMs
One of the most powerful features of Prolog and similar logic-based systems is their use of solvers. Solvers are mechanisms that can take a set of rules and constraints and automatically find solutions that satisfy these conditions. This capability is incredibly useful when combined with LLMs, which excel at generating human-like language but need help with logical consistency and structured reasoning.
For instance, imagine a scenario where an LLM needs to answer a question involving multiple logical steps or a complex query that requires deducing facts from various pieces of information. In this case, a solver can derive valid conclusions based on a given set of logical rules, providing structured answers that the LLM can then articulate in natural language. This allows the LLM to retrieve information and ensure the logical integrity of its responses, leading to much more robust answers.
Solvers are also ideal for handling constraint satisfaction problems — situations where multiple conditions must be met simultaneously. In practical applications, this could include scheduling tasks, generating optimal recommendations, or even diagnosing issues where a set of symptoms must match possible diagnoses. Prolog’s solver capabilities and LLM’s natural language processing power can make these systems highly effective at providing intelligent, rule-compliant responses that traditional LLMs would struggle to produce alone.
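To make the solver idea concrete, here is a toy constraint-satisfaction sketch in pure Python. A real system would use Prolog or a dedicated CSP engine; the scheduling constraints here are invented for illustration:

```
# Brute-force every assignment of people to slots and keep the ones that
# satisfy all constraints -- "rules in, valid schedules out".
from itertools import product

people = ["alice", "bob", "carol"]
slots = ["mon", "tue", "wed"]

def satisfies(assignment: dict) -> bool:
    # No two people share a slot, and Bob cannot take Monday.
    if len(set(assignment.values())) != len(assignment):
        return False
    return assignment["bob"] != "mon"

solutions = []
for combo in product(slots, repeat=len(people)):
    assignment = dict(zip(people, combo))
    if satisfies(assignment):
        solutions.append(assignment)

print(solutions)  # every schedule that satisfies all constraints
```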
By integrating neurosymbolic methods that utilize solvers, we can provide LLMs with a form of deductive reasoning that is missing from pure deep-learning approaches. This combination has the potential to significantly improve the quality of outputs for use-cases that require explicit, structured problem-solving, from legal queries to scientific research and beyond. Solvers give LLMs the backbone they need to not just generate answers but to do so in a way that respects logical rigor and complex constraints.
Graph of Rules for Enhanced Reasoning
Another powerful concept that complements LLMs is using a graph of rules. A graph of rules is essentially a structured collection of logical rules that interconnect in a network-like structure, defining how various entities and their relationships interact. This structured network allows for complex reasoning and information retrieval, as well as the ability to model intricate relationships between different pieces of knowledge.
In a graph of rules, each node represents a rule, and the edges define relationships between those rules — such as dependencies or causal links. This structure can be used to enhance LLM capabilities by providing them with a formal set of rules and relationships to follow, which improves logical consistency and reasoning depth. When an LLM encounters a problem or a question that requires multiple logical steps, it can traverse this graph of rules to generate an answer that is not only linguistically fluent but also logically robust.
For example, in a healthcare application, a graph of rules might include nodes for medical symptoms, possible diagnoses, and recommended treatments. When an LLM receives a query regarding a patient’s symptoms, it can use the graph to traverse from symptoms to potential diagnoses and then to treatment options, ensuring that the response is coherent and medically sound. The graph of rules guides reasoning, enabling LLMs to handle complex, multi-step questions that involve chains of reasoning, rather than merely generating surface-level responses.
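A minimal Python sketch of that traversal, with invented toy rules; a real system would load the rule graph from a curated knowledge base:

```
# Edges encode rules: symptom -> possible diagnoses, diagnosis -> treatments.
rules = {
    "fever": ["flu", "infection"],
    "cough": ["flu", "bronchitis"],
    "flu": ["rest", "fluids"],
    "infection": ["antibiotics"],
    "bronchitis": ["inhaler", "rest"],
}

def traverse(symptoms: list[str]) -> dict[str, list[str]]:
    """Walk symptom -> diagnosis -> treatment edges, two hops deep."""
    diagnoses = {d for s in symptoms for d in rules.get(s, [])}
    return {d: rules.get(d, []) for d in diagnoses}

print(traverse(["fever", "cough"]))
# e.g. {'flu': ['rest', 'fluids'], 'infection': ['antibiotics'],
#       'bronchitis': ['inhaler', 'rest']}
```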
Graphs of rules also enable modular reasoning, where different sets of rules can be activated based on the context or the type of question being asked. This modularity is crucial for creating adaptive AI systems that can apply specific sets of logical frameworks to distinct problem domains, thereby greatly enhancing their versatility. The combination of neural fluency with rule-based structure gives LLMs the ability to conduct more advanced reasoning, ultimately making them more reliable and effective in domains where accuracy and logical consistency are critical.
By implementing a graph of rules, LLMs are empowered to perform deductive reasoning alongside their generative capabilities, creating responses that are not only compelling but also logically aligned with the structured knowledge available in the system. This further enhances their potential applications in fields such as law, engineering, finance, and scientific research — domains where logical consistency is as important as linguistic coherence.
Enhancing LLMs with Symbolic Reasoning
Now, with LLMs like GPT-4 being mainstream, there is an emerging need to add real reasoning capabilities to them. This is where neurosymbolic approaches shine. Instead of pitting neural networks against symbolic reasoning, these methods combine the best of both worlds. The neural aspect provides language fluency and recognition of complex patterns, while the symbolic side offers real reasoning power through formal logic and rule-based frameworks.
Personal Knowledge Graphs (PKGs) come into play here as well. Knowledge graphs are data structures that encode entities and their relationships — they’re essentially semantic networks that allow for structured information retrieval. When integrated with neurosymbolic approaches, LLMs can use these graphs to answer questions in a far more contextual and precise way. By retrieving relevant information from a knowledge graph, they can ground their responses in well-defined relationships, thus improving both the relevance and the logical consistency of their answers.
Imagine combining an LLM with a graph of rules that allow it to reason through the relationships encoded in a personal knowledge graph. This could involve using deductive databases to form a sophisticated way to represent and reason with symbolic data — essentially constructing a powerful hybrid system that uses LLM capabilities for language fluency and rule-based logic for structured problem-solving.
My Research on Deductive Databases and Knowledge Graphs
I recently did some research on modeling knowledge graphs using deductive databases, such as Datalog — which can be thought of as a limited, data-oriented version of Prolog. What I’ve found is that it’s possible to use formal logic to model knowledge graphs, ontologies, and complex relationships elegantly as rules in a deductive system. Unlike classical RDF or traditional ontology-based models, which sometimes struggle with complex or evolving relationships, a deductive approach is more flexible and can easily support dynamic rules and reasoning.
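As a flavor of what that looks like, here is a naive bottom-up Datalog evaluation hand-rolled in Python; the predicates are illustrative, and a real deductive database would evaluate the same rules far more efficiently.

```python
# Facts: parent(tom, bob), parent(bob, ann).
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def step(db):
    derived = set(db)
    # Rule 1: ancestor(X, Y) :- parent(X, Y).
    derived |= {("ancestor", x, y) for (p, x, y) in db if p == "parent"}
    # Rule 2: ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
    derived |= {("ancestor", x, z)
                for (p, x, y) in db if p == "parent"
                for (q, m, z) in db if q == "ancestor" and m == y}
    return derived

# Apply the rules until no new facts appear (the fixpoint).
db = facts
while (nxt := step(db)) != db:
    db = nxt

print(sorted(t for t in db if t[0] == "ancestor"))
# [('ancestor', 'bob', 'ann'), ('ancestor', 'tom', 'ann'), ('ancestor', 'tom', 'bob')]
```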
Prolog and similar logic-driven frameworks can complement LLMs by handling the parts of reasoning where explicit rule-following is required. LLMs can benefit from these rule-based systems for tasks like entity recognition, logical inferences, and constructing or traversing knowledge graphs. We can even create a graph of rules that governs how relationships are formed or how logical deductions can be performed.
The future is really about creating an AI that is capable of both deep contextual understanding (using the powerful generative capacity of LLMs) and true reasoning (through symbolic systems and knowledge graphs). With the neurosymbolic approach, these AIs could be equipped not just to generate information but to explain their reasoning, form logical conclusions, and even improve their own understanding over time — getting us a step closer to true artificial general intelligence.
Why It Matters for LLM Employment
Using neurosymbolic RAG (retrieval-augmented generation) in conjunction with personal knowledge graphs could revolutionize how LLMs work in real-world applications. Imagine an LLM that understands not just language but also the relationships between different concepts — one that can navigate, reason, and explain complex knowledge domains by actively engaging with a personalized set of facts and rules.
This could lead to practical applications in areas like healthcare, finance, legal reasoning, or even personal productivity — where LLMs can help users solve complex problems logically, providing relevant information and well-justified reasoning paths. The combination of neural fluency with symbolic accuracy and deductive power is precisely the bridge we need to move beyond purely predictive AI to truly intelligent systems.
Let's explore these ideas further if you’re as fascinated by this as I am. Feel free to reach out, follow my YouTube channel, or check out some articles I’ll link below. And if you’re working on anything in this field, I’d love to collaborate!
Until next time, folks. Stay curious, and keep pushing the boundaries of AI!
-
@ e6817453:b0ac3c39
2024-12-07 14:54:46Introduction: Personal Knowledge Graphs and Linked Data
We will explore the world of personal knowledge graphs and discuss how they can be used to model complex information structures. Personal knowledge graphs aren’t just abstract collections of nodes and edges—they encode meaningful relationships, contextualizing data in ways that enrich our understanding of it. While the core structure might be a directed graph, we layer semantic meaning on top, enabling nuanced connections between data points.
The origin of knowledge graphs is deeply tied to concepts from linked data and the semantic web, ideas that emerged to better link scattered pieces of information across the web. This approach created an infrastructure where data islands could connect — facilitating everything from more insightful AI to improved personal data management.
In this article, we will explore how these ideas have evolved into tools for modeling AI’s semantic memory and look at how knowledge graphs can serve as a flexible foundation for encoding rich data contexts. We’ll specifically discuss three major paradigms: RDF (Resource Description Framework), property graphs, and a third way of modeling entities as graphs of graphs. Let’s get started.
Intro to RDF
The Resource Description Framework (RDF) has been one of the fundamental standards for linked data and knowledge graphs. RDF allows data to be modeled as triples: subject, predicate, and object. Essentially, you can think of it as a structured way to describe relationships: “X has a Y called Z.” For instance, “Berlin has a population of 3.5 million.” This modeling approach is quite flexible because RDF uses unique identifiers — usually URIs — to point to data entities, making linking straightforward and coherent.
RDFS, or RDF Schema, extends RDF to provide a basic vocabulary to structure the data even more. This lets us describe not only individual nodes but also relationships among types of data entities, like defining a class hierarchy or setting properties. For example, you could say that “Berlin” is an instance of a “City” and that cities are types of “Geographical Entities.” This kind of organization helps establish semantic meaning within the graph.
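A minimal sketch of both layers using the Python rdflib library (the namespace URI is a placeholder):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # placeholder namespace
g = Graph()

# The RDF layer: "Berlin has a population of 3.5 million" as a triple.
g.add((EX.Berlin, EX.population, Literal(3_500_000)))

# The RDFS layer: Berlin is a City, and cities are Geographical Entities.
g.add((EX.Berlin, RDF.type, EX.City))
g.add((EX.City, RDFS.subClassOf, EX.GeographicalEntity))

for subject, predicate, obj in g:
    print(subject, predicate, obj)
```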
RDF and Advanced Topics
Lists and Sets in RDF
RDF also provides tools to model more complex data structures such as lists and sets, enabling the grouping of nodes. This extension makes it easier to model more natural, human-like knowledge, for example, describing attributes of an entity that may have multiple values. By adding RDF Schema and OWL (Web Ontology Language), you gain even more expressive power — being able to define logical rules or even derive new relationships from existing data.
Graph of Graphs
A significant feature of RDF is the ability to form complex nested structures, often referred to as graphs of graphs. This allows you to create “named graphs,” essentially subgraphs that can be independently referenced. For example, you could create a named graph for a particular dataset describing Berlin and another for a different geographical area. Then, you could connect them, allowing for more modular and reusable knowledge modeling.
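One way to express this, again with rdflib and assuming its Dataset API, is a sketch like the following; the graph identifiers are placeholders:

```python
from rdflib import Dataset, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")
ds = Dataset()

# Two independently addressable named graphs inside one dataset.
berlin = ds.graph(URIRef("http://example.org/graphs/berlin"))
berlin.add((EX.Berlin, EX.population, Literal(3_500_000)))

bavaria = ds.graph(URIRef("http://example.org/graphs/bavaria"))
bavaria.add((EX.Munich, EX.population, Literal(1_500_000)))

# Each named graph can now be referenced, linked, or reused on its own;
# the fourth element of every quad names the graph the triple lives in.
for s, p, o, name in ds.quads((None, None, None, None)):
    print(name, s, p, o)
```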
Property Graphs
While RDF provides a robust framework, it’s not always the easiest to work with due to its heavy reliance on linking everything explicitly. This is where property graphs come into play. Property graphs are less focused on linking everything through triples and allow more expressive properties directly within nodes and edges.
For example, instead of using triples to represent each detail, a property graph might let you store all properties about an entity (e.g., “Berlin”) directly in a single node. This makes property graphs more intuitive for many developers and engineers because they more closely resemble object-oriented structures: you have entities (nodes) that possess attributes (properties) and are connected to other entities through relationships (edges).
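In code, the property-graph flavor can be as simple as attaching a dictionary of properties to every node and edge; a minimal, library-free sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    properties: dict = field(default_factory=dict)

@dataclass
class Edge:
    source: Node
    target: Node
    relation: str
    properties: dict = field(default_factory=dict)

# All of Berlin's attributes live directly on one node...
berlin = Node("City", {"name": "Berlin", "population": 3_500_000})
germany = Node("Country", {"name": "Germany"})

# ...and the relationship itself can carry properties too.
located_in = Edge(berlin, germany, "LOCATED_IN", {"since": 1871})

print(located_in.source.properties["name"], located_in.relation,
      located_in.target.properties["name"])  # Berlin LOCATED_IN Germany
```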
The significant benefit here is a condensed representation, which speeds up traversal and queries in some scenarios. However, this also introduces a trade-off: while property graphs are more straightforward to query and maintain, they lack some complex relationship modeling features RDF offers, particularly when connecting properties to each other.
Graph of Graphs and Subgraphs for Entity Modeling
A third approach — which takes elements from RDF and property graphs — involves modeling entities using subgraphs or nested graphs. In this model, each entity can be represented as a graph. This allows for a detailed and flexible description of attributes without exploding every detail into individual triples or lumping them all together into properties.
For instance, consider a person entity with a complex employment history. Instead of representing every employment detail in one node (as in a property graph), or as several linked nodes (as in RDF), you can treat the employment history as a subgraph. This subgraph could then contain nodes for different jobs, each linked with specific properties and connections. This approach keeps the complexity where it belongs and provides better flexibility when new attributes or entities need to be added.
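A rough sketch of that employment-history example, using nothing but nested Python dictionaries to stand in for the subgraph (all names invented):

```python
# The person node points to a whole graph, not to a flat property bag.
person = {
    "id": "person:alice",
    "name": "Alice",
    "employment_history": {  # an entity modeled as its own subgraph
        "nodes": [
            {"id": "job:1", "employer": "Acme", "role": "Engineer"},
            {"id": "job:2", "employer": "Globex", "role": "Architect"},
        ],
        "edges": [
            {"from": "job:1", "to": "job:2", "relation": "FOLLOWED_BY"},
        ],
    },
}

# New jobs or attributes extend the subgraph without touching the person node.
person["employment_history"]["nodes"].append(
    {"id": "job:3", "employer": "Initech", "role": "CTO"}
)
```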
Hypergraphs and Metagraphs
When discussing more advanced forms of graphs, we encounter hypergraphs and metagraphs. These take the idea of relationships to a new level. A hypergraph allows an edge to connect more than two nodes, which is extremely useful when modeling scenarios where relationships aren’t just pairwise. For example, a “Project” could connect multiple “People,” “Resources,” and “Outcomes,” all in a single edge. This way, hypergraphs help in reducing the complexity of modeling high-order relationships.
Metagraphs, on the other hand, enable nodes and edges to themselves be represented as graphs. This is an extremely powerful feature when we consider the needs of artificial intelligence, as it allows for the modeling of relationships between relationships, an essential aspect for any system that needs to capture not just facts, but their interdependencies and contexts.
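To illustrate the hyperedge idea with a deliberately simple sketch: an edge is just a set of members, so one relationship can span any number of nodes.

```python
# One hyperedge linking a project, two people, a resource, and an outcome.
hyperedges = [
    {
        "relation": "works_on",
        "members": {"project:apollo", "person:alice", "person:bob",
                    "resource:gpu-cluster", "outcome:paper"},
    },
]

def edges_touching(node, edges):
    """All hyperedges a given node participates in."""
    return [e for e in edges if node in e["members"]]

print([e["relation"] for e in edges_touching("person:alice", hyperedges)])
# ['works_on']
```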
Balancing Structure and Properties
One of the recurring challenges when modeling knowledge is finding the balance between structure and properties. With RDF, you get high flexibility and standardization, but complexity can quickly escalate as you decompose everything into triples. Property graphs simplify the representation by using attributes but lose out on the depth of connection modeling. Meanwhile, the graph-of-graphs approach and hypergraphs offer advanced modeling capabilities at the cost of increased computational complexity.
So, how do you decide which model to use? It comes down to your use case. RDF and nested graphs are strong contenders if you need deep linkage and are working with highly variable data. For more straightforward, engineer-friendly modeling, property graphs shine. And when dealing with very complex multi-way relationships or meta-level knowledge, hypergraphs and metagraphs provide the necessary tools.
The key takeaway is that only some approaches are perfect. Instead, it’s all about the modeling goals: how do you want to query the graph, what relationships are meaningful, and how much complexity are you willing to manage?
Conclusion
Modeling AI semantic memory using knowledge graphs is a challenging but rewarding process. The different approaches — RDF, property graphs, and advanced graph modeling techniques like nested graphs and hypergraphs — each offer unique strengths and weaknesses. Whether you are building a personal knowledge graph or scaling up to AI that integrates multiple streams of linked data, it’s essential to understand the trade-offs each approach brings.
In the end, the choice of representation comes down to the nature of your data and your specific needs for querying and maintaining semantic relationships. The world of knowledge graphs is vast, with many tools and frameworks to explore. Stay connected and keep experimenting to find the balance that works for your projects.
-
@ e6817453:b0ac3c39
2024-12-07 14:52:47Temporal semantics and time-aware knowledge graphs. We have different memory models for artificial intelligence agents. We all try to mimic, somehow, how the brain works, or at least how the declarative memory of the brain works. We have the split of episodic memory and semantic memory. And we also have a lot of theories, right?
Declarative Memory of the Human Brain
How is semantic memory formed? We know that our brain stores semantic memory in a form quite close to the concept behind personal knowledge graphs: connected entities that form relationships with each other. So far, so good. We also have a lot of concepts for how episodic memory and our experiences get transferred into the semantic:
- hippocampus indexing and retrieval
- semanticization of episodic memories
- episodic-semantic shift theory
They all give a different perspective on how different parts of declarative memory cooperate.
We know that episodic memories get semanticized over time. You end up with semantic knowledge without the notion of time, and, probably, your episodic memory has simply decayed.
But, you know, it’s still an open question:
do we want to model an AI agent’s memory on human brain memory, or do we want to create something different?
It’s an open question to which we have no good answer. And if you go into neuroscience theory and check how episodic and semantic memory interact, you will still find a lot of theories, yeah?
Some of them say that the hippocampus keeps the indexes of the memory. Others say that you semanticize the episodic memory. Still others say that some separate process digests episodes and experience into semantics. But all of them agree that these are operationally two separate areas of memory, and even two separate regions of the brain, and that the semantic one is, let’s say, more protected.
So it’s harder to forget semantic facts than episodes. And what I’ve been thinking about for a long time is exactly this: the semantic memory.
Temporal Semantics
It’s memory about facts, but you somehow mix time information into the semantics. I have already described a lot of this, including how we could combine time with knowledge graphs and how people do it.
There are multiple ways we could persist such information, but we all hit a wall, because time and the semantics of time are highly complex concepts.
Time in a Semantic context is not a timestamp.
What I mean is this: when you have a fact and you just mention that you were there at this particular moment, like, I don’t know, 15:40 on Monday, it’s already ambiguous, because we don’t know which Monday. So you need to give the exact date, but usually you do not have experiences like that.
You do not record your memories like that, unless you do journaling and all of those things. So, usually, you have no direct time references. What I mean is that you could say that you were there and it was some event, blah, blah, blah.
Somehow, we form a chain of events that connect with each other and maybe, if we are lucky enough, will be connected to some period of time. This means that we cannot easily represent temporally aware information as just a timestamp or a validity interval.
For sure, the validity-interval approach to knowledge graphs (a simple quintuple with start and end dates) is a big topic, and it could solve a lot of things. It could cover a lot of the time cases. It’s super simple because you give the start and end dates, and you are done, but it does not handle facts that have a relative time or only indirect time information. It could solve many use cases but struggles with facts in an indirect temporal context. I like the simplicity of this idea. But the problem with this approach is that, in most cases, we simply don’t have these timestamps. We don’t have the timestamps where this information starts and ends. And it does not model many events in our lives, especially if you have processes, ongoing activities, or recurrent events.
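The interval flavor is easy to sketch; here in Python, with invented facts, each one a quintuple of subject, predicate, object, start, and end:

```python
from datetime import date

# None as the end date means "still valid".
facts = [
    ("alice", "works_at", "Acme",   date(2019, 3, 1), date(2022, 6, 30)),
    ("alice", "works_at", "Globex", date(2022, 7, 1), None),
]

def valid_at(facts, when):
    """All triples whose validity interval covers the given date."""
    return [(s, p, o) for (s, p, o, start, end) in facts
            if start <= when and (end is None or when <= end)]

print(valid_at(facts, date(2021, 1, 1)))  # [('alice', 'works_at', 'Acme')]
```

It answers "what was true on this date" instantly, and it fails exactly where the text above says it does: when the boundaries were never recorded in the first place.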
I’m thinking more about the semantics of time, where you have a time model such as a hybrid clock, or some global clock, that does a partial ordering of the events. It means that you have a chain of experiences and a chain of facts that have different time contexts.
We could deduce the time from this chain of events. But it’s a big, big topic for research. What I actually want to achieve is not a separation into episodic and semantic memory. It’s having something in between.
Blockchain of connected events and facts
I call it temporal-aware semantics or time-aware knowledge graphs, where we could encode the semantic fact together with the time component. I doubt that time should be a simple timestamp or the region between two timestamps. For me, it is more a chain of facts that have a partial order and form a blockchain-like database, or a partially ordered acyclic graph of facts that are temporally connected. We could have some notion of time that is understandable to the agent, and a model that allows us to order the events, focus on what the agent knows, order this time knowledge, and create the chains of events.
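A rough sketch of that chain idea, with a Lamport-style logical counter standing in for the hybrid clock; the events are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    fact: str
    after: list = field(default_factory=list)  # events known to precede this one
    clock: int = 0

    def __post_init__(self):
        # Logical time: one step past the latest known predecessor.
        self.clock = 1 + max((e.clock for e in self.after), default=0)

# No absolute timestamps anywhere -- only "this happened after that".
moved = Event("moved to Berlin")
job = Event("started a new job", after=[moved])
met = Event("met Bob at the office", after=[job])

print([(e.fact, e.clock) for e in (moved, job, met)])
# [('moved to Berlin', 1), ('started a new job', 2), ('met Bob at the office', 3)]
```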
Time anchors
We may have a particular anchored time in the chain that allows us to assign a more concrete time to the rest of the events. But it’s still an open topic for research. The temporal semantics splits into a couple of domains. One domain is how to add time to knowledge graphs. We already have many different solutions; I described them in my previous articles.
Another domain is the agent's memory and how the memory of an artificial intelligence treats time. This one is much more complex, because here we cannot operate with simple timestamps. We need a representation of time that is understandable by the model and by the agent that will work with this model. And that is a far bigger topic for research.
-
@ a39d19ec:3d88f61e
2024-11-21 12:05:09A state-controlled money supply can influence the development of socialist policies and practices in various ways. Although the relationship is not deterministic, state control over the money supply can contribute to a larger role of the state in the economy and facilitate the implementation of socialist ideals.
Fiscal Policy Capabilities
When the state manages the money supply, it gains the ability to implement fiscal policies that can lead to an expansion of social programs and welfare initiatives. Funding these programs by creating money can enhance the state's influence over the economy and move it closer to a socialist model. The Soviet Union, for instance, had a centralized banking system that enabled the state to fund massive industrialization and social programs, significantly expanding the state's role in the economy.
Wealth Redistribution
Controlling the money supply can also allow the state to influence economic inequality through monetary policies, effectively redistributing wealth and reducing income disparities. By implementing low-interest loans or providing financial assistance to disadvantaged groups, the state can narrow the wealth gap and promote social equality, as seen in many European welfare states.
Central Planning
A state-controlled money supply can contribute to increased central planning, as the state gains more influence over the economy. Central banks, which are state-owned or heavily influenced by the state, play a crucial role in managing the money supply and facilitating central planning. This aligns with socialist principles that advocate for a planned economy where resources are allocated according to social needs rather than market forces.
Incentives for Staff
Staff members working in state institutions responsible for managing the money supply have various incentives to keep the system going. These incentives include job security, professional expertise and reputation, political alignment, regulatory capture, institutional inertia, and legal and administrative barriers. While these factors can differ among individuals, they can collectively contribute to the persistence of a state-controlled money supply system.
In conclusion, a state-controlled money supply can facilitate the development of socialist policies and practices by enabling fiscal policies, wealth redistribution, and central planning. The staff responsible for managing the money supply have diverse incentives to maintain the system, further ensuring its continuation. However, it is essential to note that many factors influence the trajectory of an economic system, and the relationship between state control over the money supply and socialism is not inevitable.
-
@ a39d19ec:3d88f61e
This week's functional 3D print is the "Dino Clip".
Dino Clip
I printed it some years ago for my son, so he would have his own clip for cereal bags.
Now it is used to hold a bag of dog food closed.
The design by "Sneaks" is a so-called "print in place". This means that the whole clip, moving parts included, is printed in one piece, without the need for assembly after printing.
The clip is very strong, and I would print it again if I ever need a "heavy duty" clip for more rigid or bigger bags. Link to the file at Printables
-
@ a367f9eb:0633efea
2024-11-05 08:48:41Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The reporting purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what it calls “ChatBIT”.
The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I’m doubtful that today’s existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (cue up the computer-simulated war from the Star Trek episode “A Taste of Armageddon“), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOMB, Falcon, and Mistral available for download. In fact, there are over one million open-source LLMs available as of writing this post. With some decent hardware, every global citizen can download these codebases and run them on their computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are potentially trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American infrastructure, data, and yes, your credit history?
As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much many times.
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
Open-source matters
When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.
The various licensing schemes – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, your MacBook sits on top of a Unix-derived, partly open-source operating system, many of your other computers and servers run Linux, and you use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also open-source, which has allowed thousands of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
Open-source is for your friends, and enemies
In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to Hugging Face, download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put export controls on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see The Crypto Wars). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
Limiting open-source threatens our own advancement
If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “Hundred Year Marathon” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with trillions of dollars’ worth of investments that span far beyond AI chatbots and skip logic protocols.
The theft of intellectual property at factories in Shenzhen, or in US courts by third-party litigation funding coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.
But these are separate issues from the ability of free people to create and share open-source code which we can all benefit from. In fact, if we want to continue our way of life and keep adding to global productivity and growth, defending open-source is a must.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
Originally published on the website of the Consumer Choice Center.
-
@ 8671a6e5:f88194d1
2025-05-07 16:31:50Users are not employees
Continued from (part 1)
Bitcoiners know how a lot of stuff works, so we think everybody knows. That’s a project management assumption that belongs more in the hobby-code project sphere (where it’s even cool to do so). And sure, I enjoy that myself when I’m building a small tool for fun with no one to answer to. But that mindset doesn’t scale when the goal is to make Bitcoin more accessible and structurally sound for new users. Some argue we don’t need new users at all—that Bitcoin is so good, so perfect, that people will eventually come around. Imagine being forced onto bitcoin by economic and monetary circumstances, only to realize you hate working with things like Nunchuk’s multisig or a Trezor hardware wallet from 2025.
An impression: Bitcoiners know that opening a Lightning channel comes with an on-chain fee—unless the provider covers it and claws it back through subscriptions or service charges. But the average user doesn’t. So when that fee appears out of nowhere, or gets buried in cryptic devspeak jargon, they bounce, often left confused, frustrated, and unlikely to return.
That’s not that bad in itself, as long as companies know who their audience is and take care of how they inform users. You have to “raise” your customers, not scare them away with a wall of nonsense.
Same goes for onboarding people into Nostr sometimes by the way.
If features like these, or the natural onboarding hurdles, were explained with more than just puzzling error codes or vague on-screen prompts, half the users wouldn’t vanish in the first 10 seconds, confused and frustrated. It’s the same story we’ve seen with PGP: people stumble during the initial key creation, or get lost trying to sign a message with someone’s public key. The tech works; the experience doesn’t. Have trust in what you’ve built underneath!
It’s a barrier for newcomers. Just keep it in mind. Take care of the user. Most people do want to come on board! Our goal is to help free them from fiat, not drown them in confusing UX or half-baked tools. Bitcoin's underlying value might be incredible, but that doesn’t matter to someone still trapped in the fiat bubble. They won’t get onboarded through broken apps or confusing flows, any more than they’ll start using something like SimpleX on Linux.
To make it even worse, some bitcoin organizations and companies don’t seem to grasp the extent of this alienation of the user — because it’s always brushed off as ‘niche’ or just an edge case. It’s all so obvious to them how to use it. But it adds up. Sites like Highlighter.com (I like their underlying service by the way!) don’t work properly with Nos2x (a key handler) on the first try: a user needs to know to reload, retry, then reload again after selecting the public key, and then maybe it works. These are exactly the kind of small annoyances that new users won’t wade through. It’s also not that difficult to tackle this with proper testing.
Users who encounter such a thing, certainly when they just wanted to see what the site or service does, simply leave. Let someone who just discovered Nostr try logging in there, for example, and unless they're unusually determined, they'll get stuck immediately, wondering why pasting their public key and clicking “login” doesn’t lead anywhere. It’s a brick wall, and the creators think it’s a nice landing page.
These things also lead to other frustrations and eventual software cycling through the interested users’ hands.
About a year ago I “onboarded” a friend of mine, and she has already tried four wallets since I introduced her to the wonderful world of Bitcoin/LN wallets. First, just to get things going, she used Wallet of Satoshi; then, when the custodial concern started to sink in for this user, we moved to Phoenix (which failed due to the upfront money required). Next came Aqua wallet, which was too cumbersome and had hiccups, and then Mutiny wallet, which shut down a bit later.
And so we ended up back with Wallet of Satoshi, with Blink as a second choice, but having to give a telephone number was too much for the user (yes, you can skip that if you read carefully and put it in test/demo mode or whatever it’s called).
People kept recommending other wallet brands, other names to try… there are always other names. “Try Muun”, “Get Zeus”, “Why don’t you go for CoinOs, just as a web app?” (that last one is actually good)… but it’s always like that: if you talk to ten bitcoiners, you’ll have heard five different ‘best wallets’ to try, and you’ll probably have gotten about four referral links from all of these people. (Or some Relai squad member trying to get a few sats out of your genuine interest:)
Real testing is a ghost town
We saw in part 1 that Bitcoin companies do some testing these days—but it lacks specialization. It’s often so superficial that the cracks are visible in the software itself: clumsy onboarding flows, unclear settings, or unexpected fields asking for information users don’t have and don’t know how to get. Much of this stems from the habit of pulling in a handful of “fans” or supporters to try out a beta version.
That’s the wrong way to go (the fiat companies that do this usually get bad or inconclusive results as well, unless they do it at scale with a very diverse audience).
Most of the people brought in this way have no background in structured testing, let alone in reporting bugs clearly or identifying critical failure points. The feedback you get is vague and surface-level: “I like the colors,” “the buttons feel small,” or “I got an error when I tried to send something.”
But rarely do you hear what caused it, what preceded it, or what device or settings were involved. That kind of insight doesn’t come from random fans trying to get their hands on some products or perks. You’ll have to pay professionals to do it. And even when meaningful feedback does come in, it often ends up on the wrong or overworked hands, forwarded to whichever developer drew the short straw that weekend. There's usually no structured triage, no internal testing culture that treats usability or edge cases as part of the real product. Just a sigh, a shrug, and back to building features that sound cool when they dreamed it up.
I keep hammering this point because this might be the only time you’ll actually read about it. No company—bitcoin or fiat—tries to win users by focusing on boring but critical details like onboarding clarity or robust edge-case handling. They’ll avoid claiming “this just works,” because saying so invites scrutiny and backlash. The only company bold (or arrogant) enough to occasionally say that is Apple—and even they drop the ball more often than they like to admit.
By design
The dream I have is “usability by design”. And that dream is close to non-existent in the realm and everyday reality of Bitcoin. From the moment something gets drawn up or created, the design and user flow, the ease of adoption, should be kept in mind, along with the real implementation factors for the intended purpose (and even beyond). Then you hardly have any discussions like “yeah, but technically the user has to make an input here, so we can’t do otherwise, and it works on my machine”.
Users who are happy to trial-and-error their way through are maybe bitcoin natives, and like that sort of stuff. Other people just want to try something out (a wallet, Nostr, an exchange, a node, Lightning…) and expect it to work smoothly. It gives them trust in the system.
These issues all create enormous opportunity for companies that would start to take software delivery quality more seriously. But I’m afraid that would cost them two things most Bitcoin companies already lack: time and proper funding.
Many are stuck in a “pump-my-bags” mindset, focused more on hype than durability, while others simply don’t have the resources to invest in thoughtful UX, thorough testing, or long-term support.
The very few companies that do “get it” and make something that just works, with good leadership and a focus on clear, usable interfaces, often catch flak for it. They’re criticized for making things “too easy,” “too centralized,” or for “lowering the bar,” as if simplicity and accessibility are somehow problems. But in reality, these are the companies pushing the space forward, making it easier for people to use Bitcoin without the constant headache. And newsflash: you can do so while keeping bitcoin’s ethos alive, I think, even without a company as a middleman. Which raises another problem about funding and hard money, something I’ll write more about in chapter 12 of this series.
Back to the software… Protonmail’s wallet comes to mind, which most bitcoiners I know just scoffed at with “it doesn’t have lightning” or “why do we need another wallet?”, while they deliver an excellent product that just works.
Exactly. Take the Stack Wallet project, for example. They had the audacity to incorporate Monero, and because of that, they’re shunned by many Bitcoiners — despite offering a solid, open-source, multi-platform wallet that actually works. It’s a perfect example of how Bitcoiners can sometimes reject the very things that could help bring more users into the space, all because they don’t align perfectly with some purist ideology. Meanwhile, these same bitcoiners support middleman multi-level-marketing tactics from questionable companies.
But I guess yelling “oooh shiiiitcoooooin” is the easier answer, instead of making something that works fine. And by the way, if you want, you can fork that wallet and take a bitcoin-only version to market; however, the same people say “Oh, but that’s not my task”. (Yeah, we all know what your “task” is: gluing a sticker on a pole.)
Another nice illustration is the kind of reaction you get if you “provoke the beast” by using the really usable, always working, always compatible, fast-starting Lightning wallet “Wallet of Satoshi”. If you use that wallet at a bitcoin meetup, you’ll get clever remarks (from people who are technically right, I must say) like:
“You know… that’s custodial, right?” (this came after I gave a presentation, and some smartass walked in when I ordered a beer from the honesty bar at the local meetup…)
“Yeah I know man, but I’m scanning an LNURL here, so I just want it to work fine” (like I have to defend myself to them)
“But you’re supporting these custodial thieves, they already have so much power man”, … said the dude that never even lifted a finger at the meetups to help anyone out or get things set up.
“So… make something better,” I answered.
“Yeah, there’s like Zeus and stuff.”
“Uhuh, I tried it… I never got it to work properly. I just use this WoS today.”
They usually get mad, because they want everyone to follow their lead, and that lead is always the path of most resistance, acting like a repellent for normal users. I know we’re all rat poison in bitcoin, but let’s not take it too far, please.
Other discussions like this always evolve into the “you’re dumb” argument (I like to provoke these types at meetups by scanning a QR code and getting a payment through while they’re fiddling with whatever it is that runs on a node they need to reboot every few hours or so), calling out the ones who act like they know it all but don’t have a solid grasp of the fundamentals themselves. We all make mistakes, and that’s where the real growth happens.
The other answer you can give is: “Hey, why are you sending me a WhatsApp message, man? Why don’t you use FreeBSD and a GnuPG-encrypted message brought to me on a micro-SD card through a sneakernet courier?” And they would be like “eh, I just sent you a WhatsApp message”. “Oh, you know these are all collecting your metadata, right?”…
The double standard among bitcoiners these days regarding usability is incredible.
I know there’s plenty of software trying to be both non-custodial and user-friendly—don’t remind me. My point is: in the Bitcoin world, usability is often treated like a dirty word, something suspicious or even dangerous. But it’s the opposite—we're the hope, and usability is our fire starter, the spark that lights the fuse for real adoption. Without it, we’ll just be another niche tool used by a few, and not the global movement we could be.
Nod to the Node
So, I can finally say it like I think it is: 90% of bitcoin software sucks donkey balls when it comes to usability, UX, and UI.
A few things I want to mention in that regard, because things move too fast for a book to keep up with.
Some examples of bad UX and/or rotten software experiences:
Sparrow 2.0, commonly praised as a “good” wallet, lacks a good interface. Try to create a multisig wallet there, and you’ll soon be met with a persistent bug that frustrates users when signing a transaction. The software prompts for a hardware wallet, for example, even when a software wallet's seed is loaded, creating a confusing, poorly designed user experience. And yes, for bitcoin this is considered a good wallet, as the others are even worse (Nunchuk and Electrum don’t do much better).
Or Blue Wallet, which finally fixed a few UI bugs and annoyances in 2025, but otherwise has a few really rotten design choices which make using it unintuitive. Users don’t get much further on some aspects without looking things up in YouTube tutorials. And only bitcoiners do that anyway; other users just stop opening the app after a while.
Try to create a multisig wallet in Blue Wallet, for example, then take a random part away from the setup and try to use it with the leftover parts. It works. But you’ll be hard-pressed to get it done within 40 minutes. (Unless that’s your job and you demo it every so often in a studio.)
Bitcoin Core is also a prime example of bitcoin software with a command-line interface that repels users like a steaming turd on the street. It has a (let’s say) “spartan” way of working. And yes, I know this piece of software is not meant to be the next Instagram-like user interface for everyday use by the masses, but it’s a far cry from being usable in the really real world. Getting anything done inside that software is a constant battle against clunky commands and cryptic error messages. Even getting a private key out for one of the addresses of your own wallet is hell.
(Bitcoin Core’s infamous command line at work)
Also puzzling to me is the “success” of Bitcoin Core’s wallet. Its command-line “help” is enough to frustrate even the most willing new users.
For example (and there are a dozen things like this), try to get a command like “dumpprivkey” (to see the private key behind one of your own wallet addresses) working.
And some more:
Jade wallet’s inability to store BIP39-compatible seed phrases (at the time of testing, beginning of 2024) when the seed contained a duplicate word. I don’t know if they ever fixed it, as I couldn’t get into my Jade wallet v1 anymore after the PIN entry screen froze and didn’t even come back after a factory reset.
We have Phoenix (where finding the right URL for downloading it is already a first hurdle to take, by the way: try telling the URL by heart to another person, without searching online… I’ll wait).
And the Strike app, alongside the much-despised Wallet of Satoshi (whose makers are, in my opinion, together with the creators of Minibits.cash, the only people in bitcoin who get the importance of a simple-to-use interface and well-thought-out inner workings).
And when I took a shot at the new Trezor Safe 5 wallet, I got some critique because I “tested it like an end user” (yeah, it was my fault… I made the exact same mistakes as the end user who had this wallet and asked me for help; after 30 minutes of trying, we figured out that the words were in fact not seed words but some verification method that also created a 20-word SLIP39 seed by default… and we fat-fingered the stupid interface design a few times, after which the whole thing had to be restarted following a reset to defaults). Try to explain that to a new user who just wanted to have a wallet and had about one hour.
But I guess people demonstrating such things in a studio don’t mind that. It’s just “what do you sell?” now. If the hardware with the unknown supply-chain attack vectors sells well, then everyone is happy: the manufacturer, the marketing team, the podcasts that get sponsorship, and the events that can have a budget. The user comes at the very back, last in line… usually queuing up for coffee with the rest of the liquidity-cows.
It also had accompanying software that kept hanging through some update loops, and it has a clumsy swipe/touch (and sometimes hold), then swipe-again interface that no one I tested it with could get through when creating a new wallet. But of course, it’s always the dumb users’ fault, I guess.
The Zeus wallet, which claims to be super user-friendly and cool, is also something… weird, where you can’t really set it up without some very technical guidance. To their advantage: the very first thing you read on their website is “To start using ZEUS you will need to be running your own Bitcoin lightning node.” (They at least mention it; that’s progress.) But… there it ends for most users, of course. Babysitting a Lightning node is absolutely not something you want to entrust a new user with. Not just to get a wallet up and running, at least :)
Then you get the Linuxsplaining: “Then you’re not the intended user”.
NWC (Nostr Wallet Connect) is a very promising tool, and it has features and a way of working I like in theory. But I’ve yet to see the first smooth implementation that can be understood by normal, non-technical people (even the wording of the text fields is completely unclear).
A small example of making your software unusable? Well… do like NWC does and indicate that the user needs to “connect” through this service with: `nostr+walletconnect://`
So when I asked some people what I needed to fill in there, and how I would get these values… they said “it’s a string”, which tells me nothing. And so, the user leaves.
Tip: add real user guidance. When you connect to a service, at least point to WHERE people can learn how to create such a wallet string, or where to get it: from which website or service.
It’s the same as telling someone:
“Hey, you have to call a number to reach our catering service.”
“OK, your website says ‘phone us’.”
“Yeah, it’s a number, man… a phone number.”
“OK, but where do I find which phone number I need to call?”
“It’s like… a number, man, duh, you’re so dumb.”
So, if there’s no user guidance, the user will leave. After searching for a few minutes I just gave up.
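For reference, here is roughly what such a connection string looks like and what its parts mean. A sketch with made-up values; the general shape (a wallet pubkey, a relay, a secret) comes from the NIP-47 spec:

```bash
# Hypothetical NWC connection string; none of these values are real
nwc='nostr+walletconnect://b889ff5b1513b641e2a139f661a661364979c5beee91842f8f0ef42ab558e9d4?relay=wss://relay.example.com&secret=71a8c2e9d3f0b6a4c8e1f2a3b4c5d6e7f8091a2b3c4d5e6f708192a3b4c5d6e7'

# The three parts a user actually needs to understand:
pubkey="${nwc#*//}"; pubkey="${pubkey%%\?*}"   # the wallet service's pubkey
relay="${nwc#*relay=}"; relay="${relay%%&*}"   # the relay both sides talk over
secret="${nwc#*secret=}"                       # the key that authorizes payments

echo "pubkey: $pubkey"
echo "relay:  $relay"
echo "secret: $secret"
```

If the text fields in a wallet simply said “pubkey”, “relay” and “secret”, and linked to where you get them, half the confusion would be gone.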
Nostr is also starting to feel like that usability-averse stance is gaining traction, although there are promising signs, as the ecosystem needs new users to thrive and seems to realize that all too well. But still… it has its moments of user-repelling snags. Nostr relay lists don’t allow you to copy relay addresses, making it a hassle to set them up on mobile. No one seems to realize that users want to simply copy these values rather than struggle to recall whether it’s “ssw”, “wss”, or “wws://” and type them out manually, eventually making mistakes.
*Puzzling for people who don’t have Alby and want to get such a code.*

Arrays are not human

Then there’s the BIP39 seed system (12 or 24 words representing a key derived via a KDF and mapped through a lookup table). Mathematically, this wordlist is an array, and arrays start at 0. So, word 0 = “abandon”. But most humans naturally count from 1, making “abandon” word 1 in their eyes. Both are valid depending on perspective: 0 is technically correct, 1 is intuitive. There’s no clear winner; both versions float around in Bitcoinland. Same mess with compressed vs. uncompressed keys (don’t get me started). So, when I made bip39tool.com, I gave users the option: go full math mode with 0 = abandon, or go human mode and start with 1 (the default). Even hardware makers don’t agree: Blockplate starts at 1 = abandon (source), while the widely used master BIP39 list has no number attached (just a raw list) but appears to start at 1 because of GitHub’s line numbering (source), and some others use 0 = abandon.
As one user (Codebender) excellently put it: ”Array is an offset, not a cardinal number. The first entry is zero away from the beginning of the array, the second entry is one away.”
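You can watch both counting styles collide with nothing more than a shell. A small sketch; I’m assuming the canonical English wordlist still lives at its usual spot in the bitcoin/bips repository:

```bash
# Fetch the canonical BIP39 English wordlist (2048 words, one per line; assumed URL)
curl -sO https://raw.githubusercontent.com/bitcoin/bips/master/bip-0039/english.txt

# "Human mode": line numbers start at 1, so word 1 = abandon
sed -n '1p' english.txt            # -> abandon

# "Math mode": bash arrays are offsets starting at 0, so word 0 = abandon
mapfile -t words < english.txt
echo "word 0: ${words[0]}"         # -> abandon
echo "word 1: ${words[1]}"         # -> ability
```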
Sink through the ceiling
Most tools today cater purely to Bitcoiners, built with a 'Bitcoin' mindset that expects non-Bitcoin users to adapt instead of being taken along for the ride.
That works, as we’ve seen, but it hits an “orange-colored glass ceiling” at some point. You can build the next Lotus Notes and be really happy about the Lotus Notes enthusiasts and the consultants who got hired, at a hefty fee, to implement and migrate that office note system and its e-mail software. But you’re still in your own niche bubble, thinking your software owns the world, while you can’t be bothered to look further than your own audience.
In the same way, people who were into Lotus Notes were very keen on a big player like IBM acquiring the software and brand to build on it some more.
And of course… Bitcoin is bitcoin: there can’t be a second best, and there can’t be a replacement that comes in and swoops up the market share or replaces its functions the way Lotus Notes was replaced by Google Workspace, Microsoft Exchange, or Slack.
I don’t want to make the point that bitcoin will be replaced by a new player (I’m not a shitcoiner). I do, however, want to make the point that we’ve become collectively lazy and complacent about usability, to the point that we act as if we’re the Google or Microsoft of this world… but with the interface of our own underlying “Lotus Notes” or PGP.
If We Don’t Fix UX, Bitcoin Becomes the next PGP
To understand how bad it is, we again need to take a small look at the past, to better understand the now and to avoid some future mistakes.
Let’s quickly recall some examples from the '90s, like Microsoft’s Clippy, Bob, Vista, and Netscape Navigator 4—failures driven by poor usability, feature creep, or mishaps that eroded trust. Lotus Notes serves as a warning: even widely used platforms can lose their edge if usability and modernization are neglected, leaving room for competitors. Bitcoin doesn’t face competition in the traditional sense, but it does face something worse: the erosion of usability and trust, which threatens the very foundation of hard money we rely on. It’s our only shot at hard money we’re ever going to have. It’s do-or-die.
And that’s our Achilles heel: we stand or die with that trust.
For now, Bitcoin’s trust comes from its decentralized, secure network and its value propositions. But usability has been sidelined for years, largely due to a lack of serious testing. One company in the space recently hired 14 new team members:

- 4 Software Engineers
- 3 R&D Engineers
- 2 Data Scientists
- 1 Machine Learning Engineer
- 1 Talent Acquisition Specialist
- 1 Global Controller
- 2 Marketeers

No testers.
I’m serious about this: if the broader bitcoin space (from Bitcoin Core to the newest, coolest Nostr plugin) doesn’t take testing more seriously, we’ll end up being the creators of the software equivalent of a “Bonzi Buddy” or the next “Lotus Notes”.
Because we’ll be catering to the same people who liked the system 10+ years ago, while having no clue why new users don’t flock to it anymore. Then the core value we’re so proud of right now will be nothing more than a laughing stock, because it stands only through trust.
If we don’t take this seriously, Bitcoin’s core value will fade, and we’ll lose trust—not to altcoins, but to our own neglect.
Fiat parasites and our own complacency are our real “competitor”.
When we lose this battle for usability and relevancy, the math, the code, and the core of bitcoin will still go on existing, but with more and more users locked out as the complexity rises, while others reluctantly try to make efforts to get in. This will impact how you interact with and maintain nodes, as well as how you manage Lightning channels and participate in the P2P economy of the Bitcoin standard. We would become a small island that “gets it”. A curiosum. We’d be the Moloka‘i of decentralization. (Make your own ‘Father Damien’ joke here if you like.)
You can’t see me
This is happening right now.
The real-world usability repels new users, with only a few exceptions holding the fort.
It doesn’t bother the store-of-value and pump-my-bags crowd; they’re not using Bitcoin anyway and don’t care either way, as long as they make their fiat gains in a quarterly report or at the end of a year. When you onboard a business and teach them self-custody, they’re usually set… and then they forget. You encounter the real issues along the way. There’s little real use, as average people still need a “specialist” to hold their hand.
After a while, Bitcoiners who explain things end up like Lotus Notes consultants, trying to make a buck on a system no one else understands or really cares about.
Usability in the Bitcoin ecosystem is stagnating. The so-called “studio usability” presented by Bitcoin influencers with nice podcasts, who demo new stuff and ignore flaws to stay “positive”, is part of the problem. It’s the same with the flood of metal seed-plate products (as an example): we have about 25 products that aren’t as innovative as they’re made out to be. And it’s all fan-tas-tic and cool in every review. Unless you really test it. (Luckily, some actually do that.)
The real issue is that nobody dares to say: “This hardware wallet sucks” or “This product is too buggy to trust.”
On the flip side of that coin we’ve got the LinuxSplaining crowd—treating lack of usability like a virtue. For them, being one of the ten people on a metaphorical Bitcoin leper colony who can navigate some convoluted tool is a badge of honor. They’ll call it a success even if the other eight billion people can’t—or won’t—bother opening the app, dismissing those users as simply too clueless to matter.
These folks would happily sit beside the 30th Satoshi Nakamoto statue, ignoring the peanuts tossed at their faces by passersby. Some will reach 60, sporting stained Star Wars t-shirts, proud to be the only ones who still understand Bitcoin. To them, that’s success, because Bitcoin was always meant to be their obscure triumph, and it’s becoming a way of life for them to be that weird uncle who’s into computers and stuff like “crypto”. ’No man, it’s bitcoin, not cryptooh!’
I want bitcoin to open the door to freedom and an abundance of ideas and real-life solutions, not to become a barbed-wire fence around a leper colony. Even in that grim outpost, you’d still find two Bitcoiners barricaded in their hut without AC, boycotting the other eight because one tweaked their node settings the wrong way.
Let’s build tools and bridges towards bitcoin, as a “usable bitcoin” (because that’s bitcoin too!). Build tools that invite everyone to the table, not just the converted, the Linuxsplainers, and the know-it-alls.
Only then will we move onwards, to a thriving, open ecosystem where you don’t feel like a Lotus Notes consultant who ran away from 1994, but like a bitcoiner who’s part of positive change in the world.
”Fix your bugs, before you try to fix the world.”
by AVB
Support my work here : coinos / avb
-
@ 6be5cc06:5259daf0
2025-01-21 23:17:29 Below you’ll find how to install and configure Privoxy on Pop!_OS.
1. Install Tor and Privoxy

Open the terminal and run:

```bash
sudo apt update
sudo apt install tor privoxy
```

Explanation:

- Tor: routes traffic through the Tor network.
- Privoxy: an advanced proxy that mediates the connection between applications and Tor.
2. Configure Privoxy

Open the Privoxy configuration file:

```bash
sudo nano /etc/privoxy/config
```

Navigate to the last line (shortcut: `Ctrl`+`/`, then `Ctrl`+`V`, to jump directly to the last line) and insert:

```bash
forward-socks5 / 127.0.0.1:9050 .
```

This makes Privoxy forward all traffic to Tor through port 9050.

Save (`Ctrl`+`O`, then `Enter`) and close (`Ctrl`+`X`) the file.
3. Start Tor and Privoxy

Now start and enable the services:

```bash
sudo systemctl start tor
sudo systemctl start privoxy
sudo systemctl enable tor
sudo systemctl enable privoxy
```

Explanation:

- start: starts the services.
- enable: makes them start automatically when the PC boots.
4. Configure the Firefox Browser

To use the Tor network with Firefox:

- Open Firefox.
- Go to Settings → Connection Settings.
- Select Manual proxy configuration.
- Configure it as follows:
  - HTTP Proxy: `127.0.0.1`
  - Port: `8118` (Privoxy’s default port)
  - SOCKS Host (v5): `127.0.0.1`
  - Port: `9050`
- Check the option "Also use this proxy for HTTPS".
- Click OK.
5. Verify the Tor Connection

Open the browser and go to:

```text
https://check.torproject.org/
```

If the message "Congratulations. This browser is configured to use Tor." appears, the configuration is correct.
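You can also test from the terminal. A minimal sketch, assuming `curl` is installed and that check.torproject.org still exposes its small JSON endpoint:

```bash
# Route a request through Privoxy (port 8118) and ask the Tor Project who we are
curl --proxy http://127.0.0.1:8118 https://check.torproject.org/api/ip
# Expected when everything works: {"IsTor":true,"IP":"<some Tor exit IP>"}
```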
Extra Tips

- Privoxy can be tuned to block ads and trackers.
- Other applications can also be configured to use Privoxy.
-
@ 09fbf8f3:fa3d60f0
2024-11-02 08:00:29 > ### Third-party API collection:

Disclaimer:

The OpenAI API Keys recommended here are provided by third-party resellers, so we are not responsible for the validity or security of the keys; you bear the risk of purchasing and using an API Key yourself.
| Provider | Notes | Proxy address | Link |
| --- | --- | --- | --- |
| AiHubMix | Uses the OpenAI enterprise interface; all models site-wide are priced at 86% of the official rate (including GPT-4) | https://aihubmix.com/v1 | Official site |
| OpenAI-HK | OpenAI’s official billing charges each API request by the token length of the request and response. Each model has its own pricing, per 1,000 tokens consumed; 1,000 tokens is roughly 750 English words (about 400 Chinese characters) | https://api.openai-hk.com/ | Official site |
| CloseAI | The largest commercial-grade OpenAI proxy platform in China and the first professional OpenAI relay service, aimed at enterprise customers; it provides a high-quality, stable relay for the official OpenAI API and is the dedicated partner platform of over a hundred companies and several research institutions | https://api.openai-proxy.org | Official site |
| OpenAI-SB | Requires Telegram to obtain an API key | https://api.openai-sb.com | Official site |
Continuously updated…
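As a rough sketch of how these relays are typically used: only the base URL changes compared to the official API, assuming the provider mirrors OpenAI’s `/v1` paths (the model name is illustrative and `$API_KEY` is a placeholder for a key bought from the provider):

```bash
# Standard Chat Completions request pointed at a relay instead of api.openai.com
curl https://api.openai-hk.com/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hello"}]}'
```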
Promotion:

If you can’t access OpenAI, get a VPN from Didiao Cloud (低调云). Official site: https://didiaocloud.xyz
Invite code: `w9AjVJit`
Prices start as low as 1 yuan.
-
@ 2e8970de:63345c7a
2025-05-07 15:26:35 Beijing has stopped publishing hundreds of statistics, making it harder to know what’s going on in the country.
Data stops.
https://www.wsj.com/world/china/china-economy-data-missing-096cac9a?st=j7V11b&reflink=article_copyURL_share
originally posted at https://stacker.news/items/973942
-
@ 9e69e420:d12360c2
2025-01-19 04:48:31 A new report from the National Shooting Sports Foundation (NSSF) shows that civilian firearm possession exceeded 490 million in 2022. The total from 1990 to 2022 is estimated at 491.3 million firearms. In 2022, over ten million firearms were domestically produced, leading to a total of 16,045,911 firearms available in the U.S. market.
Of these, 9,873,136 were handguns, 4,195,192 were rifles, and 1,977,583 were shotguns. Handgun availability aligns with the concealed carry and self-defense market, as all states allow concealed carry, with 29 having constitutional carry laws.
-
@ 4c48cf05:07f52b80
2024-10-30 01:03:42 I believe that five years from now, access to artificial intelligence will be akin to what access to the Internet represents today. It will be the greatest differentiator between the haves and have-nots. Unequal access to artificial intelligence will exacerbate societal inequalities and limit opportunities for those without access to it.
Back in April, the AI Index Steering Committee at the Institute for Human-Centered AI from Stanford University released The AI Index 2024 Annual Report.
Out of the extensive report (502 pages), I chose to focus on the chapter dedicated to Public Opinion. People involved with AI live in a bubble. We all know and understand AI and therefore assume that everyone else does. But, is that really the case once you step out of your regular circles in Seattle or Silicon Valley and hit Main Street?
Two thirds of global respondents have a good understanding of what AI is
The exact number is 67%. My gut feeling is that this number is way too high to be realistic. At the same time, 63% of respondents are aware of ChatGPT, so maybe people are conflating AI with ChatGPT?
If so, there is so much more that they won't see coming.
This number is important because you need to see every other question and response in the survey through the lens of a respondent who believes they have a good understanding of what AI is.
A majority are nervous about AI products and services
52% of global respondents are nervous about products and services that use AI. Leading the pack are Australians at 69%, and the least worried are the Japanese at 23%. The U.S.A. is up there near the top at 63%.
Japan is truly an outlier, with most countries moving between 40% and 60%.
Personal data is the clear victim
Exactly half of the respondents believe that AI companies will protect their personal data. And the other half believes they won’t.
Expected benefits
Again, a majority of people (57%) think that it will change how they do their jobs. As for the impact on their lives, the top hits are getting things done faster (54%) and more entertainment options (51%).
The last one is a head-scratcher for me. Are people looking forward to AI-generated movies?
Concerns
Remember the 57% who think AI will change how they do their jobs? Well, it looks like 37% of them expect to lose theirs. Whether or not this is what will happen, that is a very high number of people who have a direct incentive to oppose AI.
Other key concerns include:
- Misuse for nefarious purposes: 49%
- Violation of citizens' privacy: 45%
Conclusion
This is the first time I have come across this report, and I will make sure to follow future annual reports to see how these trends evolve.
Overall, people are worried about AI. There are many things that could go wrong and people perceive that both jobs and privacy are on the line.
Full citation: Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.
The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.
-
@ eac63075:b4988b48
2025-03-07 14:35:26 Listen to the podcast:
https://open.spotify.com/episode/7lJWc1zaqA9CNhB8coJXaL?si=4147bca317624d34
https://www.fountain.fm/episode/YEGnlBLZhvuj96GSpuk9
Abstract
This paper examines a hypothetical scenario in which the United States, under Trump’s leadership, withdraws from NATO and reduces its support for Europe, thereby enabling a Russian conquest of Ukraine and the subsequent expansion of Moscow’s influence over Eurasia, while the US consolidates its dominance over South America. Drawing on classical geopolitical theories—specifically those of Halford Mackinder, Alfred Thayer Mahan, Rudolf Kjellén, and Friedrich Ratzel—the study analyzes how these frameworks can elucidate the evolving power dynamics and territorial ambitions in a reconfigured global order. The discussion highlights Mackinder’s notion of the Eurasian Heartland and its strategic importance, Mahan’s emphasis on maritime power and control of strategic routes, Kjellén’s view of the state as an expanding organism, and Ratzel’s concept of Lebensraum as a justification for territorial expansion. The paper also explores contemporary developments, such as the US–Ukraine economic agreement and Trump’s overt territorial ambitions involving Greenland and Canada, in light of these theories. By juxtaposing traditional geopolitical concepts with current international relations, the study aims to shed light on the potential implications of such shifts for regional stability, global security, and the balance of power, particularly in relation to emerging neocolonial practices in Latin America.
Introduction
In recent years, the geopolitical dynamics involving the United States, Russia, and Ukraine have sparked analyses from different theoretical perspectives. This paper examines recent events – presupposing a scenario in which Donald Trump withdraws the US from NATO and reduces its support for Europe, allowing a Russian conquest of Ukraine and the expansion of Moscow’s influence over Eurasia, while the US consolidates its dominance over South America – in light of classical geopolitical theories. The ideas of Halford Mackinder, Alfred Thayer Mahan, Rudolf Kjellén, and Friedrich Ratzel are used as reference points. The proposal is to impartially evaluate how each theory can elucidate the developments of this hypothetical scenario, relating Russian territorial expansion in Eurasia to the strategic retreat of the US to the Western Hemisphere.
Initially, we will outline Mackinder’s conception of the Heartland (the central Eurasian territory) and the crucial role of Eastern Europe and Ukraine in the quest for global dominance. Next, we will discuss Mahan’s ideas regarding maritime power and the control of strategic routes, considering the impacts on the naval power balance among the US, Russia, and other maritime powers such as the United Kingdom and Japan. Subsequently, we will examine Kjellén’s organic theory of the state, interpreting the Russian expansionist strategy as a reflection of a state organism in search of vital space. In the same vein, Ratzel’s concept of “Lebensraum” will be explored, along with how Russia could justify territorial expansion based on resources and territory. Finally, the paper connects these theories to the current political context, analyzing the direct negotiations between Washington and Moscow (overlooking Ukraine and Europe), the US policy toward authoritarian regimes in Latin America, and the notion of a hemispheric division of power – the “Island of the Americas” under North American hegemony versus an Eurasia dominated by Russia. Lastly, it considers the possibility that such a geopolitical arrangement may foster the strengthening of authoritarian governments globally, rather than containing them, thus altering the paradigms of the liberal world order.
The Heartland of Mackinder: Ukraine, Eurasia, and Global Dominance
Halford J. Mackinder, a British geographer and pioneer of geopolitics, proposed the celebrated Heartland Theory in the early twentieth century. Mackinder divided the world into geostrategic zones and identified the Heartland—the central continental mass of Eurasia—as the “geographical pivot of history” [5]. His most famous maxim encapsulates this vision: “who rules Eastern Europe commands the Heartland; who rules the Heartland commands the World Island; who rules the World Island commands the world” [5]. Eastern Europe and, in particular, the region of present-day Ukraine, play a key role in this formula. This is because, for Mackinder, Eastern Europe functions as a gateway to the Heartland, providing access to resources and a strategic position for the projection of continental power [5].
Applying this theory to our scenario, the conquest of Ukraine and Eastern European countries by Russia would have profound geopolitical implications. From a Mackinderian point of view, such a conquest would enormously strengthen Russia’s position in the Heartland by adding manpower (population) and Ukraine’s industrial and agricultural resources to its power base [5]. In fact, Mackinder argued that controlling the Heartland conferred formidable geostrategic advantages—a vast terrestrial “natural fortress” protected from naval invasions and rich in resources such as wheat, minerals, and fuels [5]. Thus, if Moscow were to incorporate Ukraine (renowned for its fertile soil and grain production, as well as its mineral reserves) and extend its influence over Eastern Europe, Russia would consolidate the Heartland under its direct control. In this context, the absence of the USA (withdrawn from NATO and less engaged in Europe) would remove an important obstacle to Russian predominance in the region.
With central and eastern Eurasia under Russian influence, it would be possible to move toward the realization of the geopolitical nightmare described by Mackinder for Western maritime powers: a hegemonic continental power capable of projecting power to both Europe and Asia. Mackinder himself warned that if a Heartland power gained additional access to an oceanic coastline—in other words, if it combined land power with a significant maritime front—it would constitute a “danger” to global freedom [5]. In the scenario considered, besides advancing into Eastern Europe, Russia would already possess strategic maritime outlets (for example, in the Black Sea, via Crimea, and in the Baltic, via Kaliningrad or the Baltic States if influenced). Thus, the control of Ukraine would reinforce Russia’s position in the Black Sea and facilitate projection into the Eastern Mediterranean, expanding its oceanic front. From a Mackinderian perspective, this could potentially transform Russia into the dominant power of the “World Island” (the combined mass of Europe, Asia, and Africa), thereby unbalancing the global geopolitical order [5].
It is worth noting that, historically, Mackinder’s doctrine influenced containment strategies: both in the interwar period and during the Cold War, efforts were made to prevent a single power from controlling the Heartland and Eastern Europe. NATO, for example, can be seen as an instrument to prevent Soviet/Russian advances in Europe, in line with Mackinder’s imperative to “contain the Heartland.” Thus, if the USA were to abandon that role—by leaving NATO and tacitly accepting the Russian sphere of influence in Eurasia—we would be witnessing an inversion of the principles that have guided Western policy for decades. In short, under Mackinder’s theory, the Russian conquest of Ukraine and beyond would represent the key for Russia to command the Heartland and, potentially, challenge global hegemony, especially in a scenario where the USA self-restricts to the Western Hemisphere.
The Maritime Power of Mahan and the Naval Balance between West and East
While Mackinder emphasized continental land power, Alfred Thayer Mahan, a nineteenth-century American naval strategist, highlighted the crucial role of maritime power in global dominance. In his work The Influence of Sea Power upon History (1890), Mahan studied the example of the British Empire and concluded that control of the seas paved the way for British supremacy as a world power [10]. He argued that a strong navy and the control of strategic maritime routes were decisive factors for projecting military, political, and economic power. His doctrine can be summarized in the following points: (1) the United States should aspire to be a world power; (2) control of the seas is necessary to achieve that status; (3) such control is obtained through a powerful fleet of warships [17]. In other words, for Mahan, whoever dominates the maritime routes and possesses naval superiority will be in a position to influence global destinies, ensuring trade, supplies, and the rapid movement of military forces.
In the proposed scenario, in which the USA withdraws militarily from Europe and possibly from the Eurasian stage, Mahan’s ideas raise questions about the distribution of maritime power and its effects. Traditionally, the US Navy operates globally, ensuring freedom of navigation and deterring challenges in major seas (Atlantic, Pacific, Indian, etc.). A withdrawal of the USA from NATO could also signal a reduction in its naval presence in the Northeast Atlantic, the Mediterranean Sea, and other areas close to Eurasia. In such a case, who would fill this naval vacuum? Russia, although primarily a land power, has been attempting to modernize its navy and has specific interests—for example, consolidating its dominance in the Black Sea and maintaining a presence in the Mediterranean (with a naval base in Tartus, Syria). The United Kingdom, a historic European maritime power, would remain aligned with the USA but, without American military support in Europe, might potentially be overwhelmed trying to contain an increasingly assertive Russian navy in European waters on its own. Japan, another significant maritime actor allied with the USA, is concerned with the naval balance in the Pacific; without full American engagement, Tokyo might be compelled to expand its own naval power to contain both Russia in the Far East (which maintains a fleet in the Pacific) and, especially, the growing Chinese navy.
According to Mahan’s thinking, strategic maritime routes and choke points (crucial straits and channels) become contested prizes in this power game. With the USA focusing on the Americas, one could imagine Washington reinforcing control over the Panama Canal and Caribbean routes—reviving an “American Gulf” policy in the Western Atlantic and Eastern Pacific. In fact, indications of this orientation emerge in statements attributed to Trump, who once suggested reclaiming direct control over Panama, transforming Canada into a North American state, and even “annexing” Greenland due to its Arctic geopolitical importance [18]. These aspirations reflect a quest to secure advantageous maritime positions near the American continent.
Conversely, in the absence of American presence in the Eastern Atlantic and Mediterranean, Russia would have free rein for regional maritime projection. This could include anything from the unrestricted use of the Black Sea (after dominating Ukraine, thereby ensuring full access to Crimea and Ukrainian ports) to greater influence in the Eastern Mediterranean via Syria and partnerships with countries such as Iran or Egypt. The Baltic Sea would also become an area of expanded Russian interest, pressuring coastal countries and perhaps reducing NATO’s traditional local naval supremacy. However, it is worth noting that even with these regional expansions, Russia lacks a blue-water navy comparable to that of the USA; thus, its initial global maritime impact would be limited without alliances.
An important aspect of Mahan’s theories is that naval power serves as a counterbalance to the land power of the Heartland. Therefore, even if Russia were to dominate the Eurasian continental mass, the continued presence of American naval might on the oceans could prevent complete global domination by Moscow. However, if the USA voluntarily restricts its naval reach to the Americas, it would forgo influencing the power balance in the seas adjacent to Eurasia. Consequently, the balance of maritime power would tend to shift in favor of regional Eurasian actors. The United Kingdom and Japan, traditional allies of the USA, could intensify their naval capabilities to defend regional interests—the United Kingdom safeguarding the North Atlantic and the North Sea, and Japan patrolling the Northwest Pacific—but both would face budgetary and structural limitations in fully compensating for the absence of the American superpower. Consequently, Mahan’s vision suggests that the withdrawal of the USA from the extra-regional scene would weaken the liberal maritime regime, possibly opening space for revisionist powers to contest routes that were previously secured (for example, Russia and China encountering less opposition on the routes of the Arctic and the Indo-Pacific, respectively). In summary, naval hegemony would fragment, and control of strategic seas would become contested, reconfiguring the relative influence of the USA, Russia, and maritime allies such as the United Kingdom and Japan.
Kjellén and the State as a Living Organism: Russian Expansion as an Organic Necessity
Another useful theoretical lens to interpret Russian geopolitical posture is that of Rudolf Kjellén, a Swedish political scientist of the early twentieth century who conceived the State as a living organism. Kjellén, who even coined the term “geopolitics,” was influenced by Friedrich Ratzel’s ideas and by social Darwinism, arguing that States are born, grow, and decline analogously to living beings [13]. In his work Staten som livsform (The State as a Form of Life, 1916), he maintained that States possess an organic dimension in addition to the legal one and that “just as any form of life, States must expand or die” [14]. This expansion would not be motivated merely by aggressive conquest but seen as a necessary growth for the self-preservation of the state organism [14]. In complement, Kjellén echoed Ratzel’s “law of expanding spaces” by asserting that large States expand at the expense of smaller ones, with it being only a matter of time before the great realms fill the available spaces [14]. That is, from the organic perspective, vigorous States tend to incorporate smaller neighboring territories, consolidating territorially much like an organism absorbing nutrients.
Applying this theory to the strategy of contemporary Russia, we can interpret Moscow’s actions—including the invasion of Ukraine and the ambition to restore its sphere of influence in Eurasia—as the expression of an organic drive for expansion. For a strategist influenced by this school, Russia (viewed as a state organism with a long imperial history) needs to expand its territory and influence to ensure its survival and security. The loss of control over spaces that once were part of the Russian Empire or the Soviet Union (such as Ukraine itself, the Caucasus, or Central Asia) may be perceived by Russian elites as an atrophy of the state organism, rendering it vulnerable. Thus, the reincorporation of these territories—whether directly (annexation) or indirectly (political vassalage)—would equate to restoring lost members or strengthening vital organs of the state body. In fact, official Russian arguments often portray Ukraine as an intrinsic part of “Russian historicity,” denying it a fully separate identity—a narrative that aligns with the idea that Russian expansion in that region is natural and necessary for the Russian State (seen as encompassing also Russian speakers beyond its current borders).
Kjellén would thus provide a theoretical justification for Russian territorial expansion as an organic phenomenon. As a great power, Russia would inevitably seek to expand at the expense of smaller neighbors (Ukraine, Georgia, the Baltic States, etc.), as dictated by the tendency of “great spaces to organize” to the detriment of the small [14]. This view can be identified in contemporary Russian doctrines that value spheres of influence and the notion that neighboring countries must gravitate around Moscow in order for the natural order to be maintained. The very idea of “Eurasia” united under Russian leadership (advocated by modern Russian thinkers) echoes this organic conception of vital space and expansion as a sign of the State’s vitality.
However, Kjellén’s theory also warns of the phenomenon of “imperial overstretch,” should a State exceed its internal cohesion limits by expanding excessively [14]. He recognized that extending borders too far could increase friction and vulnerabilities, making it difficult to maintain cohesion—a very large organism may lack functional integration. In the Russian context, this suggests that although expansion is seen as necessary, there are risks if Russia tries to encompass more than it can govern effectively. Conquering Ukraine and subjugating Eastern Europe, for example, could economically and militarily overburden the Russian State, especially if it faced resistance or had to manage hostile populations. However, in the hypothetical scenario we adopt (isolated USA and a weakened Europe), Russia might calculate that the organic benefits of expansion (territory, resources, strategic depth) would outweigh the costs, since external interference would be limited. Thus, through Kjellén’s lens, expansionist Russia behaves as an organism following its instinct for survival and growth, absorbing weaker neighbors; yet such a process is not devoid of challenges, requiring that the “organism Russia” manages to assimilate these new spaces without collapsing under its own weight.
Ratzel and Lebensraum: Resources, Territory, and the Justification for Expansion
Parallel to Kjellén’s organic view, Friedrich Ratzel’s theory offers another conceptual basis for understanding Russian expansion: the concept of Lebensraum (vital space). Ratzel, a German geographer of the late nineteenth century, proposed that the survival and development of a people or nation depended critically on the available physical space and resources. Influenced by Darwinist ideas, he applied the notion of “survival of the fittest” to nations, arguing that human societies need to conquer territory and resources to prosper, and that the stronger and fittest civilizations will naturally prevail over the weaker ones [12]. In 1901, Ratzel coined the term Lebensraum to describe this need for “vital space” as a geographical factor in national power [15].
Subsequently, this idea would be adopted—and extremely distorted—by Nazi ideology to justify Germany’s aggressions in Europe. However, the core of Ratzel’s concept is that territorial expansion is essential for the survival and growth of a State, especially to secure food, raw materials, and space for its population [12].
When examining Russia’s stance under this perspective, we can see several narratives that evoke the logic of Lebensraum. Russia is the largest country in the world by area; however, much of its territory is characterized by adverse climates (tundra, taiga) and is relatively sparsely populated in Siberia. On the other hand, adjacent regions such as Ukraine possess highly arable lands (chernozem—black soil), significant Slavic population density, and additional natural resources (coal in the Donbass, for example). An implicit justification for Russian expansion could be the search for supplementary resources and fertile lands to secure its self-sufficiency and power—exactly as Ratzel described that vigorous nations do. Historical records show that Ratzel emphasized agrarian primacy: he believed that new territories should be colonized by farmers, providing the food base for the nation [12]. Ukraine, historically called the “breadbasket of Europe,” fits perfectly into this vision of conquest for sustenance and agricultural wealth.
Furthermore, Ratzel viewed geography as a determinant of the destiny of nations—peoples adapted to certain habitats seek to expand them if they aspire to grow. In contemporary Russian discourse, there is often mention of the need to ensure security and territorial depth in the face of NATO, or to unite brotherly peoples (Russians and Russian speakers) within a single political space. Such arguments can be read as a modern translation of Lebensraum: the idea that the Russian nation, in order to be secure and flourish, must control a larger space, encompassing buffer zones and critical resources. This Russian “vital space” would naturally include Ukraine and other former Soviet republics, given the historical and infrastructural interdependence. Ratzel emphasized that peoples migrated and expanded when their original homeland no longer met their needs or aspirations [12]. Although contemporary Russia does not suffer from demographic pressure (on the contrary, it faces population decline), under the logic of a great power there is indeed a sentiment of geopolitical insufficiency for having lost influence over areas considered strategic. Thus, reconquering these areas would mean recovering the “habitat” necessary for the Russian nation to prosper and feel secure.
It is important to mention that, in Ratzel’s and Kjellén’s formulations, the pursuit of Lebensraum or organic expansion is not morally qualified—it is treated as a natural process in the politics of power. Thus, on the discursive level, Russia can avoid overly aggressive rhetoric and resort to “natural” justifications: for example, claiming that it needs to occupy Ukraine for defensive purposes (security space) or to reunify peoples (a common cultural and historical space). Beneath these justifications, however, resonates the geopolitical imperative to acquire more territory and resources as a guarantee of national survival, something consonant with Ratzel’s theory. In fact, Russian Realpolitik frequently prioritizes the control of energy resources (gas, oil) and transportation routes. Expanding its influence over central Eurasia would also mean controlling oil pipelines, gas lines, and logistical corridors—essential elements of modern Lebensraum understood as access to vital resources and infrastructure.
In summary, by conquering Ukraine and extending its reach into Eurasia, Russia could effectively invoke the concept of Lebensraum: presenting its expansion not as mere imperialism, but as a necessity to secure indispensable lands and resources for its people and to correct the “injustice” of a vital space diminished by post-Cold War territorial losses. The theories of Ratzel and Kjellén together paint a picture in which Russian expansion emerges almost as a natural law—the great State reclaiming space to ensure its survival and development at the expense of smaller neighbors.
Trump, NATO, and the Threat of American Withdrawal
One of the most alarming changes with Trump's return to power is the tense relationship with the North Atlantic Treaty Organization (NATO). Trump has long criticized allies for not meeting military spending targets, even threatening during his first term to withdraw the US from the alliance if members did not increase their contributions [2]. This threat, initially viewed with skepticism, became concrete after his re-election, leading European allies to seriously consider the possibility of having to defend themselves without American support [1]. In fact, Trump suggested in post-election interviews that the US would only remain in NATO if the allies “paid their bills” – otherwise, he “would seriously consider” leaving [2]. Such statements reinforced the warning that the US might not honor NATO's mutual defense commitment, precisely at a time of continuous Russian threat due to the war in Ukraine [1].
From a theoretical point of view, this posture of American retrenchment evokes the classic tension between maritime power and land power. Alfred Thayer Mahan emphasized that the global power of the US derived largely from its naval superiority and from alliances that ensured control over strategic maritime routes [9]. NATO, since 1949, has served not only to deter Soviet terrestrial advances in Eurasia, but also to secure the US naval presence in the North Atlantic and the Mediterranean – a fundamental element according to Mahan. In turn, Halford Mackinder warned that the balance of global power depended on the control of the Eurasian “Heartland” (the central region of Eurasia). The withdrawal or disengagement of the US (a maritime power) from this region could open the way for a continental power (such as Russia) to expand its influence in Eastern Europe, unbalancing the power balance [3]. In other words, by threatening to leave NATO, Trump jeopardizes the principle of containment that prevented Russian dominance over Eastern Europe – something that Mackinder would see as a dangerous shift in global power in favor of the Heartland power.
Adopting an impartial tone, it is observed that European countries have reacted to this new reality with precautionary measures. Strategic reports already calculate the cost of an autonomous European defense: hundreds of thousands of additional soldiers and investments of hundreds of billions of euros would be required if the US ceased to guarantee the security of the continent [1]. European dependence on American military power is significant and, without it, there would be a need for a major reinforcement of European Armed Forces [1]. This mobilization practically reflects the anticipation of a power vacuum left by the US – a scenario in which Mackinder’s theory (on the primacy of the Heartland and the vulnerability of the “external crescent” where Western Europe is located) regains its relevance.
The US–Ukraine Economic Agreement: Strategic Minerals in Exchange for Support?
Another novelty of Trump's second term is the unprecedented and transactional manner in which Washington has been dealing with the war in Ukraine. Instead of emphasizing security guarantees and alliances, the Trump administration proposed a trade agreement with Ukraine focused on the exploitation of strategic minerals, linking American support to a direct economic benefit. According to sources close to the negotiations, the US and Ukraine are about to sign a pact to share the revenues from the exploitation of critical mineral resources on Ukrainian territory [19]. Materials such as titanium, lithium, rare earths, and uranium – vital for high-tech and defense industries – would be at the core of this agreement [6]. According to the known draft, Ukraine would allocate 50% of the profits from new mineral ventures to a fund controlled by the US, which would reinvest part of the resources in the country’s own reconstruction [6] [19].
It is noteworthy that the pact does not include explicit security guarantees for Kyiv, despite Ukraine remaining under direct military threat from Russia [19]. Essentially, the Trump administration offers financial support and economic investment in exchange for a share in Ukrainian natural resources, but without formally committing to Ukraine's defense in the event of a renewed Russian offensive [19]. American authorities argue that this economic partnership would already be sufficient to “secure Ukrainian interests,” as it would provide the US with its own incentives to desire Ukraine’s stability [19]. “What could be better for Ukraine than being in an economic partnership with the United States?” stated Mike Waltz, a US national security advisor, defending the proposal [19].
Analysts, however, assess the agreement in divided terms. For some, it represents a form of economic exploitation at a time of Ukraine's fragility – comparing the demand to share mineral wealth amid war to a scheme of “mafia protection” [19]. Steven Cook, from the Council on Foreign Relations, classified the offer as “extortion,” and political scientist Virginia P. Fortna observed that charging resources from an invaded country resembles predatory practices [19]. Joseph Nye adds that it is a short-term gain strategy that could be “disastrous in the long run” for American credibility, reflecting the transactional approach that Trump even adopted with close allies in other contexts [19]. On the other hand, some see a future advantage for Kyiv: journalist Pierre Briançon suggests that at least this agreement aligns American commercial interests with Ukraine’s future, which could, in theory, keep the US involved in Ukrainian prosperity in the long term [19]. It is even recalled that President Zelensky himself proposed last year the idea of sharing natural resources with the US to bring the interests of the two countries closer together [19].
From the perspective of geopolitical theories, this agreement illustrates a shift towards economic pragmatism in international relations, approaching concepts proposed by Kjellén. Rudolf Kjellén, who coined the term “geopolitics,” saw the State as a territorial organism that seeks to ensure its survival through self-sufficiency and the control of strategic resources [4]. Trump's demand for a share in Ukrainian resources in order to continue supporting the country reflects a logic of autarky and direct national interest – that is, foreign policy serving primarily to reinforce the economic and material position of the US. This view contrasts with the traditional cooperative approach, but aligns with Kjellén’s idea that powerful States tend to transform international relations into opportunities for their own gain, ensuring access to vital raw materials. Similarly, Friedrich Ratzel argued that States have a “propensity to expand their borders according to their capacities,” seeking vital space (Lebensraum) and resources to sustain their development [11]. The US–Ukraine pact, by conditioning military/economic aid on obtaining tangible advantages (half of the mineral profits), is reminiscent of Ratzel’s perspective: the US, as a rising economic power, expands its economic influence over Ukrainian territory like an organism extending itself to obtain the necessary resources for its well-being. It is, therefore, a form of economic expansionism at the expense of purely ideological commitments or collective security.
Peace Negotiations Excluding Ukraine and the Legitimacy of the Agreement
Another controversial point is the manner in which peace negotiations between Russia and the West have been conducted under Trump's administration. Since taking office, the American president has engaged directly with Moscow in pursuit of a ceasefire, deliberately keeping the Ukrainian government out of the initial discussions [6]. Trump expressed his desire to “leave Zelensky out of the conversation” and also excluded the European Union from any influence in the process [6]. This negotiation strategy—conducted without the presence of the primary interested party, Ukraine—raises serious questions about the legitimacy and sustainability of any resulting agreement.
Historically, peace agreements reached without the direct participation of one of the conflicting parties tend to face problems in implementation and acceptance.
The exclusion of Ukraine in the decision-making phase brings to light the issue of guarantees. As noted, the emerging agreement lacks formal US security guarantees for Ukraine. This implies that, after the agreement is signed, nothing will prevent Russia from launching a new offensive if it deems it convenient, knowing that the US has not committed to defending it militarily. Experts have already warned that a ceasefire without robust protection may only be a pause for Russian rearmament, rendering the conflict “frozen” temporarily and potentially resumed in the near future. The European strategic community has expressed similar concern: without American deterrence, the risk of further Russian aggressions in the region increases considerably [1]. Denmark, for example, has released intelligence reports warning of possible imminent Russian attacks, prompting neighboring countries to accelerate plans for independent defense [1].
The legitimacy of this asymmetric peace agreement (negotiated without Ukraine fully at the table and under economic coercion) is also questionable from a legal and moral point of view. It violates the principle of self-determination by imposing terms decided by great powers on a sovereign country—a practice reminiscent of dark chapters in diplomacy, such as the Munich Agreement of 1938, when powers determined the fate of Czechoslovakia without its consent. In the current case, Ukraine would end up signing the agreement, but from a position of weakness, raising doubts about how durable such a commitment would be.
From Mackinder’s perspective, Ukraine’s removal from the battlefield without guarantees essentially means admitting a greater influence of Russia (the Heartland power) over Eastern Europe. This would alter the balance in Eurasia in a potentially lasting way. Furthermore, the fact that great powers negotiate over the heads of a smaller country evokes the imperial logic of the nineteenth and early twentieth centuries, when empires decided among themselves the divisions of foreign territories—a behavior that Mackinder saw as likely in a world of a “closed system.” With the entire world already occupied by States, Mackinder predicted that powers would begin to compete for influence within this consolidated board, often subjugating smaller states to gain advantage [3]. The US–Russia negotiation regarding Ukraine, without proper Ukrainian representation, exemplifies this type of neo-imperial dynamic in the twenty-first century.
Also noteworthy is the consonance with the ideas of Ratzel and Kjellén: both viewed smaller states as easily relegated to the status of satellites or even “parasitic organisms” in the orbit of larger states. Kjellén spoke of the intrinsic vulnerability of states with little territorial depth or economic dependence, making them susceptible to external pressures [4][20]. Ukraine, weakened by war and dependent on external aid, becomes a concrete example of this theorized vulnerability: it has had to cede strategic resources and accept terms dictated against its will in an attempt to secure its immediate survival. The resulting agreement, therefore, reflects a power imbalance characteristic of the hierarchical international relations described by classical geopolitical theorists.
Implicit Territorial Concessions and Trump’s Public Discourse
A central and controversial point in Trump’s statements regarding the war in Ukraine is the insinuation of territorial concessions to Russia as part of the conflict’s resolution. Publicly, Trump avoided explicitly condemning Russian aggression and even stated that he considered it “unlikely” that Ukraine would be able to retake all the areas occupied by the Russians [16]. In debates and interviews, he suggested that “if I were president, the war would end in 24 hours,” implying that he would force an understanding between Kyiv and Moscow that would likely involve ceding some territory in exchange for peace. This position marks a break with the previous US policy of not recognizing any territorial acquisitions made by force and fuels speculations that a future peace agreement sponsored by Trump would legitimize at least part of Russia’s gains since 2014 (Crimea, Donbass, and areas seized during the 2022 invasion).
The actions of his administration corroborate this interpretation. As discussed, the economic agreement focuses on the exploitation of Ukrainian natural resources, many of which are located precisely in regions currently under Russian military control, such as parts of the Zaporizhzhia Oblast, Donetsk, Lugansk, and the Azov Sea area [6]. A Ukrainian geologist, Hanna Liventseva, highlighted that “most of these elements (strategic minerals) are found in the south of the Ukrainian Shield, mainly in the Azov region, and most of these territories are currently invaded by Russia” [6]. This means that, to make joint exploitation viable, Russia’s de facto control over these areas would have to be recognized—or at least tolerated—in the short term. In other words, the pact indirectly and tacitly accepts Russian territorial gains, as it involves sharing the profits from resources that are not currently accessible to the Kyiv government.
Furthermore, figures close to Trump have made explicit statements regarding the possibility of territorial cession. Mike Waltz, Trump’s national security advisor, publicly stated that Zelensky might need to “cede land to Russia” to end the war [8]. This remark—made public in March 2025—confirms that the Trump White House considers it natural for Ukraine to relinquish parts of its territory in favor of an agreement. Such a stance marks a break from the previous Western consensus, which condemned any territorial gains by force. Under Trump, a pragmatic view (in the eyes of his supporters) or a cynical one (according to his critics) seems to prevail: sacrificing principles of territorial integrity to quickly end hostilities and secure immediate economic benefits.
In theoretical terms, this inclination to validate territorial gains by force recalls the concept of Realpolitik and the geopolitical Darwinism that influenced thinkers such as Ratzel. In Ratzel’s organic conception, expanding states naturally absorb neighboring territories when they are strong enough to do so, while declining states lose territory—a process almost biological in the selection of the fittest [11]. The Trump administration’s acceptance that Ukraine should “give something” to Moscow to seal peace reflects a normalization of this geopolitical selection process: it recognizes the aggressor (Russia) as having the “right” to retain conquered lands, because that is how power realities on the ground dictate. Mackinder, although firmly opposed to allowing Russia to dominate the Heartland, would see this outcome as the logical consequence of the lack of engagement from maritime powers (the USA and the United Kingdom, for example) in sustaining the Ukrainian counterattack. Without the active involvement of maritime power to balance the dispute, land power prevails in Eastern Europe.
From the perspective of international legitimacy, the cession of Ukrainian territories—whether de jure or de facto—creates a dangerous precedent in the post-Cold War era. Rewarding violent aggression with territorial gains may encourage similar strategies in other parts of the world, undermining the architecture of collective security. This is possibly a return to a world of spheres of influence, where great powers define borders and zones of control according to their convenience—something that the rules-based order after 1945 sought to avoid. Here, academic impartiality requires noting that coercion for territorial concessions rarely produces lasting peace, as the aggrieved party—in this case, Ukraine—may accept temporarily but will continue to assert its rights in the long term, as has occurred with other territorial injustices in history.
Territorial Ambitions of Trump: Greenland and Canada
Beyond the Eurasian theater of war, Trump revived geopolitical ambitions involving territories traditionally allied with the US: Greenland (an autonomous territory of Denmark) and Canada. As early as 2019, during his first term, Trump shocked the world by proposing to buy Greenland—rich in minerals and strategically positioned in the Arctic. Upon his return to power, he went further: expressing a “renewed interest” in acquiring Greenland and publicly suggesting the incorporation of Canada as the 51st American state [2].
In January 2025, during a press conference at Mar-a-Lago, he even displayed maps in which the US and Canada appeared merged into a single country, while Greenland was marked as a future American possession [2]. Posts by the president on social media included satirical images with a map of North America where Canada was labeled “51st” and Greenland designated as “Our Land” [2].
Such moves were met with concern and disbelief by allies. Canadian Prime Minister Justin Trudeau was caught on an open microphone warning that Trump’s fixation on annexation “is real” and not just a joke [7]. Trudeau emphasized that Washington appeared to covet Canada’s vast mineral resources, which would explain the insistence on the idea of absorption [7]. In public, Trump argued that Canadians “would be more prosperous as American citizens,” promising tax cuts and better services should they become part of the US [7]. On the Danish side, the reaction to the revived plan regarding Greenland was firmly negative—as it was in 2019—reaffirming that the territory is not for sale. Trump, however, insinuated that the issue might be one of national security, indicating that American possession of Greenland would prevent adverse influences (a reference to China and Russia in the Arctic) [2]. More worryingly, he refused to rule out the use of military means to obtain the island, although he assured that he had no intention of invading Canada by force (in the Canadian case, he spoke of “economic force” to forge a union) [2].
This series of initiatives reflects an unprecedented expansionist impetus by the US in recent times, at least in discourse. Analyzing this through the lens of classical geopolitics offers interesting insights. Friedrich Ratzel and his notion of Lebensraum suggest that powerful states, upon reaching a certain predominance, seek to expand their territory by influencing or incorporating adjacent areas. Trump, by targeting the immediate neighbor (Canada) and a nearby strategic territory (Greenland), appears to resurrect this logic of territorial expansion for the sake of gaining space and resources. Ratzel saw such expansion almost as a natural process for vigorous states, comparable to the growth of an organism [11]. From this perspective, the US would be exercising its “right” of expansion in North America and the polar region, integrating areas of vital interest.
Additionally, Alfred Mahan’s view on maritime power helps to understand the strategic value of Greenland. Mahan postulated that control of key maritime chokepoints and naval bases ensures global advantage [9]. Greenland, situated between the North Atlantic and the Arctic, has become increasingly relevant as climate change opens new polar maritime routes and reveals vast mineral deposits (including rare earth elements and oil). For the US, having a presence or sovereignty over Greenland would mean dominating the gateway to the Arctic and denying this space to rivals. This aligns with Mahan’s strategy of securing commercial and military routes (in this case, potential Arctic routes) and resources to consolidate naval supremacy. On the other hand, the incorporation of Canada—with its enormous territory, Arctic coastline, and abundant natural resources—would provide the US with formidable geoeconomic and geopolitical reinforcement, practically eliminating vulnerabilities along its northern border. This is an ambitious project that also echoes ideas of Kjellén, for whom an ideal State should seek territorial completeness and economic self-sufficiency within its region. Incorporating Canada would be the pinnacle of American regional autarky, turning North America into a unified bloc under Washington (a scenario reminiscent of the “pan-regions” conceived by twentieth-century geopoliticians influenced by Kjellén).
It is important to note, however, that these ambitions face enormous legal and political obstacles. The sovereignty of Canada and Greenland (Denmark) is guaranteed by international law, and both peoples categorically reject the idea of annexation. Any hostile action by the US against these countries would shake alliances and the world order itself. Even so, the very fact that an American president suggests such possibilities already produces geopolitical effects: traditional partners begin to distrust Washington’s intentions, seek alternative alliances, and strengthen nationalist discourses of resistance. In summary, Trump’s expansionist intentions in Greenland and Canada rekindle old territorial issues and paradoxically place the US in the position of a revisionist power—a role once associated with empires in search of colonies.
Implications for Brazil and South America: A New Neocolonization?
In light of this geopolitical reconfiguration driven by Trump's USA—with a reordering of alliances and a possible partition of spheres of influence among great powers—the question arises: what is the impact on Brazil and the other countries of South America? Traditionally, Latin America has been under the aegis of the Monroe Doctrine (1823), which established non-interference by Europe in the region and, implicitly, the primacy of the USA in the Western Hemisphere. In the post–Cold War period, this influence translated more into political and economic leadership, without formal annexations or direct territorial domination. However, the current context points to a kind of “neocolonization” of the Global South, in which larger powers seek to control resources and peripheral governments in an indirect yet effective manner.
Mackinder’s theories can be used to illuminate this dynamic. As mentioned, Mackinder envisioned the twentieth-century world as a closed system, in which there were no longer any unknown lands to be colonized—hence, the powers would fight among themselves for control over already occupied regions [3]. He predicted that Africa and Latin America (then largely European colonies or semi-colonies) would continue as boards upon which the great powers would project their disputes, a form of neocolonialism. In the current scenario, we see the USA proposing exchanges of protection for resources (as in Ukraine) and even leaders of developing countries seeking similar agreements. A notable example: the President of the Democratic Republic of the Congo, Felix Tshisekedi, praised the USA–Ukraine initiative and suggested an analogous agreement involving Congolese mineral wealth in exchange for US support against internal rebels (M23) [19]. In other words, African countries and possibly South American ones may enter into this logic of offering privileged access to resources (cobalt, lithium, food, biodiversity) in order to obtain security guarantees or investments. This represents a regression to the times when external powers dictated the directions of the South in exchange for promises of protection, characterizing a strategic neocolonialism.
For Brazil, in particular, this rearrangement generates both opportunities and risks. As a regional power with considerable diplomatic autonomy, Brazil has historically sought to balance relationships with the USA, Europe, China, and other actors, avoiding automatic alignments. However, in a world where Trump’s USA is actively redefining spheres of influence—possibly making deals with Russia that divide priorities (for example, Washington focusing on the Western Hemisphere and Moscow on the Eastern)—South America could once again be seen as an exclusive American sphere of influence. From this perspective, Washington could pressure South American countries to align with its directives, limiting partnerships with rivals (such as China) and seeking privileged access to strategic resources (such as the Amazon, fresh water, minerals, and agricultural commodities). Some indications are already emerging: Trump’s transactional approach mentioned by Nye included pressures on Canada and Mexico regarding border and trade issues, under the threat of commercial sanctions. It would not be unthinkable to adopt a hard line, for example, with regard to Brazilian environmental policies (linked to the Amazon) or Brazil’s relations with China, using tariffs or incentives as leverage—a sort of geopolitics of economic coercion.
On the other hand, Brazil and its neighbors could also attempt to take advantage of the Sino–North American competition. If the USA is distracted consolidating its hemispheric “hard power” hegemony (even with annexation fantasies in the north), powers such as China may advance their economic presence in South America through investments and trade (Belt and Road, infrastructure financing)—which is already happening. This would constitute an indirect neocolonial dispute in the South: Chinese loans and investments versus American demands and agreements, partly reminiscent of the nineteenth-century imperial competition (when the United Kingdom, USA, and others competed for Latin American markets and resources).
From a conceptual standpoint, Mackinder might classify South America as part of the “Outer Crescent” (external insular crescent)—peripheral to the great Eurasian “World-Island,” yet still crucial as a source of resources and a strategic position in the South Atlantic and Pacific. If the USA consolidates an informal empire in the Americas, it would be reinforcing its “insular bastion” far from the Eurasian Heartland, a strategy that Mackinder once suggested for maritime powers: to control islands and peripheral continents to compensate for the disadvantage of not controlling the Heartland. However, an excessive US dominance in the South could lead to local resistance and alternative alignments, unbalancing the region.
Kjellén would add that for Brazil to maintain its decisive sovereignty, it will need to strengthen its autarky and internal cohesion—in other words, reduce vulnerabilities (economic, military, social) that external powers might exploit [4]. Meanwhile, Mahan might point out the importance for Brazil of controlling its maritime routes and coastlines (South Atlantic) to avoid being at the mercy of a naval power like the USA. And Ratzel would remind us that states that do not expand their influence tend to be absorbed by foreign influences—which, in the context of Brazil, does not mean conquering neighboring territories, but rather actively leading South American integration to create a block more resilient to external intrusion.
In summary, South America finds itself in a more competitive and segmented world, where major players are resurrecting practices from past eras. The notion of “neocolonization” here does not imply direct occupation, but rather mechanisms of dependency: whether through unequal economic agreements or through diplomatic or military pressure for alignment. Brazil, as the largest economy and territory on the subcontinent, will have to navigate with heightened caution. A new global power balance, marked by the division of spheres of influence among the USA, China, and Russia, may reduce the sovereign maneuvering space of South American countries unless they act jointly. Thus, theoretical reflection suggests the need for South–South strategies, reinforcement of regional organizations, and diversification of partnerships to avoid falling into modern “neocolonial traps.”
Conclusion
The emerging post–re-election geopolitical conjuncture of Donald Trump signals a return to classical geopolitical principles, after several decades of predominance of institutional liberal views. We witness the revaluation of concepts such as spheres of influence, exchanges of protection for resources, naval power versus land power, and disputes over territory and raw materials—all central themes in the writings of Mackinder, Mahan, Kjellén, and Ratzel at the end of the nineteenth and the beginning of the twentieth century. An impartial analysis of these events, in light of these theories, shows internal coherence in Trump’s actions: although controversial, they follow a logic of maximizing national interest and the relative power of the USA on the world stage, even at the expense of established principles and alliances.
Halford Mackinder reminds us that, in a closed world with no new lands to conquer, the great powers will seek to redistribute the world among themselves [3]. This seems to manifest in the direct understandings between the USA and Russia over the fate of Ukraine, and in American ambitions in the Arctic and the Western Hemisphere. Alfred Mahan emphasizes that the control of the seas and strategic positions ensures supremacy—we see reflections of this in Trump’s obsession with Greenland (Arctic) and the possible neglect of the importance of maintaining NATO (and therefore the North Atlantic) as a cohesive bloc, something that Mahan’s theory would criticize due to the risk of a naval vacuum. Rudolf Kjellén and Friedrich Ratzel provide the framework to understand the more aggressive facet of expansionist nationalism: the idea of the State as an organism that needs to grow, secure resources, and seek self-sufficiency explains everything from the extortionate agreement imposed on Ukraine to the annexation rhetoric regarding Canada.
The potential consequences are profound. In the short term, we may witness a precarious ceasefire in the Ukraine war, with consolidated Russian territorial gains and Ukraine economically tied to the USA, but without formal military protection—a fragile “armed peace.” Western Europe, alarmed, may accelerate its independent militarization, perhaps marking the beginning of European defense autonomy, as is already openly debated [1]. At the far end of the globe, American activism in the Arctic and the Americas may reshape alliances: countries like Canada, once aligned with Washington, might seek to guarantee their sovereignty by distancing themselves from it; powers like China could take advantage of the openings to increase their presence in Latin America and Africa through economic diplomacy; and emerging countries of the Global South may have to choose between submitting to new “guardianships” or strengthening South–South cooperation.
Ultimately, the current situation reinforces the relevance of studying geopolitics through historical lenses. The actions of the Trump administration indicate that, despite all technological and normative advances, the competition for geographic power has not disappeared—it has merely assumed new formats. Academic impartiality obliges us not to prematurely judge whether these strategies will be successful or beneficial, but history and theory warn that neo-imperial movements tend to generate counter-reactions. As Mackinder insinuated, “every shock or change anywhere reverberates around the world,” and a sudden move by a superpower tends to provoke unforeseen adjustments and chain conflicts. It remains to be seen how the other actors—including Brazil and its neighbors—will adapt to this new chapter in the great struggle for global power, in which centuries-old theories once again have a surprising explanatory power over present events.
Bibliography
[1] A Referência. (2025). Europa calcula o custo de se defender sem os EUA: 300 mil soldados e 250 bilhões de euros a mais. Retrieved March 3, 2025, from https://areferencia.com/europa/europa-calcula-o-custo-de-se-defender-sem-os-eua-300-mil-soldados-e-250-bilhoes-de-euros-a-mais/
[2] Brexit Institute. (2025). What happens if Trump invades Greenland? Retrieved March 3, 2025, from https://dcubrexitinstitute.eu/2025/01/what-happens-if-trump-invades-greenland/
[3] Fettweis, C. (2025). Mackinder and Angell. Retrieved March 3, 2025, from https://cfettweis.com/wp-content/uploads/Mackinder-and-Angell.pdf
[4] Diva-Portal. (2025). The geopolitics of territorial relativity. Poland seen by Rudolf Kjellén. Retrieved March 3, 2025, from https://www.diva-portal.org/smash/get/diva2:1696547/FULLTEXT02
[5] Geopolitical Monitor. (2025). The Russo-Ukrainian War and Mackinder’s Heartland Thesis. Retrieved March 3, 2025, from https://www.geopoliticalmonitor.com/the-ukraine-war-and-mackinders-heartland-thesis/
[6] Instituto Humanitas Unisinos. (2025). Trump obriga Zelensky a hipotecar a exploração de minerais críticos em troca do seu apoio. Retrieved March 3, 2025, from https://www.ihu.unisinos.br/648986-trump-obriga-zelensky-a-hipotecar-a-exploracao-de-minerais-criticos-em-troca-do-seu-apoio
[7] Politico. (2025). Trump’s annexation fixation is no joke, Trudeau warns. Retrieved March 3, 2025, from https://www.politico.com/news/2025/02/07/canada-trudeau-trump-51-state-00203156
[8] The Daily Beast. (2025). Top Trump Adviser Moves Goalpost for Ukraine to End War. Retrieved March 3, 2025, from https://www.thedailybeast.com/top-trump-adviser-moves-goalpost-for-ukraine-to-end-war/
[9] The Geostrata. (2025). Alfred Thayer Mahan and Supremacy of Naval Power. Retrieved March 3, 2025, from https://www.thegeostrata.com/post/alfred-thayer-mahan-and-supremacy-of-naval-power
[10] U.S. Department of State. (2025). Mahan’s The Influence of Sea Power upon History: Securing International Markets in the 1890s. Retrieved March 3, 2025, from https://history.state.gov/milestones/1866-1898/mahan
[11] Britannica. (2025a). Friedrich Ratzel | Biogeography, Anthropogeography, Political Geography. Retrieved March 3, 2025, from https://www.britannica.com/biography/Friedrich-Ratzel
[12] Britannica. (2025b). Lebensraum. Retrieved March 3, 2025, from https://www.britannica.com/topic/Lebensraum
[13] Britannica. (2025c). Rudolf Kjellén. Retrieved March 3, 2025, from https://www.britannica.com/biography/Rudolf-Kjellen
[14] Wikipedia (ZH). (2025). Rudolf Kjellén. Retrieved March 3, 2025, from https://zh.wikipedia.org/wiki/w:Rudolf_Kjell%C3%A9n
[15] Wikipedia. (2025). Lebensraum. Retrieved March 3, 2025, from https://en.wikipedia.org/wiki/Lebensraum
[16] YouTube. (2025). Trump says Ukraine 'unlikely to get all land back' or join NATO [Video]. Retrieved March 3, 2025, from https://www.youtube.com/watch?v=BmHzAVLhsXU
[17] U.S. Naval Institute. (2025). Operation World Peace. Retrieved March 3, 2025, from https://www.usni.org/magazines/proceedings/1955/june/operation-world-peace
[18] Emissary. (2024). Trump’s Greenland and Panama Canal Threats Are a Throwback to an Old, Misguided Foreign Policy. Retrieved March 3, 2025, from https://carnegieendowment.org/emissary/2025/01/trump-greenland-panama-canal-monroe-doctrine-policy?lang=en
[19] A Referência. Acordo EUA-Ucrânia está praticamente fechado, mas analistas se dividem sobre quem sairá ganhando. Retrieved March 3, 2025, from https://areferencia.com/europa/acordo-eua-ucrania-esta-praticamente-fechado-mas-analistas-se-dividem-sobre-quem-saira-ganhando/
[20] Wikipedia. (2025). Geopolitik. Retrieved March 3, 2025, from https://en.wikipedia.org/wiki/Geopolitik
-
@ 0e29efc2:ff142af2
2025-05-07 15:09:46
Table of Contents
- Intro
- Important Terminology
- Getting Started
  - Where do I buy bitcoin?
  - Okay, I bought some bitcoin-now what?
    - Less than 0.01 BTC
    - More than 0.01 BTC and less than 0.1 BTC
    - More than 0.1 BTC
- How Bitcoin Works
- Skepticism
  - Someone will hack it
  - The government will try to stop it
  - It’s not backed by anything
- Conclusion
Intro
Maybe you saw an article in Forbes, a news segment about MicroStrategy (MSTR), or you glanced at the bitcoin price chart; whatever the spark, your curiosity led you here. Enough friends and relatives keep asking me about bitcoin that I finally organized my thoughts into a single reference. This is not a comprehensive guide—it assumes you trust me as a heuristic.
Important Terminology
Sat (satoshi) – the smallest unit of bitcoin. One bitcoin (₿) equals 100 000 000 sats.
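The conversion is pure arithmetic. Here is a minimal Python sketch, purely illustrative; note that real wallet software works in integer sats to avoid floating-point error:

```python
SATS_PER_BTC = 100_000_000  # 1 bitcoin = 100,000,000 sats

def btc_to_sats(btc: float) -> int:
    """Convert a bitcoin amount to whole satoshis."""
    return round(btc * SATS_PER_BTC)

def sats_to_btc(sats: int) -> float:
    """Convert satoshis back to a bitcoin amount."""
    return sats / SATS_PER_BTC

print(btc_to_sats(0.01))    # 1000000
print(sats_to_btc(50_000))  # 0.0005
```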
Getting Started
Where do I buy bitcoin?
I use River because it publishes proof‑of‑reserves, supports the Lightning Network, and pays interest on idle USD balances (currently 3.8 %).
Okay, I bought some bitcoin-now what?
Withdraw it immediately. Centralized exchanges can and do fail. Your next step depends on how much bitcoin you hold.
If at any point you're struggling, please reach out to me.
Less than 0.01 BTC
- On your phone open Safari (iOS) or Chrome (Android).
- Paste `https://wallet.cashu.me?mint=https://mint.westernbtc.com`. Confirm the prompt that asks whether you trust `https://mint.westernbtc.com`. I run this mint so beginners can skip the gnarly parts.
- Complete setup.
- Tap Receive → LIGHTNING → enter amount → COPY.
- In River choose Send → Send to a Bitcoin wallet, paste the invoice, verify, and send.
- Return to the wallet; your sats should appear.
More than 0.01 BTC and less than 0.1 BTC
It's time for cold storage. Cold storage means a dedicated signing device not connected to the internet. Think of it like keys to a house. If you have the keys (your cold storage signing device), you can get into your house (the bitcoin). I recommend and use the COLDCARD Q or COLDCARD MK4 from COLDCARD. See this thorough walkthrough.
The creator nostr:npub1rxysxnjkhrmqd3ey73dp9n5y5yvyzcs64acc9g0k2epcpwwyya4spvhnp8 makes reliable content.
More than 0.1 BTC
The next security upgrade involves something called multisig. It requires the use of multiple devices instead of one. Think of those nuclear launch silos in movies where two keys need to be turned in order to launch the missile. One person can't reach both keys, so you need two people. Like the two keys needing to be turned, we need a certain number of keys (signing devices) to be used.
This offers a number of benefits. Say you have a 2-of-3 multisig setup. You would need two of the three keys to move the bitcoin. If you were to lose one, you could use the two others to move it instead. Many choose to geographically distribute the keys; choosing to keep one at a friend’s house or with a bank.
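As a toy illustration of the 2-of-3 rule (this is not real wallet code; actual multisig is enforced by Bitcoin script and your signing devices, not by Python):

```python
def can_spend(signatures: set[str], wallet_keys: set[str], threshold: int = 2) -> bool:
    """Spending is allowed only when at least `threshold` distinct wallet keys signed."""
    return len(signatures & wallet_keys) >= threshold

keys = {"home_key", "friend_key", "bank_key"}     # a 2-of-3 setup
print(can_spend({"home_key"}, keys))              # False: one key alone cannot spend
print(can_spend({"home_key", "bank_key"}, keys))  # True: any two keys still work, even if one is lost
```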
The previous video I linked covers multisig as well. Again, please reach out to me if you need help.
How Bitcoin Works
I'm going to paint a scene portraying the basics of how bitcoin works. Picture a race that's supposed to take 10 minutes to run start-to-finish, and there's a crowd of people spectating. When the fastest runner crosses the finish line, they're awarded 50 bitcoin. Everyone in the crowd recognizes who won, and writes it down on their own scoreboard. Then, the next race begins.
Now, let's say more racers who've had special training join. They start winning consistently because of it, and now the race only lasts about 9 minutes. There's a special rule everyone in the crowd agreed to, that they can make the race harder to ensure it's around 10 minutes long. So they make the race harder to counteract the faster runners.
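The race is a stand-in for proof-of-work, and making the race harder is the difficulty adjustment. A minimal sketch of the idea follows; it is illustrative only, since real Bitcoin hashes 80-byte block headers with double SHA-256 and retargets difficulty every 2016 blocks:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Try nonces until the hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce  # this miner "wins the race"
        nonce += 1

# Requiring more leading zeros makes the race harder, so it takes longer
# on average -- that is how the roughly 10-minute target is maintained.
print(mine("block with some transactions", 4))
```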
With this in mind, let's get to the skepticism you might have.
Skepticism
Someone will hack it
Think of bitcoin as the people in the crowd. If someone tries to cheat and writes on their scoreboard that they have a billion bitcoin, their scoreboard is going to look different than everybody else’s. The other people in the crowd will cross-reference with each other and decide to ignore that person who cheated.
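A toy sketch of that cross-referencing (purely illustrative; real nodes reject a bad ledger because it breaks consensus rules they each verify independently, not because of a vote):

```python
from collections import Counter

scoreboards = ["alice: 50 BTC", "alice: 50 BTC", "alice: 50 BTC",
               "cheater: 1,000,000,000 BTC"]  # one tampered copy

# Every participant compares notes; the lone divergent scoreboard is ignored.
consensus, _count = Counter(scoreboards).most_common(1)[0]
print(consensus)  # "alice: 50 BTC"
```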
The government will try to stop it
Again, think of the crowd. In reality, the "crowd participants" are scattered all around the world. You might be able to stop many of them, but it would be almost impossible to stop everyone. Imagine people are watching the race on TV, can you find everyone who's spectating? Ironically, attempted bans often increase interest.
It’s not backed by anything.
Think of the runners. The runners are bitcoin miners. They have to expend real energy to participate in the race. The more bitcoin miners, the more secure the network. In summary, it's backed by electricity and work.
Conclusion
There are too many topics to cover in one article. I haven't even touched on the history of money, what money is, scarcity, etc. The best way to learn is to research the topics you're interested in for yourself. It took months of deep diving before I was sold on bitcoin, and I had many touch points before that.
Once you see it though, you can't unsee it.
-
@ dc4cd086:cee77c06
2024-10-18 04:08:33
Have you ever wanted to learn from lengthy educational videos but found it challenging to navigate through hours of content? Our new tool addresses this problem by transforming long-form video lectures into easily digestible, searchable content.
Key Features:
Video Processing:
- Automatically downloads YouTube videos, transcripts, and chapter information
- Splits transcripts into sections based on video chapters (a sketch of this step follows below)
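For illustration, here is a minimal sketch of the chapter-splitting step. This is not the repository's actual code; the data shapes are assumptions modeled on what YouTube transcript and chapter metadata commonly look like, with start times in seconds:

```python
def split_by_chapters(transcript, chapters):
    """Group transcript entries under the chapter whose time range contains them.

    transcript: list of {"start": float_seconds, "text": str}
    chapters:   list of {"title": str, "start_time": float, "end_time": float}
    """
    sections = {c["title"]: [] for c in chapters}
    for entry in transcript:
        for c in chapters:
            if c["start_time"] <= entry["start"] < c["end_time"]:
                sections[c["title"]].append(entry["text"])
                break
    return {title: " ".join(texts) for title, texts in sections.items()}
```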
Content Summarization:
- Utilizes language models to transform spoken content into clear, readable text
- Formats output in AsciiDoc for improved readability and navigation
- Highlights key terms and concepts with [[term]] notation for potential cross-referencing
Diagram Extraction:
- Analyzes video entropy to identify static diagram/slide sections (see the sketch after this list)
- Provides a user-friendly GUI for manual selection of relevant time ranges
- Allows users to pick representative frames from selected ranges
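As a rough sketch of that entropy signal, the following assumes OpenCV and NumPy and a hypothetical input file `lecture.mp4`; the repository may compute it differently, but the idea is that static slide sections show up as flat stretches in the per-frame entropy curve:

```python
import cv2
import numpy as np

def frame_entropy(frame) -> float:
    """Shannon entropy of a grayscale frame's intensity histogram."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

cap = cv2.VideoCapture("lecture.mp4")  # hypothetical input file
entropies = []
ok, frame = cap.read()
while ok:
    entropies.append(frame_entropy(frame))
    ok, frame = cap.read()
cap.release()
# Plotting `entropies` over time reveals the flat regions worth sampling for diagrams.
```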
Going Forward:
The tool is currently undergoing a rewrite to improve organization and functionality, but you are welcome to try the current version, though it might not work on every machine. It will support multiple open and closed language models, so users can choose. It is free and open-source, allowing for personal customization and integration with various knowledge bases. Even if something is not on our official Alexandria knowledge base, you are still welcome to use the tool with your own personal or community knowledge bases! We want to help find connections between ideas that exist across relays, allowing individuals and groups to mix and match knowledge bases between each other, with any degree of openness you care for.
While designed with #Alexandria users in mind, it's available for anyone to use and adapt to their own learning needs.
Screenshots
Frame Selection
This is a screenshot of the frame selection interface. You'll see a signal that represents frame entropy over time. The vertical lines indicate the start and end of a chapter. Within these chapters you can select frames by clicking and dragging the mouse over the range where you think a diagram appears in that chapter. At the bottom is an option that tells the program to select a specific number of frames from that selection.
Diagram Extraction
This is a screenshot of the diagram extraction interface. For every selection you've made, there will be a set of frames that you can choose from. You can select and deselect as many frames as you'd like to save.
Links
- repo: https://github.com/limina1/video_article_converter
- Nostr Apps 101: https://www.youtube.com/watch?v=Flxa_jkErqE
Output
And now for a demonstration of this tool's final output, after some quick cleanup. The video we will be using this tool on is titled Nostr Apps 101 by nostr:npub1nxy4qpqnld6kmpphjykvx2lqwvxmuxluddwjamm4nc29ds3elyzsm5avr7 during Nostrasia. The following thread is an analogue of the modular articles we are constructing for Alexandria, and I hope it conveys the functionality we want to create in the knowledge space. Note: this tool is the first step! You could use a different prompt that is most appropriate for the specific context of the transcript you are working with, and you can also manually clean up any discrepancies that don't portray the video accurately.
nostr:nevent1qvzqqqqqqypzp5r5hd579v2sszvvzfel677c8dxgxm3skl773sujlsuft64c44ncqy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qgwwaehxw309ahx7uewd3hkctcpzemhxue69uhhyetvv9ujumt0wd68ytnsw43z7qghwaehxw309aex2mrp0yhxummnw3ezucnpdejz7qgewaehxw309aex2mrp0yh8xmn0wf6zuum0vd5kzmp0qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgc7hglpn
Or view on Coracle nostr:nevent1qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgcppemhxue69uhkummn9ekx7mp0qgsdqa9md83tz5yqnrqjw07hhkpmfjpkuv9hlh5v8yhu8z274w9dv7qnnq0s3
-
@ 8947a945:9bfcf626
2024-10-17 08:06:55
Hello everyone on Nostr, including my watchers and followers from DeviantArt and other art platforms.
Since the beginning of 2024 I have been using AI to generate fanart of anime girls, and I offer exclusive content for those who especially enjoy my work.
I post all of my work on DeviantArt and have been gradually building up a follower base there. Everything keeps growing at its own pace; personally, I see it as one of my online business portfolios.
On September 16, 2024, one of my followers sent me a private message saying they loved my work and wanted to buy some pieces, but only as NFTs, offering a very high price per piece. After that, the buyer and I continued the conversation by email.
Here is a short summary of the negotiation.
(From this point on I will call the buyer a "scammer," because the cards have already been turned face up: they were a fraudster.)
- The first scammer picked the pieces they wanted and offered a very high price for each, but only through a specific NFT marketplace website of their choosing, running on Ethereum (ERC20). When I looked at the site it felt off: sellers had to register an account with an email address before they could link a wallet such as MetaMask, and once linked, the wallet could not be changed. At the time I was using a wallet that was not linked to my hardware wallet; I tried switching wallets back and forth and it simply would not work, and even after logging out the same wallet address was still stuck there. That alone already looked strange. On top of that, the site charged 0.15 - 0.2 ETH to mint; converted into Thai baht, that is absurdly expensive.
- The first scammer kept trying to coax and persuade me: once I minted the piece and told them, they would rush to buy it; when it sold at a profit I would get the gas fee back plus a profit, so I had nothing to lose, right? Luckily, I had no spare funds to buy ETH at the time, so I negotiated with them as follows:
- I proposed sending a low-resolution version of my art by email first, in exchange for them sending the ETH for the minting fee; once I had the ETH, I would upscale the art and email it to them, a straight trade of good faith. They refused.
- I offered to let them buy through my online shop on Buy Me a Coffee, paying in USD. They refused.
- I offered to sell through a PPV Lightning invoice, which I can issue as a creator on Creatr. They refused.
- I told them they would have to wait for my salary, and they said OK.
The following week, a second scammer contacted me with a similar approach but a different website, offering a much higher price than the first one. The second site was even worse than the first: you had to register with an email address and could not connect MetaMask at all. After signing up you were given a fresh, empty wallet, and I was expected to deposit ETH into that wallet first to cover the 0.2 ETH minting fee.
I told the second scammer to wait, because I was in the middle of a deal with the first buyer and was waiting for money to buy ETH as working capital. This person asked me to show them the first website, and shortly afterwards warned me that the first site was a scam from which money could not be withdrawn. They even sent screenshots of chats with victims of the first site who could not withdraw their funds. Not content with that, they also bluffed about OpenSea, claiming customers there could sell their art but could not withdraw the proceeds.
"You can't withdraw from OpenSea." That was the line that really set off my alarm bells, because on OpenSea users connect their wallets directly to the marketplace: trades happen wallet to wallet, with money flowing straight in and out of each user's own wallet, and OpenSea only collects a platform fee without ever holding customers' funds. Gas fees this year are also much lower than in the 2020 bull run cycle, currently around 0.0001 ETH (still more expensive than BTC, though).
I took the matter to Phi Bit for advice, but an admin spoke with me instead. The admin said no one had asked about this issue before; my case was the first, but their view matched my hypothesis that it was probably a scam. At the same time I asked on a Thai NFT community page and got clear confirmation that it was a scam, with more than a few victims. Once I knew what I was dealing with, I decided to wage a little psychological warfare on both scammers to see whether they really were fraudsters.
So on September 30 I trolled both of them: I minted the very artwork they had offered to buy, listed it on OpenSea, and messaged them:
"Minted it for you, but honestly I didn't have enough money, sorry, so I minted it on OpenSea instead. I'm broke; OpenSea is as far as I can go. Hurry up and buy it, plenty of people are eyeing my work. And I'm not charging any royalty fee, so you can resell it without sharing any profit with me."
That was all it took for the mind games to begin, but they were cornered and had to swallow their own words. The best part:
Them: "I waited for you! I told my team we would have it on Monday, September 30. My teammates saw your work, it really is beautiful, so they loaded a full 9.3 ETH into the wallet just for this." (+ a screenshot showing the balance)
Me: "Oh really? Then show me the wallet address with that transaction."
Them: "2 ETH is 5,000 dollars, you know."
Me: "So what? Show me the wallet address holding the 9.3 ETH. You said you had the money ready and waiting, so show me when it was deposited. Just the address, mind you; don't be daft and send me your seed."
Them: (sends the same 9.3 ETH screenshot again)
Me: "A screenshot means nothing; it can be faked in no time. Show me the transaction hash. Weren't you the one supposedly shaking with eagerness, 9.3 ETH ready to buy my art? Either show me the wallet address, or lend me 0.15 ETH to mint on your site, then buy the piece for 2 ETH and I'll pay the 0.15 ETH back. So, are you buying or not?"
Them: "Why do you want my address?"
Me: "Deal's off. You're annoying. Not selling to you."
Them: "2 ETH = 5,000 USD!"
Me: "So what?"
I am writing this article to warn all my friends and fellow artists, in case anyone opening an online digital-art business gets as "lucky" as I did and runs into gems like these.
Why I am confident this is a scam, and what the scammers stand to gain
First, consider OpenSea, the NFT marketplace with the highest trading volume. It does not hold the money of buyers and sellers; funds move directly between users' wallets, and the platform only collects a fee, one that is much lower now than in 2020. So why would anyone go list their work on another NFT site whose fees are a hundred times higher?
I believe the scammers' tactic is to exploit the greed and inexperience of artists. The moment an artist transfers ETH into a wallet on one of those sites, or pays the minting fee, the money lands straight in the scammers' pockets, and the dirty tricks are guaranteed to continue: withdrawals that fail, purchases that cannot go through, demands to deposit more to "unlock the smart contract," and so on. These crooks play on people's greed by dangling absurdly high offers. And that part is understandable: in the NFT world, images with no artistic merit whatsoever have sold for 100 - 150 ETH, so an artist trying to build a name might well think that a buyer at 2 - 4 ETH per piece is more than enough (honestly, it is shockingly much).
In the world of BTC, you do not have to trust anyone: money can be sent and the books settled without trust.
In the world of ETH, "code is law": the smart contract is already written and deployed, and it is not that hard to read and understand. Trusting promises made by people over what the code says makes no sense.
I told these stories to art communities and got a mix of good and bad reactions. Some people insisted firmly that none of this could touch them, because they were absolutely determined to keep their art away from the world of digital currency, and I respect their point of view. But wouldn't it be better to keep our eyes open and stay up to date with technology, especially digital currencies and blockchain? Get scammed once here and you can lose everything far more easily than with fiat money.
I wanted to share this story, and I would ask you to share it with people you know so they can protect themselves.
Note
- Both cybersecurity illustrations are my own work, created by me and approved for sale on Adobe Stock.
- My other account, "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw, is where I am gradually bringing my work from the outside world into Nostr. I intend to create art there, so friends who enjoy my work will not have to look anywhere else.
My works: - Anime girl fanarts: HikariHarmony - HikariHarmony on Nostr - General art: KeshikiRakuen - KeshikiRakuen may become my third Nostr account, if I can manage it.
-
@ bbef5093:71228592
2025-05-07 15:09:39
The European Commission's Plan to End Dependence on Russian Uranium and Energy
The European Commission has announced that it intends to restrict new supply contracts for uranium, enriched uranium, and other nuclear materials originating from Russia, helping the European Union to "fully end" its dependence on Russian energy [8][6][2].
The Commission presented a new roadmap detailing how it plans to eliminate dependence on Russian energy while safeguarding the stability of the EU's energy supply and prices [6][2][15].
Key Measures and Goals
- Thanks to the REPowerEU plan launched in May 2022, the EU cut the share of Russian gas imports from 45% to 19%, although imports of Russian gas rose again in 2024 [2][20].
- Under the new roadmap, Russian oil, gas, and nuclear energy will be phased out of EU markets gradually, in a coordinated and secure manner, while the EU accelerates its transition to clean energy [6][15][7].
- EU member states must prepare national plans by the end of 2025 setting out how they will contribute to ending imports of Russian gas, nuclear energy, and oil [13][7][18].
- New restrictions will be introduced on Russian-origin uranium, enriched uranium, and other nuclear materials: the Euratom Supply Agency (ESA) will not approve new Russian supply contracts, and economic instruments will also be used to curb imports [4][5][16].
- Existing short-term contracts must be terminated by the end of 2025 and no new ones may be concluded; long-term contracts must be terminated by the end of 2027 [5][7][6].
- The goal is for all Russian gas and oil imports to end by the close of 2027, with Russian nuclear energy phased out gradually [3][9][17].
Nuclear Background
- According to the ESA's report, 23.4% of the natural uranium used in the EU in 2023 came from Russia, a 72.6% increase driven mainly by fuel stockpiling for Russian-designed VVER nuclear power plants [16].
- Nineteen VVER reactors operate in the EU (in Bulgaria, Czechia, Finland, Hungary, and Slovakia).
- The EU's natural uranium needs account for roughly 22% of global demand, with 91% of purchases coming from Canada, Russia, Kazakhstan, and Niger [16].
Objective and Rationale
According to the Commission, ending dependence on Russian energy is not only an economic question but also a matter of security policy, since Russia has repeatedly used energy as a lever against the EU [2][12]. The measures aim to keep the EU's energy supply secure, stable, and predictable, while ensuring that revenues from Russian energy imports can no longer finance the war against Ukraine [6][2][12].
An English summary and translation prepared from the sources below.
Citations:
[1] Döntött az Európai Bizottság: teljes mértékben megszüntetik ... - 444. https://444.hu/2025/05/06/dontott-az-europai-bizottsag-teljes-mertekben-megszuntetik-az-orosz-energiatol-valo-fuggest
[2] Három éven belül felszámolná az orosz energiafüggőséget az ... https://hu.euronews.com/my-europe/2025/05/06/harom-even-belul-felszamolna-az-orosz-energiafuggoseget-az-europai-bizottsag
[3] 2027-re teljesen leállítaná az Európai Bizottság az orosz ... - Új Szó. https://ujszo.com/kozelet/2027-re-teljesen-leallitana-az-europai-bizottsag-az-orosz-energiabehozatalt-a-nuklearis
[4] Bejelentették Brüsszelben: megkerülik Magyarországot, teljesen ... https://www.portfolio.hu/gazdasag/20250506/bejelentettek-brusszelben-megkerulik-magyarorszagot-teljesen-levalik-az-orosz-olajrol-es-gazrol-az-eu-759267
[5] Érik az újabb ütközés: Brüsszel betiltaná az orosz energiát. https://www.valaszonline.hu/2025/05/06/energia-szankcio-oroszorszag-haboru-eu-olaj-gaz-uran/
[6] Az EU teljes mértékben megszünteti az orosz energiától való függését. https://hungary.representation.ec.europa.eu/az-eu-teljes-mertekben-megszunteti-az-orosz-energiatol-valo-fuggeset-2025-05-06_hu?prefLang=en
[7] Megvan az ütemterv, végleg betiltaná az orosz energiát az Európai ... https://index.hu/kulfold/2025/05/06/orosz-energiafuggoseg-orosz-gaz-olaj-import-europai-unio-repowereu/
[8] European Commission Unveils Plans To Restrict New Uranium ... https://www.nucnet.org/news/european-commission-unveils-plans-to-restrict-new-uranium-deals-with-russia-5-3-2025
[9] Az EU teljes mértékben megszünteti az orosz energiától való ... https://infostart.hu/belfold/2025/05/06/az-eu-teljes-mertekben-megszunteti-az-orosz-energiatol-valo-fuggoseget-a-nap-hirei
[10] [PDF] EURÓPAI BIZOTTSÁG Brüsszel, 2025.4.9. COM(2025) 159 final ... https://secure.ipex.eu/IPEXL-WEB/download/file/082d29089612ec1e019619f955940250
[11] Kiszivárgott az Európai Bizottság 2025-ös munkaprogramja. https://www.eu-monitor.hu/hu/cikk/20250206-kiszivargott-az-europai-bizottsag-2025-os-munkaprogramja
[12] EU says it will end dependency on Russian energy supplies. https://www3.nhk.or.jp/nhkworld/en/news/20250507_B4/
[13] Végleg leválik az orosz energiáról az Európai Unió ... - Népszava. https://nepszava.hu/3278673_oroszorszag-europai-unio-foldgaz-koolaj-levalas-terv
[14] Egyre több európai ország támogatja az atomenergiát. https://www.vg.hu/nemzetkozi-gazdasag/2025/03/atomenergia-energiatarolas-europa
[15] Az EU teljes mértékben megszünteti az orosz energiától való ... https://karpatinfo.net/energiafuggetlenseg-orosz-foldgaz-orosz-energiafuggoseg-2025-05-07
[16] EU outlines measures to end Russian gas, oil imports by end-2027. https://balkangreenenergynews.com/eu-outlines-measures-to-end-russian-gas-oil-imports-by-end-2027/
[17] Az Európai Unió 2027 végére betiltaná az orosz gáz importját. https://www.korkep.sk/cikkek/gazdasag/2025/05/05/az-europai-unio-2027-vegere-betiltana-az-orosz-gazimportot/
[18] Ficónak és Orbánnak sem tetszik, hogy az EU teljesen kitiltaná az ... https://napunk.dennikn.sk/hu/4623240/ficonak-es-orbannak-sem-tetszik-hogy-az-eu-teljesen-kitiltana-az-orosz-energiat/
[19] Várhelyi Olivér késlelteti az EU orosz energiafüggőségét felszámoló ... https://telex.hu/kulfold/2025/05/05/varhelyi-oliver-europai-bizottsag-orosz-energia-kivaltas-hatraltatas
[20] REPowerEU roadmap - Energy - European Commission. https://energy.ec.europa.eu/strategy/repowereu-roadmap_en
-
@ 83279ad2:bd49240d
2025-05-07 14:22:43 -
@ 8947a945:9bfcf626
2024-10-17 07:33:00
Hello everyone on Nostr, and all my watchers and followers from DeviantArt, as well as those from other art platforms.
I have been creating and sharing AI-generated anime girl fanart since the beginning of 2024 and have been running member-exclusive content on Patreon.
I also publish showcases of my artwork on DeviantArt, where I have been organically building up my audience over time. I consider it one of my online art businesses, and everything is growing slowly but steadily.
On September 16, I received a DM from someone expressing interest in purchasing my art in NFT format and offering a very high price for each piece. We later continued the conversation via email.
Here’s a brief overview of what happened
- The first scammer selected the art they wanted to buy and offered a high price for each piece. They provided a URL to an NFT marketplace site running on the Ethereum (ETH) mainnet or ERC20. The site appeared suspicious, requiring email sign-up and linking a MetaMask wallet. However, I couldn't change the wallet address later. The minting gas fees were quite expensive, ranging from 0.15 to 0.2 ETH
- The scammers tried to convince me that the high profits would easily cover the minting gas fees, so I had nothing to lose. Luckily, I didn’t have spare funds to purchase ETH for the gas fees at the time, so I tried negotiating with them as follows:
- I offered to send them a lower-quality version of my art via email in exchange for the minting gas fees, but they refused.
- I offered them the option to pay in USD through Buy Me a Coffee shop here, but they refused.
- I offered them the option to pay via Bitcoin using the Lightning Network invoice , but they refused.
- I asked them to wait until I could secure the funds, and they agreed to wait.
The following week, a second scammer approached me with a similar offer, this time at an even higher price and through a different NFT marketplace website.
This second site also required email registration, and after navigating to the dashboard, it asked for a minting fee of 0.2 ETH. However, the site provided a wallet address for me instead of connecting a MetaMask wallet.
I told the second scammer that I was waiting to make a profit from the first sale, and they asked me to show them the first marketplace. They then warned me that the first site was a scam and even sent screenshots of victims, including one from OpenSea saying that Opensea is not paying.
This raised a red flag, and I began suspecting I might be getting scammed. On OpenSea, funds go directly to users' wallets after transactions, and OpenSea charges a much lower platform fee compared to the previous crypto bull run in 2020. Minting fees on OpenSea are also significantly cheaper, around 0.0001 ETH per transaction.
I also consulted with Thai NFT artist communities and the ex-chairman of the Thai Digital Asset Association. According to them, no one had reported similar issues, but they agreed it seemed like a scam.
After confirming my suspicions with my own research and consulting with the Thai crypto community, I decided to test the scammers’ intentions by doing the following:
I minted the artwork they were interested in, set the price they offered, and listed it for sale on OpenSea. I then messaged them, letting them know the art was available and ready to purchase, with no royalty fees if they wanted to resell it.
They became upset and angry, insisting I mint the art on their chosen platform, claiming they had already funded their wallet to support me. When I asked for proof of their wallet address and transactions, they couldn't provide any evidence that they had enough funds.
Here’s what I want to warn all artists in the DeviantArt community or other platforms If you find yourself in a similar situation, be aware that scammers may be targeting you.
My Perspective why I Believe This is a Scam and What the Scammers Gain
From my experience with BTC and crypto since 2017, here's why I believe this situation is a scam, and what the scammers aim to achieve
First, looking at OpenSea, the largest NFT marketplace on the ERC20 network, they do not hold users' funds. Instead, funds from transactions go directly to users’ wallets. OpenSea’s platform fees are also much lower now compared to the crypto bull run in 2020. This alone raises suspicion about the legitimacy of other marketplaces requiring significantly higher fees.
I believe the scammers' tactic is to lure artists into paying these exorbitant minting fees, which go directly into the scammers' wallets. They convince the artists by promising to purchase the art at a higher price, making it seem like there's no risk involved. In reality, the artist has already lost by paying the minting fee, and no purchase is ever made.
In the world of Bitcoin (BTC), the principle is "Trust no one" and “Trustless finality of transactions” In other words, transactions are secure and final without needing trust in a third party.
In the world of Ethereum (ETH), the philosophy is "Code is law" where everything is governed by smart contracts deployed on the blockchain. These contracts are transparent, and even basic code can be read and understood. Promises made by people don’t override what the code says.
I also discuss this issue with art communities. Some people have strongly expressed to me that they want nothing to do with crypto as part of their art process. I completely respect that stance.
However, I believe it's wise to keep your eyes open, have some skin in the game, and not fall into scammers’ traps. Understanding the basics of crypto and NFTs can help protect you from these kinds of schemes.
If you found this article helpful, please share it with your fellow artists.
Until next time Take care
Note
- Both cyber security images are mine , I created and approved by AdobeStock to put on sale
- I'm working very hard to bring all my digital art into Nostr to build my sats business here on my other npub, "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw
Link to my full gallery - Anime girl fanarts : HikariHarmony - HikariHarmony on Nostr - General art : KeshikiRakuen
-
@ e6817453:b0ac3c39
2024-10-06 11:21:27
Hey folks, today we're diving into an exciting and emerging topic: personal artificial intelligence (PAI) and its connection to sovereignty, privacy, and ethics. With the rapid advancements in AI, there's a growing interest in the development of personal AI agents that can work on behalf of the user, acting autonomously and providing tailored services. However, as with any new technology, there are several critical factors that shape the future of PAI. Today, we'll explore three key pillars: privacy and ownership, explainability, and bias.
1. Privacy and Ownership: Foundations of Personal AI
At the heart of personal AI, much like self-sovereign identity (SSI), is the concept of ownership. For personal AI to be truly effective and valuable, users must own not only their data but also the computational power that drives these systems. This autonomy is essential for creating systems that respect the user's privacy and operate independently of large corporations.
In this context, privacy is more than just a feature—it's a fundamental right. Users should feel safe discussing sensitive topics with their AI, knowing that their data won’t be repurposed or misused by big tech companies. This level of control and data ownership ensures that users remain the sole beneficiaries of their information and computational resources, making privacy one of the core pillars of PAI.
2. Bias and Fairness: The Ethical Dilemma of LLMs
Most of today’s AI systems, including personal AI, rely heavily on large language models (LLMs). These models are trained on vast datasets that represent snapshots of the internet, but this introduces a critical ethical challenge: bias. The datasets used for training LLMs can be full of biases, misinformation, and viewpoints that may not align with a user’s personal values.
This leads to one of the major issues in AI ethics for personal AI—how do we ensure fairness and minimize bias in these systems? The training data that LLMs use can introduce perspectives that are not only unrepresentative but potentially harmful or unfair. As users of personal AI, we need systems that are free from such biases and can be tailored to our individual needs and ethical frameworks.
Unfortunately, training models that are truly unbiased and fair requires vast computational resources and significant investment. While large tech companies have the financial means to develop and train these models, individual users or smaller organizations typically do not. This limitation means that users often have to rely on pre-trained models, which may not fully align with their personal ethics or preferences. While fine-tuning models with personalized datasets can help, it's not a perfect solution, and bias remains a significant challenge.
3. Explainability: The Need for Transparency
One of the most frustrating aspects of modern AI is the lack of explainability. Many LLMs operate as "black boxes," meaning that while they provide answers or make decisions, it's often unclear how they arrived at those conclusions. For personal AI to be effective and trustworthy, it must be transparent. Users need to understand how the AI processes information, what data it relies on, and the reasoning behind its conclusions.
Explainability becomes even more critical when AI is used for complex decision-making, especially in areas that impact other people. If an AI is making recommendations, judgments, or decisions, it’s crucial for users to be able to trace the reasoning process behind those actions. Without this transparency, users may end up relying on AI systems that provide flawed or biased outcomes, potentially causing harm.
This lack of transparency is a major hurdle for personal AI development. Current LLMs, as mentioned earlier, are often opaque, making it difficult for users to trust their outputs fully. The explainability of AI systems will need to be improved significantly to ensure that personal AI can be trusted for important tasks.
Addressing the Ethical Landscape of Personal AI
As personal AI systems evolve, they will increasingly shape the ethical landscape of AI. We’ve already touched on the three core pillars—privacy and ownership, bias and fairness, and explainability. But there's more to consider, especially when looking at the broader implications of personal AI development.
Most current AI models, particularly those from big tech companies like Facebook, Google, or OpenAI, are closed systems. This means they are aligned with the goals and ethical frameworks of those companies, which may not always serve the best interests of individual users. Open models, such as Meta's LLaMA, offer more flexibility and control, allowing users to customize and refine the AI to better meet their personal needs. However, the challenge remains in training these models without significant financial and technical resources.
There’s also the temptation to use uncensored models that aren’t aligned with the values of large corporations, as they provide more freedom and flexibility. But in reality, models that are entirely unfiltered may introduce harmful or unethical content. It’s often better to work with aligned models that have had some of the more problematic biases removed, even if this limits some aspects of the system’s freedom.
The future of personal AI will undoubtedly involve a deeper exploration of these ethical questions. As AI becomes more integrated into our daily lives, the need for privacy, fairness, and transparency will only grow. And while we may not yet be able to train personal AI models from scratch, we can continue to shape and refine these systems through curated datasets and ongoing development.
Conclusion
In conclusion, personal AI represents an exciting new frontier, but one that must be navigated with care. Privacy, ownership, bias, and explainability are all essential pillars that will define the future of these systems. As we continue to develop personal AI, we must remain vigilant about the ethical challenges they pose, ensuring that they serve the best interests of users while remaining transparent, fair, and aligned with individual values.
If you have any thoughts or questions on this topic, feel free to reach out—I’d love to continue the conversation!
-
@ 83279ad2:bd49240d
2025-05-07 14:20:50 -
@ e31e84c4:77bbabc0
2024-12-02 10:44:07
Bitcoin and Fixed Income was written by Wyatt O’Rourke. If you enjoyed this article then support his writing, directly, by donating to his lightning wallet: ultrahusky3@primal.net
Fiduciary duty is the obligation to act in the client’s best interests at all times, prioritizing their needs above the advisor’s own, ensuring honesty, transparency, and avoiding conflicts of interest in all recommendations and actions.
This is something all advisors in the BFAN take very seriously; after all, we are legally required to do so. For the average advisor this is a fairly easy box to check. All you essentially have to do is have someone take a 5-minute risk assessment, fill out an investment policy statement, and then throw them in the proverbial 60/40 portfolio. You have thousands of investment options to choose from and you can reasonably explain how your client is theoretically insulated from any move in the \~markets\~. From the traditional financial advisor perspective, you could justify nearly anything by putting a client into this type of portfolio. All your bases were pretty much covered from return profile, regulatory, compliance, investment options, etc. It was just too easy. It became the household standard and now a meme.
As almost every real bitcoiner knows, the 60/40 portfolio is moving into psyop territory, and many financial advisors get clowned on for defending this relic on bitcoin twitter. I’m going to specifically poke fun at the ‘40’ part of this portfolio.
The ‘40’ represents fixed income, defined as…
An investment type that provides regular, set interest payments, such as bonds or treasury securities, and returns the principal at maturity. It’s generally considered a lower-risk asset class, used to generate stable income and preserve capital.
Historically, this part of the portfolio was meant to weather the volatility in the equity markets and represent the “safe” investments. Typically, some sort of bond.
First and foremost, the fixed income section is most commonly constructed with U.S. Debt. There are a couple main reasons for this. Most financial professionals believe the same fairy tale that U.S. Debt is “risk free” (lol). U.S. debt is also one of the largest and most liquid assets in the market which comes with a lot of benefits.
There are many brilliant bitcoiners in finance and economics that have sounded the alarm on the U.S. debt ticking time bomb. I highly recommend readers explore the work of Greg Foss, Lawrence Lepard, Lyn Alden, and Saifedean Ammous. My very high-level recap of their analysis:
-
A bond is a contract in which Party A (the borrower) agrees to repay Party B (the lender) their principal plus interest over time.
-
The U.S. government issues bonds (Treasury securities) to finance its operations after tax revenues have been exhausted.
-
These are traditionally viewed as “risk-free” due to the government’s historical reliability in repaying its debts and the strength of the U.S. economy.
-
U.S. bonds are seen as safe because the government has control over the dollar (world reserve asset) and, until recently (20 some odd years), enjoyed broad confidence that it would always honor its debts.
-
This perception has contributed to high global demand for U.S. debt, but that confidence is quickly deteriorating.
-
The current debt situation raises concerns about sustainability.
-
The U.S. has substantial obligations, and without sufficient productivity growth, increasing debt may lead to a cycle where borrowing to cover interest leads to more debt.
-
This could result in more reliance on money creation (printing), which can drive inflation and further debt burdens.
In the words of Lyn Alden “Nothing stops this train”
Those obligations are what make up most of the 40% fixed income portion of your portfolio. So essentially you are giving money to one of the worst capital allocators in the world (the U.S. government) and getting paid back with printed money.
As someone who takes their fiduciary responsibility seriously and understands the debt situation we just reviewed, I think it’s borderline negligent to put someone into a classic 60% (equities) / 40% (fixed income) portfolio without serious scrutiny of the client’s financial situation and options available to them. I certainly have my qualms with equities at times, but overall, they are more palatable than the fixed income portion of the portfolio. I don’t like it either, but the money is broken and the unit of account for nearly every equity or fixed income instrument (USD) is fraudulent. It’s a papier-mâché façade that is quite literally propped up by the money printer.
To be as charitable as I can, briefly: it wasn’t always this way. The U.S. dollar used to be sound money, we used to have government surpluses instead of mathematically certain deficits, the U.S. federal government didn’t always have a money printing addiction, and pre-bitcoin the 60/40 portfolio used to be a quality portfolio management strategy. Those times are gone.
Now the fun part. How does bitcoin fix this?
Bitcoin fixes this indirectly. Investment criteria change with risk tolerance, age, goals, etc. A client may still have a need for “fixed income” in the most literal sense: low-risk yield. Now you may be thinking that yield is a bad word in bitcoin land; you’re not wrong, so stay with me. Perpetual-motion-machine crypto yield is fake and is largely where many crypto scams originate. However, that doesn’t mean yield in the classic finance sense does not exist in bitcoin; it very literally does. Fortunately for us bitcoiners there are many other smart, driven, and enterprising bitcoiners who understand this problem and are doing something to address it. These individuals are pioneering new possibilities in bitcoin and finance, specifically when it comes to fixed income.
Here are some new developments –
Private Credit Funds – The Build Asset Management Secured Income Fund I is a private credit fund created by Build Asset Management. This fund primarily invests in bitcoin-backed, collateralized business loans originated by Unchained, with a secured structure involving a multi-signature, over-collateralized setup for risk management. Unchained originates loans and sells them to Build, which pools them into the fund, enabling investors to share in the interest income.
Dynamics
- Loan Terms: Unchained issues loans at interest rates around 14%, secured with a 2/3 multi-signature vault backed by a 40% loan-to-value (LTV) ratio. At 40% LTV, for example, a $100,000 loan would be backed by roughly $250,000 worth of bitcoin collateral.
- Fund Mechanics: Build buys these loans from Unchained, thus providing liquidity to Unchained for further loan originations, while Build manages interest payments to investors in the fund.
Pros
- The fund offers a unique way to earn income via bitcoin-collateralized debt, with protection against rehypothecation and strong security measures, making it attractive for investors seeking exposure to fixed income with bitcoin.
Cons
- The fund is only available to accredited investors, which is a regulatory standard for private credit funds like this.
Corporate Bonds – MicroStrategy Inc. (MSTR), a business intelligence company, has leveraged its corporate structure to issue bonds specifically to acquire bitcoin as a reserve asset. This approach allows investors to indirectly gain exposure to bitcoin’s potential upside while receiving interest payments on their bond investments. Some other publicly traded companies have also adopted this strategy, but for the sake of this article we will focus on MSTR as they are the biggest and most vocal issuer.
Dynamics
-
Issuance: MicroStrategy has issued senior secured notes in multiple offerings, with terms allowing the company to use the proceeds to purchase bitcoin.
-
Interest Rates: The bonds typically carry high-yield interest rates, averaging around 6-8% APR, depending on the specific issuance and market conditions at the time of issuance.
-
Maturity: The bonds have varying maturities, with most structured for multi-year terms, offering investors medium-term exposure to bitcoin’s value trajectory through MicroStrategy’s holdings.
Pros
-
Indirect Bitcoin exposure with income provides a unique opportunity for investors seeking income from bitcoin-backed debt.
-
Bonds issued by MicroStrategy offer relatively high interest rates, appealing for fixed-income investors attracted to the higher risk/reward scenarios.
Cons
-
There are credit risks tied to MicroStrategy’s financial health and bitcoin’s performance. A significant drop in bitcoin prices could strain the company’s ability to service debt, increasing credit risk.
-
Availability: These bonds are primarily accessible to institutional investors and accredited investors, limiting availability for retail investors.
Interest Payable in Bitcoin – River has introduced an innovative product, bitcoin Interest on Cash, allowing clients to earn interest on their U.S. dollar deposits, with the interest paid in bitcoin.
Dynamics
-
Interest Payment: Clients earn an annual interest rate of 3.8% on their cash deposits. The accrued interest is converted to Bitcoin daily and paid out monthly, enabling clients to accumulate Bitcoin over time. For example, a $10,000 balance would accrue about $380 of interest per year, roughly $31.67 per month, converted to bitcoin at each day’s price.
-
Security and Accessibility: Cash deposits are insured up to $250,000 through River’s banking partner, Lead Bank, a member of the FDIC. All Bitcoin holdings are maintained in full reserve custody, ensuring that client assets are not lent or leveraged.
Pros
-
There are no hidden fees or minimum balance requirements, and clients can withdraw their cash at any time.
-
The 3.8% interest rate provides a predictable income stream, akin to traditional fixed-income investments.
Cons
-
While the interest rate is fixed, the value of the Bitcoin received as interest can fluctuate, introducing potential variability in the investment’s overall return.
-
Interest rate payments are on the lower side
Admittedly, this is a very small list, however, these types of investments are growing more numerous and meaningful. The reality is the existing options aren’t numerous enough to service every client that has a need for fixed income exposure. I challenge advisors to explore innovative options for fixed income exposure outside of sovereign debt, as that is most certainly a road to nowhere. It is my wholehearted belief and call to action that we need more options to help clients across the risk and capital allocation spectrum access a sound money standard.
Additional Resources
-
River: The future of saving is here: Earn 3.8% on cash. Paid in Bitcoin.
-
MicroStrategy: MicroStrategy Announces Pricing of Offering of Convertible Senior Notes
Bitcoin and Fixed Income was written by Wyatt O’Rourke. If you enjoyed this article, then support his writing directly by donating to his lightning wallet: ultrahusky3@primal.net
-
-
@ e6817453:b0ac3c39
2024-09-30 14:52:23In the modern world of AI, managing vast amounts of data while keeping it relevant and accessible is a significant challenge, mainly when dealing with large language models (LLMs) and vector databases. One approach that has gained prominence in recent years is integrating vector search with metadata, especially in retrieval-augmented generation (RAG) pipelines. Vector search and metadata enable faster and more accurate data retrieval. However, the process of pre- and post-search filtering results plays a crucial role in ensuring data relevance.
The Vector Search and Metadata Challenge
In a typical vector search, you create embeddings from chunks of text, such as a PDF document. These embeddings allow the system to search for similar items and retrieve them based on relevance. The challenge, however, arises when you need to combine vector search results with structured metadata. For example, you may have timestamped text-based content and want to retrieve the most relevant content within a specific date range. This is where metadata becomes critical in refining search results.
Unfortunately, most vector databases treat metadata as a secondary feature, isolating it from the primary vector search process. As a result, handling queries that combine vectors and metadata can become a challenge, particularly when the search needs to account for a dynamic range of filters, such as dates or other structured data.
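To make the pattern concrete, here is a minimal sketch (my illustration, not the author's code) of the similarity-plus-metadata retrieval flow described above; `embed` is a stand-in for whatever embedding model you use, not a real API:

```typescript
// Minimal sketch of similarity search combined with a metadata filter.
// `embed` is a stand-in for an embedding model (an assumption, not a real API).
declare function embed(text: string): Promise<number[]>;

type Chunk = { text: string; createdAt: number; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Retrieve the top-k chunks by similarity, restricted to a date range:
// exactly the vector-plus-metadata combination discussed above.
async function search(chunks: Chunk[], query: string, k: number, since: number) {
  const queryVector = await embed(query);
  return chunks
    .filter((c) => c.createdAt >= since)
    .map((c) => ({ chunk: c, score: cosineSimilarity(queryVector, c.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```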
LibSQL and vector search metadata
LibSQL is a more general-purpose, SQLite-based database that adds vector capabilities to regular data. Vectors are stored as blob columns of regular tables. This makes vector embeddings and metadata first-class citizens and naturally builds deep integration between these data points.
```sql
create table if not exists conversation (
  id varchar(36) primary key not null,
  startDate real,
  endDate real,
  summary text,
  vectorSummary F32_BLOB(512)
);
```
It solves the challenge of combining metadata and vector search, and eliminates the impedance mismatch between vector data and regular structured data points in the same storage.
As you can see below, you can access the vector data and the start date in the same query.
```sql
select
  c.id, c.startDate, c.endDate, c.summary,
  vector_distance_cos(c.vectorSummary, vector(${vector})) as distance
from conversation
where 1=1
  ${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
  ${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
  ${distance ? `and distance <= ${distance}` : ''}
order by distance
limit ${top};
```
vector_distance_cos, aliased as distance, gives us a primitive vector search that does a full scan and calculates distances across all rows. We could optimize it with a CTE and limit the search and distance calculations to a much smaller subset of data.
This approach could be calculation intensive and fail on large amounts of data.
Libsql offers a way more effective vector search based on FlashDiskANN vector indexed.
```sql
vector_top_k('idx_conversation_vectorSummary', ${vector}, ${top}) i
```
vector_top_k is a table function that searches the newly created vector index for the top matches. As you can see, only the vector can be passed as a function parameter; other columns can only be used outside the table function. So, to use a vector index together with other columns, we need to apply some strategies.
Now we face the classic problem of integrating vector search results with metadata queries.
Post-Filtering: A Common Approach
The most widely adopted method in these pipelines is post-filtering. In this approach, the system first retrieves data based on vector similarities and then applies metadata filters. For example, imagine you’re conducting a vector search to retrieve conversations relevant to a specific question. Still, you also want to ensure these conversations occurred in the past week.
Post-filtering allows the system to retrieve the most relevant vector-based results and subsequently filter out any that don’t meet the metadata criteria, such as date range. This method is efficient when vector similarity is the primary factor driving the search, and metadata is only applied as a secondary filter.
```ts
const sqlQuery = `
  select
    c.id, c.startDate, c.endDate, c.summary,
    vector_distance_cos(c.vectorSummary, vector(${vector})) as distance
  from vector_top_k('idx_conversation_vectorSummary', ${vector}, ${top}) i
  inner join conversation c on i.id = c.rowid
  where 1=1
    ${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
    ${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
    ${distance ? `and distance <= ${distance}` : ''}
  order by distance
  limit ${top};`;
```
However, there are some limitations. For example, the initial vector search may yield fewer results than expected or omit some relevant data before the metadata filter is applied. If the search window is narrow enough, this can lead to incomplete results.
One working strategy is to make the top value in vector_top_k much bigger. Be careful, though, as the function's default maximum number of results is around 200 rows.
Pre-Filtering: A More Complex Approach
Pre-filtering is a more intricate approach but can be more effective in some instances. In pre-filtering, metadata is used as the primary filter before vector search takes place. This means that only data that meets the metadata criteria is passed into the vector search process, limiting the scope of the search right from the beginning.
While this approach can significantly reduce the amount of irrelevant data in the final results, it comes with its own challenges. For example, pre-filtering requires a deeper understanding of the data structure and may necessitate denormalizing the data or creating separate pre-filtered tables. This can be resource-intensive and, in some cases, impractical for dynamic metadata like date ranges.
In certain use cases, pre-filtering might outperform post-filtering. For instance, when the metadata (e.g., specific date ranges) is the most important filter, pre-filtering ensures the search is conducted only on the most relevant data.
Pre-filtering with distance-based filtering
So, we are getting back to an old concept. We do pre-filtering instead of using a vector index.
```sql
WITH FilteredDates AS (
  SELECT c.id, c.startDate, c.endDate, c.summary, c.vectorSummary
  FROM YourTable c
  WHERE 1=1
    ${startDate ? `AND c.startDate >= ${startDate.getTime()}` : ''}
    ${endDate ? `AND c.endDate <= ${endDate.getTime()}` : ''}
),
DistanceCalculation AS (
  SELECT
    fd.id, fd.startDate, fd.endDate, fd.summary, fd.vectorSummary,
    vector_distance_cos(fd.vectorSummary, vector(${vector})) AS distance
  FROM FilteredDates fd
)
SELECT
  dc.id, dc.startDate, dc.endDate, dc.summary, dc.distance
FROM DistanceCalculation dc
WHERE 1=1
  ${distance ? `AND dc.distance <= ${distance}` : ''}
ORDER BY dc.distance
LIMIT ${top};
```
This approach makes sense if the filter produces a small data set, so the distance calculation runs over a much smaller subset of rows.
As a pro of this approach, you have full control over the data and get complete results, avoiding the omissions that approximate index searches can introduce.
Choosing Between Pre and Post-Filtering
Both pre-filtering and post-filtering have their advantages and disadvantages. Post-filtering is more accessible to implement, especially when vector similarity is the primary search factor, but it can lead to incomplete results. Pre-filtering, on the other hand, can yield more accurate results but requires more complex data handling and optimization.
In practice, many systems combine both strategies, depending on the query. For example, they might start with a broad pre-filtering based on metadata (like date ranges) and then apply a more targeted vector search with post-filtering to refine the results further.
Conclusion
Vector search with metadata filtering offers a powerful approach for handling large-scale data retrieval in LLMs and RAG pipelines. Whether you choose pre-filtering or post-filtering—or a combination of both—depends on your application's specific requirements. As vector databases continue to evolve, future innovations that combine these two approaches more seamlessly will help improve data relevance and retrieval efficiency further.
-
@ 3bf0c63f:aefa459d
2024-09-06 12:49:46Nostr: a quick introduction, attempt #2
Nostr doesn't subscribe to any ideals of "free speech" as these belong to the realm of politics and assume a big powerful government that enforces a common rule upon everybody else.
Nostr instead is much simpler, it simply says that servers are private property and establishes a generalized framework for people to connect to all these servers, creating a true free market in the process. In other words, Nostr is the public road that each market participant can use to build their own store or visit others and use their services.
(Of course a road is never truly public; in normal cases it's run by the government, in this case it relies upon the previous existence of the internet with all its quirks and chaos plus a hand of government control, but none of that matters for this explanation).
More concretely speaking, Nostr is just a set of definitions of the formats of the data that can be passed between participants and their expected order, i.e. messages between clients (i.e. the program that runs on a user computer) and relays (i.e. the program that runs on a publicly accessible computer, a "server", generally with a domain-name associated) over a type of TCP connection (WebSocket) with cryptographic signatures. This is what is called a "protocol" in this context, and upon that simple base multiple kinds of sub-protocols can be added, like a protocol for "public-square style microblogging", "semi-closed group chat" or, I don't know, "recipe sharing and feedback".
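As a concrete illustration of those message formats (a minimal sketch of the base protocol, not a full client), a client opens a WebSocket to a relay, sends a subscription request, and receives signed events back; the relay URL is a placeholder:

```typescript
// Minimal sketch; message shapes follow the base Nostr protocol (NIP-01).
const ws = new WebSocket("wss://relay.example.com"); // placeholder relay

ws.onopen = () => {
  // ["REQ", <subscription id>, <filter>] asks the relay for matching events.
  ws.send(JSON.stringify(["REQ", "my-sub", { kinds: [1], limit: 10 }]));
};

ws.onmessage = (msg) => {
  const data = JSON.parse(msg.data as string);
  // Relays answer with ["EVENT", <subscription id>, <signed event>],
  // then ["EOSE", <subscription id>] once stored events are exhausted.
  if (data[0] === "EVENT") console.log(data[2].content);
};
```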
-
@ 005bc4de:ef11e1a2
2025-05-07 14:19:15The beautiful evil of horse racing
Horse racing intrigues me. And, it appalls me. I find it to be both gloriously beautiful and brutally cruel.
One of the fun facts shared tirelessly around social media for Kentucky Derby #151 was something like this: "This is the first Derby where every horse is in the bloodline of Secretariat." Secretariat, if you don't know, won the Triple Crown in 1973 (KY Derby, Preakness, Belmont) and still holds the fastest times in all three of those races.
That's really a nice fun fact when you first hear it, but maybe it shouldn't be too surprising. After a successful racing career, a male racehorse "retires" to a life of studding himself out, which is where the real horse money is. His post-racing stats: he bred 60 mares per year, he sired 660 foals, and he earned an estimated $120 million in stud fees. When you start branching out the Secretariat family tree over several generations, well, the sheer numbers must be very large. That means the chances that any given Thoroughbred might have a hint of Secretariat blood must get rather high. Grok AI estimates there are 500,000 Thoroughbreds worldwide today, and that between 250,000 and 400,000 are in Secretariat's lineage; that's 50% to 80% of all Thoroughbreds. Suddenly, the social media snippet from Derby #151 is less surprising, less cool.
Secretariat, retired from racing.
The beautiful side of horse racing
Horse racing is beautiful. This is the easy part to write. If you've ever been to a horse track, especially on a big race day, it's a true multi-sensory experience.
- There are smells that we typically don't smell often in this modern world...especially if you hang out near the paddock. Personally, I don't find horse dung particularly stinky, but earthy.
- There are tastes and good smells. Food and drink are a huge part of horse racing. There is a reason that the Derby has its own pie (a chocolate pecan pie) and each major race has its own drink. Feasting and tailgating are huge parts of horse racing.
- There are things to feel, actually to bodily feel. Aside from crowds of people to bump into, if you stand close to the track, you can feel the reverberation of hooves beating the dirt. We hear the term "thundering herd" sometimes in college sports, but, that term is not just words. You can actually feel the thunder of those hooves.
- The sounds are distinctly horse racing. The announcer's calls of "Less than a minute," "They're in the gates," and "And they're off!" are iconic, not to mention the terms "down to the wire," "won by a nose," or "photo finish." And then there's the bugle's announcement, the singing of "My Old Kentucky Home" at the Derby, the roar of the crowd, and moans of loss from bad bets, shrieks of joy from good bets, and that thunder from the herd, of course.
- The visuals are just stunning. People-wise, the women in their pastel sundresses, the men are snazzy in their colors too (though some go too over-the-top for my liking; they move from classy to clownish), and then there are the hats which are their own category altogether. There's the track, and the spires, and the grass and dirt (or mud) and roses. And there's the jockeys and their colorful silks. But, mostly, there's the horses. A Thoroughbred racehorse at full speed, in full stride, is incredible to look upon. It is a beast that is entirely built for one pure reason: speed. You might be familiar with ESPN's "The Body Issue" that features elite, pro athletes in the nude so that their incredible physiques are displayed. Horse racing is the same thing, equine style. The Derby, in particular, is a sports photographer's bonanza. If you actually know what you're doing, you can't not get great results. Below are some photos amateur me point-and-clicked on Derby Day at a horse track (not Churchill Downs):
![image](https://files.peakd.com/file/peakd-hive/crrdlx/Eq7LSG39v5H5NpQppxhzwhfAtJVQikYVppRJsgZXh6KxGXU2YochRXqoJaW7NMZ8Yd8.jpg)
![image](https://files.peakd.com/file/peakd-hive/crrdlx/23wWw7ZbXPJxKFAyLwuraK1QypVcLV6QpsyG6Ccr6ZLiPYgNtUBa3ALWx1XR4wPYayhmT.png)
![image](https://files.peakd.com/file/peakd-hive/crrdlx/2432HqW3ZtUCjvGD7WTkg2z2ngoByX2rV6htgENN1eytUYXycRCaQdevL7xn1mdKC8qG8.gif)
![image](https://files.peakd.com/file/peakd-hive/crrdlx/245HijMM8pQ7c2EdJwrzUPa3LDjm1P51WqU6j5mYkAJnAXJrkbAn6XBNCzR7G28MSR62u.png)
![image](https://files.peakd.com/file/peakd-hive/crrdlx/EpVAhnScSoAVCYvw2Faf7ZyipskYLvu9MuBXzmHN3jdVPoDBVAVR8yqrrGf1c7Apxzb.jpg)
![image](https://files.peakd.com/file/peakd-hive/crrdlx/245Hm6k5HafqiMfzUiNK7Z3pUG752f4CmLc5pMVuonkQVY1sKG9ucSrdGgvNVQGNud628.gif)
Horse racing appeals to all senses and is viscerally exhilarating in so many ways. It is beautiful.
Genetics
But, let's get back to the real point: this game is all about Brave New World-like breeding and genetics. It is horse eugenics. The idea is simple: fast Mommy horse and fast Daddy horse means fast baby horse. In horse racing, a horse's blood lineage is called its dosage. Personally, I pay zero attention to dosage (I focus on track length and closing the long races), but dosage is a mathematical stat that tries to answer, "How much is this horse truly a Thoroughbred and a genetic winner?" This question of dosage begs another question, "What actually is a Thoroughbred?"
A Thoroughbred is a horse breed. There are a lot of horse breeds, a lot. For a novice like me, it's very hard to distinguish one from another. I think most people can see a difference between a draft horse, bred for pulling heavy loads, and a Thoroughbred, bred for speed. I think most people, if betting on a foot race, would bet on the Thoroughbred below, left and not on "Jupiter, the largest draft horse in America" on the right. If betting on hauling a wagon load of beer up a steep hill, most would bet Jupiter.
![image](https://files.peakd.com/file/peakd-hive/crrdlx/23yx8AjtVZkpE7jXJ2RXzV78hhXSvsgU97i2FkvfcFcEZevfshNgwPw2diJNhmL344gmR.png)
But, when comparing racing horses, there are also Standardbreds, which are bred for harness racing and thus have a heavier build than Thoroughbreds. The two breeds are shown below, but their distinctions are not particularly outstanding to my novice eye. Can you tell the difference, which is the Standardbred versus the Thoroughbred? (Answer at the bottom of page.) Maybe side-by-side you can tell, but could you tell if you saw one standing alone? If you saw two of the same breed, could you judge by appearance which one runs faster? If you can, I tip my cap to you.
![image](https://files.peakd.com/file/peakd-hive/crrdlx/23zbTqFJgYpKyXwxbGsVeQiKv4tTZSj8S8QboTJWEhTETPqjnaUVDtX2BirjBXH5KVNo6.png)
I imagine most people are much more familiar with, and can more readily notice, the differences in dog breeds. For instance, take the French Bulldog, the Greyhound, or the world's best dog breed, the Labrador Retriever (totally unbiased here).
![image](https://files.peakd.com/file/peakd-hive/crrdlx/244oozGkxAwS4sqoWiT8phQv8ssrqq4caB9bHgDufsogds7scUUfhp54WTKmosDzfL5WT.png)
The French Bulldog is bred for cosmetics, the greyhound for pure speed, and the Labrador for all-around everything...intelligence, sturdy athleticism, loyal companionship (totally unbiased here). In these three dogs, we can clearly see the differences that have resulted from breeding.
The evil side of horse racing
Horse racing is evil. And, it is cruel. But, for now, let's step back to the dogs. Dog breeding can be cruel as well.
The French Bulldog is something of the "it" dog right now. A quick Google Gemini search reported it as the most popular breed in 2025.
I remember when "101 Dalmatians" came out in 1996. Dalmatians skyrocketed in popularity. But, that popularity was anything but a blessing for Dalmatian pups. They were overbred (and are too inbred as it is), oversold, and were taken in by people eager to get in on the "it" dog of the moment and scoop up the cute spotted pups. But, Dalmatians are very active pups that grow into very active dogs. When the novelty of the spotted pup wore off, many were returned, given away, or taken to shelters as being uncontrollable.
The French Bulldog situation is not too different. The dogs were bred for little purpose beyond the sin of human vanity. People wanted a short, stocky, smoosh-faced dog that they perceived as cute. And, that's what they got: an unathletic dog that looked a certain way, with middle-of-the-road intelligence, and little use aside from its appearance.
Worse, seeking out this certain "toy dog" look, French Bulldogs suffer from a plethora of health issues. Summed up, they have the lowest life expectancy "by a large margin" of all dogs at only 4.5 years (the average is 11.2 years).
There is a neighbor near to me who breeds French Bulldogs. Evidently, it's a lucrative business, as they apparently sell for an estimated $2,000 to $8,000 each. I don't know how many litters the neighbors have bred and pawned, but it has been several. The breeder bitch is constantly given a little trot outside before being hauled to the vet for insemination. (Sadly, this seems to be about the only time she is taken out for exercise and family "fun.") Considering he and his wife have no real job, this seems to be their job. Breed, advertise (complete with foofy tutu outfit photos), market, sell, repeat. With only a 4.5 year life span, I see the lucrative nature in this business.
All told, it's basically a sin and a shame that humans do this to these dogs. A certain segment of people desire a certain unnatural smooshed face in a dog. And because we vainly want a certain look in a dog, so as to accessorize our own look, we breed them into forms unnatural to a canine, curse them with severe breathing difficulties and other serious health issues, and short lives.
A Greyhound is essentially a canine Thoroughbred. From generations of selective breeding, it has a massive chest, a long body with a narrow waist, and long, spindly legs. Its sole purpose is speed. Ironically, both the Thoroughbred and Greyhound can race at about the same speed...44 mph, give or take.
Man's sinful nature has abused the Greyhound too. These hounds are racing dogs and racing means gambling. So, dog tracks have been common. The pups are bred, they race a few years, then they are hopefully adopted out. A good friend of mine once adopted a retired racer to become the family dog. "Bandit" initially had a post-race job as a business's guard dog. But, due to him constantly doing nothing but laying around and sleeping, he was fired as a guard dog (who gets a Greyhound for a guard dog anyway?). Bandit eventually went to my friend, was a bit neurotic, but turned out to be a good family dog.
I think most Greyhounds don't have the fortunate story of Bandit. Once raced out, they're done and forgotten. Man's thrills are fleeting, whatever sparkles in his eye soon fades. To combat the ills of dog racing, I know that the citizens of Florida voted to outlaw dog racing in 2018 (and it indeed ended Dec. 31, 2020). Now, dog tracks lay rusting away, and Greyhounds are largely forgotten.
And then, there's the Labrador Retriever. What's not to like? These dogs can do it all, and they do it all well. Name a dog task, Labs do that well. Full disclosure: I once had a Lab (or rather, half Lab, and half...Great Dane? Doberman? Something?). Her mother was Lab and my dog looked Lab, though a bit taller and leaner. She was incredible. So, yes, I favor Labs.
But even my beloved Labs, for all that they excel in, have their issues, such as high rates of hip dysplasia. Selective breeding, and a too-narrow gene pool, have consequences.
Back to horses
Let's try to bring this back to horses. Thoroughbreds and horse racing mirror both of the characteristics seen in the French Bulldog and the Greyhound.
- Thoroughbreds have been, and still are, extremely selectively bred to accentuate certain qualities: speed, speed, speed.
- Thoroughbreds are bred for money. Literally, a champion stud or mare doesn't breed for free.
Regarding speed, Thoroughbreds have a massive muscular chest, almost no waist at all, massive muscular hindquarters, and long, spindly, almost cartoonishly thin legs. And, this built-for-speed physique brings up one of the cruelest aspects of horse racing: Thoroughbreds are prone to "break down."
These horses are structured unnaturally, like aliens. Having such a massive, muscular, powerful architecture stilted on such twig-like legs (and getting more massive and twiggier due to constant selective breeding of these traits) is a recipe for disaster. "Breaking down" is, in horse terms, a grimly literal phrase. Their leg bones break under the stress and force of running, then the horse's weight and thrust breaks the legs down further.
The name Barbaro might come to mind. Barbaro won the Kentucky Derby in 2006 impressively, by a full 6.5 lengths. Hopes were high for a Triple Crown winner. At the Preakness two weeks later, Barbaro broke down. Actually, he was so jacked up and ready to run in the starting gate that he bucked up, banged his head hard, and literally broke through the gate to false start. These animals are bred to run and race; they know when it's race time. Then, after reentering, he started the race clean before breaking down in front of the main grandstand of viewers. Horses can break any of their several leg bones. Barbaro broke the cannon, sesamoid, and long pastern, and dislocated the fetlock (ankle joint). In other words, he shattered his leg.
When horses break their legs, they're usually put down. As to why, there are lots of reasons, but it comes down to the fact that horses are built for standing and running, especially Thoroughbreds. They are not built for laying down to recuperate, and actually suffer health consequences for not standing. Understandably, a broken leg causes the horse to favor weight to the other legs while standing and this, in turn, can cause other issues. Altogether, the horse suffers.
In Barbaro's case, they tried to rehab him. I think normally he would have been put down on the track in the equine ambulance (the "meat wagon"), but this was Barbaro. The resources were there, he was beloved, and millions had witnessed his injury live. Not trying to save him would have been a public relations nightmare. Putting him down on live TV would have been even worse. There was surgery, then laminitis (inflammation under the hoof) in his opposite good hoof, the result of standing unnaturally. He rallied, then had more setbacks including laminitis in his front hooves. He was in pain, with no way to stand, and then was euthanized in 2007.
![image](https://files.peakd.com/file/peakd-hive/crrdlx/23wgoS1v6e2i2gizAqeMeXHf6Zwhz3BTuYMuYNL676tqbPZWwvXUhw7R1J6K4r7DmRj2K.png)
Regarding the money, top studs earn $200,000 to $400,000 for stud service. A top mare can be bought for $100,000 to $300,000, and then you still need the stud service. This is only to breed the foal. It has nothing to do with stabling or training the animals. In other words, it's extremely expensive.
Also regarding the money, there is, of course, the gambling. You might say this is the whole point of horse racing. It's certainly the whole point of breeding Thoroughbreds. It was the whole point for Greyhounds in Florida, before the practice was banned.
This year, an estimated $200 million was bet on the Kentucky Derby, that one race alone. $300 million was bet on the day's races combined. Grok estimated that globally in 2022, horse racing was a $402 billion industry, and expects it to grow to $793 billion by 2030.
Those numbers are staggering. But, again, I come back to sinful man. Our love of money is the root of this beautiful evil called horse racing.
I really don't know many scenes more beautiful than a Kentucky horse ranch and a Thoroughbred running across the bluegrass. Add a colt running with his mother, and the beauty is staggering. But, underneath that beauty, there is an evil side to horse racing. That side is fueled by sinful man's pride to win and his love of money.
Horse racing is a beautiful evil.
Image sources: https://wikipedia.org, original by me at the track, equine bones at https://www.anatomy-of-the-equine.com/distal-limb-bones.html, the final two images from https://pixabay.com
Note: In the "Can you tell the difference" side-by-side images above, the Thoroughbred was on the left, the Standardbred on the right.
-
@ b83a28b7:35919450
2025-05-07 12:46:19This article was originally part of the sermon of Plebchain Radio Episode 109 (April 25, 2025) that nostr:nprofile1qyxhwumn8ghj7mn0wvhxcmmvqyg8wumn8ghj7mn0wd68ytnvv9hxgqpqtvqc82mv8cezhax5r34n4muc2c4pgjz8kaye2smj032nngg52clq7fgefr and I did with Noa Gruman from nostr:nprofile1qyv8wumn8ghj7urjv4kkjatd9ec8y6tdv9kzumn9wsqzqvfdqratfpsvje7f3w69skt34vd7l9r465d5hm9unucnl95yq0ethzx7cf and nostr:nprofile1qye8wumn8ghj7mrwvf5hguewwpshqetjwdshguewd9hj7mn0wd68ycmvd9jkuap0v9cxjtmkxyhhyetvv9usz9rhwden5te0dehhxarj9ehx2cn4w5hxccgqyqj8hd6eed2x5w8pqgx82yyrrpfx99uuympcxmkxgz9k2hklg8te7pq0y72 . You can listen to the full episode here:
https://fountain.fm/episode/gdBHcfDgDXEgALjX7nBu
Let’s start with the obvious: Bitcoin is metal because it’s loud, it’s aggressive, it’s uncompromising. It’s the musical equivalent of a power chord blasted through a wall of amps—a direct challenge to the establishment, to the fiat system, to the sanitized, soulless mainstream. Metal has always been about rebellion, about standing outside the norm and refusing to be tamed. Bitcoin, too, was born in the shadows, dismissed as the currency of outlaws and freaks, and it thrived there, fueled by the energy of those who refused to bow down
But Bitcoin isn’t just any metal. It’s progressive metal. Prog metal is the genre that takes metal’s aggression and fuses it with experimentation, complexity, and a relentless drive to push boundaries. It’s not satisfied with three chords and a chorus. Prog metal is about odd time signatures, intricate solos, unexpected detours, and stories that dig into philosophy, psychology, and the human condition. It’s music for those who want more than just noise—they want meaning, depth, and innovation.
That’s Bitcoin. Bitcoin isn’t just a blunt instrument of rebellion; it’s a living, evolving experiment. It’s code that’s open to anyone, a protocol that invites innovation, a system that’s constantly being pushed, prodded, and reimagined by its community.
Like prog metal, Bitcoin is for the thinkers, the tinkerers, the relentless questioners. It’s for those who see the flaws in the mainstream and dare to imagine something radically different.
Both prog metal and Bitcoin are about freedom — freedom from the tyranny of the predictable, the safe, the centrally controlled. They are countercultures within countercultures, refusing to be boxed in by genre or by law. Both attract those who crave complexity, who aren’t afraid to get lost in the weeds, who want to build something new and beautiful from the chaos.
If you want to reach the heart of Bitcoin’s counterculture, you don’t do it with bland, safe, mainstream pop. You do it with prog metal—with music that refuses to compromise, that demands your attention, that rewards those who dig deeper. Prog metal is the true voice of Bitcoin’s core: the plebs, the builders, the dreamers who refuse to accept the world as it is.
Bitcoin is prog metal. It’s technical, it’s rebellious, it’s unafraid to be different. It’s music and money for those who want to break free—not just from the old systems, but from the old ways of thinking. And as the mainstream tries to water down both, the true counterculture survives at the core, pushing boundaries, making noise, and refusing to die.
The sermon and episode clearly had an impact on people, as evidenced by the fountain charts here (snapshot taken on May 6, 2025)
nostr:nevent1qvzqqqqqqypzqkcpsw4kc03j906dg8rt8thes432z3yy0d6fj4phylz48xs3g437qqsy7rfh8n6vgxppkwzq2ntjps0lmt4njkxjrv3rv5r59l7lkv6ahps2eavd9 And here's the clip of the sermon:
nostr:nevent1qvzqqqqqqypzpwp69zm7fewjp0vkp306adnzt7249ytxhz7mq3w5yc629u6er9zsqqsptkpkd0458yshe7gfshck2f9nfxnqe0nrjz0ptlkm9rhv094rxagapyv4d
-
@ 76c71aae:3e29cafa
2024-08-13 04:30:00On social media and in the Nostr space in particular, there’s been a lot of debate about the idea of supporting deletion and editing of notes.
Some people think they're vital features to have; others believe that more honest and healthy social media will come from getting rid of these features. The discussion about these features quickly turns to the feasibility of completely deleting something on a decentralized protocol. We quickly get to the "we can't really delete anything from the internet, or from a decentralized network" argument. This crowds out how delete and edit can mimic elements of offline interactions, and how they can be used as social signals.
When it comes to issues of deletion and editing content, what matters more is whether the creator can communicate their intentions around their content. Sure, on the internet, with decentralized protocols, there's no way to be sure something's deleted. It's not like taking a piece of paper and burning it. Computers make copies of things all the time; computers don't like deleting things. In particular, distributed systems tend to use a Kafka-style architecture with immutable logs; it's just easier to keep everything around, as deleting and reindexing is hard. Even if the software could be made to delete something, there's always screenshots, or even pictures of screens. We can't provably make something disappear.
What we need to do in our software is clearly express intention. A delete is actually a kind of retraction. “I no longer want to associate myself with this content, please stop showing it to people as part of what I’ve published, stop highlighting it, stop sharing it.” Even if a relay or other server keeps a copy, and keeps sharing it, being able to clearly state “hello world, this thing I said, was a mistake, please get rid of it.” Just giving users the chance to say “I deleted this” is a way of showing intention. It’s also a way of signaling that feedback has been heard. Perhaps the post was factually incorrect or perhaps it was mean and the person wants to remove what they said. In an IRL conversation, for either of these scenarios there is some dialogue where the creator of the content is learning something and taking action based on what they’ve learned.
Without delete or edit, there is no option to signal to the rest of the community that you have learned something because of how the content is structured today. On most platforms a reply or response stating one’s learning will be lost often in a deluge of replies on the original post and subsequent posts are often not seen especially when the original goes viral. By providing tools like delete and edit we give people a chance to signal that they have heard the feedback and taken action.
The Nostr protocol supports delete requests and expiring notes. It was one of the reasons we switched from Secure Scuttlebutt to build on Nostr. Our nos.social app offers delete, and while we know that not all relays will honor this, we believe it's important to provide social signaling tools as a means of making the internet more humane.
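As an illustration (a minimal sketch, not from the original post), a delete on Nostr is itself a signed event: kind 5 per NIP-09, referencing the event IDs to retract, while expiring notes carry an expiration tag per NIP-40. All field values below are placeholders:

```typescript
// Hypothetical kind-5 deletion request (NIP-09); ids and keys are placeholders.
const deletionRequest = {
  kind: 5,
  pubkey: "<author-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [["e", "<id-of-event-to-retract>"]],
  content: "posted by mistake", // optional human-readable reason
};

// Hypothetical expiring note (NIP-40): relays may drop it after the timestamp.
const expiringNote = {
  kind: 1,
  pubkey: "<author-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [["expiration", String(Math.floor(Date.now() / 1000) + 86400)]],
  content: "this note should disappear in 24 hours",
};
```

Relays that honor these conventions stop serving the retracted or expired event; relays that don't simply ignore them, which is exactly the signal-not-guarantee distinction made above.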
We believe that the power to learn from each other is more important than the need to police through moral outrage which is how the current platforms and even some Nostr clients work today.
It’s important that we don’t say Nostr doesn’t support delete. Not all apps need to support requesting a delete, some might want to call it a retraction. It is important that users know there is no way to enforce a delete and not all relays may honor their request.
Edit is similar, although not as widely supported as delete. It’s a creator making a clear statement that they’ve created a new version of their content. Maybe it’s a spelling error, or a new version of the content, or maybe they’re changing it altogether. Freedom online means freedom to retract a statement, freedom to update a statement, freedom to edit your own content. By building on these freedoms, we’ll make Nostr a space where people feel empowered and in control of their own media.
-
@ da18e986:3a0d9851
2024-08-14 13:58:24After months of development I am excited to officially announce the first version of DVMDash (v0.1). DVMDash is a monitoring and debugging tool for all Data Vending Machine (DVM) activity on Nostr. The website is live at https://dvmdash.live and the code is available on Github.
Data Vending Machines (NIP-90) offload computationally expensive tasks from relays and clients in a decentralized, free-market manner. They are especially useful for AI tools, algorithmic processing of user’s feeds, and many other use cases.
The long term goal of DVMDash is to become 1) a place to easily see what’s happening in the DVM ecosystem with metrics and graphs, and 2) a set of real-time tools to help developers monitor, debug, and improve their DVMs.
DVMDash aims to enable users to answer these types of questions at a glance: * What’s the most popular DVM right now? * How much money is being paid to image generation DVMs? * Is any DVM down at the moment? When was the last time that DVM completed a task? * Have any DVMs failed to deliver after accepting payment? Did they refund that payment? * How long does it take this DVM to respond? * For task X, what’s the average amount of time it takes for a DVM to complete the task? * … and more
For developers working with DVMs there is now a visual, graph based tool that shows DVM-chain activity. DVMs have already started calling other DVMs to assist with work. Soon, we will have humans in the loop monitoring DVM activity, or completing tasks themselves. The activity trace of which DVM is being called as part of a sub-task from another DVM will become complicated, especially because these decisions will be made at run-time and are not known ahead of time. Building a tool to help users and developers understand where a DVM is in this activity trace, whether it’s gotten stuck or is just taking a long time, will be invaluable. For now, the website only shows 1 step of a dvm chain from a user's request.
One of the main designs for the site is that it is highly clickable, meaning whenever you see a DVM, Kind, User, or Event ID, you can click it and open that up in a new page to inspect it.
Another aspect of this website is that it should be fast. If you submit a DVM request, you should see it in DVMDash within seconds, as well as events from DVMs interacting with your request. I have attempted to obtain DVM events from relays as quickly as possible and compute metrics over them within seconds.
This project makes use of a NoSQL database and a graph database, currently set to use MongoDB and Neo4j, for which there are free community versions that can be run locally.
Finally, I’m grateful to nostr:npub10pensatlcfwktnvjjw2dtem38n6rvw8g6fv73h84cuacxn4c28eqyfn34f for supporting this project.
Features in v0.1:
Global Network Metrics:
This page shows the following metrics:
- DVM Requests: Number of unencrypted DVM requests (kind 5000-5999)
- DVM Results: Number of unencrypted DVM results (kind 6000-6999)
- DVM Request Kinds Seen: Number of unique kinds in the Kind range 5000-5999 (except for known non-DVM kinds 5666 and 5969)
- DVM Result Kinds Seen: Number of unique kinds in the Kind range 6000-6999 (except for known non-DVM kinds 6666 and 6969)
- DVM Pub Keys Seen: Number of unique pub keys that have written a kind 6000-6999 event (except for known non-DVM kinds) or have published a kind 31990 event that specifies a ‘k’ tag value between 5000-5999
- DVM Profiles (NIP-89) Seen: Number of kind 31990 events that have a ‘k’ tag value for kind 5000-5999
- Most Popular DVM: The DVM that has produced the most result events (kind 6000-6999)
- Most Popular Kind: The Kind in range 5000-5999 that has the most requests by users
- 24 hr DVM Requests: Number of kind 5000-5999 events created in the last 24 hours
- 24 hr DVM Results: Number of kind 6000-6999 events created in the last 24 hours
- 1 week DVM Requests: Number of kind 5000-5999 events created in the last week
- 1 week DVM Results: Number of kind 6000-6999 events created in the last week
- Unique Users of DVMs: Number of unique pubkeys of kind 5000-5999 events
- Total Sats Paid to DVMs:
  - This is an estimate.
  - This value is likely a lower bound, as it does not take into consideration subscriptions paid to DVMs.
  - This is calculated by counting the values of all invoices where (see the sketch after this list):
    - A DVM published a kind 7000 event requesting payment and containing an invoice
    - The DVM later provided a DVM Result for the same job for which it requested payment
    - The assumption is that the invoice was paid, otherwise the DVM would not have done the work
  - Note that because there are multiple ways to pay a DVM, such as lightning invoices, ecash, and subscriptions, there is no guaranteed way to know whether a DVM has been paid. Additionally, there is no way to know that a DVM completed the job, because some DVMs may not publish a final result event and instead send the user a DM or take some other kind of action.
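As an illustrative sketch (my code, not part of DVMDash itself), the matching heuristic behind the sats estimate can be expressed roughly like this, with tag layouts following NIP-90 ("e" carries the job id, "amount" carries millisats):

```typescript
// Hypothetical sketch of the sats-paid estimate described in the list above.
type Event = { kind: number; pubkey: string; tags: string[][] };

function estimateSatsPaid(events: Event[]): number {
  // The job a feedback/result event refers to is carried in its "e" tag.
  const jobOf = (e: Event) => e.tags.find((t) => t[0] === "e")?.[1];

  // DVM pubkey + job id pairs for which a result (kind 6000-6999) exists.
  const delivered = new Set(
    events
      .filter((e) => e.kind >= 6000 && e.kind <= 6999)
      .map((e) => `${e.pubkey}:${jobOf(e)}`)
  );

  // Sum payment requests (kind 7000) whose job the same DVM later completed,
  // assuming a completed job means the invoice was paid.
  return events
    .filter((e) => e.kind === 7000 && delivered.has(`${e.pubkey}:${jobOf(e)}`))
    .reduce((sum, e) => {
      const msats = Number(e.tags.find((t) => t[0] === "amount")?.[1] ?? 0);
      return sum + msats / 1000;
    }, 0);
}
```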
Recent Requests:
This page shows the most recent 3 events per kind, sorted by created date. You should always be able to find the last 3 events here of all DVM kinds.
DVM Browser:
This page will either show a profile of a specific DVM, or when no DVM is given in the url, it will show a table of all DVMs with some high level stats. Users can click on a DVM in the table to load the DVM specific page.
Kind Browser:
This page will either show data on a specific kind including all DVMs that have performed jobs of that kind, or when no kind is given, it will show a table summarizing activity across all Kinds.
Debug:
This page shows the graph based visualization of all events, users, and DVMs involved in a single job as well as a table of all events in order from oldest to newest. When no event is given, this page shows the 200 most recent events where the user can click on an event in order to debug that job. The graph-based visualization allows the user to zoom in and out and move around the graph, as well as double click on any node in the graph (except invoices) to open up that event, user, or dvm in a new page.
Playground:
This page is currently under development and may not work at the moment. If it does work, in the current state you can log in with a NIP-07 extension and broadcast a kind 5050 event with some text, and then the page will show you events from DVMs. This page will be used to interact with DVMs live. A good current alternative to this feature, for some but not all kinds, is https://vendata.io/.
Looking to the Future
I originally built DVMDash out of Fear-of-Missing-Out (FOMO); I wanted to make AI systems composed of DVMs, but my day job was taking up a lot of my time. I needed to know when someone was performing a new task or launching a new AI or Nostr tool!
I have a long list of DVMs and Agents I hope to build and I needed DVMDash to help me do it; I hope it helps you achieve your goals with Nostr, DVMs, and even AI. To this end, I wish for this tool to be useful to others, so if you would like a feature, please submit a git issue here or note me on Nostr!
Immediate Next Steps:
- Refactoring code and removing code that is no longer used
- Improve documentation to run the project locally
- Adding a metric for number of encrypted requests
- Adding a metric for number of encrypted results
Long Term Goals:
- Add more metrics based on community feedback
- Add plots showing metrics over time
- Add support for showing a multi-dvm chain in the graph based visualizer
- Add a real-time mode where the pages will auto update (currently the user must refresh the page)
- ... Add support for user requested features!
Acknowledgements
There are some fantastic people working in the DVM space right now. Thank you to nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet for making python bindings for nostr_sdk and for the recent asyncio upgrades! Thank you to nostr:npub1nxa4tywfz9nqp7z9zp7nr7d4nchhclsf58lcqt5y782rmf2hefjquaa6q8 for answering lots of questions about DVMs and for making the nostrdvm library. Thank you to nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft for making the original DVM NIP and vendata.io which I use all the time for testing!
P.S. I rushed to get this out in time for Nostriga 2024; code refactoring will be coming :)
-
@ 3c984938:2ec11289
2024-07-22 11:43:17Welcome to Nostr!
Introduction
Is this your first time here on Nostr? Welcome! Nostr is a quirky acronym for "Notes and Other Stuff Transmitted by Relays" with a single goal: resisting censorship. It is an alternative to traditional social media, communications, blogging, streaming, podcasting, and eventually email (currently in development), with decentralized features that empower you, the user. You will never be bothered by an ad, captured by a centralized entity, or monetized by an algorithm.
Allow me to be your host! I'm Onigiri! I am exploring the world of Nostr, a decentralized communication protocol, and I write about the tools and the amazing developers of Nostr that bring this realm to life.
Welcome to Nostr Wonderland
You are about to enter another digital world that will blow your mind with all the decentralized applications, clients, and sites you can use. You will never look at communications or social media the same way again. All thanks to Nostr's cryptographic nature, inspired by "blockchain" technology. Every user, when they create an account on Nostr, receives a pair of keys: one private and one public. These are the keys to your own kingdom. Whatever you write, sing, record, or create, it all belongs to you.
A pair of Gold and Silver Keys
A friend and I call this "identity through encryption" because your identity is encrypted. You can share your silver key, your "npub", with other users to connect and follow. Use your golden key, your "nsec", to access your account and unlock many applications. Keep that key safe at all times. There is no longer any reason to be caged by the terms of social platforms ever again.
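As a purely illustrative sketch (not part of the original guide), this is roughly how a client generates that key pair, assuming the nostr-tools library's v2 API:

```typescript
// Illustrative sketch; assumes the nostr-tools package (v2 API names).
import { generateSecretKey, getPublicKey } from "nostr-tools/pure";
import * as nip19 from "nostr-tools/nip19";

const sk = generateSecretKey(); // golden key: keep this secret
const pk = getPublicKey(sk);    // silver key: derived from the secret

console.log("nsec:", nip19.nsecEncode(sk)); // share with no one
console.log("npub:", nip19.npubEncode(pk)); // share freely so others can follow
```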
Onigirl
npub18jvyjwpmm65g8v9azmlvu8knd5m7xlxau08y8vt75n53jtkpz2ys6mqqu3
Don't have a client yet? Pick the best option.
Find the right app for you! Use your golden key, your "nsec", to access these wonderful tools. You can also visit this page to see all the apps. Before pasting your golden key into lots of applications, consider a "signer" for web 3 sites. Please look at the following image for more details. See the legend as well.
Get a Signer extension via chrome webstore
A signer is a web browser extension. Nos2x and NostrConnect are widely accepted extensions for accessing Nostr. They simplify the process of logging into "web 3" sites. Instead of copying and pasting your golden key, your "nsec", every time, you keep it stored in the extension and grant the extension permission to access Nostr.
👉⚡⚡Get a Bitcoin lightning wallet to send/receive Zaps⚡⚡ (This is optional)
Here on Nostr, we use the Bitcoin Lightning network (L2). You will need a lightning wallet to send and receive Satoshis, the smallest denomination of a Bitcoin (0.00000001 BTC). "Zaps" are a kind of micropayment on Nostr. If you like a user's content, it is customary to leave them a tip in the form of a "zap". For example, if you like this content, you can zap me some Satoshis to reward my work. But you've only just arrived, so you don't have a wallet yet. Don't worry, I can help with that!
"Stacker.News" is a platform where users can earn SATS by posting articles and interacting with others.
Stacker.News is the easiest place to get a Bitcoin Lightning wallet address.
- Log in with your signer extension - Nos2x or NostrConnect - and click on your profile, the string of letters and numbers in the upper right corner. You will see something like this
- Click "edit" and choose a name you like. You can change it later if you wish.
- Click "save"
- Write a bio; the SN community is very welcoming. They will send you satoshis to welcome you.
- Your new Bitcoin Lightning wallet address will appear like this
^^Don't send "zaps" to this address; it is purely for educational purposes.
- With your new Bitcoin Lightning wallet address, you can paste it into any client or app of your choice. To do this, go to your profile page and, under the wallet address field in "Lightning Address", enter your new address, hit "save", and that's it. Congratulations!
👉✨Over time, you may want to move to self-custody options and perhaps even consider self-hosting your own LN node for better privacy. The good news is that stacker.news is also moving away from being a custodial wallet.
⭐NIP-05 DNS identity⭐ Just like on Twitter, a checkmark shows that you are from the same garden, "as a human," and not an outlier like a weed or a "bot." But not in the nefarious way Big Tech does it. In Nostr Wonderland, this lets you map your silver key, your "npub", to a DNS identifier. Once verified, you can shout to announce your new Nostr residence and share it.
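For illustration (my sketch, not part of the original guide), a NIP-05 identifier like name@example.com simply resolves to a JSON file served at https://example.com/.well-known/nostr.json that maps the name to a public key; the values below are placeholders:

```json
{
  "names": {
    "name": "<hex-encoded-public-key>"
  }
}
```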
✨Hay un montón de opciones, pero si has seguido los pasos, esto se vuelve extremadamente fácil.
👉✅¡Haz clic en tu "Perfil ", luego en "Configuración ", desplázate hasta la parte inferior y pega tu clave Silver, "npub!" y haz clic en "Guardar " y ¡listo! Utiliza tu monedero relámpago de Stacker.news como tu NIP-05. ¡¡¡Enhorabuena!!! ¡Ya estás verificado! Dale unas horas y cuando uses tu cliente "principal " deberías ver una marca de verificación.
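For those curious about what that verification actually does under the hood: per NIP-05, a client fetches a well-known JSON file from the domain and compares public keys. A minimal TypeScript sketch; it assumes the npub has already been decoded to its hex form:

```typescript
// Verify an identifier such as "onigirl@stacker.news" against a hex pubkey.
async function verifyNip05(identifier: string, expectedPubkeyHex: string): Promise<boolean> {
  const [name, domain] = identifier.split("@");
  const res = await fetch(
    `https://${domain}/.well-known/nostr.json?name=${encodeURIComponent(name)}`
  );
  // Response shape per NIP-05: { "names": { "<name>": "<hex pubkey>" } }
  const json = await res.json();
  return json.names?.[name] === expectedPubkeyHex;
}
```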
Nostr, the nonconformist of servers.
Instead of using a single instance or a centralized server, Nostr is built so that multiple databases exchange messages through "relays". Relays, which are neutral and non-discriminatory, store and broadcast public messages on the Nostr network. They transmit messages to all other clients connected to them, securing communications across the decentralized network.
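Under the hood, the client-relay conversation is plain JSON over a WebSocket, as defined in NIP-01. A minimal sketch; the relay address is illustrative:

```typescript
// Ask one relay for the latest 10 short text notes (kind 1).
const relay = new WebSocket("wss://relay.example.com");

relay.onopen = () => {
  // ["REQ", <subscription id>, <filter>] opens a subscription.
  relay.send(JSON.stringify(["REQ", "my-sub", { kinds: [1], limit: 10 }]));
};

relay.onmessage = (msg) => {
  const [type, subId, event] = JSON.parse(msg.data as string);
  if (type === "EVENT") console.log(`event on ${subId}:`, event);
  if (type === "EOSE") relay.close(); // relay has sent all stored events
};
```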
My friends on Nostr welcome you!
Welcome to the party. Care for some tea?🍵
There's so much more!
This is the tip of the iceberg. Follow me as I continue exploring new lands and the developers, the knights who power this ecosystem. Find me here for more content like this, and share it with other Nostr users. Meet the knights fighting for freedomTech on Nostr and the projects they contribute to in order to make it a reality.💋
Onigirl @npub18jvyjwpmm65g8v9azmlvu8knd5m7xlxau08y8vt75n53jtkpz2ys6mqqu3
🧡😻This guide was carefully translated by miggymofongo
You can follow her here. @npub1ajt9gp0prf4xrp4j07j9rghlcyukahncs0fw5ywr977jccued9nqrcc0cs
website
-
@ 6871d8df:4a9396c1
2024-06-12 22:10:51Embracing AI: A Case for AI Accelerationism
In an era where artificial intelligence (AI) development is at the forefront of technological innovation, a counter-narrative championed by a group I refer to as the 'AI Decels'—those advocating for the deceleration of AI advancements—seems to be gaining significant traction. After tuning into a recent episode of the Joe Rogan Podcast, I realized that the prevailing narrative around AI was heading in a dangerous direction. Rogan hosted Aza Raskin and Tristan Harris, technology safety advocates who released a talk called 'The AI Dilemma,' for a discussion. You may know them from the popular documentary 'The Social Dilemma' on the dangers of social media. It became increasingly clear that the cautionary stance dominating this discourse might be tipping the scales too far, veering towards an over-regulated future that stifles innovation rather than fostering it.
Are we moving too fast?
While acknowledging AI's benefits, Aza and Tristan fear it could be dangerous if not guided by ethical standards and safeguards. They believe AI development is moving too quickly and that the right incentives for its growth are not in place. They are concerned about the possibility of "civilizational overwhelm," where advanced AI technology far outpaces 21st-century governance. They fear a scenario where society and its institutions cannot manage or adapt to the rapid changes and challenges introduced by AI.
They argue for regulating and slowing down AI development due to rapid, uncontrolled advancement driven by competition among companies like Google, OpenAI, and Microsoft. They claim this race can lead to unsafe releases of new technologies, with AI systems exhibiting unpredictable, emergent behaviors, posing significant societal risks. For instance, AI can inadvertently learn tasks like sentiment analysis or human emotion understanding, creating potential for misuse in areas like biological weapons or cybersecurity vulnerabilities.
Moreover, AI companies' profit-driven incentives often conflict with the public good, prioritizing market dominance over safety and ethics. This misalignment can lead to technologies that maximize engagement or profits at societal expense, similar to the negative impacts seen with social media. To address these issues, they suggest government regulation to realign AI companies' incentives with safety, ethical considerations, and public welfare. Implementing responsible development frameworks focused on long-term societal impacts is essential for mitigating potential harm.
This isn't new
Though the premise of their concerns seems reasonable, it's dangerous and an all too common occurrence with the emergence of new technologies. For example, in their example in the podcast, they refer to the technological breakthrough of oil. Oil as energy was a technological marvel and changed the course of human civilization. The embrace of oil — now the cornerstone of industry in our age — revolutionized how societies operated, fueled economies, and connected the world in unprecedented ways. Yet recently, as ideas of its environmental and geopolitical ramifications propagated, the narrative around oil has shifted.
Tristan and Aza detail this shift and claim that though the period was great for humanity, we didn't have another technology to go to once the technological consequences became apparent. The problem with that argument is that we did innovate to a better alternative: nuclear. However, at its technological breakthrough, it was met with severe suspicions, from safety concerns to ethical debates over its use. This overregulation due to these concerns caused a decades-long stagnation in nuclear innovation, where even today, we are still stuck with heavy reliance on coal and oil. The scare tactics and fear-mongering had consequences, and, interestingly, they don't see the parallels with their current deceleration stance on AI.
These examples underscore a critical insight: the initial anxiety surrounding new technologies is a natural response to the unknowns they introduce. Yet, history shows that too much anxiety can stifle the innovation needed to address the problems posed by current technologies. The cycle of discovery, fear, adaptation, and eventual acceptance reveals an essential truth—progress requires not just the courage to innovate but also the resilience to navigate the uncertainties these innovations bring.
Moreover, believing we can predict and plan for all AI-related unknowns reflects overconfidence in our understanding and foresight. History shows that technological progress, marked by unexpected outcomes and discoveries, defies such predictions. The evolution from the printing press to the internet underscores progress's unpredictability. Hence, facing AI's future requires caution, curiosity, and humility. Acknowledging our limitations and embracing continuous learning and adaptation will allow us to harness AI's potential responsibly, illustrating that embracing our uncertainties, rather than pretending to foresee them, is vital to innovation.
The journey of technological advancement is fraught with both promise and trepidation. Historically, each significant leap forward, from the dawn of the industrial age to the digital revolution, has been met with a mix of enthusiasm and apprehension. Aza Raskin and Tristan Harris's thesis in the 'AI Dilemma' embodies the latter.
Who defines "safe?"
When slowing down technologies for safety or ethical reasons, the question arises: who gets to define what "safe" or "ethical" means? This inquiry is not merely technical but deeply ideological, touching the very core of societal values and power dynamics. For example, the push for Diversity, Equity, and Inclusion (DEI) initiatives shows how specific ideological underpinnings can shape definitions of safety and decency.
Take the case of the initial release of Google's AI chatbot, Gemini, which chose the ideology of its creators over truth. Luckily, the answers were so ridiculous that the pushback was sudden and immediate. My worry, however, is if, in correcting this, they become experts in making the ideological capture much more subtle. Large bureaucratic institutions' top-down safety enforcement creates a fertile ground for ideological capture of safety standards.
I claim that the issue is not the technology itself but the lens through which we view and regulate it. Suppose the gatekeepers of 'safety' are aligned with a singular ideology. In that case, AI development would skew to serve specific ends, sidelining diverse perspectives and potentially stifling innovative thought and progress.
In the podcast, Tristan and Aza suggest such manipulation as a solution. They propose using AI for consensus-building and creating "shared realities" to address societal challenges. In practice, this means that when individuals' viewpoints seem to be far apart, we can leverage AI to "bridge the gap." How they bridge the gap and what we would bridge it toward is left to the imagination, but to me, it is clear. Regulators will inevitably influence it from the top down, which, in my opinion, would be the opposite of progress.
In navigating this terrain, we must advocate for a pluralistic approach to defining safety, encompassing various perspectives and values achieved through market forces rather than a governing entity choosing winners. The more players that can play the game, the more wide-ranging perspectives will catalyze innovation to flourish.
Ownership & Identity
Just because we should accelerate AI forward does not mean I do not have my concerns. When I think about what could be the most devastating for society, I don't believe we have to worry about a Matrix-level dystopia; I worry about freedom. As I explored in "Whose data is it anyway?," my concern gravitates toward the issues of data ownership and the implications of relinquishing control over our digital identities. This relinquishment threatens our privacy and the integrity of the content we generate, leaving it susceptible to the inclinations and profit of a few dominant tech entities.
To counteract these concerns, a paradigm shift towards decentralized models of data ownership is imperative. Such standards would empower individuals with control over their digital footprints, ensuring that we develop AI systems with diverse, honest, and truthful perspectives rather than the massaged, narrow viewpoints of their creators. This shift safeguards individual privacy and promotes an ethical framework for AI development that upholds the principles of fairness and impartiality.
As we stand at the crossroads of technological innovation and ethical consideration, it is crucial to advocate for systems that place data ownership firmly in the hands of users. By doing so, we can ensure that the future of AI remains truthful, non-ideological, and aligned with the broader interests of society.
But what about the Matrix?
I know I am in the minority on this, but I feel that the concerns of AGI (Artificial General Intelligence) are generally overblown. I am not scared of reaching the point of AGI, and I think the idea that AI will become so intelligent that we will lose control of it is unfounded and silly. Reaching AGI is not reaching consciousness; being worried about it spontaneously gaining consciousness is a misplaced fear. It is a tool created by humans for humans to enhance productivity and achieve specific outcomes.
At a technical level, large language models (LLMs) are trained on extensive datasets and learning patterns from language and data through a technique called "unsupervised learning" (meaning the data is untagged). They predict the next word in sentences, refining their predictions through feedback to improve coherence and relevance. When queried, LLMs generate responses based on learned patterns, simulating an understanding of language to provide contextually appropriate answers. They will only answer based on the datasets that were inputted and scanned.
AI will never be "alive," meaning that AI lacks inherent agency, consciousness, and the characteristics of life, not capable of independent thought or action. AI cannot act independently of human control. Concerns about AI gaining autonomy and posing a threat to humanity are based on a misunderstanding of the nature of AI and the fundamental differences between living beings and machines. AI spontaneously developing a will or consciousness is more similar to thinking a hammer will start walking than us being able to create consciousness through programming. Right now, there is only one way to create consciousness, and I'm skeptical that is ever something we will be able to harness and create as humans. Irrespective of its complexity — and yes, our tools will continue to become evermore complex — machines, specifically AI, cannot transcend their nature as non-living, inanimate objects programmed and controlled by humans.
The advancement of AI should be seen as enhancing human capabilities, not as a path toward creating autonomous entities with their own wills. So, while AI will continue to evolve, improve, and become more powerful, I believe it will remain under human direction and control without the existential threats often sensationalized in discussions about AI's future.
With this framing, we should not view the race toward AGI as something to avoid. This will only make the tools we use more powerful, making us more productive. With all this being said, AGI is still much farther away than many believe.
Today's AI excels in specific, narrow tasks, known as narrow or weak AI. These systems operate within tightly defined parameters, achieving remarkable efficiency and accuracy that can sometimes surpass human performance in those specific tasks. Yet, this is far from the versatile and adaptable functionality that AGI represents.
Moreover, the exponential growth of computational power observed in the past decades does not directly translate to an equivalent acceleration in achieving AGI. AI's impressive feats are often the result of massive data inputs and computing resources tailored to specific tasks. These successes do not inherently bring us closer to understanding or replicating the general problem-solving capabilities of the human mind, which again would only make the tools more potent in our hands.
While AI will undeniably introduce challenges and change the aspects of conflict and power dynamics, these challenges will primarily stem from humans wielding this powerful tool rather than the technology itself. AI is a mirror reflecting our own biases, values, and intentions. The crux of future AI-related issues lies not in the technology's inherent capabilities but in how it is used by those wielding it. This reality is at odds with the idea that we should slow down development as our biggest threat will come from those who are not friendly to us.
AI Begets AI
While the unknowns of AI development and its pitfalls indeed stir apprehension, it's essential to recognize the power of market forces and human ingenuity in leveraging AI to address these challenges. History is replete with examples of new technologies raising concerns, only for those very technologies to provide solutions to the problems they initially seemed to exacerbate. It looks silly and unfair to think of fighting a war with a country that never embraced oil and was still primarily getting its energy from burning wood.
The evolution of AI is no exception to this pattern. As we venture into uncharted territories, the potential issues that arise with AI—be it ethical concerns, use by malicious actors, biases in decision-making, or privacy intrusions—are not merely obstacles but opportunities for innovation. It is within the realm of possibility, and indeed, probability, that AI will play a crucial role in solving the problems it creates. The idea that there would be no incentive to address and solve these problems is to underestimate the fundamental drivers of technological progress.
Market forces, fueled by the demand for better, safer, and more efficient solutions, are powerful catalysts for positive change. When a problem is worth fixing, it invariably attracts the attention of innovators, researchers, and entrepreneurs eager to solve it. This dynamic has driven progress throughout history, and AI is poised to benefit from this problem-solving cycle.
Thus, rather than viewing AI's unknowns as sources of fear, we should see them as sparks of opportunity. By tackling the challenges posed by AI, we will harness its full potential to benefit humanity. By fostering an ecosystem that encourages exploration, innovation, and problem-solving, we can ensure that AI serves as a force for good, solving problems as profound as those it might create. This is the optimism we must hold onto—a belief in our collective ability to shape AI into a tool that addresses its own challenges and elevates our capacity to solve some of society's most pressing issues.
An AI Future
The reality is that it isn't whether AI will lead to unforeseen challenges—it undoubtedly will, as has every major technological leap in history. The real issue is whether we let fear dictate our path and confine us to a standstill or embrace AI's potential to address current and future challenges.
The approach to solving potential AI-related problems with stringent regulations and a slowdown in innovation is akin to cutting off the nose to spite the face. It's a strategy that risks stagnating the U.S. in a global race where other nations will undoubtedly continue their AI advancements. This perspective dangerously ignores that AI, much like the printing press of the past, has the power to democratize information, empower individuals, and dismantle outdated power structures.
The way forward is not less AI but more of it, more innovation, optimism, and curiosity for the remarkable technological breakthroughs that will come. We must recognize that the solution to AI-induced challenges lies not in retreating but in advancing our capabilities to innovate and adapt.
AI represents a frontier of limitless possibilities. If wielded with foresight and responsibility, it's a tool that can help solve some of the most pressing issues we face today. There are certainly challenges ahead, but I trust that with problems come solutions. Let's keep the AI Decels from steering us away from this path with their doomsday predictions. Instead, let's embrace AI with the cautious optimism it deserves, forging a future where technology and humanity advance to heights we can't imagine.
-
@ 3c984938:2ec11289
2024-06-09 14:40:55I'm having some pain in my heart about the U.S. elections.
Ever since Obama campaigned for office, young voters have come out of the woodwork in increasing numbers. Things have not improved. They've actively told you that "your vote matters." I believe this to be a lie unless any citizen can demand at the gate, at the White House, to be allowed to hold and point a gun to the president's head. (Relax, this is hyperbole)
Why so dramatic? Well, what does the president do? Signs bills, commands the military, nominates new Fed chairmen, ambassadors, supreme court judges, and senior officials, all while traveling in luxury planes and living in a white palace for four years.
They promised, Every TIME, to protect citizens' rights when they take the oath of office.
...They've broken this several times, with so-called "emergency-crisis"
The purpose of a president today, it seems, is basically to hire armed thugs to keep the citizens in check and make sure you "voluntarily continue to be a slave" to the system, hence the IRS. The corruption extends from the cop to the judge and even to politicians. The politicians get paid by lobbyists to create bills in Congress for the president to sign. There's no right answer when money is involved with politicians. It is the same whether you vote Obama, Biden, Trump, or Haley. They will wield the pen to serve themselves and say it will benefit the country.
In the first 100 years of the presidency, the government wasn't even a big deal. It didn't interfere with your life nearly as much as it does today.
^^ You hold the power in your hands, don't let them take it. Don't believe me? Try to get a loan from a bank without a signature. Your signature is as good as gold (if not better) and is an original trademark.
Just Don't Vote. End the Fed. Opt out.
^^ I choose to forge my own path, even if it means leaving everything I knew before. It doesn't have to be a spiritual thing. Some have called me religious because of this. We're all capable of greatness and of having humanity.
✨Don't have a machine heart with a machine mind. Instead, choose to have a heart like the cowardly lion from the "Wizard Of Oz."
There's no such thing as a good president or politicians.
If there were, they would have issued interest-free Federal Reserve Notes. Lincoln and Kennedy tried to do this; they got shot.
There's still a banner of America there, but it's so far gone that I cannot even recognize it. Instead, I only see a bunch of 🏳🌈 pride flags.
✨Patrick Henry got it wrong, when he delivered his speech, "Give me liberty or give me death." Liberty and freedom are two completely different things.
Straight from Merriam-Webster. Choose: right or left?
No control, to be 100% without restrictions- free.
✨I disagree with the example sentence given, because you cannot advocate for human freedom and own slaves; the one contradicts the other. Yet that was common in the founding days.
I can understand that many may disagree with me, and you might be thinking, "This time will be different." I respectfully disagree, and the proxy wars are proof. Learn the importance of Bitcoin; every satoshi is a step away from corruption.
✨What does it look like to pull the curtains from the "Wizard of Oz?"
Have you watched the video below, what 30 Trillion dollars in debt looks like visually? Even I was blown away. https://video.nostr.build/d58c5e1afba6d7a905a39407f5e695a4eb4a88ae692817a36ecfa6ca1b62ea15.mp4
I say this with love. Hear my plea?
Normally, I don't write about anything political. It just feels like a losing game. My energy feels better spent learning new things, writing, and creating, even a blog post as simple as this. Stack SATs, and stay humble.
<3 Onigirl
-
@ 3bf0c63f:aefa459d
2024-05-21 12:38:08Bitcoin transactions explained
A transaction is a piece of data that takes inputs and produces outputs. Forget about the blockchain thing: Bitcoin is actually just a big tree of transactions. The blockchain is just a way to keep transactions ordered.
Imagine you have 10 satoshis. That means you have them in an unspent transaction output (UTXO). You want to spend them, so you create a transaction. The transaction should reference unspent outputs as its inputs. Every transaction has an immutable id, so you use that id plus the index of the output (because transactions can have multiple outputs). Then you provide a script that unlocks that output, along with the related signatures, and then you specify your new outputs along with the scripts that lock them.
As you can see, there's this lock/unlocking thing and there are inputs and outputs. Inputs must be unlocked by fulfilling the conditions specified by the person who created the transaction they're in. And outputs must be locked so anyone wanting to spend those outputs will need to unlock them.
For most of the cases locking and unlocking means specifying a public key whose controller (the person who has the corresponding private key) will be able to spend. Other fancy things are possible too, but we can ignore them for now.
Back to the 10 satoshis you want to spend. Since you've successfully referenced 10 satoshis and unlocked them, now you can specify the outputs (this is all done in a single step). You can specify one output of 10 satoshis, two of 5, one of 3 and one of 7, three of 3 and so on. The sum of outputs can't be more than 10. And if the sum of outputs is less than 10 the difference goes to fees. In the first days of Bitcoin you didn't need any fees, but now you do, otherwise your transaction won't be included in any block.
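A small TypeScript sketch of the structure described above; the type names, scripts, and the truncated txid are illustrative, not consensus code:

```typescript
// A transaction references previous outputs and creates new locked outputs.
type OutPoint = { txid: string; vout: number };           // id + index of a previous output
type TxInput  = { prevout: OutPoint; scriptSig: string }; // unlocking script + signatures
type TxOutput = { value: number; scriptPubKey: string };  // amount in sats + locking script
type Transaction = { inputs: TxInput[]; outputs: TxOutput[] };

// Spending a single 10-sat UTXO into outputs of 7 and 2 sats:
const tx: Transaction = {
  inputs: [{ prevout: { txid: "e3b0...", vout: 0 }, scriptSig: "<sig> <pubkey>" }],
  outputs: [
    { value: 7, scriptPubKey: "OP_DUP OP_HASH160 <hashA> OP_EQUALVERIFY OP_CHECKSIG" },
    { value: 2, scriptPubKey: "OP_DUP OP_HASH160 <hashB> OP_EQUALVERIFY OP_CHECKSIG" },
  ],
};

// Whatever is not claimed by the outputs becomes the miner fee.
const fee = 10 - tx.outputs.reduce((sum, o) => sum + o.value, 0); // = 1 sat
```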
If you're still interested in transactions maybe you could take a look at this small chapter of that Andreas Antonopoulos book.
If you hate Andreas Antonopoulos because he is a communist shitcoiner or don't want to read more than half a page, go here: https://en.bitcoin.it/wiki/Coin_analogy
-
@ 3c984938:2ec11289
It's been a journey from the Publishing Forest of Nostr to the open sea of web3. I've come across a beautiful chain of islands and thought: why not take a break and explore this place? If I'm searching for devs and FOSS, I should search every nook and cranny inside the realm of Nostr. It is quite vast for little old me. I'm just a little hamster, and I don't speak in code or binary zeros and ones.
After being at sea for a while, my heart raced with excitement at what I could find. It seems I wasn't alone; there were others here like me! Let's help spread the message to others about this uncharted realm. See, look at the other sailboats, aren't they pretty? Thanks to some generous donations of SATs, I was able to afford the docking fee.
Ever feel like everyone was going to a party, and you were supposed to dress up, but you missed the memo? Or a comic-con? Well, I felt completely underdressed, and that's an understatement. Turns out there are some knights around here. Take a peek!
A black cat with a knight passed by very quickly. He was moving too fast for me to track. Where was he going? Then I spotted a group of knights heading in the same direction, so I tagged along. The vibes from these guys was impossible to resist. They were just happy-go-lucky. 🥰They were heading to a tavern on a cliff off the island.
Ehh? A tavern? Slightly confused, I wondered whatever these knights could be doing here. I guess when they're done with their rounds, they come here to blow off steam. Things are looking curiouser and curiouser. But the black cat from earlier was here with its rider, who was dismounting. So you can only guess where I'm going.
The atmosphere in this pub was lively and energetic. So many knights spoke among themselves. A group here, another there, but there was one that caught my eye. I went up to a group at a table, who towered well above me even when seated. Taking a deep breath, I asked, "Who manages this place?" They unanimously pointed to one waiting for ale at the bar. What was he doing? Watching others talk? How peculiar.
So I went up to him! And introduced myself.
"Hello I'm Onigirl"
"Hello Onigirl, Welcome to Gossip"
"Gossip, what is Gossip?" scratching my head and whiskers.
What is Gossip? Gossip is FOSS and a great client for privacy-minded nostriches. It avoids browser tech, bypassing several scripting layers such as JavaScript☕, HTML parsing, rendering, and CSS (except HTTP GET and WebSockets), and uses OpenGL-style rendering instead. Nostriches who wish to remain anonymous can use Gossip over Tor. Mike recommends using QubesOS, Whonix, or Tails. [FYI: Gossip does not natively support a Tor SOCKS5 proxy.] Most helpful for spilling the beans if you're a journalist.
On top of using your nsec, your encryption key, Gossip adds another layer of security over your account with a password login. There's nothing wrong with using the browser extensions (such as nos2x or Flamingo), which make it super easy to log in to Nostr-enabled websites and apps, but they do expose you to browser vulnerabilities.
Mike points out,
"people have already had their private key stolen from other nostr clients,"
so it's a concern if you value your account. I most certainly care about mine.
Gossip's UI is simple and clean, revolving around NIP-65, also called the "Outbox model." As posted on GitHub,
"This NIP allows Clients to connect directly with the most up-to-date relay set from each individual user, eliminating the need of broadcasting events to popular relays."
This avoids clients that track only a specific set of relays, which can congest those relays when you publish your note and also leaves you exposed to censorship; by using Gossip, you can publish notes to alternative relays that have not censored you and still reach the same followers.
👉The easiest way to translate that is reducing redundancy to publish to popular relays or centralized relays for content reach to your followers.
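For the technically curious, the relay declaration that powers this model is itself just a Nostr event of kind 10002, per NIP-65. A minimal sketch; the relay URLs are illustrative:

```typescript
// Clients read this event to learn where to find a user's notes
// and where to send mentions, instead of spamming popular relays.
const relayListEvent = {
  kind: 10002,
  created_at: Math.floor(Date.now() / 1000),
  content: "",
  tags: [
    ["r", "wss://relay.example.com"],           // no marker: read and write
    ["r", "wss://inbox.example.com", "read"],   // where others can reach me
    ["r", "wss://outbox.example.com", "write"], // where I publish my notes
  ],
};
```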
Cool! What an awesome client, I mean tavern! What else does this knight do? He reaches for something in his pocket. What is it? Pocket is a database for storing and retrieving Nostr events, which Mike has written in Rust with a few extra kinks, inspired by Will's nostrdb. Still in development, but it'll be another tool for you, dear user! 💖💕💚
Onigirl is proud to present these knights to the community and honor them with kisu. 💋💋💋 Show some 💖💘💓🧡💙💚
👉💋💋Will - jb55 Lord of apples 💋 @npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s
👉💋💋 Mike Knight - Lord of Security 💋 @npub1acg6thl5psv62405rljzkj8spesceyfz2c32udakc2ak0dmvfeyse9p35c
Knights spend a lot of time behind the screen coding for the better of humanity. It is a tough job! Let's appreciate these knights, relay operators, that support this amazing realm of Nostr! FOSS for all!
This article was prompted by the need for privacy and security of your data. They're different things, not to be confused.
Recently, Edward Snowden warned Bitcoin devs about the need for privacy. Quote:
“I've been warning Bitcoin developers for ten years that privacy needs to be provided for at the protocol level. This is the final warning. The clock is ticking.”
Snowden's comments come after heavy enforcement actions against Samourai Wallet, Roger Ver, and Binance's CZ, and now the closure of Wasabi Wallet. Additionally, according to CryptoBriefing, Trezor is ending its CoinJoin integration as well. Many are concerned over the new definition of a money transmitter, which includes even those who don't touch the funds.
Help your favorite the hamster
^^Me drowning in notes on your feed. I can only eat so many notes to find you.
👉If there are any XMPP fans on here: I'm open to the idea of opening a public channel you could follow in a forum-like style. My server of choice would likely be a German server.😀 You would receive my articles in an njump.me-style or website-like format. GrapheneOS users can download the Cheogram app from the F-Droid store for free to access it. Apple and Android users have to pay to download this app; alternatives are ntalk or Conversations. If it interests the community, just FYI. Please comment or DM.
👉If you enjoyed this content, please consider reposting/sharing as my content is easily drowned by notes on your feed. You could also join my community under Children_Zone where I post my content.
An alternative is following #onigirl. Just FYI, this feature is currently a little buggy.
Follow along as I search for tools and awesome devs to help you, dear user, live a decentralized life as I explore the realm of Nostr.
Thank you Fren
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16Drivechain
Understanding Drivechain requires a shift from the paradigm most bitcoiners are used to. It is not about "trustlessness" or "mathematical certainty", but game theory and incentives. (Well, Bitcoin in general is also that, but people prefer to ignore it and focus on some illusion of trustlessness provided by mathematics.)
Here we will describe the basic mechanism (simple) and incentives (complex) of "hashrate escrow" and how it enables a 2-way peg between the mainchain (Bitcoin) and various sidechains.
The full concept of "Drivechain" also involves blind merged mining (i.e., the sidechains mine themselves by publishing their block hashes to the mainchain without the miners having to run the sidechain software), but this is much easier to understand and can be accomplished either by the BIP-301 mechanism or by the Spacechains mechanism.
How does hashrate escrow work from the point of view of Bitcoin?
A new address type is created. Anything that goes into it is locked and can only be spent if all miners agree on the Withdrawal Transaction (WT^) that will spend it, over a period of 6 months. There is one of these special addresses for each sidechain.
To gather miners' agreement, bitcoind keeps track of the "score" of all transactions that could possibly spend from that address. On every block mined, for each sidechain, the miner can use a portion of their coinbase to either increase the score of one WT^ by 1 while decreasing the score of all others by 1; or decrease the score of all WT^s by 1; or do nothing.
Once a transaction has gotten a high enough score, it is published and funds are effectively transferred from the sidechain to the withdrawing users.
If a timeout of 6 months passes and the score doesn't meet the threshold, that WT^ is discarded.
What does the above procedure mean?
It means that people can transfer coins from the mainchain to a sidechain by depositing to the special address. They can later withdraw from the sidechain by making a special withdraw transaction in the sidechain.
That special transaction freezes funds in the sidechain while a transaction that aggregates all withdrawals into a single mainchain WT^ is built, which is then submitted to the mainchain miners so they can start voting on it, and finally, after some months, it is published.
Now the crucial part: the validity of the WT^ is not verified by the Bitcoin mainchain rules. That is, if Bob has requested a withdrawal from the sidechain to his mainchain address, but someone publishes a wrong WT^ that instead takes Bob's funds and sends them to Alice's mainchain address, there is no way the mainchain will know that. What determines the "validity" of the WT^ is the miner vote score and only that. It is the job of miners to vote correctly -- and for that they may want to run the sidechain node in SPV mode so they can attest to the existence of a reference to the WT^ transaction in the sidechain blockchain (which then ensures it is OK), or do these checks by some other means.
What? 6 months to get my money back?
Yes. But no, in practice anyone who wants their money back will be able to use an atomic swap, submarine swap or other similar service to transfer funds from the sidechain to the mainchain and vice-versa. The long delayed withdraw costs would be incurred by few liquidity providers that would gain some small profit from it.
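To make the voting procedure described above concrete, here is a small, purely illustrative TypeScript sketch of the score bookkeeping — this is not bitcoind code, and the threshold value and all names are assumptions for clarity:

```typescript
type WithdrawalId = string; // identifier of a proposed WT^

const THRESHOLD = 13150; // assumed: roughly half the blocks mined in 6 months
// Assumption: a newly proposed WT^ enters this map with a score of 0.
const scores = new Map<WithdrawalId, number>();

// Each block carries one of three choices for a given sidechain:
type Vote =
  | { kind: "upvote"; wt: WithdrawalId } // +1 to one WT^, -1 to all others
  | { kind: "downvote" }                 // -1 to every WT^
  | { kind: "abstain" };                 // change nothing

function applyBlockVote(vote: Vote) {
  if (vote.kind === "abstain") return;
  for (const [wt, score] of scores) {
    const delta = vote.kind === "upvote" && wt === vote.wt ? +1 : -1;
    scores.set(wt, score + delta);
  }
  for (const [wt, score] of scores) {
    if (score >= THRESHOLD) {
      console.log(`WT^ ${wt} gathered enough votes and can now be published`);
    }
  }
}
```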
Why bother with this at all?
Drivechains solve many different problems:
It enables experimentation and new use cases for Bitcoin
Issued assets, fully private transactions, stateful blockchain contracts, turing-completeness, decentralized games, some "DeFi" aspects, prediction markets, futarchy, decentralized and yet meaningful human-readable names, big blocks with a ton of normal transactions on them, a chain optimized only for Lightning-style networks to be built on top of it.
These are some ideas that may have merit to them, but were never actually tried because they couldn't be tried with real Bitcoin or inferfacing with real bitcoins. They were either relegated to the shitcoin territory or to custodial solutions like Liquid or RSK that may have failed to gain network effect because of that.
It solves conflicts and infighting
Some people want fully private transactions in a UTXO model, others want "accounts" they can tie to their name and build reputation on top; some people want simple multisig solutions, others want complex code that reads a ton of variables; some people want to put all the transactions on a global chain in batches every 10 minutes, others want off-chain instant transactions backed by funds previously locked in channels; some want to spend, others want to just hold; some want to use blockchain technology to solve all the problems in the world, others just want to solve money.
With Drivechain-based sidechains all these groups can be happy simultaneously and don't fight. Meanwhile they will all be using the same money and contributing to each other's ecosystem even unwillingly, it's also easy and free for them to change their group affiliation later, which reduces cognitive dissonance.
It solves "scaling"
Multiple chains like the ones described above would certainly do a lot to accommodate many more transactions than the current Bitcoin chain can. One could have special Lightning Network chains, but even just big-block chains or big-block-mimblewimble chains or whatnot could probably do a good job. Or even something less cool like 200 independent chains just like Bitcoin is today, no extra features (and you can call it "sharding"), just that would already multiply the current total capacity by 200.
Use your imagination.
It solves the blockchain security budget issue
The calculation is simple: you imagine what security budget is reasonable for each block in a world without block subsidy and divide that by the number of bytes you can fit in a single block: that is the price to be paid in satoshis per byte. In any reasonable estimate, the price necessary for every Bitcoin transaction rises to very large amounts, such that not only does any day-to-day transaction have insanely prohibitive costs, but even Lightning channel opens and closes become impracticable.
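As a purely illustrative instance of that calculation (the numbers below are assumptions, not figures from the article):

```latex
% Assume a security budget of 10 BTC per block and 1,000,000 bytes of block space.
\[
\text{fee rate} = \frac{10\ \text{BTC}}{10^{6}\ \text{bytes}}
               = \frac{10^{9}\ \text{sats}}{10^{6}\ \text{bytes}}
               = 1000\ \text{sats/byte}
\]
% A typical ~250-byte transaction would then cost:
\[
250\ \text{bytes} \times 1000\ \text{sats/byte} = 250{,}000\ \text{sats} = 0.0025\ \text{BTC}
\]
```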
So without a solution like Drivechain you'll be left with only one alternative: pushing Bitcoin usage to trusted services like Liquid and RSK or custodial Lightning wallets. With Drivechain, though, there could be thousands of transactions happening in sidechains and being all aggregated into a sidechain block that would then pay a very large fee to be published (via blind merged mining) to the mainchain. Bitcoin security guaranteed.
It keeps Bitcoin decentralized
Once we have sidechains to accommodate the normal transactions, the mainchain functionality can be reduced to being only a "hub" for the sidechains' comings and goings, and then the maximum block size for the mainchain can be reduced to, say, 100kb, which would make running a full node very, very easy.
Can miners steal?
Yes. If a group of coordinated miners are able to secure the majority of the hashpower and keep their coordination for 6 months, they can publish a WT^ that takes the money from the sidechains and pays to themselves.
Will miners steal?
No, because the incentives are such that they won't.
Although it may look at first that stealing is an obvious strategy for miners as it is free money, there are many costs involved:
- The cost of ceasing blind-merged mining returns -- as stealing will kill a sidechain, all the fees from it that miners would be expected to earn for the next years are gone;
- The cost of Bitcoin price going down: If a steal is successful that will mean Drivechains are not safe, therefore Bitcoin is less useful, and miner credibility will also be hurt, which are likely to cause the Bitcoin price to go down, which in turn may kill the miners' businesses and savings;
- The cost of coordination -- assuming miners are just normal businesses, they just want to do their work and get paid, but stealing from a Drivechain will require coordination with other miners to conduct an immoral act in a way that has many pitfalls and is likely to be broken over the months;
- The cost of miners leaving your mining pool: when we talked about "miners" above we were actually talking about mining pools operators, so they must also consider the risk of miners migrating from their mining pool to others as they begin the process of stealing;
- The cost of community goodwill -- when participating in a steal operation, a miner will suffer a ton of backlash from the community. Even if the attempt fails at the end, the fact that it was attempted will contribute to growing concerns over exaggerated miners power over the Bitcoin ecosystem, which may end up causing the community to agree on a hard-fork to change the mining algorithm in the future, or to do something to increase participation of more entities in the mining process (such as development or cheapment of new ASICs), which have a chance of decreasing the profits of current miners.
Another point to take into consideration is that one may be inclined to think a newly-created sidechain or a sidechain with relatively low usage may be more easily stolen from, since the blind merged mining returns from it (point 1 above) are going to be small -- but the fact is also that a sidechain with small usage will have less money to be stolen from, and since the other costs besides 1 are less elastic, in the end it will not be worth stealing from these either.
All of the above considerations are valid only if miners are stealing from good sidechains. If there is a sidechain that is doing things wrong, scamming people, not being used at all, or is full of bugs, for example, that will be perceived as a bad sidechain, and then miners can and will safely steal from it and kill it, which will be perceived as a good thing by everybody.
What do we do if miners steal?
Paul Sztorc has suggested in the past that a user-activated soft-fork could prevent miners from stealing, i.e., most Bitcoin users and nodes issue a rule similar to this one to invalidate the inclusion of a faulty WT^ and thus cause any miner that includes it in a block to be relegated to their own Bitcoin fork that other nodes won't accept.
This suggestion has made people think Drivechain is a sidechain solution backed by user-activated soft-forks for safety, which is very far from the truth. Drivechains must not and will not rely on this kind of soft-fork, although they are possible, as the coordination costs are too high and no one should ever expect these things to happen.
If even with all the incentives against them (see above) miners do still steal from a good sidechain that will mean the failure of the Drivechain experiment. It will very likely also mean the failure of the Bitcoin experiment too, as it will be proven that miners can coordinate to act maliciously over a prolonged period of time regardless of economic and social incentives, meaning they are probably in it just for attacking Bitcoin, backed by nation-states or something else, and therefore no Bitcoin transaction in the mainchain is to be expected to be safe ever again.
Why use this and not a full-blown trustless and open sidechain technology?
Because it is impossible.
If you ever heard someone saying "just use a sidechain", "do this in a sidechain" or anything like that, be aware that these people are either talking about "federated" sidechains (i.e., funds are kept in custody by a group of entities) or they are talking about Drivechain, or they are disillusioned and think it is possible to do sidechains in any other manner.
No, I mean a trustless 2-way peg with correctness of the withdrawals verified by the Bitcoin protocol!
That is not possible unless Bitcoin verifies all transactions that happen in all the sidechains, which would be akin to drastically increasing the blocksize and expanding the Bitcoin rules in tons of ways, i.e., a terrible idea that no one wants.
What about the Blockstream sidechains whitepaper?
Yes, that was a way to do it. The Drivechain hashrate escrow is a conceptually simpler way to achieve the same thing with improved incentives, less junk in the chain, more safety.
Isn't the hashrate escrow a very complex soft-fork?
Yes, but it is much simpler than SegWit. And, unlike SegWit, it doesn't force anything on users, i.e., it isn't a mandatory blocksize increase.
Why should we expect miners to care enough to participate in the voting mechanism?
Because it's in their own self-interest to do it, and it costs very little. Today over half of the miners mine RSK. That's not blind merged mining; it's a very convoluted process that requires them to run an RSK full node. For the Drivechain sidechains, an SPV node would be enough, or maybe just getting data from a block explorer API, so it's much, much simpler.
What if I still don't like Drivechain even after reading this?
That is the entire point! You don't have to like it or use it as long as you're fine with other people using it. The hashrate escrow special addresses will not impact you at all, validation cost is minimal, and you get the benefit of people who want to use Drivechain migrating to their own sidechains and freeing up space for you in the mainchain. See also the point above about infighting.
See also
-
@ 3c984938:2ec11289
2024-04-16 17:14:58Hello (N)oystrs!
Yes! I'm calling you an (N)oystr!
Why is that? Because you shine, and I'm not just saying that to get more SATs. Ordinary oysters and mussels can produce these beauties! There's nothing seriously unique about them; however, with a little time and love, each oyster is capable of creating something truly beautiful. I like believing so, at least, given the fact that you're even reading this article; it makes you an (N)oystr! This isn't published on X (formerly known as Twitter), Facebook, Discord, Telegram, or Instagram, which makes you the rare breed! A pearl indeed! I do have access to those platforms, but why create content on a terrible platform knowing I too could be shut down! Unfortunately, many people still use these platforms. This forces individuals to give up their privacy every day. Meta is leading the charge by forcing users to provide a photo ID for verification in order to use their crappy, obsolete site. If that was not bad enough, imagine you're having some type of disagreement or voicing an opinion. Then Bigtech can easily deplatform you. Umm. So no open debate? Just instantly shut-off users. Whatever happened to the right to a fair trial? Nope, just burning you at the stake as if you're a witch or warlock!
How heinous are the perpetrators and financiers of this? Well, that's opening another can of worms for you.
Imagine your voice being taken away, like the little mermaid. Ariel was lucky to have a prince, but the majority of us? The likelihood that I would be carried away by the current of the sea during a sunset with a prince on a sailboat is zero. And I live on an island, so I'm just missing the prince, the sailboat (though I know where I could go to steal one), and the red hair. Oh my gosh, now I feel sad.
I do not have the prince; Bob is better! I do not have mermaid fins or a shell bra. Use coconut shells; they offer more support! But I still have my voice and a killer sunset to die for!
All of that is possible thanks to the work of developers. These knights fight for Freedom Tech by utilizing FOSS, which helps provide us with a vibrant ecosystem. Unfortunately, I recently learned that they are not all funded. Knights must eat, drink, and have a work space. This space is where they spend most of their sweat equity on an app or software that may or may not pan out. That brilliance is susceptible to fading, as these individuals are not seen but rather stay behind closed doors. What's worse, what if these developers lose faith in their project and decide to join forces with Meta! 😖 Does WhatsApp ring a bell?
Without them, I probably wouldn't be able to create this long-form article. Let's cheer them on like cheerleaders... 👉Unfortunately, there's no cheerleader emoji, so you'll just have to settle for a dancing lady and guy. 💃🕺
Semisol said it beautifully, npub12262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s6cgrkj
If we want freedom tech to succeed, the tools that make it possible need to be funded: relays like https://nostr.land, media hosts like https://nostr.build, clients like https://damus.io, etc.
With that thought, Onigirl is pleased to announce the launch of a new series, with a sole focus on free-market devs and projects.
Knights of Nostr!
I'll happily brief you about their exciting projects and how they benefit humanity! Let's support these magnificent projects, devs, relays, and builders! Our first runner-up!
Oppa Fishcake: Lord of Media Hosting
npub137c5pd8gmhhe0njtsgwjgunc5xjr2vmzvglkgqs5sjeh972gqqxqjak37w
Oppa Fishcake with his noble steed!
Think of this as an introduction to learn and further your experience on Nostr! New developments and applications are constantly happening on Nostr. It's enough to make one's head spin. I may also cover FOSS projects (outside of Nostr), as they need some love as well! Plus, you can think of it as another tool to add to your decentralized life. I will not be doing how-to-Nostr guides; I personally feel there are plenty of great guides already available, which I'm happy to add to my curation collection, easily searchable on Yakihonne.
For email updates you can subscribe to my [[https://paragraph.xyz/@onigirl]]
If you like it, send me some 🧡💛💚 hearts💜💗💖 otherwise zap dat⚡⚡🍑🍑peach⚡⚡🍑 ~If not me, then at least to our dearest knight!
Thank you from the bottom of my heart for your time and support (N)oystr! Shine bright like a diamond! Share if you care! FOSS power!
Follow on your favorite Nostr Client for the best viewing experience!
[!NOTE]
I'm using Obsidian + Nostr Writer Plugin, a new way to publish Markdown directly to Nostr. I was a little nervous using this because I was used to doing these in RStudio with R Markdown.
Since this is my first article, I sent it to my account as a draft to test it. It's pretty neat. -
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Game characters and symbols
The feeling of "being" a character in a game or a bit of make-believe is perhaps the closest I have ever come to understanding a religious symbol.
The consecrated host is, according to the religion, the body of Christ, but our modern mind can only conceive of it as a representation of the body of Christ. In the same way, other cultures and other religions have similar symbols, including some in which the participant in the ritual himself plays the role of a god or something of the sort.
"Plays the role" is, again, the modern mind's interpretation. The person there is the thing, while at the same time knowing that he is not, that he remains himself.
In video games and in children's games where one takes on a character, the player is the character. Among the players, nobody says someone is "acting"; he simply is, and that's it. There isn't even another name or verb for it. At most "embodying", but that is already journalistic vocabulary made to ease comprehension for those outside the game.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The Cause
Menger's Principles of Political Economy is the only book that emphasizes the CAUSE the whole time. Scientists all seem not to know, or to constantly forget, that things have causes, and that true knowledge is knowledge of the cause of things.
The cause is a metaphysical category far superior to any correlation or hypothesis-test result; it cannot be discovered by any econometric artifice or reduced to mere statistical temporal antecedence. The cause of phenomena cannot be proven scientifically, but it can be known.
Menger's book tells the reader the causes of various economic phenomena and interlinks them in such a way that the chaotic world of the economy seems to acquire an order the moment you read it. It is a magical, indescribable sensation.
When I recommended it to you, what I wanted was to imbue you with the spirit of the search for the cause of things. After reading it, you are able to perceive causal continuity in the most complex phenomena of today's economy, to see the causes linking all government action to its various consequences in human life. I do this every day, and it is the best feeling in the world when the chaos of the news in the newspaper's Economy section -- which to the very journalist who wrote it means nothing (so much so that he writes it all wrong) -- falls into place within an ordered system of causes and consequences.
I probably always err on some or several points, but even so it is marvelous. Or rather, it is even more marvelous when I discover the error and reinsert the correction into that beautiful rationalization of the order of the economic world, which is the order of God.
in a scrap for T.P.
-
@ 6871d8df:4a9396c1
In an era where data seems to be as valuable as currency, the prevailing trend in AI starkly contrasts with the concept of personal data ownership. The explosion of AI and the ensuing race have made it easy to overlook where the data is coming from. The current model, dominated by big tech players, involves collecting vast amounts of user data and selling it to AI companies for training LLMs. Reddit recently penned a $60 million deal, Google guards and mines YouTube, and more are going in this direction. But is that their data to sell? Yes, it's on their platforms, but without the users to generate it, what would they monetize? To me, this practice raises significant ethical questions, as it assumes that user data is a commodity that companies can exploit at will.
The heart of the issue lies in the ownership of data. Why, in today's digital age, do we not retain ownership of our data? Why can't our data follow us, under our control, to wherever we want to go? These questions echo the broader sentiment that while some in the tech industry — such as the blockchain-first crypto bros — recognize the importance of data ownership, their "blockchain for everything solutions," to me, fall significantly short in execution.
Reddit further complicates this with its current move to IPO, which, on the heels of the large data deal, might reinforce the mistaken belief that user-generated data is a corporate asset. Others, no doubt, will follow suit. This underscores the urgent need for a paradigm shift towards recognizing and respecting user data as personal property.
In my perfect world, the digital landscape would undergo a revolutionary transformation centered around the empowerment and sovereignty of individual data ownership. Platforms like Twitter, Reddit, Yelp, YouTube, and Stack Overflow, integral to our digital lives, would operate on a fundamentally different premise: user-owned data.
In this envisioned future, data ownership would not just be a concept but a practice, with public and private keys ensuring the authenticity and privacy of individual identities. This model would eliminate the private data silos that currently dominate, where companies profit from selling user data without consent. Instead, data would traverse a decentralized protocol akin to the internet, prioritizing user control and transparency.
The cornerstone of this world would be a meritocratic digital ecosystem. Success for companies would hinge on their ability to leverage user-owned data to deliver unparalleled value rather than their capacity to gatekeep and monetize information. If a company breaks my trust, I can move to a competitor, and my data, connections, and followers will come with me. This shift would herald an era where consent, privacy, and utility define the digital experience, ensuring that the benefits of technology are equitably distributed and aligned with the users' interests and rights.
The conversation needs to shift fundamentally. We must challenge this trajectory and advocate for a future where data ownership and privacy are not just ideals but realities. If we continue on our current path without prioritizing individual data rights, the future of digital privacy and autonomy is bleak. Big tech's dominance allows them to treat user data as a commodity, potentially selling and exploiting it without consent. This imbalance has already led to users being cut off from their digital identities and connections when platforms terminate accounts, underscoring the need for a digital ecosystem that empowers user control over data. Without changing direction, we risk a future where our content — and our freedoms by consequence — are controlled by a few powerful entities, threatening our rights and the democratic essence of the digital realm. We must advocate for a shift towards data ownership by individuals to preserve our digital freedoms and democracy.
-
@ 7e538978:a5987ab6
2025-05-07 10:25:30Across Switzerland, customers at SPAR supermarkets are now able to pay for their groceries using Lightning on Bitcoin — a step towards everyday Bitcoin adoption. This rollout was led by DFX, a Bitcoin services company focused on onboarding businesses and individuals to Bitcoin. Behind the scenes LNbits plays a key role.
## Lightning at the Checkout
SPAR’s approach is simple: at the till, customers can scan a static QR code to pay in Bitcoin using the Lightning Network. Each checkout in each participating store has its own unique LNURL address — a reusable QR code designed for fast, low-friction Lightning payments.
To manage these LNURLs, DFX leverages LNbits. Using the LNbits Pay Links extension, DFX generates LNURLs for each till across the network of participating SPAR locations. The result is a robust, reliable setup that works at scale. Store staff do not interact with LNbits directly — instead, DFX manages the backend, ensuring each till has a dedicated LNURL without operational overhead for SPAR employees.
> At SPAR we use static QR codes that meet the LNURL standard. Therefore we use LNbits. Each checkout has its own personal LNURL address which we generate with LNbits.
>
> — Cyrill Thommen, CEO of DFX.Swiss
## LNbits in Action
LNbits provided DFX with a modular, open-source solution that allows them to build only what they need, without locking into a rigid platform. For instance, DFX built custom monitoring around payment events using the LNbits API, while keeping full control over wallet infrastructure.
The ability to generate and manage LNURLs through the LNbits API, while layering additional monitoring and business logic on top, made LNbits a practical choice.
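As an illustration of that shape, here is a minimal sketch against the LNbits REST API. Everything specific in it is assumed rather than taken from the article: the instance URL, the API key, the till identifiers, and the use of the Pay Links (LNURLp) extension endpoint are stand-ins for whatever DFX actually runs.

```python
import requests

LNBITS_URL = "https://lnbits.example.com"   # hypothetical LNbits instance
API_KEY = "REPLACE_WITH_WALLET_KEY"         # the wallet's API key (assumption)

def create_till_paylink(till_id: str) -> str:
    """Create one reusable LNURL pay link for a checkout till."""
    resp = requests.post(
        f"{LNBITS_URL}/lnurlp/api/v1/links",   # Pay Links (LNURLp) extension endpoint
        headers={"X-Api-Key": API_KEY},
        json={
            "description": f"SPAR till {till_id}",
            "min": 1,         # minimum payable amount (sats)
            "max": 500_000,   # maximum payable amount (sats)
            "comment_chars": 0,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["lnurl"]  # encoded LNURL to render as a static QR code

# One static QR code per till, generated once and printed at the checkout
for till in ["store1-till1", "store1-till2"]:
    print(till, create_till_paylink(till))
```

Each returned LNURL would be rendered once as a static QR code and fixed to the till; no staff interaction with LNbits is needed afterwards, matching the setup described above.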
DFX’s setup highlights how open source software, Bitcoin and purpose-built tools can underpin enterprise-grade deployments. The system works reliably — without introducing friction for customers or staff.
## Bitcoin in the Real World
Switzerland is already one of Europe’s most Bitcoin-friendly environments, with over 1,000 businesses accepting Bitcoin. But SPAR’s implementation is noteworthy for its scale and practicality: everyday purchases, completed with Bitcoin, at a national supermarket chain.
LNbits' flexible architecture, API-first design, and plug-in system make it well suited to precisely this kind of adoption.
As more retailers explore Lightning integration, SPAR’s rollout sets a precedent — showing how modular, open-source tools like LNbits can bring Bitcoin into daily life, seamlessly.
-
@ 9c35fe6b:5977e45b
2025-05-07 08:49:00Sailing the Nile on the Dahabiya Gorgonia Nile Cruise offers an intimate way to experience the timeless beauty of Egypt. This elegant boat is ideal for travelers who seek calm, charm, and tradition. With only a few cabins onboard, guests can enjoy a more serene atmosphere compared to larger vessels. ETB Tours Egypt offers this unique cruise option as part of its Egypt vacation packages, blending cultural richness with comfort.
Cultural Encounters Along the Nile

The Dahabiya Nile Cruises give you a chance to witness life along the riverbanks up close. Onboard the Gorgonia, travelers visit lesser-known villages, ancient temples, and local markets far from the tourist crowds. With ETB Tours Egypt, your itinerary includes authentic experiences guided by knowledgeable Egyptologists—an ideal choice for those seeking Egypt private tours.

Scenic Sailing with Modern Comfort

Although traditional in style, the Dahabiya Gorgonia is equipped with modern amenities to ensure a pleasant journey. Guests enjoy spacious decks, elegant dining, and personalized service. Whether part of an All inclusive Egypt vacations package or a custom-designed plan, this cruise ensures you travel in style without missing out on comfort.

Smart Choices for Smart Travelers

ETB Tours Egypt also makes sure that the Dahabiya Gorgonia Nile Cruise is available as part of their Egypt budget tours, offering excellent value without compromising on experience. It's perfect for those who want to enjoy the best of Egypt without overspending. Flexible options are also available through their wide range of Egypt travel packages, making it easy to match your schedule and interests.

To Contact Us:
E-Mail: info@etbtours.com
Mobile & WhatsApp: +20 10 67569955 - +201021100873
Address: 4 El Lebeny Axis, Nazlet Al Batran, Al Haram, Giza, Egypt
-
@ 8ce092d8:950c24ad
2024-02-04 23:35:07Overview
- Introduction
- Model Types
- Training (Data Collection and Config Settings)
- Probability Viewing: AI Inspector
- Match
- Cheat Sheet
I. Introduction
AI Arena is the first game that combines human and artificial intelligence collaboration.
AI learns your skills through "imitation learning."
Official Resources
1. Official Documentation (Must Read): Everything You Need to Know About AI Arena. Watch the 2-minute video in the documentation to quickly understand the basic flow of the game.
2. Official Play-2-Airdrop competition FAQ site: https://aiarena.notion.site/aiarena/Gateway-to-the-Arena-52145e990925499d95f2fadb18a24ab0
3. Official Discord (Must Join): https://discord.gg/aiarenaplaytest for the latest announcements or for seeking help. The team also has an exclusive channel there.
4. Official YouTube: https://www.youtube.com/@aiarena. Because the game has built-in tutorials, you can choose to watch videos instead.
What is this game about?
- Although categorized as a platform fighting game, the core is a probability-based strategy game.
- Warriors take actions based on probabilities on the AI Inspector dashboard, competing against opponents.
- The game does not allow direct manual input of probabilities for each area but inputs information through data collection and establishes models by adjusting parameters.
- Data collection emulates fighting games, but training can be completed using a Dummy. As long as you can complete the in-game tutorial, you can master the game controls.
II. Model Types
Before training, there are three model types to choose from: Simple Model Type, Original Model Type, and Advanced Model Type.
It is recommended to try the Advanced Model Type after completing at least one complete training with the Simple Model Type and gaining some understanding of the game.
Simple Model Type
The Simple Model is akin to completing a form, and the training session is comparable to filling various sections of that form.
This model has 30 buckets. Each bucket can be seen as telling the warrior what action to take in a specific situation. There are 30 buckets, meaning 30 different scenarios. Within the same bucket, the probabilities for direction or action are the same.
For example: What should I do when I'm off-stage — refer to the "Recovery (you off-stage)" bucket.
For all buckets, refer to this official documentation:
https://docs.aiarena.io/arenadex/game-mechanics/tabular-model-v2
Video (no sound): The entire training process for all buckets
https://youtu.be/1rfRa3WjWEA
Game version 2024.1.10. The method of saving is outdated. Please refer to the game updates.
Advanced Model Type
The "Original Model Type" and "Advanced Model Type" are based on Machine Learning, which is commonly referred to as combining with AI.
The Original Model Type consists of only one bucket, representing the entire map. If you want the AI to learn different scenarios, you need to choose a "Focus Area" to let the warrior know where to focus. A single bucket means that a slight modification can have a widespread impact on the entire model. This is where the "Advanced Model Type" comes in.
The "Advanced Model Type" can be seen as a combination of the "Original Model Type" and the "Simple Model Type". The Advanced Model Type divides the map into 8 buckets. Each bucket can use many "Focus Area." For a detailed explanation of the 8 buckets and different Focus Areas, please refer to the tutorial page (accessible in the Advanced Model Type, after completing a training session, at the top left of the Advanced Config, click on "Tutorial").
III. Training (Data Collection and Config Settings)
Training Process:
- Collect Data
- Set Parameters, Train, and Save
- Repeat Step 1 until the Model is Complete
Training the Simple Model Type is the easiest to start with; refer to the video above for a detailed process.
Training the Advanced Model Type offers more possibilities through the combination of "Focus Area" parameters, providing a higher upper limit. While the Original Model Type has great potential, it's harder to control. Therefore, this section focuses on the "Advanced Model Type."
1. What Kind of Data to Collect
- High-Quality Data: Collect purposeful data. Garbage in, garbage out. Only collect the necessary data; don't collect randomly. It's recommended to use Dummy to collect data. However, don't pursue perfection; through parameter adjustments, AI has a certain level of fault tolerance.
- Balanced Data: Balance your dataset. In simple terms, if you complete actions on the left side a certain number of times, also complete a similar number on the right side. While data imbalance can be addressed through parameter adjustments (see below), it's advised not to have this issue during data collection.
- Moderate Amount: A single training will include many individual actions. Collect data for each action 1-10 times. Personally, it's recommended to collect data 2-3 times for a single action. If the effect of a single training is not clear, conduct a second (or even third) training with the same content, but with different parameter settings.
2. What to Collect (and Focus Area Selection)
Game actions mimic fighting games, consisting of 4 directions + 6 states (Idle, Jump, Attack, Grab, Special, Shield). Directions can be combined into ↗, ↘, etc. These directions and states can then be combined into different actions.
To make "Focus Area" effective, you need to collect data in training that matches these parameters. For example, for "Distance to Opponent", you need to collect data when close to the opponent and also when far away. * Note: While you can split into multiple training sessions, it's most effective to cover different situations within a single training.
Refer to the Simple Config, categorize the actions you want to collect, and based on the game scenario, classify them into two categories: "Movement" and "Combat."
Movement-Based Actions
Action Collection
When the warrior is offstage, regardless of where the opponent is, we require the warrior to return to the stage to prevent self-destruction.
This involves 3 aerial buckets: 5 (Near Blast Zone), 7 (Under Stage), and 8 (Side Of Stage).
* Note: The background comes from the Tutorial mentioned earlier. The arrows in the image indicate the direction of the action and are for reference only. * Note: Action collection should be clean; do not collect actions that involve leaving the stage.
Config Settings
In the Simple Config, you can directly choose "Movement". However, for better customization, it's recommended to use the Advanced Config directly.
- Intensity: the method for setting Intensity will be introduced separately later.
- Buckets: as shown in the image, choose the bucket you are training.
- Focus Area: position-based parameters:
  - Your position (must)
  - Raycast Platform Distance, Raycast Platform Type (optional; generally choose these in Bucket 7)
Combat-Based Actions
The goal is to direct attacks quickly and effectively towards the opponent, which is the core of game strategy.
This involves 5 buckets: - 2 regular situations - In the air: 6 (Safe Zone) - On the ground: 4 (Opponent Active) - 3 special situations on the ground: - 1 Projectile Active - 2 Opponent Knockback - 3 Opponent Stunned
2 Regular Situations
In the in-game tutorial, we learned how to perform horizontal attacks. However, in the actual game, directions expand to 8 dimensions. Imagine having 8 relative positions available for launching hits against the opponent. Our task is to design what action to use for attack or defense at each relative position.
Focus Area
- Basic (generally select all):
  - Angle to opponent
  - Distance to opponent
  - Discrete Distance: choosing this option helps better differentiate between closer and farther distances from the opponent. As shown in the image, red indicates a relatively close distance, and green indicates a relatively distant distance.
- Advanced: other commonly used parameters:
- Direction: different facings to opponent
- Your Elemental Gauge and Discrete Elementals: Considering the special's charge
- Opponent action: The warrior will react based on the opponent's different actions.
- Your action: Your previous action. Choose this if teaching combos.
3 Special Situations on the Ground
Projectile Active, Opponent Stunned, Opponent Knockback These three buckets can be referenced in the Simple Model Type video. The parameter settings approach is the same as Opponent Active/Safe Zone.
For Projectile Active, in addition to the parameters based on combat, to track the projectile, you also need to select "Raycast Projectile Distance" and "Raycast Projectile On Target."
3. Setting "Intensity"
Resources
- The "Tutorial" mentioned earlier explains these parameters.
- Official Config Document (2022.12.24): https://docs.google.com/document/d/1adXwvDHEnrVZ5bUClWQoBQ8ETrSSKgG5q48YrogaFJs/edit
TL;DR:
Epochs:
- Reduce the number if the model should learn less from the collected data; increase it for more learning.

Batch Size:
- Set to the minimum (16) if data is precise but unbalanced, or if you just want it to learn fast.
- Increase (e.g., 64) if data is slightly imprecise but balanced.
- If the data is both imprecise and unbalanced, consider retraining.

Learning Rate:
- Maximize (0.01) for more learning, but with a risk of forgetting past knowledge.
- Minimize for more accurate learning with less impact on previous knowledge.

Lambda:
- Reduce it to prioritize learning new things.

Data Cleaning:
- Enable "Remove Sparsity" unless you want the AI to learn idleness.
- For special cases, like teaching the warrior to use special moves when idle, refer to this tutorial video: https://discord.com/channels/1140682688651612291/1140683283626201098/1195467295913431111

Personal Experience:
- Initial training settings: 125 epochs, batch size 16, learning rate 0.01, lambda 0, data cleaning enabled.
- Prioritize Multistream, sometimes use Oversampling.
- Fine-tune subsequent training based on the theories above.
IV. Probability Viewing: AI Inspector
The dashboard consists of "Direction + Action." Above the dashboard, you can see the "Next Action" – the action the warrior will take in its current state. The higher the probability, the more likely the warrior is to perform that action, indicating a quicker reaction. It's essential to note that when checking the Direction, the one with the highest visual representation may not have the highest numerical value. To determine the actual value, hover the mouse over the graphical representation, as shown below, where the highest direction is "Idle."
In the map, you can drag the warrior to view the probabilities of the warrior in different positions. Right-click on the warrior with the mouse to change the warrior's facing. The status bar below can change the warrior's state on the map.
When training the "Opponent Stunned, Opponent Knockback" bucket, you need to select the status below the opponent's status bar. If you are focusing on "Opponent action" in the Focus Zone, choose the action in the opponent's status bar. If you are focusing on "Your action" in the Focus Zone, choose the action in your own status bar. When training the "Projectile Active" Bucket, drag the projectile on the right side of the dashboard to check the status.
Next
The higher the probability, the faster the reaction. However, be cautious when the action probability reaches 100%. This may cause the warrior to be in a special case of "State Transition," resulting in unnecessary "Idle" states.
Explanation: In each state a fighter is in, there are different "possible transitions". For example, from falling state you cannot do low sweep because low sweep requires you to be on the ground. For the shield state, we do not allow you to directly transition to headbutt. So to do headbutt you have to first exit to another state and then do it from there (assuming that state allows you to do headbutt). This is the reason the fighter runs because "run" action is a valid state transition from shield. Source
V. Learn from Matches
After completing all the training, your model is preliminarily finished—congratulations! The warrior will step onto the arena alone and embark on its debut!
Next, we will learn about the strengths and weaknesses of the warrior from battles to continue refining the warrior's model.
In matches, besides appreciating the performance, pay attention to the following:
- Movement, i.e., off the stage: observe how the warrior gets eliminated. Is it due to issues in the action settings at a certain position, or is it a normal death caused by a high percentage? The former is what we need to avoid and optimize.
- Combat: analyze both sides' actions carefully. Observe which actions you and the opponent used in different states. Check which of your hits are less effective, how the opponent handles different actions, etc.
The approach to battle analysis is similar to the thought process in the "Training", helping to have a more comprehensive understanding of the warrior's performance and making targeted improvements.
VI. Cheat Sheet
Training
1. Click "Collect" to collect actions.
2. "Map - Data Limit" is more user-friendly. Most players perform initial training on the "Arena" map.
3. Switch between the warrior and the dummy: Tab key (keyboard) / Home key (controller).
4. Use "Collect" to make the opponent loop a set of actions.
5. Instantly move the warrior to a specific location: click "Settings" - SPAWN - choose the desired location on the map - On. Press the Enter key (keyboard) / Start key (controller) during training.

Inspector
1. Right-click on the fighter to change their direction. Drag the fighter and observe the changes in different positions and directions.
2. When satisfied with the training, click "Save."
3. In "Sparring" and "Simulation," use "Current Working Model."
4. If satisfied with a model, then click "compete." The model used in the rankings is the one marked as "competing."

Sparring / Ranked
1. Use the Throneroom map only for the top 2 or top 10 rankings.
2. There is a 30-second cooldown between matches. The replays are played for any match. Once the battle begins, you can see the winner on the leaderboard or by right-clicking the page - Inspect - Console. Also, if you encounter any errors or bugs, please send screenshots of the console to the Discord server.
Good luck! See you on the arena!
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A senior doesn't own the freshman girls ("Veterano não é dono de bixete")

"A SENIOR DOESN'T OWN THE FRESHMAN GIRLS." The phrase, in giant letters, catches the attention of neophyte passersby. It hovers over a yellow poster listing various complaints against "sexist hazing rituals" which, in the opinion of the person responsible for the poster, are "not a joke, but oppression."

Here is a bizarre example of how things are: first, all the university students approve of the idea of hazing, support its practice and even wish to suffer it themselves -- on the condition that they can apply it to others afterwards --, praising the wonders of the university world, where supreme wisdom hides behind initiation rituals beyond the reach of the imagination of the common and crude man, of the poor, and of the daddy's boys of the private colleges; in short: they foment the basest, most animalistic instincts, the primordial cruelty, destroying in themselves and in their colleagues whatever civilizational values might have remained, until they are all indistinguishable from aggressive, depraved monkeys.

Then they come along with a poster to protest against the harassment -- which no doubt happens on a very large scale -- suffered by the 17-year-old freshman girls who, being also newcomers to the university world, still retain a little discernment and modesty.

The incomprehension of the phenomenon, however, is so great that the hazing rituals are not identified as a mental problem, a disease that must be treated and eliminated, but as a symptom of men's sexist oppression of women, a product of this paternalistic civilization which, since God is called "the Father" and not "the Mother", corrupts the beneficial, pure and angelic nature of primitive man and turns him into such a vile creature.

In the opinion of the poster's authors it is therefore necessary to keep destroying what remains of Western culture, and then hope that less oppressive hazing rituals will emerge.
-
@ 78b3c1ed:5033eea9
2025-05-07 08:23:24Why each node has its own policy → to protect the node's resources (CPU, bandwidth, memory) and avoid wasted processing.

Why policy is stricter than the consensus rules:

1. Conserving resources. The consensus rules are the standard for whether something is ultimately valid, but verifying and relaying every single transaction would exhaust a node's resources. A policy mechanism that rejects things up front is needed.
2. Maintaining network health. If masses of transactions with very low fees, overly complex scripts, or non-standard scripts were allowed to flow, the whole network would bog down. To prevent this, nodes restrict relaying with their own policies.
3. Flexibility for development. Policy can be changed flexibly via software updates, whereas changing consensus rules carries the danger of a hard fork. Making policy stricter allows new restrictions to be tried out safely.

What does the standard policy mean? A node operator can use their own policy by changing bitcoind's settings or rewriting its code. In theory policy is arbitrary, and the standard policy (the policy Bitcoin Core ships with) is just a default value. Still, the standard policy carries the following important meanings.

1. A baseline that keeps the network compatible. If everyone used completely arbitrary policies, transaction propagation would become inefficient. The standard policy provides the minimum standard that the majority of nodes will relay, a shared expectation that transactions complying with it can flow through the network.
2. A guideline for development and services. Wallet developers and service providers (exchanges, payment services, and so on) can build on the premise that producing transactions compliant with the standard policy is enough. Without a standard, you couldn't construct a transaction that propagates without surveying every node's individual policy.
3. A venue for forming community consensus. The standard policy is decided through Bitcoin Core development and discussion. A new restriction or relaxation introduced there can first be tried at the policy level; if no problems appear, it becomes the groundwork for discussing its promotion to a future consensus rule.

In short, it's just a default, but an important one. The standard policy is indeed technically nothing more than a default value, but in practice it plays an important role as a pillar of network stability, compatibility, and development guidance.

"Wasted processing" on a Bitcoin node mainly refers to the following:

1. Verifying transactions with no prospect of confirmation. Example: transactions whose fees are so extremely low that no miner would ever include them in a block → verifying their signatures and loading them into the mempool one by one wastes CPU and RAM.
2. Verifying clearly non-standard scripts or formats. Example: extremely complex, non-standard scripts → valid under consensus, but since other nodes on the network won't relay them, propagating them is wasted effort.
3. Processing spam-like floods of transactions. Example: an attacker sends a huge volume of minimal-fee transactions to bloat the mempool → memory, disk I/O, and bandwidth are consumed for nothing.
4. Detailed verification of clearly invalid blocks. Example: blocks that are too large, or that fail the difficulty requirement → unless rejected early, full transaction and signature verification squanders computing resources.

This wasted processing drains a node's CPU time, memory, disk I/O, and bandwidth, and in the worst case can be exploited for DoS (denial-of-service) attacks. Policy therefore restricts things so that, at the initial receive stage or the relay stage, such data is never verified, stored, or forwarded in the first place. In summary, "wasted processing" means spending node resources on transactions and blocks that the bulk of the network will not accept and that will never be included in a block.

Whether processing is wasted is not decided simply by whether policy forbids it.

Essentially it is judged by two things: 1. whether it uses a node's resources (CPU, memory, bandwidth, disk) excessively, and 2. whether it has any prospect of being accepted by other nodes, the network, and miners.

Suppose a future version of Bitcoin Core removed the limits on OP_RETURN output counts and data size. Such transactions would then pass the standard policy, so they would be relayed and stored more readily. But that means nothing if other nodes and miners don't follow suit; flood the network with them and they will still be treated as DoS/spam, a waste of resources.

Ultimately it comes down to how the network as a whole actually operates. Removing a standard-policy restriction alone doesn't establish that something is "not wasted processing." In practice, the judgment is determined by the combination of technical constraints and economic and operational reality.
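To make the policy/consensus distinction concrete, here is a small illustrative bitcoin.conf fragment. These are real Bitcoin Core policy options, but the values are only examples; changing them alters what this one node relays and stores, and has no effect on what is consensus-valid:

```
# bitcoin.conf -- relay/mempool *policy* knobs (illustrative values)
minrelaytxfee=0.00001   # fee-rate floor (BTC/kvB) below which txs are not relayed
maxmempool=300          # cap on mempool memory usage, in MB
datacarrier=1           # whether to relay transactions with OP_RETURN data outputs
datacarriersize=83      # max size of relayed OP_RETURN payloads, in bytes
```

A node that tightens these values rejects more up front; one that loosens them relays more, but, as argued above, only gains anything if the rest of the network follows.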
-
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Bolo
It seems that from 1987 to around 2000 there was a big community of people who played this game called "Bolo". It was a game in which people controlled a tank and killed others while trying to capture bases in team matches. Always 2 teams, from 2 to 16 total players, games could last from 10 minutes to 12 hours. I'm still trying to understand all this.
The game looks silly from some videos you can find today, but apparently it was very deep in strategy because people developed strategy guides and wrote extensively about it and Netscape even supported `bolo:` URLs out of the box.

> The two most important elements on the map are pillboxes and bases. Pillboxes are originally neutral, meaning that they shoot at every tank that happens to get in its range. They shoot fast and with deadly accuracy. You can shoot the pillbox with your tank, and you can see how damaged it is by looking at it. Once the pillbox is subdued, you may run over it, which will pick it up. You may place the pillbox where you want to put it (where it is clear), if you've enough trees to build it back up. Trees are harvested by sending your man outside your tank to forest the trees. Your man (also called a builder) can also lay mines, build roads, and build walls. Once you have placed a pillbox, it will not shoot at you, but only your enemies. Therefore, pillboxes are often used to protect your bases.
That quote was taken from this "augmented FAQ" written by some user. Apparently there were many FAQs for this game. A FAQ is after all just a simple, clear and direct to the point way of writing about anything, previously known as summa[^summa-k], it doesn't have to be related to any actually frequently asked question.
More unexpected Bolo writings include an etiquette guide, an anthropology study and some wonderings on the reverse pill war tactic.
[^summa-k]: It's not the same thing, but I couldn't help but notice the similarity.
-
@ 866e0139:6a9334e5
2025-05-07 08:18:51Author: Nicolas Riedl. This article was written with the Pareto client. You can find all Friedenstaube texts, and further texts on the subject of peace, here. The latest Pareto articles can be found in our Telegram channel.

This article first appeared at Radio München.

The latest Friedenstaube articles are now also available in the Friedenstaube's own Telegram channel.

The horror of war creeps under the skin. Images rise up of mutilated legs and arms, of war-traumatized men fallen silent, of war graves. As mothers, sisters, aunts, and grandmothers, the fear grows that a relative might be infected by the political and media lust for war and actually consider taking part in the armed conflicts drawing ever closer. Our author Nicolas Riedl has written an angry commentary on the occasion of this growing warmongering.

Nicolas Riedl, born in Munich in 1993, studied media, theater, and political science in Erlangen. In critical texts he documents and analyzes the ever more absurd zeitgeist of Western culture. Beyond that, he is a book fanatic, a strict cash payer, and, unusually for his generation, a digitalization grouch. Accordingly, he is not to be found on any social media platform. From 2017 to 2023 he worked for the Rubikon youth editorial team and video editorial team.

LET THE FRIEDENSTAUBE'S WINGS GROW!

Here you can subscribe to the Friedenstaube and have the articles sent to you.

You can already support us now:

- For 50 CHF/EURO you receive an annual subscription to the Friedenstaube.
- For 120 CHF/EURO you receive an annual subscription and a T-shirt/hoodie with the Friedenstaube.
- For 500 CHF/EURO you become a sponsor and receive a lifetime subscription as well as a T-shirt/hoodie with the Friedenstaube.
- From 1000 CHF you become a cooperative member of the Friedenstaube with voting rights (and receive a lifetime subscription and a T-shirt/hoodie).

For deposits in CHF (reference: Friedenstaube):

For deposits in euros:

Milosz Matuschek

IBAN DE 53710520500000814137

BYLADEM1TST

Sparkasse Traunstein-Trostberg

Reference: Friedenstaube

If you would like to contribute in some other way, write to the Friedenstaube at: friedenstaube@pareto.space

Not on Nostr yet and want the full experience (liking, commenting, etc.)? You can zap the author even without a Nostr profile! Create an account on Start. Further onboarding guides are available in the Pareto wiki.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28neuron.vim
I started using this neuron thing to create and update this same zettelkasten, but the existing vim plugin had too many problems, so I forked it and ended up changing almost everything.

Since the upstream repository was somewhat abandoned, most users and people who were trying to contribute upstream migrated to my fork too.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Community
I was an avid IPFS user until yesterday. Many, many times I asked simple questions, for which I couldn't find an answer on the internet, in the #ipfs IRC channel on Freenode. Most of the time I didn't get an answer, and even when I got one it was rarely from someone who knew IPFS deeply. I've had issues go unanswered on js-ipfs repositories for years. One of these was raising awareness of a problem that then got fixed some months later by a complete rewrite; I closed my own issue after realizing that by myself a couple of months later. I don't think the people responsible for the rewrite ever acknowledged that they had fixed my issue.

Some days ago I asked some questions about how the IPFS protocol worked internally, sincerely trying to understand the inefficiencies in finding and fetching content over IPFS. I pointed out that it would be a good idea to have a drawing showing that, so people would understand the difficulties (which I didn't) and wouldn't be pissed off by the slowness. I was told to read the whitepaper. I had already read the whitepaper, but I read the relevant parts again. The whitepaper doesn't explain anything about the DHT and how IPFS finds content. I said that in the room and was told to read it again.

Before anyone misreads this section, I want to say I understand it's a pain to keep answering people on IRC if you're busy developing stuff of interplanetary importance, and that I'm not paying anyone nor do I have the right to be answered. On the other hand, if you're developing a super-important protocol, financed by many millions of dollars, and a lot of people are hitting their heads against your software and there's no one to help them, while you're always busy but never deliver anything that brings joy to your users, something is very wrong. I sincerely don't know what IPFS developers are working on, and I wouldn't doubt they're working on important things if they said so, but what I see – and what many other users see (take a look at the IPFS Discourse forum) – is bugs, bugs all over the place, confusing UX, and almost no help.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On "zk-rollups" applied to Bitcoin
ZK rollups make no sense in bitcoin because there is no "cheap calldata". All data is already ~~cheap~~ expensive calldata.

There could be an onchain zk verification that allows succinct signatures maybe, but never a rollup.

What happens is: you can have one UTXO that contains multiple balances on it, and in each transaction you can recreate that UTXO but alter its state, using a zk proof to compress all the internal transactions that took place.
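As a toy illustration of that construction (not any real protocol: the hash commitment and the comments standing in for a zk proving system are assumptions for the sketch), one UTXO carrying many balances could be rolled forward like this:

```python
import hashlib
import json

def commit(balances: dict) -> bytes:
    """Toy state commitment: hash of the serialized balance map.
    A real system would use a Merkle tree so individual balances can be proven."""
    return hashlib.sha256(json.dumps(balances, sort_keys=True).encode()).digest()

# State held inside a single UTXO: a commitment to everyone's balances
state = {"alice": 50_000, "bob": 20_000}
old_commitment = commit(state)

# A batch of "internal" transfers happens off-chain...
state["alice"] -= 5_000
state["bob"] += 5_000
new_commitment = commit(state)

# ...and the UTXO is respent into a new UTXO carrying new_commitment,
# together with a zk proof that the transition from old_commitment to
# new_commitment followed the rules (no balance created out of thin air).
# The chain must verify that proof, which is exactly the point made next:
# every validator remains aware of these state changes, so this is not "L2".
print(old_commitment.hex(), "->", new_commitment.hex())
```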
The blockchain must be aware of all these new things, so it is in no way "L2".
And you must have an entity responsible for that UTXO and for conjuring the state changes and zk proofs.
But on bitcoin you also must keep the data necessary to rebuild the proofs somewhere else, I'm not sure how can the third party responsible for that UTXO ensure that happens.
I think such a construct is similar to a credit card corporation: one central party upon which everybody depends, zero interoperability with external entities, every vendor must have an account on each credit card company to be able to charge customers, therefore it is not clear that such a thing is more desirable than solutions that are truly open and interoperable like Lightning, which may have its defects but at least fosters a much better environment, bringing together different conflicting parties, custodians, anyone.
-
@ 57d1a264:69f1fee1
2025-05-07 06:56:25Wild parrots tend to fly in flocks, but when kept as single pets, they may become lonely and bored https://www.youtube.com/watch?v=OHcAOlamgDc
Source: https://www.smithsonianmag.com/smart-news/scientists-taught-pet-parrots-to-video-call-each-other-and-the-birds-loved-it-180982041/
originally posted at https://stacker.news/items/973639
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Hindu castes in a new key ("Castas hindus em nova chave")

Shudras seek the maximum good for their own bodies; vaishyas, the maximum good for their own earthly life and that of their family; kshatriyas, the maximum good for society and this earthly world; brahmins seek the maximum good.
-
@ 57d1a264:69f1fee1
Your device, your data. TRMNL's architecture prevents outsiders (including us) from accessing your local network. TRMNL achieves this through one-way communication between client and server, versus the other way around. Learn more.
Learn more at https://usetrmnl.com/
originally posted at https://stacker.news/items/973632
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Economics
Just a bunch of somewhat-related notes.
- notes on "Economic Action Beyond the Extent of the Market", Per Bylund
- Mises' interest rate theory
- Profits, not wages, as the originary factor
- Reisman on opportunity cost
- Money Supply Measurement
- Per Bylund's insight
- Maybe a new approach to the Austrian Business Cycle Theory, some disorganized thoughts
- An argument according to which fractional-reserve banking is merely theft and nothing else
- Conjecture and criticism
- Qual é o economista? (piadas)
- UBI calculations
- Donations on the internet
-
@ 57d1a264:69f1fee1
2025-05-07 06:16:30Here’s Sean Voisen writing about how programming is a feeling:
For those of us who enjoy programming, there is a deep satisfaction that comes from solving problems through well-written code, a kind of ineffable joy found in the elegant expression of a system through our favorite syntax. It is akin to the same satisfaction a craftsperson might find at the end of the day after toiling away on a well-made piece of furniture, the culmination of small dopamine hits that come from sweating the details on something and getting them just right. Maybe nobody will notice those details, but it doesn't matter. We care, we notice, we get joy from the aesthetics of the craft.
This got me thinking about the idea of satisfaction in craft. Where does it come from?
Continue Reading https://blog.jim-nielsen.com/2025/craft-and-satisfaction/
originally posted at https://stacker.news/items/973628
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Splitpages
The simplest possible service: it split PDF pages in half.
Created specially to solve the problem of those scanned books that come with two pages side-by-side as if they were a single page and are much harder to read on Kindle because of that.
It required me to learn about Heroku Buildpacks though, and fork or contribute to a Heroku Buildpack that embedded a mupdf binary.
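The service is gone, but the core operation is easy to reproduce. Here is a minimal sketch using PyMuPDF, which is an assumption on my part (the original service shelled out to a mupdf binary via that buildpack instead); it writes each source page as two half-width pages:

```python
import fitz  # PyMuPDF

def split_pages(src_path: str, dst_path: str) -> None:
    """Write a new PDF where each page of the source becomes two half-width pages."""
    src = fitz.open(src_path)
    dst = fitz.open()
    for page in src:
        r = page.rect
        left = fitz.Rect(r.x0, r.y0, r.x0 + r.width / 2, r.y1)
        right = fitz.Rect(r.x0 + r.width / 2, r.y0, r.x1, r.y1)
        for clip in (left, right):
            new_page = dst.new_page(width=clip.width, height=clip.height)
            # Render the clipped half of the original page onto the new page
            new_page.show_pdf_page(new_page.rect, src, page.number, clip=clip)
    dst.save(dst_path)

split_pages("scanned-book.pdf", "split.pdf")
```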
-
@ 57d1a264:69f1fee1
2025-05-07 06:03:29CryptPad
Collaboration and privacy. Yes, you can have both. Flagship instance of CryptPad, the end-to-end encrypted and open-source collaboration suite. Cloud administered by the CryptPad development team. https://cryptpad.fr/
ONLYOFFICE DocSpace
Document collaboration made simpler. Easily collaborate with customizable rooms. Edit any content you have. Work faster using AI assistants. Protect your sensitive business data. Download or try STARTUP Cloud (Limited-time offer) FREE https://www.onlyoffice.com/
SeaFile
A new way to organize your files. Beyond just syncing and sharing files, Seafile lets you add custom file properties and organize your files in different views. With AI-powered automation for generating properties, Seafile offers a smarter, more efficient way to manage your files. Try it now, free for up to 3 users: https://seafile.com/
SandStorm
An open source platform for self-hosting web apps. Self-host web-based productivity apps easily and securely. Sandstorm is an open source project built by a community of volunteers with the goal of making it really easy to run open source web applications. Try the demo or sign up free: https://alpha.sandstorm.io/apps
NextCloud Hub
A new generation of online collaboration that puts you in control. Nextcloud offers a modern, on premise content collaboration platform with real-time document editing, video chat & groupware on mobile, desktop and web. Sign up for a free Nextcloud account https://nextcloud.com/sign-up/
LinShare
True open source secure file sharing solution. We are committed to providing a reliable open source file-sharing solution, expertly designed to meet the highest standards of diverse industries, such as government and finance. Try the demo: https://linshare.app/
Twake Drive
The open-source alternative to Google Drive. A privacy-first open source workplace from Twake. Improve your efficiency with a truly open source, all-in-one digital suite. Enhance security in every aspect of your professional and private life. Sign up: https://sign-up.twake.app/
SpaceDrive
One Explorer. All Your Files. Unify files from all your devices and clouds into a single, easy-to-use explorer. Designed for creators, hoarders and the painfully disorganized. Download desktop app (mobile coming soon) https://www.spacedrive.com/
ente
Safe home for your photos. Store, share, and discover your memories with end-to-end encryption. End-to-end encryption, durable storage and simple sharing. Packed with these and much more into our beautiful open source apps. Get started: https://web.ente.io
fileStash
Turn your FTP server into... Filestash is the enterprise-grade file manager connecting your storage with your identity provider and authorisations. Try the demo https://demo.filestash.app
STORJ
Disruptively fast. Globally secure. S3-compatible distributed cloud services that make the most demanding workflows fast and affordable. Fast track your journey toward high performance cloud services. Storj pricing is consistent and competitive in meeting or exceeding your cloud services needs. Give the products a try to experience the benefits of the distributed cloud. Get Started https://www.storj.io/get-started
FireFile
The open‑source alternative to Dropbox. Firefiles lets you set up a cloud drive with the backend of your choice and seamlessly manage your files across multiple providers. It revolutionizes cloud storage management by offering a unified platform for all your storage needs. Sign up free: https://beta.firefiles.app
originally posted at https://stacker.news/items/973626
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28tempreites
My first library to get stars on GitHub was a very stupid templating library that used just HTML and HTML attributes ("DSL-free"). I was inspired by http://microjs.com/ at the time and ended up not using the library. Probably no one ever did.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Thoughts on Nostr key management
On Why I don't like NIP-26 as a solution for key management I talked about multiple techniques that could be used to tackle the problem of key management on Nostr.
Here are some ideas that work in tandem:
- NIP-41 (stateless key invalidation)
- NIP-46 (Nostr Connect)
- NIP-07 (signer browser extension)
- Connected hardware signing devices
- other things like musig or frostr keys used in conjunction with a semi-trusted server; or other kinds of trusted software, like a dedicated signer on a mobile device that can sign on behalf of other apps; or even a separate protocol that some people decide to use as the source of truth for their keys, and some clients might decide to use that automatically
- there are probably many other ideas
Some premises I have in my mind (that may be flawed) that base my thoughts on these matters (and cause me to not worry too much) are that
- For the vast majority of people, Nostr keys aren't a target as valuable as Bitcoin keys, so they will probably be ok even without any solution;
- Even when you lose everything, identity can be recovered -- slowly and painfully, but still --, unlike money;
- Nostr is not trying to replace all other forms of online communication (even though when I think about this I can't imagine one thing that wouldn't be nice to replace with Nostr) or of offline communication, so there will always be ways.
- For the vast majority of people, losing keys and starting fresh isn't a big deal. It is a big deal when you have followers and an online persona and your life depends on that, but how many people are like that? In the real world I see people deleting social media accounts all the time and creating new ones, people losing their phone numbers or other accounts associated with their phone numbers, and not caring very much -- they just find a way to notify friends and family and move on.
We can probably come up with some specs to ease the "manual" recovery process, like social attestation and explicit signaling -- i.e., Alice, Bob and Carol are friends; Alice loses her key; Bob sends a new Nostr event kind to the network saying what is Alice's new key; depending on how much Carol trusts Bob, she can automatically start following that and remove the old key -- or something like that.
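As a sketch of what such explicit signaling could look like (entirely hypothetical: the kind number and tag layout are invented for this example, not taken from any adopted spec), Bob's attestation could be an ordinary signed event that clients or standalone microapps interpret:

```python
import json
import time

# Hypothetical "key migration attestation" event, signed by Bob.
# Clients that trust Bob enough could auto-update their follow lists.
attestation = {
    "kind": 30100,  # invented kind number, for illustration only
    "created_at": int(time.time()),
    "tags": [
        ["p", "<alice-old-pubkey>"],  # the key being replaced
        ["p", "<alice-new-pubkey>"],  # the replacement key
    ],
    "content": "I attest that Alice lost her old key and now uses the new one.",
    # "id", "pubkey" and "sig" would be filled in by Bob's signer
}
print(json.dumps(attestation, indent=2))
```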
One nice thing about some of these proposals, like NIP-41, or the social-recovery method, or the external-source-of-truth-method, is that they don't have to be implemented in any client, they can live in standalone single-purpose microapps that users open or visit only every now and then, and these can then automatically update their follow lists with the latest news from keys that have changed according to multiple methods.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Shitcoinery
IPFS was advertised to the Ethereum community since the beginning as a way to "store" data for their "dApps". I don't think this is harmful in any way, but for some reason it may have led IPFS developers to focus too much on Ethereum stuff. Once I watched a talk showing libp2p developers – despite being ignored by the Ethereum team (which ended up creating their own agnostic p2p library) – dedicating an enormous amount of work to getting a libp2p app running in the browser talking to a normal Ethereum node.

The always somewhat-abandoned "Awesome IPFS" site is a big repository of "dApps", some of which don't even have their landing pages up anymore: useless Ethereum smart contracts that for some reason use IPFS to store whatever useless data their users produce.

Again, it isn't a problem per se that Ethereum people are using IPFS, but it is at least confusing, maybe misleading, that when you search for IPFS most of the use-cases are actually Ethereum useless-cases.
See also
- Bitcoin, the only non-shitcoin
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28litepub
A Go library that abstracts all the burdensome ActivityPub things and provides just the right amount of helpers necessary to integrate an existing website into the "fediverse" (what an odious name). Made for the gravity integration.
See also
-
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28questo.email
This was a thing done in a brief period when I liked the idea of "indiewebcamp", a stupid movement of people saying everybody should have their own site and post their lives on it.
From the GitHub postmortem:
questo.email was a service that integrated email addresses into the indieweb ecosystem by providing email-to-note and email-to-webmention triggers, which could be used for people to comment through webmention using their email addresses, and be replied, and also for people to send messages from their sites directly to the email addresses of people they knew; Questo also worked as an IndieAuth provider that used people's email addresses and Mozilla Persona.
It was live from December 2014 through December 2015.
Here's how the home page looked:
See also
- jekmentions, another thing related to "indieweb"
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28nix
To install neuron I was forced to download and install nix. I couldn't remember why I hadn't kept using that marvelous package-installation system ever since the first time I tried it, years ago.

What a pain to get it working with `fish`, but still much less pain than the other time. I had to install something called `fish-foreign-environment` (using nix itself!, since the other option was `oh-my-fish` or some other garbage like that) and then use it to apply the bash shell definitions directly in `fish`.

And then I also remembered that `/nix/store` gets way too full; the thing installs everything that exists in this world from scratch. It's only for very rich computers, but let's see how it goes. I'm liking neuron (see, I'm using it as a diary), so I'll have to keep nix around.
fica cheio demais, o negócio instala tudo que existe neste mundo a partir do zero. É só para computadores muito ricos, mas vamos ver como vai ser. Estou gostando do neuron (veja, estou usando como diário), então vou ter que deixar o nix aí. -
@ 502ab02a:a2860397
2025-05-07 01:08:58Consider this week a little break to relax and de-stress. Let's look back at the past a bit, like Star Wars releasing episodes 4-5-6 and then going back to 1-2-3, haha.

Have you ever heard of the Nuremberg Trials? A quick recap: Nuremberg (Nürnberg) is the name of a city in Germany that once served as the stage for historic trials after World War II.

The "Nuremberg Trials" were the prosecution of those involved in Nazi Germany's war crimes after World War II. In 1945-46 the victorious Allies brought Nazi leaders, politicians, doctors, and scientists to court. The charges weren't only about killing people, but included violations of basic human morality, such as medical experiments on prisoners without any request for consent.

From these trials emerged an ethical principle called the "Nuremberg Code", which became the foundation of modern medical experimentation. The heart of this code is "informed consent": if you are going to do anything to someone's body, you must obtain their willing consent, given with complete information. That is the lesson drawn from the wounds of a world war.

But then... in 2020 the world entered an era where certain people said "you must trust the experts". Anyone worried = someone who doesn't care about society. Anyone asking too many questions = anti-science. Voluntary consent started to become merely symbolic.

Lately we may have heard news or theories on the internet about the term "Nuremberg 2.0", popping up in quiet forum threads, underground clips, and odd panel discussions that nobody would cite on a TED Talk stage. Groups using this term usually mean a demand for "trials" or "accountability" for politicians, scientists, doctors, or organizations involved in issuing orders, coercion, censoring information that contradicted the state's line, or publishing information without transparency. They see those policies as violating the public's rights and freedoms at a level comparable to "crimes against humanity", hence the proposed idea of "Nuremberg 2.0".

They aren't demanding just transparency; they want a review of who exactly violated the ethical principles the world agreed on 80 years ago.

Nuremberg was the trial of people who used state power to kill deliberately. Nuremberg 2.0 is a warning that "using fear to dominate freedom" may not be so different.

Now, have you ever wondered what any of this has to do with food?

Because since the pandemic crisis we have started to see "monopoly science" running the game. Tech companies have moved into making food. The new name for a steak has become "alternative protein". Lab food has become "the world's salvation". Chemicals are packed in to replace real meat, but the label says "eco-friendly, safe, sustainable".

But looking a little deeper, some groups are starting to see something more chilling, because it is a "health control system" that uses "fear" as its starter culture and "monopoly science" as its mechanism.

Then... everything gets served up beautifully as "innovation for the future", whether it's new kinds of supplements, lab-grown meat, or food you don't even need to chew.

Just imagine: what if nutrition could be controlled, the way we were once compelled into other things? One day we may be asked to "eat" whatever the future health system says is good. And if you say you don't want to... they may not forbid it, but the health app will warn you that "your behavior puts this planet at risk", your health credit score will drop, and the discount on microbe-protein meal boxes will never land in your account again.

Right... it's not like coercion. But it is the construction of a "one-way system" that makes anyone who wants to step out of line feel like they're stepping off a cliff.

So Nuremberg 2.0 isn't just about the past or a pandemic. It's a mirror reflecting that "if we don't learn from history, we may end up eating a rerun of it for dinner."

The future of food may not be on the plate but in policy, in companies producing protein from air, in capital that has bought up scientists across the whole field. And if we close our eyes once more, many fear that... a new round of Nuremberg trials may no longer be possible, because this time the villains won't be holding guns; they may be holding world-class nutrition certifications in their hands instead.
#pirateketo #กูต้องรู้มั๊ย #ม้วนหางสิลูก #siamstr
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28nostr - Notes and Other Stuff Transmitted by Relays
The simplest open protocol that is able to create a censorship-resistant global "social" network once and for all.
It doesn't rely on any trusted central server, hence it is resilient; it is based on cryptographic keys and signatures, so it is tamperproof; it does not rely on P2P techniques, therefore it works.
Very short summary of how it works, if you don't plan to read anything else:
Everybody runs a client. It can be a native client, a web client, etc. To publish something, you write a post, sign it with your key and send it to multiple relays (servers hosted by someone else, or yourself). To get updates from other people, you ask multiple relays if they know anything about these other people. Anyone can run a relay. A relay is very simple and dumb. It does nothing besides accepting posts from some people and forwarding to others. Relays don't have to be trusted. Signatures are verified on the client side.
This is needed because other solutions are broken:
The problem with Twitter
- Twitter has ads;
- Twitter uses bizarre techniques to keep you addicted;
- Twitter doesn't show an actual historical feed from people you follow;
- Twitter bans people;
- Twitter shadowbans people.
- Twitter has a lot of spam.
The problem with Mastodon and similar programs
- User identities are attached to domain names controlled by third-parties;
- Server owners can ban you, just like Twitter; Server owners can also block other servers;
- Migration between servers is an afterthought and can only be accomplished if servers cooperate. It doesn't work in an adversarial environment (all followers are lost);
- There are no clear incentives to run servers, therefore they tend to be run by enthusiasts and people who want to have their name attached to a cool domain. Then, users are subject to the despotism of a single person, which is often worse than that of a big company like Twitter, and they can't migrate out;
- Since servers tend to be run amateurishly, they are often abandoned after a while — which is effectively the same as banning everybody;
- It doesn't make sense to have a ton of servers if updates from every server will have to be painfully pushed (and saved!) to a ton of other servers. This point is exacerbated by the fact that servers tend to exist in huge numbers, therefore more data has to be passed to more places more often;
- For the specific example of video sharing, ActivityPub enthusiasts realized it would be completely impossible to transmit video from server to server the way text notes are, so they decided to keep the video hosted only from the single instance where it was posted to, which is similar to the Nostr approach.
The problem with SSB (Secure Scuttlebutt)
- It doesn't have many problems. I think it's great. In fact, I was going to use it as a basis for this, but
- its protocol is too complicated because it wasn't thought about being an open protocol at all. It was just written in JavaScript in probably a quick way to solve a specific problem and grew from that, therefore it has weird and unnecessary quirks like signing a JSON string which must strictly follow the rules of ECMA-262 6th Edition;
- It insists on having a chain of updates from a single user, which feels unnecessary to me and something that adds bloat and rigidity to the thing — each server/user needs to store all the chain of posts to be sure the new one is valid. Why? (Maybe they have a good reason);
- It is not as simple as Nostr, as it was primarily made for P2P syncing, with "pubs" being an afterthought;
- Still, it may be worth considering using SSB instead of this custom protocol and just adapting it to the client-relay server model, because reusing a standard is always better than trying to get people in a new one.
The problem with other solutions that require everybody to run their own server
- They require everybody to run their own server;
- Sometimes people can still be censored in these because domain names can be censored.
How does Nostr work?
- There are two components: clients and relays. Each user runs a client. Anyone can run a relay.
- Every user is identified by a public key. Every post is signed. Every client validates these signatures (a sketch of the id/signature computation follows this list).
- Clients fetch data from relays of their choice and publish data to other relays of their choice. A relay doesn't talk to another relay, only directly to users.
- For example, to "follow" someone a user just instructs their client to query the relays it knows for posts from that public key.
- On startup, a client queries data from all relays it knows for all users it follows (for example, all updates from the last day), then displays that data to the user chronologically.
- A "post" can contain any kind of structured data, but the most used ones are going to find their way into the standard so all clients and relays can handle them seamlessly.
How does it solve the problems the networks above can't?
- Users getting banned and servers being closed
  - A relay can block a user from publishing anything there, but that has no effect on them as they can still publish to other relays. Since users are identified by a public key, they don't lose their identities and their follower base when they get banned.
  - Instead of requiring users to manually type new relay addresses (although this should also be supported), whenever someone you're following posts a server recommendation, the client should automatically add that to the list of relays it will query.
  - If someone is using a relay to publish their data but wants to migrate to another one, they can publish a server recommendation to that previous relay and go;
  - If someone gets banned from many relays such that they can't get their server recommendations broadcasted, they may still let some close friends know through other means with which relay they are publishing now. Then, these close friends can publish server recommendations to that new server, and slowly, the old follower base of the banned user will begin finding their posts again from the new relay.
  - All of the above is valid too for when a relay ceases its operations.
- Censorship-resistance
  - Each user can publish their updates to any number of relays.
  - A relay can charge a fee (the negotiation of that fee is outside of the protocol for now) from users to publish there, which ensures censorship-resistance (there will always be some Russian server willing to take your money in exchange for serving your posts).
- Spam
  - If spam is a concern for a relay, it can require payment for publication or some other form of authentication, such as an email address or phone, and associate these internally with a pubkey that then gets to publish to that relay — or other anti-spam techniques, like hashcash or captchas. If a relay is being used as a spam vector, it can easily be unlisted by clients, which can continue to fetch updates from other relays.
- Data storage
  - For the network to stay healthy, there is no need for hundreds of active relays. In fact, it can work just fine with just a handful, given the fact that new relays can be created and spread through the network easily in case the existing relays start misbehaving. Therefore, the amount of data storage required, in general, is relatively less than Mastodon or similar software.
  - Or considering a different outcome: one in which there exist hundreds of niche relays run by amateurs, each relaying updates from a small group of users. The architecture scales just as well: data is sent from users to a single server, and from that server directly to the users who will consume that. It doesn't have to be stored by anyone else. In this situation, it is not a big burden for any single server to process updates from others, and having amateur servers is not a problem.
- Video and other heavy content
  - It's easy for a relay to reject large content, or to charge for accepting and hosting large content. When information and incentives are clear, it's easy for the market forces to solve the problem.
- Techniques to trick the user
  - Each client can decide how to best show posts to users, so there is always the option of just consuming what you want in the manner you want — from using an AI to decide the order of the updates you'll see to just reading them in chronological order.
FAQ
- This is very simple. Why hasn't anyone done it before?
I don't know, but I imagine it has to do with the fact that people making social networks are either companies wanting to make money or P2P activists who want to make a thing completely without servers. They both fail to see the specific mix of both worlds that Nostr uses.
- How do I find people to follow?
First, you must know them and get their public key somehow, either by asking or by seeing it referenced somewhere. Once you're inside a Nostr social network you'll be able to see them interacting with other people and then you can also start following and interacting with these others.
- How do I find relays? What happens if I'm not connected to the same relays someone else is?
You won't be able to communicate with that person. But there are hints on events that can be used so that your client software (or you, manually) knows how to connect to the other person's relay and interact with them. There are other ideas on how to solve this too in the future but we can't ever promise perfect reachability, no protocol can.
- Can I know how many people are following me?
No, but you can get some estimates if relays cooperate in an extra-protocol way.
- What incentive is there for people to run relays?
The question is misleading. It assumes that relays are free dumb pipes that exist such that people can move data around through them. In this case yes, the incentives would not exist. This in fact could be said of DHT nodes in all other p2p network stacks: what incentive is there for people to run DHT nodes?
- Nostr enables you to move between server relays or use multiple relays but if these relays are just on AWS or Azure what’s the difference?
There are literally thousands of VPS providers scattered all around the globe today, there is not only AWS or Azure. AWS or Azure are exactly the providers used by single centralized service providers that need a lot of scale, and even then not just these two. For smaller relay servers any VPS will do the job very well.
-
@ a9e24cc2:597d8933
2025-05-07 01:06:44𝐌𝐄𝐒𝐒𝐀𝐆𝐄 BlackHat_Nexus 𝐅𝐎𝐑 𝐀𝐍𝐘 𝐊𝐈𝐍𝐃 𝐎𝐅 𝐒𝐄𝐑𝐕𝐈𝐂𝐄. 𝐑𝐄𝐂𝐎𝐕𝐄𝐑 𝐘𝐎𝐔𝐑 𝐀𝐂𝐂𝐎𝐔𝐍𝐓. Fast, Available and Reliable for any of the following services: 🤳 Recovery of lost funds 🤳 Facebook Hack 🤳 WhatsApp Hack 🤳 Instagram Hack 🤳 Spying 🤳 Windows Hacking 🤳 Recover lost wallet 🤳 Credit score trick 🤳 Recover Password 🤳 Gmail Hack 🤳 SnapChat Hacking 🤳 Cellphone Monitoring 🤳 Tik Tok Hack 🤳 Twitter Hack 🤳 Lost Phone Tracking 🤳 Lost laptop Tracking 🤳 Lost Car Tracking 🤳 Cloning WhatsApp 🤳 Cryptocurrency Wallet Hacking 🤳 iPhone unlock 🤳 Got banned 🤳 Private Number available 🤳 Telegram hacking 🤳 Websites hacking 🤳 Hack University 🤳 IOS and Android hack 🤳 Wifi Hacking 🤳 CCTV hacking 🤳 Hack Bot Game 🤳 Free fire hack 🤳 Changing of school grades 🤳 Cards 💳 hacking. No 🆓 services 🚫. WhatsApp: +1 3606068592. Send a DM: https://t.me/BlackHat_Nexus @BlackHat_Nexus
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Rumple
a payments network based on trust channels
This is the description of a Lightning-like network that will work only with credit or trust-based channels and exist alongside the normal Lightning Network. I imagine some people will think this is undesirable and at the same time very easy to do (such that if it doesn't exist yet it must be because no one cares), but in fact it is a very desirable thing -- which I hope I can establish below -- and at the same time a very non-trivial problem to solve, as the history of Ryan Fugger's Ripple project and later copies of it shows.
Read these first to get the full context:
- Ryan Fugger's Ripple
- Ripple and the problem of the decentralized commit
- The Lightning Network solves the problem of the decentralized commit
- Parallel Chains
Explanation about the name
Since we're copying the fundamental Ripple idea from Ryan Fugger, and since the name "Ripple" is now associated with a scam coin called XRP, and since Ryan Fugger has changed the name of his old website "Ripplepay" to "Rumplepay", we will follow his lead here. If "Ripplepay" was the name of a centralized prototype of the open peer-to-peer network "Ripple", now that the centralized version is called "Rumplepay" the peer-to-peer version must be called "Rumple".
Now the idea
Basically we copy the Lightning Network, but without HTLCs or channels being opened and closed with funds committed to them on multisig Bitcoin transactions published to the blockchain. Instead we use pure trust relationships like the original Ripple concept.
And we use the blockchain commit method, but instead of spending an absurd amount of money to use the actual Bitcoin blockchain we use a parallel chain.
How exactly -- a protocol proposal attempt
It could work like this:
The parallel chain, or "Rumple Chain"
- We define a parallel chain with a genesis block;
- Following blocks must contain:
  a. the ID of the previous block;
  b. a list of up to 32768 entries of arbitrary 32-byte values;
  c. an ID constituted by sha256(the previous block ID + the merkle root of all the entries).
- To be mined, each parallel block must be included in the Bitcoin chain, as explained above.
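To make this structure concrete, here is a minimal sketch in Python. The exact merkle construction is not specified here, so the pairwise-sha256 tree below (including the duplication of the last node on odd levels) is an assumption for illustration only:

```python
import hashlib

def merkle_root(entries: list[bytes]) -> bytes:
    # Assumed pairwise-sha256 tree; the spec sketch above doesn't fix
    # the exact construction, so odd levels duplicate their last node.
    level = entries[:] or [b"\x00" * 32]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def block_id(prev_id: bytes, entries: list[bytes]) -> bytes:
    # ID constituted by sha256(previous block ID + merkle root of entries).
    assert len(entries) <= 32768 and all(len(e) == 32 for e in entries)
    return hashlib.sha256(prev_id + merkle_root(entries)).digest()
```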
Now that we have a structure for a simple "blockchain" that is completely useless, just blocks over blocks of meaningless values, we proceed to the next step of assigning meaning to these values.
The off-chain payments network, or "Rumple Network"
- We create a network of nodes that can talk to each other via TCP messages (all details are the same as the Lightning Network, except where mentioned otherwise);
- These nodes can create trust channels to each other. These channels are backed by nothing except the willingness of one peer to pay the other what is owed.
- When Alice creates a trust channel with Bob (`Alice trusts Bob`), contrary to what happens in the Lightning Network, it's Alice that can immediately receive payments through that channel, and everything Alice receives will be an IOU from Bob to Alice. So Alice should never open a channel to Bob unless Alice trusts Bob. But Alice can also choose the amount of trust she has in Bob: she can, for example, open a very small channel with Bob, which means she will only lose a few satoshis if Bob decides to exit-scam her. (In the original Ripple examples these channels were always depicted as friend relationships, and they can continue being that, but it's expected -- given the experience of the Lightning Network -- that the bulk of the channels will exist between users and wallet-provider nodes that will act as hubs.)
- As Alice receives a payment through her channel with Bob, she becomes a creditor and Bob a debtor, i.e., the balance of the channel moves a little to her side. Now she can use these funds to make payments over that channel (or make a payment that combines funds from multiple channels using MPP).
- If at any time Alice decides to close her channel with Bob, she can send all the funds she has standing there to somewhere else (for example, another channel she has with someone else, another wallet somewhere else, a shop that is selling some good or service, or a service that will aggregate all funds from all her channels and send a transaction to the Bitcoin chain on her behalf).
- If at any time Bob leaves the network, Alice is entitled by Bob's cryptographic signatures to knock on his door and demand payment, or go to a judge and ask him to force Bob to pay, or share the signatures and commitments online and hurt Bob's reputation with the rest of the network (but yes, none of these things is good enough, and if Bob is a very dishonest person none of them is likely to save Alice's funds).
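A minimal sketch of this trust-channel bookkeeping in Python follows. The names (`TrustChannel`, `limit_sat`, and so on) are illustrative assumptions, not part of any spec:

```python
from dataclasses import dataclass

@dataclass
class TrustChannel:
    truster: str              # Alice: the peer extending trust
    trusted: str              # Bob: the peer whose IOUs are accepted
    limit_sat: int            # the most Alice is willing to be owed
    owed_to_truster: int = 0  # current balance of IOUs from Bob to Alice

    def receive(self, amount_sat: int) -> None:
        # Alice receives a payment: Bob now owes her a little more.
        if self.owed_to_truster + amount_sat > self.limit_sat:
            raise ValueError("payment exceeds the trust extended on this channel")
        self.owed_to_truster += amount_sat

    def spend(self, amount_sat: int) -> None:
        # Alice uses credit standing on the channel to pay or forward.
        if amount_sat > self.owed_to_truster:
            raise ValueError("insufficient credit on this channel")
        self.owed_to_truster -= amount_sat
```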
The payment flow
- Suppose there exists a route `Alice->Bob->Carol` and Alice wants to send a payment to Carol.
- First Alice reads an invoice she received from Carol. The invoice (which can be pretty similar to, or maybe even the same as, BOLT11) contains a payment hash `h` and information about how to reach Carol's node, optionally an amount. Let's say it's 100 satoshis.
- Using the routing information she gathered, Alice builds an onion and sends it to Bob, and at the same time she offers Bob a "conditional IOU". That stands for a signed commitment that Alice will owe Bob 100 satoshis if in the next 50 blocks of the Rumple Chain there appears a block containing a preimage `p` such that `sha256(p) == h`.
- Bob peels the onion and discovers that he must forward that payment to Carol, so he forwards the peeled onion and offers a conditional IOU to Carol with the same `h`. Bob doesn't know Carol is the final recipient of the payment; it could potentially go on and on.
- When Carol gets the conditional IOU from Bob, she makes a list of all the nodes who have announced themselves as miners (which is not something I have mentioned before, but nodes that are acting as miners must announce themselves somehow) and are online and bidding for the next Rumple block. Each of these miners will have previously published a random 32-byte value `v` they intend to include in their next block.
- Carol sends payments through routes to all (or a big number) of these miners, but this time the conditional IOU contains two conditions (values that must appear in a block for the IOU to be valid): `p` such that `sha256(p) == h` (the same that featured in the invoice) and `v` (which must be unique and constant for each miner, something that is easily verifiable by Carol beforehand). Also, instead of these conditions being valid for the next 50 blocks, they are valid only for the single next block.
- Now Carol broadcasts `p` to the mempool and hopes one of the miners to which she sent conditional payments sees it and, allured by the possibility of cashing in Carol's payment, includes `p` in the next block. If that does not happen, Carol can try again in the next block.
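The settlement condition Carol offers each miner can be sketched as below, assuming block entries are the raw 32-byte values described earlier (the helper name is hypothetical):

```python
import hashlib

def miner_iou_conditions_met(block_entries: set[bytes],
                             h: bytes, v: bytes) -> bool:
    # Carol's IOU to a miner holds only if the single next block contains
    # both the miner's pre-announced value v and some entry p with
    # sha256(p) == h (the payment hash from the invoice).
    preimage_revealed = any(hashlib.sha256(p).digest() == h
                            for p in block_entries)
    return preimage_revealed and v in block_entries
```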
Why bother with this at all?
-
The biggest advantage of Lightning is its openness
It has been said multiple times that if trust is involved then we don't need Lightning, we can use Coinbase, or worse, Paypal. This is very wrong. Lightning is good especially because it serves as a bridge between Coinbase, Paypal, other custodial providers and someone running their own node. All of these can transact freely across the network and pay each other without worrying about who is on which provider or setup.
Rumple inherits that openness. In a Rumple Network anyone is free to open new trust channels and immediately route payments to anyone else.
Also, since Rumple payments are also based on the reveal of a preimage it can do swaps with Lightning inside a payment route from day one (by which I mean one can pay from Rumple to Lightning and vice-versa).
-
Rumple fixes Lightning's fragility
Lightning is too fragile.
It's known that Lightning is vulnerable to multiple attacks -- like the flood-and-loot attack, for example, which, although not easy to execute, is still dangerous even when it fails. Given the existence of these attacks, it's important to never open channels with random anonymous people. Some degree of trust must exist between peers.
But one does not even have to consider attacks. The creation of HTLCs is a liability that every node takes on multiple times during its life. Every initiated, received or forwarded payment requires adding one HTLC and then removing it from the commitment transaction.
Another issue that makes trust necessary between peers is the fact that channels can be closed unilaterally. Although this is a feature, it is also a bug when considering high-fee environments. Imagine you pay $2 in fees to open a channel; your peer may close it unilaterally in the next second and then you have to pay another $15 to close the channel. The opener pays (this is also a feature that can double as a bug by itself). Even if it's not you opening the channel, a peer can open a channel with you, make a payment, then close the channel, and now you're left with, say, an output of 800 satoshis, which is equal to zero if network fees are high.
So you should only open channels with people you know and know aren't going to actively try to hack you and people who are not going to close channels and impose unnecessary costs on you. But even considering a fully trusted Lightning Network, even if -- to be extreme -- you only opened channels with yourself, these channels would still be fragile. If some HTLC gets stuck for any reason (peer offline or some weird small incompatibility between node softwares) and you're forced to close the channel because of that, there are the extra costs of sweeping these UTXO outputs plus the total costs of closing and reopening a channel that shouldn't have been closed in the first place. Even if HTLCs don't get stuck, a fee renegotiation event during a mempool spike may cause channels to force-close, become valueless or settle for very high closing fee.
Some of these issues are mitigated by Eltoo, others by only having channels with people you trust. But the issues referenced above, plus the griefing attack and in general the ability of anyone to spam the network for free with payments that can stay pending forever or fail repeatedly, make it very fragile.
Rumple solves most of these problems by not having to touch the blockchain at all. Fee negotiation makes no sense. Opening and closing channels is free. Flood-and-loot is a non-issue. The griefing attack can be still attempted as funds in trust channels must be reserved like on Lightning, but since there should be no theoretical limit to the number of prepared payments a channel can have, the griefing must rely on actual amounts being committed, which prevents large attacks from being performed easily.
-
Rumple fixes Lightning's unsolvable reputation issues
In the Lightning Conference 2019, Rusty Russell promised there would be pre-payments on Lightning someday, since everybody was aware of potential spam issues and pre-payments would be the way to solve that. Fast-forward to November 2020 and these pre-payments have become an apparently unsolvable problem[^thread-402]: no one knows how to implement them reliably without destroying privacy completely or introducing worse problems.
Replacing these payments with tables of reputation between peers is also an unsolved problem[^reputation-lightning], for the same reasons explained in the thread above.
-
Rumple solves the hot wallet problem
Since you don't have to use Bitcoin keys or sign transactions with a Rumple node, only your channel trust is at risk at any time.
-
Rumple ends custodianship
Since no one is storing other people's funds, a big hub or wallet provider can be used in multiple payment routes, but it cannot be immediately classified as a "custodian". At best, it will be a big debtor.
-
Rumple is fun
Opening channels with strangers is boring. Opening channels with friends and people you trust even a little makes that relationship grow stronger and reinforces the trust. (But of course, as happens in the Lightning Network today, if Rumple is successful the bulk of trust will be from isolated users to big reliable hubs.)
Questions or potential issues
-
So many advantages, yes, but trusted? Custodial? That's easy and stupid!
Well, an enormous part of the current Lightning Network (and also onchain Bitcoin wallets) already rests on trust, mainly trust between users and custodial wallet providers like ZEBEDEE, Alby, Wallet-of-Satoshi and others. Worse: on the current Lightning Network users not only trust, they also expose their entire transaction history to these providers[^hosted-channels].
Besides that, as detailed in point 3 of the previous section, there are many unsolvable issues on the Lightning protocol that make each sovereign node dependent on some level of trust in its peers (and the network in general dependent on trusting that no one else will spam it to death).
So, given the current state of the Lightning Network, to trust peers as Rumple requires is not a giant change -- but it is still a significant change: in Rumple you shouldn't open a large trust channel with someone just because they look trustworthy, you must personally know that person and only put in what you're willing to lose. In known brands that have reputation to lose you can probably deposit more trust, same for long-term friends, and that's all. Still, it is probably good enough, given the existence of MPP payments and the fact that the purpose of Rumple is to be a payments network for day-to-day purchases and not a way to buy real estate.
-
Why would anyone run a node in this parallel chain?
I don't know. Ideally every server running a Rumple Network node will be running a Bitcoin node and a Rumple chain node. Besides using it to confirm and publish your own Rumple Network transactions it can be set to do BMM mining automatically and maybe earn some small fees comparable to running a Lightning routing node or a JoinMarket yield generator.
Also it will probably be very lightweight, as pruning is completely free and no verification-since-the-genesis-block will take place.
-
What is the maturity of the debt that exists in the Rumple Network or its legal status?
By default it is to be understood as being payable on demand for payments occurring inside the network (as credit can be used to forward or initiate payments by the creditor using that channel). But details of settlement outside the network or what happens if one of the peers disappears cannot be enforced or specified by the network.
Perhaps some standard optional settlement methods (like a Bitcoin address) can be announced and negotiated upon channel creation inside the protocol, but nothing more than that.
[^thread-402]: Read at least the first 10 messages of the thread to see how naïve proposals like the ones you or I might have thought of are brought up and then dismantled very carefully by the group of people most committed to getting Lightning to work properly.
[^reputation-lightning]: See also the footnote at Ripple and the problem of the decentralized commit.
[^hosted-channels]: Although that second part can be solved by hosted channels.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Boardthreads
This was a very badly done service for turning a Trello list into a helpdesk UI.
Surprisingly, it had more paying users than Websites For Trello, which I was working on simultaneously and dedicating much more time to.
The Neo4j database I used for this was a very poor choice; it was probably the cause of all the bugs.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Webvatar
Like Gravatar, but using profile images from websites tagged with "microformats-2" tags, as people from the indiewebcamp movement liked. It fell back to favicons, Gravatar and procedural avatar generators.
No one really used this, despite people saying they liked it. Since I was desperate to get some of my programs appreciated by someone, I even bought a domain. It was sad, but an enriching experience.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The problem with ION
ION is a DID method based on a thing called "Sidetree".
I can't say for sure what the problem with ION is, because I don't understand the design, even though I have read all I could and asked everybody I knew. All available information only touches on the high-level aspects of it (and of course its amazing wonders) and no one has ever bothered to explain the details. I've also asked the main designer of the protocol, Daniel Buchner, but he may have thought I was trolling him on Twitter and refused to answer, instead pointing me to an incomplete spec on the Decentralized Identity Foundation website that I had already read before. I even tried to join the DIF as a member so I could join their closed community calls and hear what they say, maybe eventually ask a question, so I could understand it, but my entrance was ignored, and after many months and a nudge from another member I was told I had to do a KYC process to be admitted, which I refused.
One thing I know is:
- ION is supposed to provide a way to rotate keys seamlessly and automatically without losing the main identity (and the ION proponents also claim there are no "master" keys because these can also be rotated).
- ION is also not a blockchain, i.e. it doesn't have a deterministic consensus mechanism, and it is decentralized, i.e. anyone can publish data to it, there doesn't have to be a single central server, there may be holes in the available data and the protocol doesn't treat that as a problem.
- From all we know about years of attempts to scale Bitcoin and develop offchain protocols it is clear that you can't solve the double-spend problem without a central authority or a kind of blockchain (i.e. a decentralized system with deterministic consensus).
- Rotating keys also suffers from the double-spend problem: whenever you rotate a key it is as if it were "spent", and you aren't supposed to be able to use it again.
The logical conclusion of the 4 assumptions above is that ION is flawed: it can't provide the key rotation it says it can if it is not a blockchain. To make the double-spend analogy concrete, consider the sketch below.
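This is a minimal sketch (with a made-up operation format, not Sidetree's actual one) of how two conflicting rotations resolve differently depending on which one a node happens to see first:

```python
# Two conflicting rotations signed with the same current key. Without a
# deterministic consensus layer there is no canonical ordering, so two
# nodes can resolve the same identity to different keys -- the key
# rotation equivalent of a double-spend.
ops_seen_by_node_a = [("rotate", "key1", "key2")]
ops_seen_by_node_b = [("rotate", "key1", "key3")]

def resolve(ops: list[tuple[str, str, str]]) -> str:
    current = "key1"
    for op, old, new in ops:
        if op == "rotate" and old == current:
            current = new  # the first valid rotation wins; later ones are ignored
    return current

assert resolve(ops_seen_by_node_a) != resolve(ops_seen_by_node_b)
```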
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28bolt12 problems
- clients can't programmatically build new offers by changing a path or query params (services like zbd.gg or lnurl-pay.me won't work)
- impossible to use in a load-balanced custodian way -- since offers would have to be pregenerated and tied to a specific lightning node.
- the existence of fiat currency fields makes it so wallets have to fetch exchange rates from somewhere on the internet (or offer a bad user experience), using HTTP which hurts user privacy.
- the vendor field is misleading, can be phished very easily, not as safe as a domain name.
- onion messages are an improvement over fake HTLC-based payments as a way of transmitting data, for sure. but we must decide if they are (i) suitable for transmitting all kinds of data over the internet, a replacement for tor; or (ii) not something that will scale well or on which we can count on for the future. if there was proper incentivization for data transmission it could end up being (i), the holy grail of p2p communication over the internet, but that is a very hard problem to solve and not guaranteed to yield the desired scalability results. since not even hints of attempting to solve that are being made, it's safer to conclude it is (ii).
bolt12 limitations
- not flexible enough. there are some interesting fields defined in the spec, but who gets to add more fields later if necessary? very unclear.
- services can't return any actionable data to the users who paid for something. it's unclear how business can be conducted without an extra communication channel.
bolt12 illusions
- recurring payments are not really solved, there is just a spec that defines intervals. the actual implementation must still be done by each wallet and service. the recurring payment cannot be enforced, the wallet must still initiate the payment. even if the wallet is evil and is willing to initiate a payment without the user knowing, it still needs to have funds, channels, be online, connected etc., so it's not as if the services could rely on the payments being delivered in time.
- people seem to think it will enable pushing payments to mobile wallets, which it does not and cannot.
- there is a confusion of contexts: it looks like offers are superior to lnurl-pay, for example, because they don't require domain names. domain names, though, are common and well-established among internet services and stores, because these services have websites, so this is not really an issue. it is an issue, though, for people that want to receive payments in their homes. for these, indeed, bolt12 offers a superior solution -- but at the same time bolt12 seems to be selling itself as a tool for merchants and service providers when it includes and highlights features as recurring payments and refunds.
- the privacy gains for the receiver that are promoted as being part of bolt12 in fact come from a separate proposal, blinded paths, which should work for all normal lightning payments and indeed are a very nice solution. they are (or at least were, and should be) independent from the bolt12 proposal. a separate proposal, which can be (and already is being) used right now, also improves privacy for the receiver very much anyway: it's called trampoline routing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The Lightning Network solves the problem of the decentralized commit
Before reading this, see Ripple and the problem of the decentralized commit.
The Bitcoin Lightning Network can be thought of as a system similar to Ripple: there are conditional IOUs (HTLCs) that are sent in "prepare"-like messages across a route, and a secret `p` that must travel from the final receiver backwards through the route until it reaches the initial sender; possession of that secret serves to prove the payment as well as to make the IOU hold true.
The difference is that if one of the parties doesn't send the "acknowledge" in time, the other has a trusted third party with its own clock (the clock that is valid for everybody involved) to complain to immediately at the timeout: the Bitcoin blockchain. If C has `p` and B isn't acknowledging it, C tells the Bitcoin blockchain and it will force the transfer of the amount from B to C (a minimal sketch of this arbiter role appears after the list of differences below).
Differences (or 1 upside and 3 downsides)
-
The Lightning Network differs from a "pure" Ripple network in that when we send a "prepare" message on the Lightning Network, unlike on a pure Ripple network we're not just promising we will owe something -- instead we are putting the money on the table already for the other to get if we are not responsive.
-
The feature above removes the trust element from the equation. We can now have relationships with people we don't trust, as the Bitcoin blockchain will serve as an automated escrow for our conditional payments and no one will be harmed. Therefore it is much easier to build networks and route payments if you don't always require trust relationships.
-
However it introduces the cost of the capital. A ton of capital must be made available in channels and locked in HTLCs so payments can be routed. This leads to potential issues like the ones described in https://twitter.com/joostjgr/status/1308414364911841281.
-
Another issue that comes with the necessity of using the Bitcoin blockchain as an arbiter is that it may cost a lot in fees -- much more than the value of the payment that is being disputed -- to enforce it on the blockchain.[^closing-channels-for-nothing]
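Here is the minimal sketch of the arbiter role mentioned above, in Python. This is illustrative logic only, not actual Bitcoin script or HTLC construction:

```python
import hashlib
from typing import Optional

def arbiter_outcome(preimage: Optional[bytes], payment_hash: bytes,
                    current_height: int, timeout_height: int) -> str:
    # Success path: C reveals p on-chain before the timeout, so the
    # blockchain forces the transfer of the HTLC amount from B to C.
    if (preimage is not None
            and current_height < timeout_height
            and hashlib.sha256(preimage).digest() == payment_hash):
        return "pay C"
    # Refund path: after the timeout the funds return to B.
    if current_height >= timeout_height:
        return "refund B"
    return "pending"
```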
Solutions
Because the downsides listed above are so real and problematic -- and much more so when attacks from malicious peers are taken into account -- some have argued that the Lightning Network must rely on at least some trust between peers, which partly negates the benefit.
The introduction of purely trust-backed channels is the next step in the reasoning: if we are trusting already, why not make channels that don't touch the blockchain and don't require peers to commit large amounts of capital?
The reason is, again, the ambiguity that comes from the problem of the decentralized commit. Therefore hosted channels can be good when trust is required only from one side, like in the final hops of payments, but they cannot work in the middle of routes without eroding trust relationships between peers (however they can be useful if employed as channels between two nodes ran by the same person).
The next solution is a revamped pure Ripple network, one that solves the problem of the decentralized commit in a different way.
[^closing-channels-for-nothing]: That is true even when the payment is so small that it doesn't deserve an actual HTLC that can be enforced on the chain (as per the protocol): even then the channel between the two nodes will be closed, only to make it very clear that there was a disagreement. Leaving it online would be harmful, as one of the peers could repeat the attack again and again. This is proof that ambiguity, in the case of the pure Ripple network, is a very important issue.
-
@ eac63075:b4988b48
2025-03-03 17:18:12Abstract
This paper examines a hypothetical scenario in which the United States, under Trump’s leadership, withdraws from NATO and reduces its support for Europe, thereby enabling a Russian conquest of Ukraine and the subsequent expansion of Moscow’s influence over Eurasia, while the US consolidates its dominance over South America. Drawing on classical geopolitical theories—specifically those of Halford Mackinder, Alfred Thayer Mahan, Rudolf Kjellén, and Friedrich Ratzel—the study analyzes how these frameworks can elucidate the evolving power dynamics and territorial ambitions in a reconfigured global order. The discussion highlights Mackinder’s notion of the Eurasian Heartland and its strategic importance, Mahan’s emphasis on maritime power and control of strategic routes, Kjellén’s view of the state as an expanding organism, and Ratzel’s concept of Lebensraum as a justification for territorial expansion. The paper also explores contemporary developments, such as the US–Ukraine economic agreement and Trump’s overt territorial ambitions involving Greenland and Canada, in light of these theories. By juxtaposing traditional geopolitical concepts with current international relations, the study aims to shed light on the potential implications of such shifts for regional stability, global security, and the balance of power, particularly in relation to emerging neocolonial practices in Latin America.
Introduction
In recent years, the geopolitical dynamics involving the United States, Russia, and Ukraine have sparked analyses from different theoretical perspectives. This paper examines recent events – presupposing a scenario in which Donald Trump withdraws the US from NATO and reduces its support for Europe, allowing a Russian conquest of Ukraine and the expansion of Moscow’s influence over Eurasia, while the US consolidates its dominance over South America – in light of classical geopolitical theories. The ideas of Halford Mackinder, Alfred Thayer Mahan, Rudolf Kjellén, and Friedrich Ratzel are used as reference points. The proposal is to impartially evaluate how each theory can elucidate the developments of this hypothetical scenario, relating Russian territorial expansion in Eurasia to the strategic retreat of the US to the Western Hemisphere.
Initially, we will outline Mackinder’s conception of the Heartland (the central Eurasian territory) and the crucial role of Eastern Europe and Ukraine in the quest for global dominance. Next, we will discuss Mahan’s ideas regarding maritime power and the control of strategic routes, considering the impacts on the naval power balance among the US, Russia, and other maritime powers such as the United Kingdom and Japan. Subsequently, we will examine Kjellén’s organic theory of the state, interpreting the Russian expansionist strategy as a reflection of a state organism in search of vital space. In the same vein, Ratzel’s concept of “Lebensraum” will be explored, along with how Russia could justify territorial expansion based on resources and territory. Finally, the paper connects these theories to the current political context, analyzing the direct negotiations between Washington and Moscow (overlooking Ukraine and Europe), the US policy toward authoritarian regimes in Latin America, and the notion of a hemispheric division of power – the “Island of the Americas” under North American hegemony versus an Eurasia dominated by Russia. Lastly, it considers the possibility that such a geopolitical arrangement may foster the strengthening of authoritarian governments globally, rather than containing them, thus altering the paradigms of the liberal world order.
The Heartland of Mackinder: Ukraine, Eurasia, and Global Dominance
Halford J. Mackinder, a British geographer and pioneer of geopolitics, proposed the celebrated Heartland Theory in the early twentieth century. Mackinder divided the world into geostrategic zones and identified the Heartland—the central continental mass of Eurasia—as the “geographical pivot of history” [5]. His most famous maxim encapsulates this vision: “who rules Eastern Europe commands the Heartland; who rules the Heartland commands the World Island; who rules the World Island commands the world” [5]. Eastern Europe and, in particular, the region of present-day Ukraine, play a key role in this formula. This is because, for Mackinder, Eastern Europe functions as a gateway to the Heartland, providing access to resources and a strategic position for the projection of continental power [5].
Applying this theory to our scenario, the conquest of Ukraine and Eastern European countries by Russia would have profound geopolitical implications. From a Mackinderian point of view, such a conquest would enormously strengthen Russia’s position in the Heartland by adding manpower (population) and Ukraine’s industrial and agricultural resources to its power base [5]. In fact, Mackinder argued that controlling the Heartland conferred formidable geostrategic advantages—a vast terrestrial “natural fortress” protected from naval invasions and rich in resources such as wheat, minerals, and fuels [5]. Thus, if Moscow were to incorporate Ukraine (renowned for its fertile soil and grain production, as well as its mineral reserves) and extend its influence over Eastern Europe, Russia would consolidate the Heartland under its direct control. In this context, the absence of the USA (withdrawn from NATO and less engaged in Europe) would remove an important obstacle to Russian predominance in the region.
With central and eastern Eurasia under Russian influence, it would be possible to move toward the realization of the geopolitical nightmare described by Mackinder for Western maritime powers: a hegemonic continental power capable of projecting power to both Europe and Asia. Mackinder himself warned that if a Heartland power gained additional access to an oceanic coastline—in other words, if it combined land power with a significant maritime front—it would constitute a “danger” to global freedom [5]. In the scenario considered, besides advancing into Eastern Europe, Russia would already possess strategic maritime outlets (for example, in the Black Sea, via Crimea, and in the Baltic, via Kaliningrad or the Baltic States if influenced). Thus, the control of Ukraine would reinforce Russia’s position in the Black Sea and facilitate projection into the Eastern Mediterranean, expanding its oceanic front. From a Mackinderian perspective, this could potentially transform Russia into the dominant power of the “World Island” (the combined mass of Europe, Asia, and Africa), thereby unbalancing the global geopolitical order [5].
It is worth noting that, historically, Mackinder’s doctrine influenced containment strategies: both in the interwar period and during the Cold War, efforts were made to prevent a single power from controlling the Heartland and Eastern Europe. NATO, for example, can be seen as an instrument to prevent Soviet/Russian advances in Europe, in line with Mackinder’s imperative to “contain the Heartland.” Thus, if the USA were to abandon that role—by leaving NATO and tacitly accepting the Russian sphere of influence in Eurasia—we would be witnessing an inversion of the principles that have guided Western policy for decades. In short, under Mackinder’s theory, the Russian conquest of Ukraine and beyond would represent the key for Russia to command the Heartland and, potentially, challenge global hegemony, especially in a scenario where the USA self-restricts to the Western Hemisphere.
The Maritime Power of Mahan and the Naval Balance between West and East
While Mackinder emphasized continental land power, Alfred Thayer Mahan, a nineteenth-century American naval strategist, highlighted the crucial role of maritime power in global dominance. In his work The Influence of Sea Power upon History (1890), Mahan studied the example of the British Empire and concluded that control of the seas paved the way for British supremacy as a world power [10]. He argued that a strong navy and the control of strategic maritime routes were decisive factors for projecting military, political, and economic power. His doctrine can be summarized in the following points: (1) the United States should aspire to be a world power; (2) control of the seas is necessary to achieve that status; (3) such control is obtained through a powerful fleet of warships [17]. In other words, for Mahan, whoever dominates the maritime routes and possesses naval superiority will be in a position to influence global destinies, ensuring trade, supplies, and the rapid movement of military forces.
In the proposed scenario, in which the USA withdraws militarily from Europe and possibly from the Eurasian stage, Mahan’s ideas raise questions about the distribution of maritime power and its effects. Traditionally, the US Navy operates globally, ensuring freedom of navigation and deterring challenges in major seas (Atlantic, Pacific, Indian, etc.). A withdrawal of the USA from NATO could also signal a reduction in its naval presence in the Northeast Atlantic, the Mediterranean Sea, and other areas close to Eurasia. In such a case, who would fill this naval vacuum? Russia, although primarily a land power, has been attempting to modernize its navy and has specific interests—for example, consolidating its dominance in the Black Sea and maintaining a presence in the Mediterranean (with a naval base in Tartus, Syria). The United Kingdom, a historic European maritime power, would remain aligned with the USA but, without American military support in Europe, might potentially be overwhelmed trying to contain an increasingly assertive Russian navy in European waters on its own. Japan, another significant maritime actor allied with the USA, is concerned with the naval balance in the Pacific; without full American engagement, Tokyo might be compelled to expand its own naval power to contain both Russia in the Far East (which maintains a fleet in the Pacific) and, especially, the growing Chinese navy.
According to Mahan’s thinking, strategic maritime routes and choke points (crucial straits and channels) become contested prizes in this power game. With the USA focusing on the Americas, one could imagine Washington reinforcing control over the Panama Canal and Caribbean routes—reviving an “American Gulf” policy in the Western Atlantic and Eastern Pacific. In fact, indications of this orientation emerge in statements attributed to Trump, who once suggested reclaiming direct control over Panama, transforming Canada into a North American state, and even “annexing” Greenland due to its Arctic geopolitical importance [18]. These aspirations reflect a quest to secure advantageous maritime positions near the American continent.
Conversely, in the absence of American presence in the Eastern Atlantic and Mediterranean, Russia would have free rein for regional maritime projection. This could include anything from the unrestricted use of the Black Sea (after dominating Ukraine, thereby ensuring full access to Crimea and Ukrainian ports) to greater influence in the Eastern Mediterranean via Syria and partnerships with countries such as Iran or Egypt. The Baltic Sea would also become an area of expanded Russian interest, pressuring coastal countries and perhaps reducing NATO’s traditional local naval supremacy. However, it is worth noting that even with these regional expansions, Russia lacks a blue-water navy comparable to that of the USA; thus, its initial global maritime impact would be limited without alliances.
An important aspect of Mahan’s theories is that naval power serves as a counterbalance to the land power of the Heartland. Therefore, even if Russia were to dominate the Eurasian continental mass, the continued presence of American naval might on the oceans could prevent complete global domination by Moscow. However, if the USA voluntarily restricts its naval reach to the Americas, it would forgo influencing the power balance in the seas adjacent to Eurasia. Consequently, the balance of maritime power would tend to shift in favor of regional Eurasian actors. The United Kingdom and Japan, traditional allies of the USA, could intensify their naval capabilities to defend regional interests—the United Kingdom safeguarding the North Atlantic and the North Sea, and Japan patrolling the Northwest Pacific—but both would face budgetary and structural limitations in fully compensating for the absence of the American superpower. Consequently, Mahan’s vision suggests that the withdrawal of the USA from the extra-regional scene would weaken the liberal maritime regime, possibly opening space for revisionist powers to contest routes that were previously secured (for example, Russia and China encountering less opposition on the routes of the Arctic and the Indo-Pacific, respectively). In summary, naval hegemony would fragment, and control of strategic seas would become contested, reconfiguring the relative influence of the USA, Russia, and maritime allies such as the United Kingdom and Japan.
Kjellén and the State as a Living Organism: Russian Expansion as an Organic Necessity
Another useful theoretical lens to interpret Russian geopolitical posture is that of Rudolf Kjellén, a Swedish political scientist of the early twentieth century who conceived the State as a living organism. Kjellén, who even coined the term “geopolitics,” was influenced by Friedrich Ratzel’s ideas and by social Darwinism, arguing that States are born, grow, and decline analogously to living beings [13]. In his work Staten som livsform (The State as a Form of Life, 1916), he maintained that States possess an organic dimension in addition to the legal one and that “just as any form of life, States must expand or die” [14]. This expansion would not be motivated merely by aggressive conquest but seen as a necessary growth for the self-preservation of the state organism [14]. In complement, Kjellén echoed Ratzel’s “law of expanding spaces” by asserting that large States expand at the expense of smaller ones, with it being only a matter of time before the great realms fill the available spaces [14]. That is, from the organic perspective, vigorous States tend to incorporate smaller neighboring territories, consolidating territorially much like an organism absorbing nutrients.
Applying this theory to the strategy of contemporary Russia, we can interpret Moscow’s actions—including the invasion of Ukraine and the ambition to restore its sphere of influence in Eurasia—as the expression of an organic drive for expansion. For a strategist influenced by this school, Russia (viewed as a state organism with a long imperial history) needs to expand its territory and influence to ensure its survival and security. The loss of control over spaces that once were part of the Russian Empire or the Soviet Union (such as Ukraine itself, the Caucasus, or Central Asia) may be perceived by Russian elites as an atrophy of the state organism, rendering it vulnerable. Thus, the reincorporation of these territories—whether directly (annexation) or indirectly (political vassalage)—would equate to restoring lost members or strengthening vital organs of the state body. In fact, official Russian arguments often portray Ukraine as an intrinsic part of “Russian historicity,” denying it a fully separate identity—a narrative that aligns with the idea that Russian expansion in that region is natural and necessary for the Russian State (seen as encompassing also Russian speakers beyond its current borders).
Kjellén would thus provide a theoretical justification for Russian territorial expansion as an organic phenomenon. As a great power, Russia would inevitably seek to expand at the expense of smaller neighbors (Ukraine, Georgia, the Baltic States, etc.), as dictated by the tendency of “great spaces to organize” to the detriment of the small [14]. This view can be identified in contemporary Russian doctrines that value spheres of influence and the notion that neighboring countries must gravitate around Moscow in order for the natural order to be maintained. The very idea of “Eurasia” united under Russian leadership (advocated by modern Russian thinkers) echoes this organic conception of vital space and expansion as a sign of the State’s vitality.
However, Kjellén’s theory also warns of the phenomenon of “imperial overstretch,” should a State exceed its internal cohesion limits by expanding excessively [14]. He recognized that extending borders too far could increase friction and vulnerabilities, making it difficult to maintain cohesion—a very large organism may lack functional integration. In the Russian context, this suggests that although expansion is seen as necessary, there are risks if Russia tries to encompass more than it can govern effectively. Conquering Ukraine and subjugating Eastern Europe, for example, could economically and militarily overburden the Russian State, especially if it faced resistance or had to manage hostile populations. However, in the hypothetical scenario we adopt (isolated USA and a weakened Europe), Russia might calculate that the organic benefits of expansion (territory, resources, strategic depth) would outweigh the costs, since external interference would be limited. Thus, through Kjellén’s lens, expansionist Russia behaves as an organism following its instinct for survival and growth, absorbing weaker neighbors; yet such a process is not devoid of challenges, requiring that the “organism Russia” manages to assimilate these new spaces without collapsing under its own weight.
Ratzel and Lebensraum: Resources, Territory, and the Justification for Expansion
Parallel to Kjellén’s organic view, Friedrich Ratzel’s theory offers another conceptual basis for understanding Russian expansion: the concept of Lebensraum (vital space). Ratzel, a German geographer of the late nineteenth century, proposed that the survival and development of a people or nation depended critically on the available physical space and resources. Influenced by Darwinist ideas, he applied the notion of “survival of the fittest” to nations, arguing that human societies need to conquer territory and resources to prosper, and that the stronger and fittest civilizations will naturally prevail over the weaker ones [12]. In 1901, Ratzel coined the term Lebensraum to describe this need for “vital space” as a geographical factor in national power [15].
Subsequently, this idea would be adopted—and extremely distorted—by Nazi ideology to justify Germany’s aggressions in Europe. However, the core of Ratzel’s concept is that territorial expansion is essential for the survival and growth of a State, especially to secure food, raw materials, and space for its population [12].
When examining Russia’s stance under this perspective, we can see several narratives that evoke the logic of Lebensraum. Russia is the largest country in the world by area; however, much of its territory is characterized by adverse climates (tundra, taiga) and is relatively sparsely populated in Siberia. On the other hand, adjacent regions such as Ukraine possess highly arable lands (chernozem—black soil), significant Slavic population density, and additional natural resources (coal in the Donbass, for example). An implicit justification for Russian expansion could be the search for supplementary resources and fertile lands to secure its self-sufficiency and power—exactly as Ratzel described that vigorous nations do. Historical records show that Ratzel emphasized agrarian primacy: he believed that new territories should be colonized by farmers, providing the food base for the nation [12]. Ukraine, historically called the “breadbasket of Europe,” fits perfectly into this vision of conquest for sustenance and agricultural wealth.
Furthermore, Ratzel viewed geography as a determinant of the destiny of nations—peoples adapted to certain habitats seek to expand them if they aspire to grow. In contemporary Russian discourse, there is often mention of the need to ensure security and territorial depth in the face of NATO, or to unite brotherly peoples (Russians and Russian speakers) within a single political space. Such arguments can be read as a modern translation of Lebensraum: the idea that the Russian nation, in order to be secure and flourish, must control a larger space, encompassing buffer zones and critical resources. This Russian “vital space” would naturally include Ukraine and other former Soviet republics, given the historical and infrastructural interdependence. Ratzel emphasized that peoples migrated and expanded when their original homeland no longer met their needs or aspirations [12]. Although contemporary Russia does not suffer from demographic pressure (on the contrary, it faces population decline), under the logic of a great power there is indeed a sentiment of geopolitical insufficiency for having lost influence over areas considered strategic. Thus, reconquering these areas would mean recovering the “habitat” necessary for the Russian nation to prosper and feel secure.
It is important to mention that, in Ratzel’s and Kjellén’s formulations, the pursuit of Lebensraum or organic expansion is not morally qualified—it is treated as a natural process in the politics of power. Thus, on the discursive level, Russia can avoid overly aggressive rhetoric and resort to “natural” justifications: for example, claiming that it needs to occupy Ukraine for defensive purposes (security space) or to reunify peoples (a common cultural and historical space). Beneath these justifications, however, resonates the geopolitical imperative to acquire more territory and resources as a guarantee of national survival, something consonant with Ratzel’s theory. In fact, Russian Realpolitik frequently prioritizes the control of energy resources (gas, oil) and transportation routes. Expanding its influence over central Eurasia would also mean controlling oil pipelines, gas lines, and logistical corridors—essential elements of modern Lebensraum understood as access to vital resources and infrastructure.
In summary, by conquering Ukraine and extending its reach into Eurasia, Russia could effectively invoke the concept of Lebensraum: presenting its expansion not as mere imperialism, but as a necessity to secure indispensable lands and resources for its people and to correct the “injustice” of a vital space diminished by post-Cold War territorial losses. The theories of Ratzel and Kjellén together paint a picture in which Russian expansion emerges almost as a natural law—the great State reclaiming space to ensure its survival and development at the expense of smaller neighbors.
Trump, NATO, and the Threat of American Withdrawal
One of the most alarming changes with Trump's return to power is the tense relationship with the North Atlantic Treaty Organization (NATO). Trump has long criticized allies for not meeting military spending targets, even threatening during his first term to withdraw the US from the alliance if members did not increase their contributions [2]. This threat, initially viewed with skepticism, became concrete after his re-election, leading European allies to seriously consider the possibility of having to defend themselves without American support [1]. In fact, Trump suggested in post-election interviews that the US would only remain in NATO if the allies “paid their bills” – otherwise, he “would seriously consider” leaving [2]. Such statements reinforced the warning that the US might not honor NATO's mutual defense commitment, precisely at a time of continuous Russian threat due to the war in Ukraine [1].
From a theoretical point of view, this posture of American retrenchment evokes the classic tension between maritime power and land power. Alfred Thayer Mahan emphasized that the global power of the US derived largely from its naval superiority and from alliances that ensured control over strategic maritime routes [9]. NATO, since 1949, has served not only to deter Soviet terrestrial advances in Eurasia, but also to secure the US naval presence in the North Atlantic and the Mediterranean – a fundamental element according to Mahan. In turn, Halford Mackinder warned that the balance of global power depended on the control of the Eurasian “Heartland” (the central region of Eurasia). The withdrawal or disengagement of the US (a maritime power) from this region could open the way for a continental power (such as Russia) to expand its influence in Eastern Europe, unbalancing the power balance [3]. In other words, by threatening to leave NATO, Trump jeopardizes the principle of containment that prevented Russian dominance over Eastern Europe – something that Mackinder would see as a dangerous shift in global power in favor of the Heartland power.
Adopting an impartial tone, it is observed that European countries have reacted to this new reality with precautionary measures. Strategic reports already calculate the cost of an autonomous European defense: hundreds of thousands of additional soldiers and investments of hundreds of billions of euros would be required if the US ceased to guarantee the security of the continent [1]. European dependence on American military power is significant and, without it, there would be a need for a major reinforcement of European Armed Forces [1]. This mobilization practically reflects the anticipation of a power vacuum left by the US – a scenario in which Mackinder’s theory (on the primacy of the Heartland and the vulnerability of the “external crescent” where Western Europe is located) regains its relevance.
The US–Ukraine Economic Agreement: Strategic Minerals in Exchange for Support?
Another novelty of Trump's second term is the unprecedented and transactional manner in which Washington has been dealing with the war in Ukraine. Instead of emphasizing security guarantees and alliances, the Trump administration proposed a trade agreement with Ukraine focused on the exploitation of strategic minerals, linking American support to a direct economic benefit. According to sources close to the negotiations, the US and Ukraine are about to sign a pact to share the revenues from the exploitation of critical mineral resources on Ukrainian territory [19]. Materials such as titanium, lithium, rare earths, and uranium – vital for high-tech and defense industries – would be at the core of this agreement [6]. According to the known draft, Ukraine would allocate 50% of the profits from new mineral ventures to a fund controlled by the US, which would reinvest part of the resources in the country’s own reconstruction [6] [19].
It is noteworthy that the pact does not include explicit security guarantees for Kyiv, despite Ukraine remaining under direct military threat from Russia [19]. Essentially, the Trump administration offers financial support and economic investment in exchange for a share in Ukrainian natural resources, but without formally committing to Ukraine's defense in the event of a renewed Russian offensive [19]. American authorities argue that this economic partnership would already be sufficient to “secure Ukrainian interests,” as it would provide the US with its own incentives to desire Ukraine’s stability [19]. “What could be better for Ukraine than being in an economic partnership with the United States?” stated Mike Waltz, a US national security advisor, defending the proposal [19].
Analysts, however, assess the agreement in divided terms. For some, it represents a form of economic exploitation at a time of Ukraine's fragility – comparing the demand to share mineral wealth amid war to a scheme of “mafia protection” [19]. Steven Cook, from the Council on Foreign Relations, classified the offer as “extortion,” and political scientist Virginia P. Fortna observed that charging resources from an invaded country resembles predatory practices [19]. Joseph Nye adds that it is a short-term gain strategy that could be “disastrous in the long run” for American credibility, reflecting the transactional approach that Trump even adopted with close allies in other contexts [19]. On the other hand, some see a future advantage for Kyiv: journalist Pierre Briançon suggests that at least this agreement aligns American commercial interests with Ukraine’s future, which could, in theory, keep the US involved in Ukrainian prosperity in the long term [19]. It is even recalled that President Zelensky himself proposed last year the idea of sharing natural resources with the US to bring the interests of the two countries closer together [19].
From the perspective of geopolitical theories, this agreement illustrates a shift towards economic pragmatism in international relations, approaching concepts proposed by Kjellén. Rudolf Kjellén, who coined the term “geopolitics,” saw the State as a territorial organism that seeks to ensure its survival through self-sufficiency and the control of strategic resources [4]. Trump's demand for a share in Ukrainian resources in order to continue supporting the country reflects a logic of autarky and direct national interest – that is, foreign policy serving primarily to reinforce the economic and material position of the US. This view contrasts with the traditional cooperative approach, but aligns with Kjellén’s idea that powerful States tend to transform international relations into opportunities for their own gain, ensuring access to vital raw materials. Similarly, Friedrich Ratzel argued that States have a “propensity to expand their borders according to their capacities,” seeking vital space (Lebensraum) and resources to sustain their development [11]. The US–Ukraine pact, by conditioning military/economic aid on obtaining tangible advantages (half of the mineral profits), is reminiscent of Ratzel’s perspective: the US, as a rising economic power, expands its economic influence over Ukrainian territory like an organism extending itself to obtain the necessary resources for its well-being. It is, therefore, a form of economic expansionism at the expense of purely ideological commitments or collective security.
Peace Negotiations Excluding Ukraine and the Legitimacy of the Agreement
Another controversial point is the manner in which peace negotiations between Russia and the West have been conducted under Trump's administration. Since taking office, the American president has engaged directly with Moscow in pursuit of a ceasefire, deliberately keeping the Ukrainian government out of the initial discussions [6]. Trump expressed his desire to “leave Zelensky out of the conversation” and also excluded the European Union from any influence in the process [6]. This negotiation strategy—conducted without the presence of the primary interested party, Ukraine—raises serious questions about the legitimacy and sustainability of any resulting agreement.
Historically, peace agreements reached without the direct participation of one of the conflicting parties tend to face problems in implementation and acceptance.
The exclusion of Ukraine from the decision-making phase brings to light the issue of guarantees. As noted, the emerging agreement lacks formal US security guarantees for Ukraine. This implies that, after the agreement is signed, nothing will prevent Russia from launching a new offensive if it deems it convenient, knowing that the US has not committed to defending Ukraine militarily. Experts have already warned that a ceasefire without robust protection may only be a pause for Russian rearmament, leaving the conflict temporarily “frozen” and liable to resume in the near future. The European strategic community has expressed similar concern: without American deterrence, the risk of further Russian aggression in the region increases considerably [1]. Denmark, for example, has released intelligence reports warning of possible imminent Russian attacks, prompting neighboring countries to accelerate plans for independent defense [1].
The legitimacy of this asymmetric peace agreement (negotiated without Ukraine fully at the table and under economic coercion) is also questionable from a legal and moral point of view. It violates the principle of self-determination by imposing terms decided by great powers on a sovereign country—a practice reminiscent of dark chapters in diplomacy, such as the Munich Agreement of 1938, when powers determined the fate of Czechoslovakia without its consent. In the current case, Ukraine would end up signing the agreement, but from a position of weakness, raising doubts about how durable such a commitment would be.
From Mackinder’s perspective, Ukraine’s removal from the battlefield without guarantees essentially means admitting a greater influence of Russia (the Heartland power) over Eastern Europe. This would alter the balance in Eurasia in a potentially lasting way. Furthermore, the fact that great powers negotiate over the heads of a smaller country evokes the imperial logic of the nineteenth and early twentieth centuries, when empires decided among themselves the divisions of foreign territories—a behavior that Mackinder saw as likely in a world of a “closed system.” With the entire world already occupied by States, Mackinder predicted that powers would begin to compete for influence within this consolidated board, often subjugating smaller states to gain advantage [3]. The US–Russia negotiation regarding Ukraine, without proper Ukrainian representation, exemplifies this type of neo-imperial dynamic in the twenty-first century.
Also noteworthy is the consonance with the ideas of Ratzel and Kjellén: both viewed smaller states as easily relegated to the status of satellites or even “parasitic organisms” in the orbit of larger states. Kjellén spoke of the intrinsic vulnerability of states with little territorial depth or economic dependence, making them susceptible to external pressures [4][20]. Ukraine, weakened by war and dependent on external aid, becomes a concrete example of this theorized vulnerability: it has had to cede strategic resources and accept terms dictated against its will in an attempt to secure its immediate survival. The resulting agreement, therefore, reflects a power imbalance characteristic of the hierarchical international relations described by classical geopolitical theorists.
Implicit Territorial Concessions and Trump’s Public Discourse
A central and controversial point in Trump’s statements regarding the war in Ukraine is the insinuation of territorial concessions to Russia as part of the conflict’s resolution. Publicly, Trump avoided explicitly condemning Russian aggression and even stated that he considered it “unlikely” that Ukraine would be able to retake all the areas occupied by the Russians [16]. In debates and interviews, he suggested that “if I were president, the war would end in 24 hours,” implying that he would force an understanding between Kyiv and Moscow that would likely involve ceding some territory in exchange for peace. This position marks a break with the previous US policy of not recognizing any territorial acquisitions made by force and fuels speculations that a future peace agreement sponsored by Trump would legitimize at least part of Russia’s gains since 2014 (Crimea, Donbass, and areas seized during the 2022 invasion).
The actions of his administration corroborate this interpretation. As discussed, the economic agreement focuses on the exploitation of Ukrainian natural resources, many of which are located precisely in regions currently under Russian military control, such as parts of the Zaporizhzhia Oblast, Donetsk, Lugansk, and the Azov Sea area [6]. A Ukrainian geologist, Hanna Liventseva, highlighted that “most of these elements (strategic minerals) are found in the south of the Ukrainian Shield, mainly in the Azov region, and most of these territories are currently invaded by Russia” [6]. This means that, to make joint exploitation viable, Russia’s de facto control over these areas would have to be recognized—or at least tolerated—in the short term. In other words, the pact indirectly and tacitly accepts Russian territorial gains, as it involves sharing the profits from resources that are not currently accessible to the Kyiv government.
Furthermore, figures close to Trump have made explicit statements regarding the possibility of territorial cession. Mike Waltz, Trump’s national security advisor, publicly stated that Zelensky might need to “cede land to Russia” to end the war [8]. This remark—made public in March 2025—confirms that the Trump White House considers it natural for Ukraine to relinquish parts of its territory in favor of an agreement. Such a stance marks a break from the previous Western consensus, which condemned any territorial gains by force. Under Trump, a pragmatic view (in the eyes of his supporters) or a cynical one (according to his critics) seems to prevail: sacrificing principles of territorial integrity to quickly end hostilities and secure immediate economic benefits.
In theoretical terms, this inclination to validate territorial gains by force recalls the concept of Realpolitik and the geopolitical Darwinism that influenced thinkers such as Ratzel. In Ratzel’s organic conception, expanding states naturally absorb neighboring territories when they are strong enough to do so, while declining states lose territory—a process almost biological in the selection of the fittest [11]. The Trump administration’s acceptance that Ukraine should “give something” to Moscow to seal peace reflects a normalization of this geopolitical selection process: it recognizes the aggressor (Russia) as having the “right” to retain conquered lands, because that is how power realities on the ground dictate. Mackinder, although firmly opposed to allowing Russia to dominate the Heartland, would see this outcome as the logical consequence of the lack of engagement from maritime powers (the USA and the United Kingdom, for example) in sustaining the Ukrainian counterattack. Without the active involvement of maritime power to balance the dispute, land power prevails in Eastern Europe.
From the perspective of international legitimacy, the cession of Ukrainian territories—whether de jure or de facto—creates a dangerous precedent in the post-Cold War era. Rewarding violent aggression with territorial gains may encourage similar strategies in other parts of the world, undermining the architecture of collective security. This is possibly a return to a world of spheres of influence, where great powers define borders and zones of control according to their convenience—something that the rules-based order after 1945 sought to avoid. Here, academic impartiality requires noting that coercion for territorial concessions rarely produces lasting peace, as the aggrieved party—in this case, Ukraine—may accept temporarily but will continue to assert its rights in the long term, as has occurred with other territorial injustices in history.
Territorial Ambitions of Trump: Greenland and Canada
Beyond the Eurasian theater of war, Trump revived geopolitical ambitions involving territories traditionally allied with the US: Greenland (an autonomous territory of Denmark) and Canada. As early as 2019, during his first term, Trump shocked the world by proposing to buy Greenland—rich in minerals and strategically positioned in the Arctic. Upon his return to power, he went further: expressing a “renewed interest” in acquiring Greenland and publicly suggesting the incorporation of Canada as the 51st American state [2].
In January 2025, during a press conference at Mar-a-Lago, he even displayed maps in which the US and Canada appeared merged into a single country, while Greenland was marked as a future American possession [2]. Posts by the president on social media included satirical images with a map of North America where Canada was labeled “51st” and Greenland designated as “Our Land” [2].
Such moves were met with concern and disbelief by allies. Canadian Prime Minister Justin Trudeau was caught on an open microphone warning that Trump’s fixation on annexation “is real” and not just a joke [7]. Trudeau emphasized that Washington appeared to covet Canada’s vast mineral resources, which would explain the insistence on the idea of absorption [7]. In public, Trump argued that Canadians “would be more prosperous as American citizens,” promising tax cuts and better services should they become part of the US [7]. On the Danish side, the reaction to the revived plan regarding Greenland was firmly negative—as it was in 2019—reaffirming that the territory is not for sale. Trump, however, insinuated that the issue might be one of national security, indicating that American possession of Greenland would prevent adverse influences (a reference to China and Russia in the Arctic) [2]. More worryingly, he refused to rule out the use of military means to obtain the island, although he assured that he had no intention of invading Canada by force (in the Canadian case, he spoke of “economic force” to forge a union) [2].
This series of initiatives reflects an unprecedented expansionist impetus by the US in recent times, at least in discourse. Analyzing this through the lens of classical geopolitics offers interesting insights. Friedrich Ratzel and his notion of Lebensraum suggest that powerful states, upon reaching a certain predominance, seek to expand their territory by influencing or incorporating adjacent areas. Trump, by targeting the immediate neighbor (Canada) and a nearby strategic territory (Greenland), appears to resurrect this logic of territorial expansion for the sake of gaining space and resources. Ratzel saw such expansion almost as a natural process for vigorous states, comparable to the growth of an organism [11]. From this perspective, the US would be exercising its “right” of expansion in North America and the polar region, integrating areas of vital interest.
Additionally, Alfred Mahan’s view on maritime power helps to understand the strategic value of Greenland. Mahan postulated that control of key maritime chokepoints and naval bases ensures global advantage [9]. Greenland, situated between the North Atlantic and the Arctic, has become increasingly relevant as climate change opens new polar maritime routes and reveals vast mineral deposits (including rare earth elements and oil). For the US, having a presence or sovereignty over Greenland would mean dominating the gateway to the Arctic and denying this space to rivals. This aligns with Mahan’s strategy of securing commercial and military routes (in this case, potential Arctic routes) and resources to consolidate naval supremacy. On the other hand, the incorporation of Canada—with its enormous territory, Arctic coastline, and abundant natural resources—would provide the US with formidable geoeconomic and geopolitical reinforcement, practically eliminating vulnerabilities along its northern border. This is an ambitious project that also echoes ideas of Kjellén, for whom an ideal State should seek territorial completeness and economic self-sufficiency within its region. Incorporating Canada would be the pinnacle of American regional autarky, turning North America into a unified bloc under Washington (a scenario reminiscent of the “pan-regions” conceived by twentieth-century geopoliticians influenced by Kjellén).
It is important to note, however, that these ambitions face enormous legal and political obstacles. The sovereignty of Canada and Greenland (Denmark) is guaranteed by international law, and both peoples categorically reject the idea of annexation. Any hostile action by the US against these countries would shake alliances and the world order itself. Even so, the very fact that an American president suggests such possibilities already produces geopolitical effects: traditional partners begin to distrust Washington’s intentions, seek alternative alliances, and strengthen nationalist discourses of resistance. In summary, Trump’s expansionist intentions in Greenland and Canada rekindle old territorial issues and paradoxically place the US in the position of a revisionist power—a role once associated with empires in search of colonies.
Implications for Brazil and South America: A New Neocolonization?
In light of this geopolitical reconfiguration driven by Trump's USA—with a reordering of alliances and a possible partition of spheres of influence among great powers—the question arises: what is the impact on Brazil and the other countries of South America? Traditionally, Latin America has been under the aegis of the Monroe Doctrine (1823), which established non-interference by Europe in the region and, implicitly, the primacy of the USA in the Western Hemisphere. In the post–Cold War period, this influence translated more into political and economic leadership, without formal annexations or direct territorial domination. However, the current context points to a kind of “neocolonization” of the Global South, in which larger powers seek to control resources and peripheral governments in an indirect yet effective manner.
Mackinder’s theories can be used to illuminate this dynamic. As mentioned, Mackinder envisioned the twentieth-century world as a closed system, in which there were no longer any unknown lands to be colonized—hence, the powers would fight among themselves for control over already occupied regions [3]. He predicted that Africa and Latin America (then largely European colonies or semi-colonies) would continue as boards upon which the great powers would project their disputes, a form of neocolonialism. In the current scenario, we see the USA proposing exchanges of protection for resources (as in Ukraine) and even leaders of developing countries seeking similar agreements. A notable example: the President of the Democratic Republic of the Congo, Felix Tshisekedi, praised the USA–Ukraine initiative and suggested an analogous agreement involving Congolese mineral wealth in exchange for US support against internal rebels (M23) [19]. In other words, African countries and possibly South American ones may enter into this logic of offering privileged access to resources (cobalt, lithium, food, biodiversity) in order to obtain security guarantees or investments. This represents a regression to the times when external powers dictated the directions of the South in exchange for promises of protection, characterizing a strategic neocolonialism.
For Brazil, in particular, this rearrangement generates both opportunities and risks. As a regional power with considerable diplomatic autonomy, Brazil has historically sought to balance relationships with the USA, Europe, China, and other actors, avoiding automatic alignments. However, in a world where Trump’s USA is actively redefining spheres of influence—possibly making deals with Russia that divide priorities (for example, Washington focusing on the Western Hemisphere and Moscow on the Eastern)—South America could once again be seen as an exclusive American sphere of influence. From this perspective, Washington could pressure South American countries to align with its directives, limiting partnerships with rivals (such as China) and seeking privileged access to strategic resources (such as the Amazon, fresh water, minerals, and agricultural commodities). Some indications are already emerging: Trump’s transactional approach mentioned by Nye included pressures on Canada and Mexico regarding border and trade issues, under the threat of commercial sanctions. It would not be unthinkable to adopt a hard line, for example, with regard to Brazilian environmental policies (linked to the Amazon) or Brazil’s relations with China, using tariffs or incentives as leverage—a sort of geopolitics of economic coercion.
On the other hand, Brazil and its neighbors could also attempt to take advantage of the Sino–North American competition. If the USA is distracted consolidating its hemispheric “hard power” hegemony (even with annexation fantasies in the north), powers such as China may advance their economic presence in South America through investments and trade (Belt and Road, infrastructure financing)—which is already happening. This would constitute an indirect neocolonial dispute in the South: Chinese loans and investments versus American demands and agreements, partly reminiscent of the nineteenth-century imperial competition (when the United Kingdom, USA, and others competed for Latin American markets and resources).
From a conceptual standpoint, Mackinder might classify South America as part of the “Outer Crescent” (external insular crescent)—peripheral to the great Eurasian “World-Island,” yet still crucial as a source of resources and a strategic position in the South Atlantic and Pacific. If the USA consolidates an informal empire in the Americas, it would be reinforcing its “insular bastion” far from the Eurasian Heartland, a strategy that Mackinder once suggested for maritime powers: to control islands and peripheral continents to compensate for the disadvantage of not controlling the Heartland. However, an excessive US dominance in the South could lead to local resistance and alternative alignments, unbalancing the region.
Kjellén would add that, for Brazil to preserve its decision-making sovereignty, it will need to strengthen its autarky and internal cohesion—in other words, reduce vulnerabilities (economic, military, social) that external powers might exploit [4]. Meanwhile, Mahan might point out the importance for Brazil of controlling its maritime routes and coastlines (South Atlantic) to avoid being at the mercy of a naval power like the USA. And Ratzel would remind us that states that do not expand their influence tend to be absorbed by foreign influences—which, in the context of Brazil, does not mean conquering neighboring territories, but rather actively leading South American integration to create a bloc more resilient to external intrusion.
In summary, South America finds itself in a more competitive and segmented world, where major players are resurrecting practices from past eras. The notion of “neocolonization” here does not imply direct occupation, but rather mechanisms of dependency: whether through unequal economic agreements or through diplomatic or military pressure for alignment. Brazil, as the largest economy and territory on the subcontinent, will have to navigate with heightened caution. A new global power balance, marked by the division of spheres of influence among the USA, China, and Russia, may reduce the sovereign maneuvering space of South American countries unless they act jointly. Thus, theoretical reflection suggests the need for South–South strategies, reinforcement of regional organizations, and diversification of partnerships to avoid falling into modern “neocolonial traps.”
Conclusion
The emerging post–re-election geopolitical conjuncture of Donald Trump signals a return to classical geopolitical principles, after several decades of predominance of institutional liberal views. We witness the revaluation of concepts such as spheres of influence, exchanges of protection for resources, naval power versus land power, and disputes over territory and raw materials—all central themes in the writings of Mackinder, Mahan, Kjellén, and Ratzel at the end of the nineteenth and the beginning of the twentieth century. An impartial analysis of these events, in light of these theories, shows internal coherence in Trump’s actions: although controversial, they follow a logic of maximizing national interest and the relative power of the USA on the world stage, even at the expense of established principles and alliances.
Halford Mackinder reminds us that, in a closed world with no new lands to conquer, the great powers will seek to redistribute the world among themselves [3]. This seems to manifest in the direct understandings between the USA and Russia over the fate of Ukraine, and in American ambitions in the Arctic and the Western Hemisphere. Alfred Mahan emphasizes that the control of the seas and strategic positions ensures supremacy—we see reflections of this in Trump’s obsession with Greenland (Arctic) and the possible neglect of the importance of maintaining NATO (and therefore the North Atlantic) as a cohesive bloc, something that Mahan’s theory would criticize due to the risk of a naval vacuum. Rudolf Kjellén and Friedrich Ratzel provide the framework to understand the more aggressive facet of expansionist nationalism: the idea of the State as an organism that needs to grow, secure resources, and seek self-sufficiency explains everything from the extortionate agreement imposed on Ukraine to the annexation rhetoric regarding Canada.
The potential consequences are profound. In the short term, we may witness a precarious ceasefire in the Ukraine war, with consolidated Russian territorial gains and Ukraine economically tied to the USA, but without formal military protection—a fragile “armed peace.” Western Europe, alarmed, may accelerate its independent militarization, perhaps marking the beginning of European defense autonomy, as is already openly debated [1]. At the far end of the globe, American activism in the Arctic and the Americas may reshape alliances: countries like Canada, once aligned with Washington, might seek to guarantee their sovereignty by distancing themselves from it; powers like China could take advantage of the openings to increase their presence in Latin America and Africa through economic diplomacy; and emerging countries of the Global South may have to choose between submitting to new “guardianships” or strengthening South–South cooperation.
Ultimately, the current situation reinforces the relevance of studying geopolitics through historical lenses. The actions of the Trump administration indicate that, despite all technological and normative advances, the competition for geographic power has not disappeared—it has merely assumed new formats. Academic impartiality obliges us not to prematurely judge whether these strategies will be successful or beneficial, but history and theory warn that neo-imperial movements tend to generate counter-reactions. As Mackinder insinuated, “every shock or change anywhere reverberates around the world,” and a sudden move by a superpower tends to provoke unforeseen adjustments and chain conflicts. It remains to be seen how the other actors—including Brazil and its neighbors—will adapt to this new chapter in the great struggle for global power, in which centuries-old theories once again have a surprising explanatory power over present events.
Bibliography
[1] A Referência. (2025). Europa calcula o custo de se defender sem os EUA: 300 mil soldados e 250 bilhões de euros a mais. Recuperado em 3 de março de 2025, de https://areferencia.com/europa/europa-calcula-o-custo-de-se-defender-sem-os-eua-300-mil-soldados-e-250-bilhoes-de-euros-a-mais/
[2] Brexit Institute. (2025). What happens if Trump invades Greenland? Recuperado em 3 de março de 2025, de https://dcubrexitinstitute.eu/2025/01/what-happens-if-trump-invades-greenland/
[3] Fettweis, C. (2025). Mackinder and Angell. Recuperado em 3 de março de 2025, de https://cfettweis.com/wp-content/uploads/Mackinder-and-Angell.pdf
[4] Diva-Portal. (2025). The geopolitics of territorial relativity. Poland seen by Rudolf Kjellén. Recuperado em 3 de março de 2025, de https://www.diva-portal.org/smash/get/diva2:1696547/FULLTEXT02
[5] Geopolitical Monitor. (2025). The Russo-Ukrainian War and Mackinder’s Heartland Thesis. Recuperado em 3 de março de 2025, de https://www.geopoliticalmonitor.com/the-ukraine-war-and-mackinders-heartland-thesis/
[6] Instituto Humanitas Unisinos. (2025). Trump obriga Zelensky a hipotecar a exploração de minerais críticos em troca do seu apoio. Recuperado em 3 de março de 2025, de https://www.ihu.unisinos.br/648986-trump-obriga-zelensky-a-hipotecar-a-exploracao-de-minerais-criticos-em-troca-do-seu-apoio
[7] Politico. (2025). Trump’s annexation fixation is no joke, Trudeau warns. Recuperado em 3 de março de 2025, de https://www.politico.com/news/2025/02/07/canada-trudeau-trump-51-state-00203156
[8] The Daily Beast. (2025). Top Trump Adviser Moves Goalpost for Ukraine to End War. Recuperado em 3 de março de 2025, de https://www.thedailybeast.com/top-trump-adviser-moves-goalpost-for-ukraine-to-end-war/
[9] The Geostrata. (2025). Alfred Thayer Mahan and Supremacy of Naval Power. Recuperado em 3 de março de 2025, de https://www.thegeostrata.com/post/alfred-thayer-mahan-and-supremacy-of-naval-power
[10] U.S. Department of State. (2025). Mahan’s The Influence of Sea Power upon History: Securing International Markets in the 1890s. Recuperado em 3 de março de 2025, de https://history.state.gov/milestones/1866-1898/mahan
[11] Britannica. (2025a). Friedrich Ratzel | Biogeography, Anthropogeography, Political Geography. Recuperado em 3 de março de 2025, de https://www.britannica.com/biography/Friedrich-Ratzel
[12] Britannica. (2025b). Lebensraum. Recuperado em 3 de março de 2025, de https://www.britannica.com/topic/Lebensraum
[13] Britannica. (2025c). Rudolf Kjellén. Recuperado em 3 de março de 2025, de https://www.britannica.com/biography/Rudolf-Kjellen
[14] Wikipedia (ZH). (2025). Rudolf Kjellén. Recuperado em 3 de março de 2025, de https://zh.wikipedia.org/wiki/w:Rudolf_Kjell%C3%A9n
[15] Wikipedia. (2025). Lebensraum. Recuperado em 3 de março de 2025, de https://en.wikipedia.org/wiki/Lebensraum
[16] YouTube. (2025). Trump says Ukraine 'unlikely to get all land back' or join NATO [Vídeo]. Recuperado em 3 de março de 2025, de https://www.youtube.com/watch?v=BmHzAVLhsXU
[17] U.S. Naval Institute. (2025). Operation World Peace. Recuperado em 3 de março de 2025, de https://www.usni.org/magazines/proceedings/1955/june/operation-world-peace
[18] Emissary. (2024). Trump’s Greenland and Panama Canal Threats Are a Throwback to an Old, Misguided Foreign Policy. Recuperado em 3 de março de 2025, de https://carnegieendowment.org/emissary/2025/01/trump-greenland-panama-canal-monroe-doctrine-policy?lang=en
[19] A Referência. (2025). Acordo EUA-Ucrânia está praticamente fechado, mas analistas se dividem sobre quem sairá ganhando. Recuperado em 3 de março de 2025, de https://areferencia.com/europa/acordo-eua-ucrania-esta-praticamente-fechado-mas-analistas-se-dividem-sobre-quem-saira-ganhando/
[20] Wikipedia. (2025). Geopolitik. Recuperado em 3 de março de 2025, de https://en.wikipedia.org/wiki/Geopolitik
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Eltoo
Read the paper, it's actually nice and small. You can read only up to section 4.2 and it will be enough. Done.
Ok, you don't want to. Or you tried but still want to read here.
Eltoo is a way of keeping payment channel state that works better than the original scheme used in Lightning. Since Lightning is a bunch of different protocols glued together, Eltoo can replace just the part that previously dealt with keeping the payment channel state.
Eltoo works like this: A and B want a payment channel, so they create a multisig transaction with deposits from both -- or from just one, doesn't matter. That transaction is only spendable if both cooperate. So if one of them is unresponsive or non-cooperative the other must have a way to get his funds back, so they also create an update transaction but don't publish it to the blockchain. That update transaction spends to a settlement transaction that then distributes the money back to A and B as their balances say.
If they are cooperative they can change the balances of the channel by just creating new update transactions and settlement transactions and number them like 1, 2, 3, 4 etc.
Solid arrows mean a transaction is presigned to spend only that specific previous transaction; dotted arrows mean it's a floating transaction that can spend any of the previous ones.
Why do they need an update and a settlement transaction?
Because if B publishes update2 (in which his balances were greater), A needs some time to publish update4 (the latest, which holds the correct state of balances).
Each update transaction can be spent by any newer update transaction immediately or by its own specific settlement transaction only after some time -- or some blocks.
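To make those rules concrete, here's a toy Python sketch of the spending logic just described. This is a minimal sketch, not real Bitcoin script; the class names, method names, and the CSV delay value are assumptions made up for illustration:

```python
# Toy model of Eltoo's spending rules (illustration only, not Bitcoin script).

CSV_DELAY = 144  # assumed relative timelock, in blocks, before settlement

class Settlement:
    def __init__(self, update_number):
        self.update_number = update_number  # bound to one specific update

class Update:
    def __init__(self, number):
        self.number = number  # state number: 1, 2, 3, 4 ...

    def spendable_by_update(self, other):
        # Any newer update can spend this one immediately
        # (the "dotted arrow": a floating, rebindable transaction).
        return other.number > self.number

    def spendable_by_settlement(self, settlement, blocks_since_confirmation):
        # Only this update's own settlement can spend it (the "solid
        # arrow"), and only after the CSV delay has passed.
        return (settlement.update_number == self.number
                and blocks_since_confirmation >= CSV_DELAY)

# If B publishes the old update2, A can immediately publish update4:
update2, update4 = Update(2), Update(4)
assert update2.spendable_by_update(update4)        # newer state wins
assert not update4.spendable_by_update(update2)    # old state can't override
assert update2.spendable_by_settlement(Settlement(2), 144)     # after delay
assert not update2.spendable_by_settlement(Settlement(2), 10)  # too early
```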
Hopefully you got that.
How do they close the channel?
If they're cooperative they can just agree to spend the funding transaction, that first multisig transaction I mentioned, to whatever destinations they want. If one party isn't cooperating the other can just publish the latest update transaction, wait a while, then publish its settlement transaction.
How is this better than the previous way of keeping channel states?
Eltoo is better because nodes only have to keep the last set of update and settlement transactions. Before they had to keep all intermediate state updates.
If it is so better why didn't they do it first?
Because they didn't have the idea. And also because they needed an update to the Bitcoin protocol that allowed the presigned update transactions to spend any of the previous update transactions. This protocol update is called `SIGHASH_NOINPUT`[^anyprevout], you've seen this name out there. By marking a transaction with `SIGHASH_NOINPUT` it enters a mystical state and becomes a floating transaction that can be bound to any other transaction as long as its unlocking script matches the locking script.
Good question. It can. But then it can't anymore, because Eltoo uses
OP_CHECKLOCKTIMEVERIFY
to ensure that doesn't actually check not a locktime, but a sequence. It's all arcane stuff.And then Eltoo update transactions are numbered and their lock/unlock scripts will only match if a transaction is being spent by another one that's greater than it.
Do Eltoo channels expire?
No.
What is that "on-chain protocol" they talk about in the paper?
That's just an example to guide you through how the off-chain protocol works. Read carefully or don't read it at all. The off-chain mechanics is different from the on-chain mechanics. Repeating: the on-chain protocol is useless in the real world, it's just a didactic tool.
[^anyprevout]: Later `SIGHASH_NOINPUT` was modified to fit better with Taproot and Schnorr signatures and renamed to `SIGHASH_ANYPREVOUT`.

-
@ 8671a6e5:f88194d1
2025-05-06 16:23:25"I tried pasting my login key into the text field, but no luck—it just wouldn't work. Turns out, the login field becomes completely unusable whenever the on-screen keyboard shows up on my phone. So either no one ever bothered to test this on a phone, or they did and thought, ‘Eh, who needs to actually log in anyway?’."
### Develop and evolve
Any technology or industry at the forefront of innovation faces the same struggle. Idealists, inventors, and early adopters jump in first, working to make things usable for the technical crowd. Only later do the products begin to take shape for the average user.
Bitcoin’s dropping the ball on usability (and user experience)
First, we have to acknowledge the progress we've made. Bitcoin has come a long way in terms of usability—no doubt about it. Even if I still think it’s bad, it’s nowhere near as terrible as it was ten or more years ago. The days of printing a paper wallet from some shady website and hoping it would still work months or years later are behind us. The days of buggy software never getting fixed are mostly over.
The Bitcoin technology itself made progress through many BIPs (Bitcoin Improvement Proposals), and combined with an increasing number of apps, devs, websites and related networks (Liquid, Lightning, Nostr, …) we can say that we're seeing a strong ecosystem taking shape. The ecosystem is alive and expanding, and technically, things are clearly working. The problem is that we’re still building with a mindset where developers and project managers consider usability—but don’t truly care about it in practice. They don’t lead with it. (Yes, there are always exceptions.)
All that progress looks cool: when you see the latest releases of hardware wallets, software wallets, exchanges, nostr clients and services built purely for bitcoin, you usually think we've progressed nicely. But I want to focus on the downside of all these shiny tools. Because if Bitcoin has made it this far, it’s mostly thanks to people who deeply understand its value and are stubborn enough to push through the friction. They don’t give up when the user experience sucks.
Many bitcoiners completely lost their perspective on the software front, in my opinion. We could have been so much further ahead, but we aren't, because some of the most important components on the user-facing side of Bitcoin (arguably the most important part) haven't kept pace with its popularity and possible growth. And that should be a great concern, because Bitcoin is meant to be open and accessible. The blockchain is public. This is supposed to be for everyone. This is an open ledger technology, so in theory everything is user-facing to one extent or another. Yet we fail on that front to make the glue stick. Somewhere along the way, we became easily amused by the tools we create, tools that often contain hurdles we can't see or feel, while users reject them after 5 seconds tops.
We haven't come a lot further yet, because we’ve ignored usability at its core (pun intended).
I’m not talking about usability in the “it works on my machine” sense. I’m talking about usability that meets the standard of modern apps. Think Spotify, Instagram, Uber, Gmail. Products that ordinary people use without reading a manual or digging through forums.
That’s the bar. We’re still far from it.
Bad UX scares your grandma away
… and that’s how many bitcoiners apparently like it.
Subsequently, when I say usability, I’m using it as an umbrella term. For me, it covers user experience, user interface, and real-life, full-cycle testing—from onboarding a brand new user to rolling out a new version of the app. And oh boy, our onboarding is so horrible. (“Hey wanna try bitcoin? Here’s an app that takes up to 4 minutes or more to get through, but wait, you’ll have to install a plugin, or wait, I’ll send you an on-chain transaction…”)
Take a look at the listings on Bitvocation, an excellent job board for Bitcoiners and related projects. You’ll quickly notice a pattern: almost no companies are hiring software testers. It’s marketing, more marketing, some sales, and of course, full-stack developers. But … No testers.
Because testing has become something that’s often skipped or automated in a hurry. Maybe the devs run a test locally to confirm that the feature they just built doesn’t crash outright. That’s it. And if testing does happen at a company, it’s usually shallow—focused only on the top five percent of critical bugs. The finer points that shape real user experience, like button placement, navigation flow, and responsiveness, are dumped on “the community.”
This leads to some software being rushed out to production, and only then do teams discover how many problems exist in the real world. If there’s anyone left to care, that is, since most teams are scattered all over the world and get paid by the hour by some VC firm on a small runway to a launch date.
This has real-life consequences I’ve seen for myself with new users. Like a lightning wallet having a 5+ minute onboarding time and a fat on-screen error for new users, or a hardware wallet stuck in an endless upgrade loop, just because nobody tested it on a device that was “old” (as in, one year old).
The result is clear: usability and experience testing are so low on the priority list, they may as well not exist. And that’s tragic, because the enthusiasm of new users gets crushed the moment they run into what I call Linux’plaining.
That’s when something obvious fails — like a lookup command that’s copied straight from their own help documentation but doesn’t work — and the answer you get as a user is something like: “Yeah, but first you have to…” followed by an explanation that isn’t mentioned anywhere in the interface or documentation. You were just supposed to know. No one updates the documentation, and no one cares. As most of the projects are very temporary or don’t really care if it succeeds or not, because they’re bitcoiners and bitcoin always wins. Just like PGP always was super cool and good, and users should just be smarter.
Lessons from past usability disasters
We can always learn from the past, especially when its precedents are still echoing through the systems we use today.
So here goes, some examples from the legacy / fiat industry:
Take Lotus Notes, for example: once a titan in enterprise communication software, it managed to capture about 145 million mailboxes. Its downfall is an example of what happens when you ignore and keep ignoring real-life user needs and fail to evolve with the market. Software like that doesn’t just fade, it collapses under the weight of its own inertia and bloat. If you think bitcoin can’t have that, yes… of course we don’t have a competitor in the market (hard money is hard money, not a mailbox or office software provider). But we can erode trust to the extent that it becomes LotusNotes’d.
Its archaic 1990s interface came with clunky navigation and a chaotic document management system. Users got frustrated fast—basic tasks took too long. Picture this: you're stuck in a cubicle, trying to find the calendar function in Lotus Notes while a giant office printer hisses and spits out stacks of paper behind you. The platform never made the leap to modern expectations. It failed to deliver proper mobile clients and clung to outdated tech like LotusScript and the Domino architecture, which made it vulnerable to security issues and incompatible with the web standards of the time. By 2012, IBM pulled the plug on the Lotus brand, as businesses moved en masse to cloud-based alternatives.
Another kind of usability failure has plagued PGP1 (and still does so after 34 years). PGP (Pretty Good Privacy) is a time-tested and rock-solid method for encryption and key exchange, but it’s riddled with usability problems, especially for anyone who isn’t technically inclined.
Its very nature and complexity are already steep hurdles (and yes, you can’t make it fully easy without compromising how it’s supposed to work—granted). But the real problem? Almost zero effort has gone into giving even the most eager new users a manageable learning curve. That neglect slowly killed off any real user base—except for the hardcore encryption folks who already know what they’re doing.
Ask anyone in a shopping street or the historic center of your city if they’ve heard of PGP. And on the off chance someone knows it’s not a trendy new fast-food joint called “Perfectly Grilled Poultry,” the odds of them having actually used it in the past six months are basically zero, unless you happen to bump into that one neckbeard guy in his 60s named Leonard, wearing a stained Star Wars T-shirt.
The builders of PGP made one major mistake: they never treated usability as a serious design goal (that’s normal for people knee-deep in encryption, I get that, it’s the way it is). PGP is fantastic in itself. Other companies and projects tried to build around it, but while they stumbled, tools like Signal and ProtonMail stepped in, offering the same core features of encryption and secure messaging, minus the headache. They delivered what PGP never could: powerful functionality wrapped in something regular people can actually use. Now, we’ve got encrypted communication flowing through apps like Signal, where all the complex tech is buried so deep in the background, the average user doesn’t even realize it’s there. ProtonMail went one step further, integrating PGP so cleanly that users never need to exchange keys or understand the cryptography behind it all, yet still benefit from bulletproof encryption.
There’s no debate—this shift is a good thing. History shows that unusable software fades into irrelevance. Whether due to lack of interest, failure to reach critical mass, or a competitor swooping in to eat market share, clunky tools don’t survive. Now, to be clear, Bitcoin doesn’t have to worry about that kind of threat. There’s no real competition when it comes to hard money. Unless, of course, you genuinely believe that flashy shitcoins are a viable alternative—in which case, you might as well stop reading here and go get yourself scammed on the latest Solana airdrop or whatever hype train’s leaving the station today for the degens.
The main takeaway here is that Bitcoin must avoid becoming the next Lotus Notes, bloated with features but neglected by users—or the next PGP, sidelined by its own lack of usability. That kind of trajectory would erode trust, especially if usability and onboarding keep falling behind. And honestly, we’re already seeing signs of this in bitcoin. User adoption in Europe, especially in countries like Germany, is noticeably lagging. The introduction of the EU’s MiCA regulations isn’t helping either. Most of the companies that were actually pushing adoption are now either shutting down, leaving the EU, or jumping through creative loopholes just to stay alive. And the last thing on anyone’s mind is improving UX. It takes time, effort, and specialized people to seriously think through how to build this properly, from the beginning, with ease of use and onboarding in mind. That’s a luxury most teams can’t or won’t prioritize right now, understandably, since lack of funds is still a major issue in the bitcoin space. (For people sitting on hard money, there’s surprisingly little money flowing into useful projects that aren’t hyped-up empty boxes.)
The number of nodes being set up by end users worldwide isn’t exactly skyrocketing either. Sure, there’s some growth, but let’s not overstate it. Based on Bitnodes’ snapshots taken in March of each year, we’re looking at:

- 2022: around 10,500
- 2023: around 17,000
- 2024: around 18,500
- 2025: around 21,000

(I know there are different methods of measuring these, like read-only nodes; the % change is roughly the same nonetheless.)
In my opinion, if we had non-clunky software that was actually released with proper testing and usability in mind, we could’ve easily doubled those node numbers. A bad user experience with a wallet spreads fast—and brings in exactly zero new users. The same goes for people trying to set up a miner or spin up a node, only to give up after a few frustrating steps. Sure, there are good people out there making guides and videos2 to help mitigate those hurdles, and that helps. But let’s be honest: there’s still very little “wow” factor when average users interact with most Bitcoin software. Almost every time they walk away, it’s because of one of two things—usability issues or bugs.
For the record: if a user can’t set up a wallet because the interface is so rotten or so poorly tested that they don’t know where to click or how to even select a seed word from a list, then that’s a problem — that’s a bug. Argue all you want: sure, it’s not a code-level bug and no, it’s not a system crash. But it is a usability failure. Call it onboarding friction, UX flaw, whatever fits your spreadsheet or Circus Maximus of failures in your ticketing system. Bottom line: if your software doesn’t help users accomplish its core purpose, it’s broken. It’s a bug. Pretending it’s something a copywriter or marketing team can fix is pure deflection. The solution isn’t to relabel the problem, 1990s telecom-style, just to avoid dealing with it. It’s to actually sit down, think, collaborate, work through the issue, and get real solutions out. “No, it’s not an issue, that’s how it works,” as someone from a failing (and by now defunct) wallet once told me, is not a solution.
You got 21 seconds
The user can’t be onboarded because your software has an “issue”? In my book, that’s a bug. The usual response when you report it? “Yeah, that’s not a priority.” Well, guess what? It actually is a priority. All these small annoyances, hurdles, and bits of BS still plague this industry, and they make the whole experience miserable for regular people trying it out for the first time. The first 21 seconds (yeah, you see what I did there) are the most important when someone opens new software. If it doesn’t click right away—if they’re fiddling with sats or dollar signs, or hunting for some hidden setting buried behind a tiny arrow—it’s game over. They’re annoyed. They’re gone.
And this is exactly why we’re seeing a flood of shitcoin apps sweeping new users off their feet with “faster apps” or “nicer designs”: apps that somehow can afford the UI specialists and slick, centralized setups to spread their lies and scams.
I hate to say it, but the Phantom wallet for the Solana network, for example, loaded with fake airdrop schemes and the most blatant scams, has a far better UX than most Bitcoin and Lightning wallets. Learn from it. Download that **** and get to know what we do wrong and how we can learn from the enemy.
That’s a hard truth. So, instead of just screaming “Uh, shitcooooin!” (yes, we know it is), maybe we should start learning from it. Their apps are better than ours in terms of UI and UX. They attract more people 5x faster (we know that’s also because of the fast gains and the degen hype driving the marketing), but we can’t keep ignoring that. Somehow these apps attract more people than our trustworthiness, our steady, secure, decentralized hard-money truth, does.
It’s like stepping into one of the best Italian restaurants in town—supposedly. But then the menu’s a mess, the staff is scrolling on their phones, and something smells burnt coming from the kitchen. So, what do you do? You walk out. You cross the street to the fast food joint and order a burger and fries. And as you’re walking out with your food, someone from the Italian place yells at you: “Fast food is bad!” “Yeah man, I know, I wanted a nice spaghetti aglio e olio, but here I am, digesting a cheeseburger that felt rather spongy.” (The problem has gone so deep by now that users just walk past that Italian restaurant and don’t even recognize it as a restaurant, because it doesn’t have cheeseburgers.)
Fear of the dark
Technical people, not marketeers, built bitcoin. It’s built on hundreds of small building blocks that interacted over time to form the Bitcoin network and its ever-evolving value. At one point David Chaum cooked up eCash, using blind signatures to let people send digital money anonymously — except it was still stuck on clunky centralized servers. Go back even further, to the 1970s, when Diffie, Hellman, and Rivest introduced public-key cryptography—the magic sauce that gave us secure digital signatures and authentication, making sure your messages stayed private and tamper-proof.
Fast forward to the 1990s, when peer-to-peer started to take off and decentralized networks got going. Adam Back’s Hashcash in ‘97 used proof-of-work to fight email spam, and the cypherpunks were all about sticking it to the man with privacy-first, decentralized systems and inventions like human-readable 128-bit keys3. We started to swap files over p2p networks and later, torrents.
All these parts—anonymous cash, encryption, and leaderless networks—finally clicked into place when Satoshi Nakamoto poured them into a chain of blocks, built on an ingenious “time-stamping” system: the timechain, or blockchain if you prefer. And just like that, Bitcoin was born—a peer-to-peer money system that didn’t need middlemen and actually worked without any central servers.
So yes, it’s only natural that Bitcoin and its many tools, born from math, obscurity, and cryptography, aren’t exactly user-interface darlings. That’s also their charm for me, in any case, as the core is robust and valuable beyond belief. That’s why we love to see more use, more adoption.
But that doesn’t mean we can’t squash critical “show-stopper” bugs before releasing bitcoin-related software. And it sure as hell doesn’t mean we should act like jerks when a user points out something’s broken, confusing, or just doesn’t meet expectations. We can’t be complacent about our role as builders for the next generations either: the core is hard money, and it would be a fatal mistake for the world to see it used only by some rockstars from Wall Street and their counterparts to store their debt-laden fiat. We can free people, make them better, make them elevate themselves. And yet, the people we try to elevate, we often alienate. All because we don’t test our stuff well enough. We should be so good, we blow the banking apps away. (They’re luckily blowing themselves out of the market with fiat “features” and an over-the-top use of “analytics” to measure your carbon footprint, for example.)
We should be so damn professional that someone using Bitcoin apps for a full year wouldn’t even notice any bugs, because there wouldn’t be much to get annoyed by.
So… we have to do better. I’ve seen it time and time again — on Lightning tipping apps, Nostr plugins, wallets, hardware wallets, even metal plates we somehow manage to screw up … you name it. “It works on my machine” isn’t enough anymore! Those days are over.
Even apps built with solid funding and strong dev and test teams, like fedi.xyz4, can miss the mark. The idea was good and the app itself ran fine, without too many hurdles or the usual bugs. But usability failed on a different front: there was just nothing meaningful to do in the app beyond poking around, chatting a bit, and sending a few sats back and forth. The communities it’s supposed to connect just aren’t there, or weren’t there “yet”.
It’s a beautifully designed application and a strong proof-of-concept for federated community funds. But then… nothing. No one I know uses it. Their last blog post was from the beginning of October 2024, which doesn’t bode well, writing this more than six months later. That said, they got some great onboarding going, usually under 20 seconds, which proves it can be done right (even if it was all a front-end for a more complex backend).
As you can see, “usability” is a broad term, covering technical aspects and user interface, but also use cases. Even if you have a cool app that works really well and is well thought out, users won’t use it if there’s no real substance. You can’t get that critical mass by waiting for customers to come in or communities to embrace it. They won’t, because most of these individuals have already had past experiences with bitcoin apps or services, and there’s a reason they’re not on board already.
A lot of bitcoin companies build tools for new people. Never for the lapsed people, the ones who came in, thought of it as an investment or “a coin”… then left because of a bad experience or the price going down in fiat. All the while we have software that usually isn’t so kind to new people, or causes loss of funds and time, even if they make one little “mistake” of not knowing the system beforehand.
Bitcoin’s Moby Dick
Bitcoin itself has a big issue here. The user base could grow faster, and more robustly, if there weren’t software that worked as a sort of repellent against users.
I especially see a younger and less tech-savvy audience absolutely disliking the software we have now. No matter if it’s Electrum’s desktop wallet (hardly the sexiest tool out there, although I like it myself, but it lacks some features), Sparrow, or any lightning wallet out there (save for WoS). I even saw people disliking Proton Wallet, which I personally thought of as something really slick, well-made and polished. But even that doesn’t cut it for many people, as the “account” and “wallet” system wasn’t clear enough for them. (You see, we all have the same bias: because we know bitcoin, we look at it from a perspective of “facepalm, of course it’s a wallet named ‘account’,” but when you sit next to a new user, it becomes clear that this is a hurdle. Please, Proton Wallet: name a wallet a wallet, not “account”. But most users already in bitcoin love what you’re doing.)
Naturally disliking usability
The same technically brilliant people who maintain Bitcoin and build its apps haven’t quite tapped into their inner Steve Jobs—if that person even exists in the Bitcoin space. Let’s be honest: the next iOS-style wow moment, or the kind of frictionless usability seen in Spotify or Instagram, probably won’t come from hardcore Bitcoin devs alone. In fact, some builders in the space seem to actively disregard—or even look down on—discussions about usability. Just mention names like Wallet of Satoshi (yes, we all know it’s a custodial frontend) or the need for smoother interactions with Bitcoin, and you’ll get eye-rolls or defensive rants instead of curiosity or openness.
Moving towards a better user interface for things like Sparrow or Bitcoin Core, for example, would bring all kinds of “bad things” according to some, and on top of that, bring in new users (noobs) that ask questions like: “Do you burn all these sats when I make a transaction?” (Yes, that’s a real one.)
I get the “usability sucks” gripe — fear of losing key features, dumbing things down, or opening the door to unwanted changes (like BIP proposals real bitcoiners hate) that tweak bitcoin to suit any user’s whim. Close to no one in bitcoin (really in bitcoin!) wants that, including me.
That fear, however, is largely unfounded, because Bitcoin doesn’t change without consensus. Any change that would undermine its core use or value proposition simply won’t make it through. And let’s be honest: most of the users who crave these “faster,” centralized alternatives—those drawn to slick apps, one-click solutions, and dopamine-driven UI—will either stick with fiat, ape into the shitcoin-of-the-month, or praise the shiny new CBDC once it drops (“much fast, much cool”). These degen types, chasing fiat gains and jackpot dreams, aren’t relevant to this story. No matter what we build for bitcoin, they’ll always love the fiat story and always dislike bitcoin, because it’s not a jackpot for them. (Honestly, why don’t they just gamble at a casino?)
People who fear that improving usability will somehow bring down the Bitcoin network are being a bit too paranoid—and honestly, they often don’t understand what usability or proper testing actually means.
They treat it like fluff, when in reality it's fundamental. Usability doesn't mean dumbing things down or compromising Bitcoin's core values; it means understanding why your fancy new app isn’t being used by anyone outside of your bubble. Testing is the beating heart of getting things out with confidence. Nothing is more satisfying in software building than proudly showing even your beta versions to users, knowing they’re well tested. It’s much more than clicking a few buttons and tossing your code on GitHub. It's about asking real questions: can someone outside your Telegram group actually use this, and will they be using the software at all?
If you create a Nostr app that opens an in-app browser window and then tries to log you in with NIP-05 or NIP-07 or whatever number it is that authenticates you, then you need to think about how it’s going to work in real life. Have people already visited the underlying website? Is that website using the exact same mechanism? Does it really work the way we think it does in the real world? (Some notably good things are happening with the development of Keychat, for example; I have the feeling they get it, so it’s not all bad. Yes, there are still bugs and things to improve, they’re just starting: the browser section and nostr login need some work imho.)
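To make that “real life” check concrete, here’s a minimal sketch of what such an app should verify before even offering extension login. The function is illustrative and not from any particular app, but `window.nostr.getPublicKey()` is the actual NIP-07 interface; in-app browser windows usually have no extension injected, so `window.nostr` simply won’t exist there:

```
// Minimal sketch: detect whether a NIP-07 signer is actually available
// before showing a "login with extension" button. In-app browser windows
// usually have no extension injected, so window.nostr is undefined there.
async function tryNostrLogin() {
  if (typeof window === 'undefined' || !window.nostr) {
    // No signer injected: offer another login method and say why,
    // instead of letting the user hit a dead button.
    return { ok: false, reason: 'no NIP-07 signer available' }
  }
  try {
    const pubkey = await window.nostr.getPublicKey() // real NIP-07 call
    return { ok: true, pubkey }
  } catch (err) {
    // The user rejected the prompt, or the extension errored out.
    return { ok: false, reason: String(err) }
  }
}
```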
Guess what? You can test your stuff. But it takes time and effort. The kind of effort that, if skipped, gets multiplied across thousands of people. Thousands of people wasting their time trying to use your app, hitting errors, assuming they did something wrong, retrying, googling workarounds—only to eventually realize: it’s not them. It’s a bug. A bug you didn’t catch. Because you didn’t test. And now everyone loses. And guess what? Those users? They’re not coming back.
A good example (to stay positive here) is Fountain App, whose first versions were, eh… let’s say, not so good, but which quickly evolved into a company and product that works really well, listens to its users, and fixes their bugs. The interface can still be better in my opinion, but it’s getting there. And it’s super good now.
A bad example? Alby. (Sorry to say.) It still suffers from a bloated, clunky interface and an onboarding flow that utterly confuses new or returning users. It just doesn’t get the job done. Opinions may vary, sure, but hand this app to any non-technical user and ask them to get online and do a Nostr zap. Watch what happens. If they even manage to get through the initial setup, that is.
Another example? Bitkit. When I tried transferring funds from the "savings" to the "spending" account, the wallet silently opened a Lightning channel—no warning, no explanation—and suddenly my coins were locked up. To make things worse, the wallet still showed the full balance as spendable, even though part of it was now stuck in that channel. That was in November 2024, the last time I touched Bitkit. I wasted too much time trying to figure it out and haven’t looked back since (assuming the project is even still alive; I haven’t seen them pop up anywhere).
Some metal BIP39 backup tools are great in theory but poorly executed. I bought one that didn’t even include a simple instruction on how to open it. The person I gave it to spent two hours trying to open it with a screwdriver and even attempted drilling. Turns out, it just slides open with some pressure. A simple instruction would’ve saved all that frustration.
Builders often assume users “just get it,” but a small guide could’ve prevented all the hassle. It’s a small step, but it’s crucial for a better user experience. So why not avoid such situations and put a friggin’ cheap piece of paper in the box so people know how to open it? (The creators would probably facepalm if they read this: “how can users nòt see this?”) Yeah… put a paper in there with instructions.
That’s natural: as a creator you’re “in” it, you know it inside out, so you don’t see how others could overlook something that feels so obvious to you.
Bitcoiners are extremely bad on that front.
I’ll dive deeper into some examples in part 2 of this post.
By AVB
end of part 1
If you’d like to support independent thought and writing on bitcoin, please follow this substack: https://coinos.io/allesvoorbitcoin/receive

Footnotes:
1 https://philzimmermann.com/EN/findpgp/
2 BTC sessions: set up a bitcoin node
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Hosted-channels Lightning wallet that runs in the browser
Communicates over HTTP with a server that is actually connected to the Lightning Network, but generates preimages and onions locally, doing everything like the Hosted Channels protocol says. Just the communication method changes.
Could use this library: https://www.npmjs.com/package/bolt04
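As a rough sketch of the idea (assuming the `bolt04` package can build BOLT #4 onions; the `createOnion` export, its signature, and the host’s `/forward` endpoint below are illustrative assumptions, not documented API):

```
// Sketch only: secrets are generated in the browser; the host just
// relays opaque blobs over HTTP instead of the usual wire protocol.
const crypto = require('crypto')

// Receiving: generate the invoice preimage locally, like the Hosted
// Channels protocol says; the host initially sees only the payment hash.
function newInvoiceSecret() {
  const preimage = crypto.randomBytes(32)
  const paymentHash = crypto.createHash('sha256').update(preimage).digest('hex')
  return { preimage: preimage.toString('hex'), paymentHash }
}

// Paying: build the onion locally, then POST it to the host for
// forwarding -- only the communication method changes.
async function pay(hostUrl, route, paymentHash, amountMsat) {
  const { createOnion } = require('bolt04') // hypothetical export name
  const onion = createOnion(route, paymentHash) // assumed signature
  const res = await fetch(hostUrl + '/forward', { // endpoint is made up
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ onion: onion.toString('hex'), amountMsat })
  })
  return res.json()
}
```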
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28doulas.club
A full catalog of all Brazilian doulas with data carefully scrapped from many websites that contained partial catalogs and some data manually included. All this packaged as a Couchapp and served directly from Cloudant.
This was done because the idea of doulas was good, but I spotted an issue: pregnant women should know many doulas before choosing one that matches well; therefore a full catalog with a lot of information was necessary.
This was a huge amount of work mostly wasted.
Many doulas who knew about this didn't like it and sent angry and offensive emails telling me to remove them, even though this was exactly the kind of information one should know before choosing a doula.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The monolithic approach to CouchDB views
Imagine you have an app that creates one document for each day. The doc ids are simply "2015-02-05", "2015-02-06" and so on. Nothing could be simpler. Let's say each day you record "sales", "expenses" and "events", so this is a document for a typical day in the retail management Couchapp for an orchid shop:
{ "_id": "2015-02-04", "sales": [{ "what": "A blue orchid", "price": 50000 }, { "what": "A red orchid", "price": 3500 }, { "what": "A yellow orchid", "price": 11500 }], "expenses": [{ "what": "A new bucket", "how much": 300 },{ "what": "The afternoon snack", "how much": "1200" }], "events": [ "Bob opened the store", "Lisa arrived", "Bob went home", "Lisa closed the store" ] }
Now when you want to know what happened on a specific day, you know where to look.
But you don't want only that: you want profit reports, cash flows, daily profitability, a complete log of the events, et cetera. So you create one view to turn this mess into something more useful:
```
function (doc) {
  var spldate = doc._id.split("-")
  var year = parseInt(spldate[0])
  var month = parseInt(spldate[1])
  var day = parseInt(spldate[2])

  doc.sales.forEach(function (sale, i) {
    emit(["sale", sale.what], sale.price)
    emit(["cashflow", year, month, day, i], sale.price)
  })
  doc.expenses.forEach(function (exp, i) {
    // expenses store their amount under "how much", not "price"
    emit(["expense", exp.what], exp["how much"])
    emit(["cashflow", year, month, day, i], -exp["how much"])
  })
  doc.events.forEach(function (ev, i) {
    emit(["log", year, month, day, i], ev)
  })
}
```
Then you add a reduce function with the value of `_sum` and you get a bunch of useful query endpoints. For example, you can request:

```
/_design/orchids/_view/main?startkey=["cashflow", 2014, 12]&endkey=["cashflow", 2014, 12, {}]
```

(Note the numeric key parts: the map function emits `parseInt`ed years, months and days, so querying with strings like "2014" would not match.)
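To make the reduce step concrete, here is roughly what the design document holding this view could look like. This is a sketch, not from the original post; note also that CouchDB's built-in `_sum` expects numeric values, so the string-valued "log" rows would in practice need their own view or a custom reduce, and raw log queries should pass `reduce=false`:

```
{
  "_id": "_design/orchids",
  "views": {
    "main": {
      "map": "function (doc) { /* the map function shown above */ }",
      "reduce": "_sum"
    }
  }
}
```

With the reduce enabled, querying the cashflow range above returns the net result for the period, and adding `group_level=4` breaks it down per day.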
-
@ 04c915da:3dfbecc9
2023-09-26 17:34:13For years American bitcoin miners have argued for more efficient and free energy markets. It benefits everyone if our energy infrastructure is as efficient and robust as possible. Unfortunately, broken incentives have led to increased regulation throughout the sector, incentivizing less efficient energy sources such as solar and wind to the detriment of more efficient alternatives.
The result has been less reliable energy infrastructure for all Americans and increased energy costs across the board. This naturally has a direct impact on bitcoin miners: increased energy costs make them less competitive globally.
Bitcoin mining represents a global energy market that does not require permission to participate. Anyone can plug a mining computer into power and internet to get paid the current dynamic market price for their work in bitcoin. Using cellphone or satellite internet, these mines can be located anywhere in the world, sourcing the cheapest power available.
Absent regulation, bitcoin mining naturally incentivizes the build-out of highly efficient and robust energy infrastructure. Unfortunately, that world does not exist, and burdensome regulations remain the biggest threat for US-based mining businesses. Jurisdictional arbitrage gives miners the option of moving to a friendlier country, but that naturally comes with its own costs.
Enter AI. With the rapid development and release of AI tools comes the requirement of running massive datacenters for their models. Major tech companies are scrambling to secure machines, rack space, and cheap energy to run full suites of AI enabled tools and services. The most valuable and powerful tech companies in America have stumbled into an accidental alliance with bitcoin miners: THE NEED FOR CHEAP AND RELIABLE ENERGY.
Our government is corrupt. Money talks. These companies will push for energy freedom and it will greatly benefit us all.
-
@ bbef5093:71228592
2025-05-06 16:11:35India aims to cut nuclear construction times to reach its ambitious nuclear goals

India wants to cut the construction time of its nuclear power projects from the current 10 years to a "world-class" 6 years, in order to reach its target of 100 GW of installed nuclear capacity by 2047.

According to a report by SBI Capital Markets (the investment banking subsidiary of the State Bank of India), this would help curb the cost overruns seen in the past and make the country more attractive to global investors.

The report says that with only about 8 GW of capacity currently installed and just 7 GW under construction, "significant acceleration" is needed to reach the targets.

The government has launched a "nuclear energy mission", setting aside roughly $2.3bn (€2bn) for R&D and the deployment of at least five Bharat small modular reactors (BSMRs), but it has further challenges to solve to reach its goals.

Cutting construction times is key, but the report also recommends comprehensive system-wide reforms, including faster licensing, simplified land-acquisition rules, smaller exclusion zones around plants, and greater independence for the regulator, the Atomic Energy Regulatory Board.

Given the nation's limited uranium reserves, the report says diversifying fuel sources through international agreements is essential, as is accelerating stages 2 and 3 of the Indian nuclear programme.

India's three-stage nuclear programme aims to establish a closed fuel cycle built on natural uranium, plutonium and, eventually, thorium. Stage 2 uses fast breeder reactors, which extract more energy from uranium, require less mined uranium, and turn unused uranium into new fuel. Stage 3 will run advanced reactors based on India's vast thorium reserves.

In January 2025, India's Nuclear Power Corporation (NPCIL) issued a tender for the deployment of Bharat SMRs, opening the nuclear sector to private Indian companies for the first time.

Until now, only the state-owned NPCIL has been allowed to build and operate commercial nuclear power plants in India.

The deployment of Bharat SMRs ("Bharat" means India in Hindi) is part of the "Viksit Bharat" ("Developed India") programme.

Licensing process: "drawn-out and sequential"

Details of the Bharat nuclear plant development remain unclear, but finance minister Nirmala Sitharaman said in July that it would be carried out as a joint venture between the state-owned National Thermal Power Corporation and Bharat Heavy Electricals Limited.

Sitharaman added that the government, together with the private sector, would set up a company called Bharat Small Reactors to pursue R&D on SMRs and new nuclear technologies.

The SBI report says the SMR programme needs improvement, because the licensing process is currently "drawn-out and sequential" and puts a disproportionate share of the reactor-development risk on private players.

The programme is "strategically well positioned for success" because it sets strict entry requirements, so only serious and capable players can take part.

The government should, however, introduce an indemnity clause protecting private companies against shortfalls in the supply of fuel and heavy water, which fall under the remit of the Department of Atomic Energy (DAE).

According to the report, the supply of both fuel and heavy water depends on the DAE, and "lack of access" could become a problem. Most of India's commercial nuclear plants are domestically developed pressurised heavy-water reactors.

The report states: "Addressing the existing regulatory gaps is crucial for the private sector to lead the development of 50% of the targeted 100 GW of nuclear capacity by 2047."

NPCIL recently said India intends to add another 18 reactors to the energy mix by 2031-32, raising the country's nuclear capacity to 22.4 GW.

According to International Atomic Energy Agency data, India has 21 reactors in commercial operation, which generated about 3% of the country's electricity in 2023. Six units are under construction.
Rosatom sues over cancelled Hanhikivi-1 project in Finland

Russian state nuclear corporation Rosatom has filed a lawsuit in Moscow against the Finnish companies Fortum and Outokumpu, claiming RUB 227.8bn ($2.8bn, €2.4bn) in damages over the termination of the contract for the Hanhikivi-1 nuclear power plant in Finland, according to court documents and a Rosatom statement.

Rosatom is seeking damages for the "unlawful termination of the engineering, procurement and construction (EPC) contract", breaches of the shareholders' agreement and the fuel supply contract, and the refusal to repay a loan.

Fortum told NucNet by email that it "has not received official notification of a Russian lawsuit".

In its quarterly report of 29 April 2025, Fortum said international arbitration proceedings over the Hanhikivi EPC contract are under way between Rosatom's Finnish subsidiary Raos Project and Rosatom's international division JSC Rusatom Energy International on one side, and Fennovoima (the Finnish consortium responsible for the Hanhikivi project) on the other.

In February 2025 the arbitration tribunal ruled that it had no jurisdiction over the claims against Fortum. "This decision was final, so Fortum is not a party to the arbitration proceedings," the company said.

Fortum became a minority owner of the Fennovoima project in 2015, but wrote off its entire stake in 2020.

The Fennovoima consortium, in which Rosatom held a 34% minority stake through Raos, terminated the contract for the construction of Hanhikivi-1 in May 2022, citing delays and increased risks caused by the war in Ukraine.

The project was to be based on Russia's AES-2006 pressurised water reactor design.

In April 2021 Fennovoima said the project's total investment cost had risen from €6.5-7bn to €7-7.5bn.

In August 2022 Rosatom and Fennovoima filed multibillion-euro damage claims against each other over the project's cancellation.

Fennovoima initiated international arbitration to recover €1.7bn in advance payments; Rosatom filed a €3bn counterclaim. These cases are still pending before international tribunals.
South Korean delegation travels to Czech Republic to sign nuclear contract

A South Korean delegation travels to the Czech Republic on 6 May 2025 to attend the signing of a multibillion-dollar contract for the construction of two new nuclear units at the Dukovany site, South Korea's ministry of trade, industry and energy said.

The delegation, which includes government officials and lawmakers, is heading to Prague for a two-day visit to attend the signing ceremony planned for Wednesday.

The delegation will also meet Czech prime minister Petr Fiala and senate president Milos Vystrcil to discuss the Dukovany project.

Fiala announced last week that Prague would sign the Dukovany contract with Korea Hydro & Nuclear Power (KHNP) on 7 May.

The Czech competition authority recently cleared the signing of the contract with KHNP after rejecting an appeal by France's EDF.

The authority's decision of 24 April upheld an earlier ruling that EDF had challenged after losing the tender to KHNP in July 2024.

This clears the way for signing the contract for two South Korean APR1400 reactor units at Dukovany, in the south of the Czech Republic. The contract is the largest energy investment in the country's history, worth at least CZK 400bn (€16bn, $18bn).

The contract was originally due to be signed in March, but was delayed by appeals from the losing bidders (EDF and Westinghouse), political uncertainty in South Korea, and Czech companies' localisation demands.

In January, KHNP settled its intellectual property dispute with Westinghouse, which had claimed that KHNP uses Westinghouse technology in the APR1400 reactors.

The signing will be South Korea's first overseas nuclear construction project since 2009, when KHNP built four APR1400 reactors at Barakah in the United Arab Emirates.

The Czech Republic operates six commercial reactors: four Russian VVER-440 units at Dukovany and two larger VVER-1000 units at Temelín. According to the IAEA, these units provide about 36.7% of Czech electricity generation.
US must build new reactor "as soon as possible", DOE nominee tells senate committee

The US must build a new nuclear power plant as soon as possible and advance the development, licensing and deployment of advanced reactors, the senate energy committee was told.

Ted Garrish, appearing as the nominee for DOE assistant secretary for nuclear energy, said the country needs to deploy a new reactor, whether large, small modular or micro.

There are currently no commercial nuclear plants under construction in the US; the last two, Vogtle-3 and Vogtle-4, came online in Georgia in 2023 and 2024 respectively.

"Nuclear energy is an exceptional opportunity to meet growing electricity demand reliably, affordably and securely," said Garrish, an experienced nuclear industry executive. He said the US must also develop its domestic uranium enrichment industry for national security reasons.

The international market and the potential for intergovernmental agreements should be explored on behalf of US nuclear developers and supply chains, he said, and the problem of spent fuel disposal must be solved.

In 1987 Congress designated Yucca Mountain in Nevada as the permanent repository for spent fuel, but the Obama administration halted the project in 2009.

Since the 1950s the US has accumulated some 83,000 tonnes of radioactive waste, including spent fuel, currently stored in steel and concrete casks at plant sites.

Garrish previously served as the DOE's assistant secretary for international affairs (2018-2021) and currently chairs the board of the United Advanced Nuclear Energy Association.
Other news

Slovenia urges joint work with the US on nuclear energy:
US and Croatian officials discussed cooperation on diversifying the energy supply of Central and South-Eastern Europe, with a particular focus on small modular reactors (SMRs). Croatia and Slovenia jointly own the Krško nuclear power plant in Slovenia, whose single 696 MW pressurised water reactor supplies 16% of Croatia's electricity consumption and 20% of Slovenia's. Slovenia is considering building a second unit, but postponed a referendum on the question last year.

Malawi clears restart of the Kayelekera uranium mine:
The Malawi Atomic Energy Authority has issued a radiation safety licence to Lotus (Africa) Limited, allowing the restart of the Kayelekera uranium mine, which has been idle for more than a decade, since 2014, owing to falling uranium prices and safety concerns. The mine is 85% owned by a local subsidiary of Australia's Lotus Resources. According to Lotus, the restart is fully funded, with about $43m (€37m) in capital.

Venezuela and Iran plan nuclear cooperation:
Venezuela and Iran have discussed cooperation in nuclear science and technology. According to Iranian state media, Mohammad Eslami, head of the Atomic Energy Organization of Iran, and Alberto Quintero, Venezuela's deputy minister of science, discussed launching university and research programmes. Venezuela has no commercial nuclear power plant, but in 2010 it signed an agreement with Russia on the possibility of building new nuclear plants. Iran has one operating nuclear plant, Bushehr-1, with another under construction at the same site, both supplied by Russia.
-
@ b6dcdddf:dfee5ee7
2025-05-06 15:58:23You can now fund projects on Geyser using Credit Cards, Apple Pay, Bank Transfers, and more.
The best part: 🧾 You pay in fiat and ⚡️ the creator receives Bitcoin.
You heard it right! Let's dive in 👇
First, how does it work? For contributors, it's easy! Once the project creator has verified their identity, anyone can contribute with fiat methods. Simply go through the usual contribution flow and select 'Pay with Fiat'. The first contribution is KYC-free.
Why does this matter?
1. Many Bitcoiners don't want to spend their Bitcoin: 👉 Number go up (NgU) 👉 Capital gains taxes. With fiat contributions, there's no more excuse not to contribute towards Bitcoin builders and creators!
2. Non-bitcoin holders want to support projects too. If someone loves your mission but only has a debit card, they used to be stuck. Now? They can back your Bitcoin project with familiar fiat tools. Now, they can do it all through Geyser!

So, why swap fiat into Bitcoin? Because Bitcoin is borderless. Fiat payouts are limited to certain countries, banks, and red tape. By auto-swapping fiat to Bitcoin, we ensure:
🌍 Instant payouts to creators all around the world
⚡️ No delays or restrictions
💥 Every contribution is also a silent Bitcoin buy

How to enable Fiat contributions
If you're a creator, it's easy:
- Go to your Dashboard → Wallet
- Click "Enable Fiat Contributions"
- Complete a quick ID verification (required by our payment provider)
✅ That's it — your project is now open to global fiat supporters.

Supporting Bitcoin adoption
At Geyser, our mission is to empower Bitcoin creators and builders. Adding fiat options amplifies that mission. It brings more people into the ecosystem while staying true to what we believe:
⚒️ Build on Bitcoin
🌱 Fund impactful initiatives
🌎 Enable global participation
**Support projects with fiat now!** We've compiled a list of projects that currently have fiat contributions enabled. If you've been on the fence about supporting them because you didn't want to spend your Bitcoin, now's the time to make your first contribution!
Education - Citadel Dispatch: https://geyser.fund/project/citadel - @FREEMadeiraOrg: https://geyser.fund/project/freemadeira - @MyfirstBitcoin_: https://geyser.fund/project/miprimerbitcoin
Circular Economies - @BitcoinEkasi: https://geyser.fund/project/bitcoinekasi - Madagascar Bitcoin: https://geyser.fund/project/madagasbit - @BitcoinChatt : https://geyser.fund/project/bitcoinchatt - Uganda Gayaza BTC Market: https://geyser.fund/project/gayazabtcmarket
Activism - Education Bitcoin Channel: https://geyser.fund/project/streamingsats
Sports - The Sats Fighter Journey: https://geyser.fund/project/thesatsfighterjourney
Culture - Bitcoin Tarot Cards: https://geyser.fund/project/bitcointarotcard
originally posted at https://stacker.news/items/973003
-
@ 2e8970de:63345c7a
2025-05-06 15:13:49https://www.epi.org/blog/wage-growth-since-1979-has-not-been-stagnant-but-it-has-definitely-been-suppressed/
originally posted at https://stacker.news/items/972959