-
@ 502ab02a:a2860397
2025-05-07 01:08:58
This week, let's call it a little intermission to relax and de-stress. Let's go back in time for a bit, like Star Wars releasing episodes 4-5-6 and then going back to 1-2-3. Haha.
Have you ever heard of the Nuremberg Trials? A quick recap: Nuremberg (Nürnberg) is the name of a city in Germany that became the stage for historic trials after World War II.
The "Nuremberg Trials" were the prosecutions of those involved in the war crimes of Nazi Germany after World War II. In 1945–46, the victorious Allies put Nazi leaders, politicians, doctors, and scientists on trial. The charges were not just about killing people; they included violations of basic human morality, such as medical experiments performed on prisoners without their consent.
Out of these trials came the ethical principles known as the "Nuremberg Code", which became the foundation of modern medical experimentation. The heart of the code is "informed consent": if you want to do anything to someone's body, you must have their willing consent, given with complete information. That is the lesson drawn from the wounds of a world war.
But then... in 2020 the world entered an era in which some people said we "must trust the experts". Anyone who worried = someone who doesn't care about society. Anyone who asked too many questions = anti-science. Voluntary consent began to become merely symbolic.
Lately we may have heard news or theories on the internet about the term "Nuremberg 2.0", popping up in quiet threads, underground clips, and odd panel discussions nobody wants to cite on a TED Talk stage. The groups using this term usually mean a demand to "put on trial" or "hold accountable" the politicians, scientists, doctors, or organizations involved in issuing mandates, coercion, censoring information that contradicted the official line, or publishing information without transparency. They see those policies as violating people's rights and freedoms to a degree comparable to "crimes against humanity", hence the proposal of "Nuremberg 2.0".
They are not just demanding transparency; they want a review of who exactly violated the ethical principles the world agreed upon 80 years ago.
Nuremberg was the trial of people who deliberately used state power to kill. Nuremberg 2.0 is a warning that "using fear to dominate freedom" may not be all that different.
Now, have you ever wondered what any of this has to do with food?
Because since the pandemic crisis, we have started to see "monopolized science" running the game. Tech companies have moved into food. The new name for a steak is "alternative protein". Lab-grown food has become "the salvation of the planet". Chemicals are packed in to replace real meat, with a label that reads "eco-friendly, safe, sustainable".
But if you look a little deeper, some people have started to see something far more chilling, because it is a "health control system" that uses "fear" as its starter culture and "monopolized science" as its mechanism.
From there... everything gets served up beautifully as "innovation for the future", whether it's a new kind of supplement, lab-grown meat, or food you don't even need to chew.
Just think about it for fun: what if nutrition could be controlled, the same way we were once forced into certain things? One day we may be asked to "eat" whatever the future health system says is good for us. And if you say you don't want to... they may not forbid it, but your health app will warn you that "your behavior puts this planet at risk", your health credit score will drop, and that discount on microbial-protein meal boxes will never land in your account again.
Right... it doesn't look like coercion, but it is the construction of a "one-way system" that makes anyone who wants to step out of line feel like they're walking off a cliff.
Nuremberg 2.0 is therefore not just about the past or a pandemic. It is a mirror reflecting that "if we don't learn from history, we may end up eating it again for dinner."
The future of food may not be on the plate, but in policy, in companies producing protein out of thin air, in capital that has bought up scientists across the entire field. And if we close our eyes once more, many fear that... a new round of Nuremberg trials may never be able to happen, because this time the culprits won't be holding guns; they may be holding world-class nutrition certifications instead.
#pirateketo #กูต้องรู้มั๊ย #ม้วนหางสิลูก #siamstr
-
@ a39d19ec:3d88f61e
2025-04-22 12:44:42
The debate about migration, border security, and deportations in Germany is usually conducted emotionally. Anyone who demands that illegal immigrants be deported quickly finds themselves accused of racism. Yet this accusation is not only factually unfounded, it turns reality on its head: in truth, it is precisely those who suspect a racist motive behind every call for legal certainty who themselves judge first and foremost by skin color, origin, or nationality.
The law stands above emotions
Germany is a state under the rule of law. That means rules cannot be interpreted according to gut feeling or the political mood of the day; they must rest on clear legal foundations. One of these principles is anchored in Article 16a of the Basic Law (Grundgesetz). It states:
"Paragraph 1 [the right to asylum] may not be invoked by anyone who enters from a member state of the European Communities or from another third state in which the application of the Convention Relating to the Status of Refugees and the European Convention on Human Rights is assured."
This means that anyone who enters Germany via safe third countries has no claim to asylum. Whoever stays nevertheless is in the country illegally and is subject to the applicable rules on repatriation. The demand for deportations is therefore nothing other than the demand that law and order be upheld.
The inversion of the concept of racism
Whoever argues that German asylum and residence law should be strictly enforced, while making no distinction by origin or skin color, acts neutrally. Those, however, who see a racist undertone in such a call for the rule of law are projecting their own thought patterns onto others: they assume the debate is being conducted exclusively along ethnic, racial, or national lines, and precisely that is a racist way of thinking.
Someone who criticizes illegal immigration does so not because the origin of the people concerned interests them, but because they respect the rule of law. By contrast, someone who senses racism behind this criticism evidently perceives first and foremost the "race" or origin of the people in question and reduces them to it.
Financial burden instead of ideological debate
Beyond the legal dimension there is also an economic one. The German welfare state is based on a principle of solidarity: citizens pay into the system in order to support one another in hard times. This prosperity was built up over generations by those who have lived here for a long time. The priority is therefore to distribute the available means first among those who contribute to sustaining this system through taxes, social contributions, and work, not among those who enter the system through illegal entry and without economic contribution of their own.
This is not an ideological question but a purely economic calculation. A social system can only function sustainably if it is not burdened without limit. If Germany had no clear rules on immigration and deportation, this would inevitably lead to the overloading of the welfare state, with negative consequences for everyone.
Social patriotism
Another important aspect is protecting the work of those generations who laboriously rebuilt Germany after the Second World War. While it is often emphasized that Germans may claim no moral inheritance from the time before 1945, apart from responsibility for the Holocaust, it is all the more important to respect the new inheritance built after 1945, which rests on diligence, discipline, and hard work. Reconstruction was a collective achievement of German people, and its fruits must not be handed out carelessly; they should primarily benefit those who helped create this foundation or have carried it through the generations.
The rule of law is not negotiable
Whoever advocates a consistent deportation practice does so not out of racist motives but out of respect for the rule of law and the economic foundations of the country. The accusation of racism in this context is therefore not only wrong; it exposes a selective perception along racial criteria in those who raise it.
-
@ 39cc53c9:27168656
2025-04-09 07:59:35
The new website is finally live! I put in a lot of hard work over the past months on it. I'm proud to say that it's out now and it looks pretty cool, at least to me!
Why rewrite it all?
The old kycnot.me site was built using Python with Flask about two years ago. Since then, I've gained a lot more experience with Golang and coding in general. Trying to update that old codebase, which had a lot of design flaws, would have been a bad idea. It would have been like building on an unstable foundation.
That's why I made the decision to rewrite the entire application. Initially, I chose to use SvelteKit with JavaScript. I did manage to create a stable site that looked similar to the new one, but it required JavaScript to work. As I kept coding, I started feeling like I was repeating "the Python mistake". I was writing the app in a language I wasn't very familiar with (just like when I was learning Python at that moment), and I wasn't happy with the code. It felt like spaghetti code all the time.
So, I made a complete U-turn and started over, this time using Golang. While I'm not as proficient in Golang as I am in Python now, I find it to be a very enjoyable language to code with. Most of my recent projects have been written in Golang, and I'm getting the hang of it. I tried to make the best decisions I could and structure the code as well as possible. Of course, there's still room for improvement, which I'll address in future updates.
Now I have a more maintainable website that can scale much better. It uses a real database instead of a JSON file like the old site, and I can add many more features. Since I chose to go with Golang, I made the "tradeoff" of not using JavaScript at all, so all the rendering load falls on the server. But I believe it's a tradeoff that's worth it.
What's new
- UI/UX - I've designed a new logo and color palette for kycnot.me. I think it looks pretty cool and cypherpunk. I am not a graphic designer, but I think I did a decent job, and I put a lot of thought into making it pleasant!
- Point system - The new point system provides more detailed information about the listings, and can be expanded to cover additional features across all services. Anyone can request a new point!
- ToS Scraper: I've implemented a powerful automated terms-of-service scraper that collects all the ToS pages from the listings. It saves you from the hassle of reading the ToS by listing the lines that are suspiciously related to KYC/AML practices. This is still in development and it will improve for sure, but it works pretty well right now!
- Search bar - The new search bar allows you to easily filter services. It performs a full-text search on the Title, Description, Category, and Tags of all the services. Looking for VPN services? Just search for "vpn"!
- Transparency - To be more transparent, all discussions about services now take place publicly on GitLab. I won't be answering any e-mails (an auto-reply will prompt you to write to the corresponding GitLab issue). This ensures that all service-related matters are publicly accessible and recorded. Additionally, there's a real-time audits page that displays database changes.
- Listing Requests - I have upgraded the request system. The new form allows you to directly request services or points without any extra steps. In the future, I plan to enable requests for specific changes to parts of the website.
- Lightweight and fast - The new site is lighter and faster than its predecessor!
- Tor and I2P - At last! kycnot.me is now officially on Tor and I2P!
How?
This rewrite has been a labor of love. In the end, I've been working on this for more than 3 months now. I don't have a team, so I work by myself in my free time, but I find great joy in helping people on their private journey with cryptocurrencies. Making it easier for individuals to use cryptocurrencies without KYC is a goal I am proud of!
If you appreciate my work, you can support me through the methods listed here. Alternatively, feel free to send me an email with a kind message!
Technical details
All the code is written in Golang; the website uses the chi router for routing. I also use BigCache for caching database requests. There is 0 JavaScript, so all the rendering load falls on the server. This means it needed to be efficient enough not to drown under just a few users, since the old site was reporting about 2M requests per month on average (note that these are not unique users).
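To make that setup a bit more concrete, here's a minimal sketch (not the actual kycnot.me code) of a chi route that serves a page from BigCache when possible and only falls back to the database otherwise. The route path and the renderServicesFromDB helper are invented for illustration.

```go
package main

import (
	"net/http"
	"time"

	"github.com/allegro/bigcache"
	"github.com/go-chi/chi/v5"
)

// renderServicesFromDB is a made-up placeholder for the real work:
// query the database and render the HTML template for the listings page.
func renderServicesFromDB() []byte {
	return []byte("<html><body>service listings go here</body></html>")
}

func main() {
	// In-memory cache; entries expire after 10 minutes.
	cache, err := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))
	if err != nil {
		panic(err)
	}

	r := chi.NewRouter()
	r.Get("/services", func(w http.ResponseWriter, _ *http.Request) {
		// Serve the cached page if we have one...
		if page, err := cache.Get("services"); err == nil {
			w.Write(page)
			return
		}
		// ...otherwise hit the database once and cache the rendered result.
		page := renderServicesFromDB()
		cache.Set("services", page)
		w.Write(page)
	})

	http.ListenAndServe(":8080", r)
}
```

The point is simply that repeated hits for the same page stay out of the database for the duration of the cache window.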
The database is running with mariadb, using gorm as the ORM. This is more than enough for this project. I started working with an `sqlite` database, but I ended up migrating to mariadb since it works better with JSON.

The scraper uses chromedp combined with a series of keywords, regex and other logic. It runs every 24h and scrapes all the services. You can find the scraper code here.
The frontend is written using Golang Templates for the HTML, and TailwindCSS plus DaisyUI for the CSS classes framework. I also use some plain CSS, but it's minimal.
The request forms are the only part of the project that requires JavaScript to be enabled. It is needed for parsing some form fields that are a bit complex, and for the "captcha", which is a simple Proof of Work that runs in your browser, designed to avoid spam. For this, I use mCaptcha.
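For a feel of what such a proof-of-work "captcha" does, here's a toy sketch in Go: the client grinds through nonces until a hash meets a difficulty target, and the server can verify the answer with a single hash. This only illustrates the generic idea; mCaptcha's actual algorithm and parameters differ.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"strings"
)

// solve grinds through nonces until sha256(challenge || nonce) starts with
// `difficulty` hex zeros. Verifying a submitted nonce costs only one hash.
func solve(challenge string, difficulty int) uint64 {
	prefix := strings.Repeat("0", difficulty)
	for nonce := uint64(0); ; nonce++ {
		buf := make([]byte, 8)
		binary.BigEndian.PutUint64(buf, nonce)
		sum := sha256.Sum256(append([]byte(challenge), buf...))
		if strings.HasPrefix(fmt.Sprintf("%x", sum), prefix) {
			return nonce
		}
	}
}

func main() {
	// The "challenge" would normally come from the server along with the form.
	nonce := solve("example-challenge", 4)
	fmt.Println("found nonce:", nonce)
}
```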
-
@ a9e24cc2:597d8933
2025-05-07 01:06:44𝐌𝐄𝐒𝐒𝐀𝐆𝐄 BlackHat_Nexus 𝐅𝐎𝐑 𝐀𝐍𝐘 𝐊𝐈𝐍𝐃 𝐎𝐅 𝐒𝐄𝐑𝐕𝐈𝐂𝐄 𝐑𝐄𝐂𝐎𝐕𝐄𝐑 𝐘𝐎𝐔𝐑 𝐀𝐂𝐂𝐎𝐔𝐍𝐓Fast, Available and Reliable for any of the following services 🤳 Recovery of lost funds🤳 Facebook Hack🤳 WhatsApp Hack 🤳 Instagram Hack🤳 Spying🤳 Windows Hacking🤳 Recover lost wallet 🤳 Credit score trick 🤳 Recover Password🤳 Gmail Hack🤳 SnapChat Hacking 🤳 Cellphone Monitoring 🤳 Tik Tok Hack🤳 Twitter Hack🤳 Lost Phone Tracking🤳 Lost IaptopTracking🤳 Lost Car Tracking🤳 Cloning WhatsApp🤳 Cryptocurrency Wallet🤳 Hacking🤳 Iphone unlock 🤳 Got banned 🤳 Private Number available🤳 Telegram hacking 🤳 Websites hacking 🤳 Hack University 🤳 IOS and Android hack 🤳 Wifi Hacking 🤳 CCTV hacking🤳 Hack Bot Game 🤳 Free fire hack 🤳 Changing of school grades 🤳 Cards 💳hackingNo 🆓 services 🚫WhatsApp +1 3606068592Send a DM https://t.me/BlackHat_Nexus@BlackHat_Nexus
-
@ 752f5d10:88491db3
2025-05-07 00:25:29Opinion about Trendo: Forex Trading & Broker (android)
L o s i n g $111,555 overnight was a crushing blow. The account lockout and subpar customer service that followed left me stunned. However, I gained invaluable insights from this ordeal : Conduct thorough research before committing Verify credentials and read reviews from multiple sources Be c a u t i o u s of un realistic promises Demand responsive and reliable support. Fortunately, I found chelsy__desmarais__54__A T__g=m=a=i=l__D=o=t__c_o_m, which helped me re-cover from this financial setback. W_h_a_t_s_A_p_p ; +1=8=5=9=4=3=6=4=2=1=1
WalletScrutiny #nostrOpinion
-
@ 7bdef7be:784a5805
2025-04-02 12:37:35
The following script tries, using nak, to find out the last ten people who have followed a `target_pubkey`, sorted by the most recent. It's possible to shorten `search_timerange` to speed up the search.

```
#!/usr/bin/env fish

# Target pubkey we're looking for in the tags
set target_pubkey "6e468422dfb74a5738702a8823b9b28168abab8655faacb6853cd0ee15deee93"

set current_time (date +%s)
set search_timerange (math $current_time - 600) # 600 seconds; widen this (e.g. 86400 = 24 hours) to search further back

set pubkeys (nak req --kind 3 -s $search_timerange wss://relay.damus.io/ wss://nos.lol/ 2>/dev/null | \
    jq -r --arg target "$target_pubkey" '
        select(. != null and type == "object" and has("tags")) |
        select(.tags[] | select(.[0] == "p" and .[1] == $target)) |
        .pubkey
    ' | sort -u)

if test -z "$pubkeys"
    exit 1
end

set all_events ""
set extended_search_timerange (math $current_time - 31536000) # One year

for pubkey in $pubkeys
    echo "Checking $pubkey"
    set events (nak req --author $pubkey -l 5 -k 3 -s $extended_search_timerange wss://relay.damus.io wss://nos.lol 2>/dev/null | \
        jq -c --arg target "$target_pubkey" '
            select(. != null and type == "object" and has("tags")) |
            select(.tags[][] == $target)
        ' 2>/dev/null)

    set count (echo "$events" | jq -s 'length')
    if test "$count" -eq 1
        set all_events $all_events $events
    end
end

if test -n "$all_events"
    echo -e "Last people following $target_pubkey:"
    echo -e ""

    set sorted_events (printf "%s\n" $all_events | jq -r -s '
        unique_by(.id) | sort_by(-.created_at) | .[] | @json
    ')
    for event in $sorted_events
        set npub (echo $event | jq -r '.pubkey' | nak encode npub)
        set created_at (echo $event | jq -r '.created_at')
        if test (uname) = "Darwin"
            set follow_date (date -r "$created_at" "+%Y-%m-%d %H:%M")
        else
            set follow_date (date -d @"$created_at" "+%Y-%m-%d %H:%M")
        end
        echo "$follow_date - $npub"
    end
end
```
-
@ 2183e947:f497b975
2025-03-29 02:41:34
Today I was invited to participate in the private beta of a new social media protocol called Pubky, designed by a bitcoin company called Synonym with the goal of being better than existing social media platforms. As a heavy nostr user, I thought I'd write up a comparison.
I can't tell you how to create your own accounts because it was made very clear that only some of the software is currently open source, and how this will all work is still a bit up in the air. The code that is open source can be found here: https://github.com/pubky -- and the most important repo there seems to be this one: https://github.com/pubky/pubky-core
You can also learn more about Pubky here: https://pubky.org/
That said, I used my invite code to create a pubky account and it seemed very similar to onboarding to nostr. I generated a private key, backed up 12 words, and the onboarding website gave me a public key.
Then I logged into a web-based client and it looked a lot like twitter. I saw a feed for posts by other users and saw options to reply to posts and give reactions, which, I saw, included hearts, thumbs up, and other emojis.
Then I investigated a bit deeper to see how much it was like nostr. I opened up my developer console and navigated to my networking tab, where, if this was nostr, I would expect to see queries to relays for posts. Here, though, I saw one query that seemed to be repeated on a loop, which went to a single server and provided it with my pubkey. That single query (well, a series of identical queries to the same server) seemed to return all posts that showed up on my feed. So I infer that the server "knows" what posts to show me (perhaps it has some sort of algorithm, though the marketing material says it does not use algorithms) and the query was on a loop so that if any new posts came in that the server thinks I might want to see, it can add them to my feed.
Then I checked what happens when I create a post. I did so and looked at what happened in my networking tab. If this was nostr, I would expect to see multiple copies of a signed message get sent to a bunch of relays. Here, though, I saw one message get sent to the same server that was populating my feed, and that message was not signed; it was a plaintext copy of my message.
I happened to be in a group chat with John Carvalho at the time, who is associated with pubky. I asked him what was going on, and he said that pubky is based around three types of servers: homeservers, DHT servers, and indexer servers. The homeserver is where you create posts and where you query for posts to show on your feed. DHT servers are used for censorship resistance: each user creates an entry on a DHT server saying what homeserver they use, and these entries are signed by their key.
As for indexers, I think those are supposed to speed up the use of the DHT servers. From what I could tell, indexers query DHT servers to find out what homeservers people use. When you query a homeserver for posts, it is supposed to reach out to indexer servers to find out the homeservers of people whose posts the homeserver decided to show you, and then query those homeservers for those posts. I believe they decided not to look up what homeservers people use directly on DHT servers directly because DHT servers are kind of slow, due to having to store and search through all sorts of non-social-media content, whereas indexers only store a simple db that maps each user's pubkey to their homeserver, so they are faster.
Based on all of this info, this seems to be the series of steps to populate your feed (a rough code sketch of both flows follows the second list below):
- you tell your homeserver your pubkey
- it uses some sort of algorithm to decide whose posts to show you
- then looks up the homeservers used by those people on an indexer server
- then it fetches posts from their homeservers
- then your client displays them to you
To create a post, this is the series of steps:
- you tell your homeserver what you want to say to the world
- it stores that message in plaintext and merely asserts that it came from you (it's not signed)
- other people can find out what you said by querying for your posts on your homeserver
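As a rough sketch of those two flows, here is what a client might do. Every endpoint path and helper name below is invented for illustration; the real Pubky homeserver and indexer APIs may look quite different.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// resolveHomeserver asks an indexer which homeserver a pubkey lives on
// (the indexer builds that mapping from the signed records on the DHT).
func resolveHomeserver(indexer, pubkey string) (string, error) {
	resp, err := http.Get(fmt.Sprintf("%s/homeserver/%s", indexer, pubkey))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	host, err := io.ReadAll(resp.Body)
	return string(host), err
}

// fetchFeed asks your own homeserver for whatever posts it has decided to show you.
func fetchFeed(homeserver, myPubkey string) ([]byte, error) {
	resp, err := http.Get(fmt.Sprintf("%s/feed?pubkey=%s", homeserver, myPubkey))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

// createPost sends a plaintext, unsigned post to your homeserver, which then
// simply asserts on your behalf that it came from you.
func createPost(homeserver, myPubkey, text string) error {
	_, err := http.Post(fmt.Sprintf("%s/post?pubkey=%s", homeserver, myPubkey),
		"text/plain", strings.NewReader(text))
	return err
}

func main() {
	indexer := "https://indexer.example"  // hypothetical
	homeserver := "https://home.example"  // hypothetical
	myPubkey := "my-public-key"

	if hs, err := resolveHomeserver(indexer, "someone-elses-pubkey"); err == nil {
		fmt.Println("their homeserver:", hs)
	}
	if feed, err := fetchFeed(homeserver, myPubkey); err == nil {
		fmt.Println("feed bytes:", len(feed))
	}
	_ = createPost(homeserver, myPubkey, "hello pubky")
}
```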
Since posts on homeservers are not signed, I asked John what prevents a homeserver from just making up stuff and claiming I said it. He said nothing stops them from doing that, and if you are using a homeserver that starts acting up in that manner, what you should do is start using a new homeserver and update your DHT record to point at your new homeserver instead of the old one. Then, indexers should update their db to show where your new homeserver is, and the homeservers of people who "follow" you should stop pulling content from your old homeserver and start pulling it from your new one. If their homeserver is misbehaving too, I'm not sure what would happen. Maybe it could refuse to show them the content you've posted on your new homeserver, keep making up fake content on your behalf that you've never posted, and maybe the people who follow you would never learn you're being impersonated or have moved to a new homeserver.
John also clarified that there is not currently any tooling for migrating user content from one homeserver to another. If pubky gets popular and a big homeserver starts misbehaving, users will probably need such a tool. But these are early days, so there aren't that many homeservers, and the ones that exist seem to be pretty trusted.
Anyway, those are my initial thoughts on Pubky. Learn more here: https://pubky.org/
-
@ 2dd9250b:6e928072
2025-03-22 00:22:40
I recently saw a post where someone says that the ending of the film O Doutrinador (2019) makes no sense, because even though the protagonist blows up the Palácio dos Três Poderes, that doesn't end corruption in Brazil.
Progressives can't read and can't interpret texts correctly. The ending of O Doutrinador isn't about that; it's about the relationship between the Hero and his City.
In comic books there is a bond between the city and the superhero. Gotham City, for example, creates Batman. This is shown in The Batman (2022) and in Batman: The Dark Knight, when that boy at the end tells Batman not to run away, because he wanted to see Batman again. And Commissioner Gordon says that "Batman is what the city of Gotham needs."
Batman: The Dark Knight Rises shows the city of Gotham being taken over by corruption and by Bane's ideology. The city wastes away in immorality, and Bruce, watching from prison as the city is destroyed, decides that Batman has to come back, because if Gotham is destroyed, Batman is destroyed along with it. And that gives him the strength to escape that pit and return to save Gotham.
This is also shown in Daredevil. In the Daredevil series, Matt Murdock always says he has to defend his city, Hell's Kitchen; that Fisk will not take over the city and do whatever he wants with it. In the third season this becomes even clearer in the final fight at Fisk's mansion, where Matt shouts that now the whole city will know what he did, that the city will see the evil he is for Hell's Kitchen, because we know Fisk did everything to discredit Daredevil's image in the eyes of the citizens. So what happens at the end of O Doutrinador doesn't mean he is ending corruption when he blows up Congress; he is essentially interrupting the system's cycle, jamming one of its gears.
When you hear about Brasília, you think of the corruption of politicians, the place where the party happens, where the corrupt siphon off money collected from taxes, taxes that are centralized in the Union. So when you hear people talk about Brasília, you always think that the people who live there live alongside everything rotten that happens in Brazil.
So when the Doutrinador blows it all up, he is basically destroying the mechanism that makes Brasília dirty. He is doing that in that city, because the symbol of the city is exactly this: the farce that in that place the people will be heard and justice will be done. He is destroying the ideology that the State protects us, gives us security, health care, and education, when in truth the State only exists to privilege politicians, senior public officials, their families, and their friends, while the people suffer to sustain the political elite. The protagonist Miguel understood this when his daughter died waiting in line at SUS (the public health system).
-
@ a39d19ec:3d88f61e
2025-03-18 17:16:50
Now that the German federal regime has decided on the ruin of Germany, which will very likely be "financed" with the tool of money printing, so many thoughts on the expansion of the money supply came to me that for once I have written them down.
From a classical economic perspective, expanding the money supply always leads to price increases, because more money in circulation meets a limited quantity of goods. This can be analyzed in several steps:
1. Quantity theory of money
The classical equation of the quantity theory of money is:
M • V = P • Y
where:
- M is the money supply,
- V is the velocity of money,
- P is the price level,
- Y is real economic output (GDP).

If M rises while V and Y remain constant, P must rise, i.e. inflation arises.
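A quick numerical illustration of that identity, with made-up numbers: holding V and Y constant, a 20% expansion of M forces the implied price level P up by the same 20%.

```go
package main

import "fmt"

func main() {
	// Made-up numbers, purely to illustrate M * V = P * Y.
	V := 2.0    // velocity of money (held constant)
	Y := 1000.0 // real output (held constant)

	M1 := 500.0    // money supply before
	M2 := M1 * 1.2 // money supply expanded by 20%

	P1 := M1 * V / Y // implied price level before
	P2 := M2 * V / Y // implied price level after

	fmt.Printf("price level: %.2f -> %.2f (+%.0f%%)\n", P1, P2, (P2/P1-1)*100)
}
```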
2. The quantity of goods remains limited
The quantity of goods and services actually produced usually grows only slowly compared to the expansion of the money supply. If the money supply grows faster than the quantity of goods produced, more money is available for the same amount of goods, and prices rise.
3. Expectation effects and speculation
If companies and households expect more money to be in circulation, because central planners wanted it that way, they can anticipate rising prices. Companies raise their prices in advance, and employees demand higher wages. This can set off a self-reinforcing spiral.
4. International perspective
An increased money supply can devalue the currency if other countries keep their monetary policy stable. A weaker currency makes imports more expensive, which in turn drives prices up.
5. Criticism of the pure money-supply theory
For the sake of completeness, it must be mentioned that most modern economists, arguing on behalf of the state, hold that inflation depends not only on the money supply but also on the demand for money (e.g. in an economic crisis). Nevertheless, historical experience shows that an uncontrolled expansion of the money supply always leads to price increases in the long run, as in the hyperinflation of the Weimar Republic or in Zimbabwe.
-
@ 306555fe:fd7fdf12
2025-05-07 02:12:08{"contract_id":"f5523a8f242339bdd8585e5491a82c2d9a72ff51f7fadc3f7c6f766afe9c0f19","title":"T2- Freelance Development Agreement","content":"# Freelance Development Agreement\n\n## Parties\n\nThis Freelance Development Agreement (the \"Agreement\") is entered into as of [DATE] by and between:\n\nClient: [CLIENT NAME], with an address at [CLIENT ADDRESS] (\"Client\")\n\nDeveloper: [DEVELOPER NAME], with an address at [DEVELOPER ADDRESS] (\"Developer\")\n\n## Services\n\nDeveloper agrees to provide the following services to Client (the \"Services\"):\n\n1. Design and develop a web application according to the specifications outlined in Attachment A.\n2. Provide regular progress updates on a weekly basis.\n3. Deliver the completed project by the deadline specified in the Timeline section.","version":1,"created_at":1746583812,"signers_required":2,"signatures":[{"pubkey":"306555fee4433582b32f6d46f11b644da33896595016893d7da18c75fd7fdf12","sig":"8ad3a9e2a4ef70515853e051882ab11ebc7d56f15be46deb1520d6292711bc4eeb29c723c46a251c98134d06b8950931d19f61f6f8836e4932f0d6682fe8159e","timestamp":1746583928}]}
-
@ 8671a6e5:f88194d1
2025-05-06 16:23:25
"I tried pasting my login key into the text field, but no luck—it just wouldn't work. Turns out, the login field becomes completely unusable whenever the on-screen keyboard shows up on my phone. So either no one ever bothered to test this on a phone, or they did and thought, 'Eh, who needs to actually log in anyway?'."
### Develop and evolve
Any technology or industry at the forefront of innovation faces the same struggle. Idealists, inventors, and early adopters jump in first, working to make things usable for the technical crowd. Only later do the products begin to take shape for the average user.
Bitcoin’s dropping the ball on usability (and user experience)
First, we have to acknowledge the progress we've made. Bitcoin has come a long way in terms of usability—no doubt about it. Even if I still think it’s bad, it’s nowhere near as terrible as it was ten or more years ago. The days of printing a paper wallet from some shady website and hoping it would still work months or years later are behind us. The days of buggy software never getting fixed are mostly over.
The Bitcoin technology itself made progress through many BIPs (Bitcoin Improvement Proposals) and combined with an increasing number of apps, devs, websites and related networks (Liquid, Lightning, Nostr, ....) we can say that we're seeing a strong ecosystem going its way. The ecosystem is alive and expanding, and technically, things are clearly working. The problem is that we’re still building with a mindset where developers and project managers consider usability—but don’t truly care about it in practice. They don’t lead with it. (Yes, there are always exceptions.)
All that progress looks cool, when you see the latest releases of hardware wallets, software wallets, exchanges, nostr clients and services built purely for bitcoin, you're usually thinking that we've progressed nicely. But I want to focus on the downside of all these shiny tools. Because if Bitcoin has made it this far, it’s mostly thanks to people who deeply understand its value and are stubborn enough to push through the friction. They don’t give up when the user experience sucks.
Many bitcoiners have completely lost their perspective on the software front, in my opinion. We could have been so much further ahead, and we aren't, because some of the most important components on the user-facing side of Bitcoin (arguably the most important part) haven't kept pace with its popularity and potential growth. And that should be a great concern, because Bitcoin is meant to be open and accessible. The blockchain is public. This is supposed to be for everyone. This is an open ledger technology, so in theory everything is user-facing to one extent or another. Yet we fail on that front to make the glue stick. Somewhere along the way, we're easily amused by the tools we create, tools that often contain hurdles we can't see or feel, while users reject them after 5 seconds tops.
We haven't come a lot further yet, because we've ignored usability at its core (pun intended).
I’m not talking about usability in the “it works on my machine” sense. I’m talking about usability that meets the standard of modern apps. Think Spotify, Instagram, Uber, Gmail. Products that ordinary people use without reading a manual or digging through forums.
That’s the bar. We’re still far from it.
Bad UX scares your grandma away
… and that’s how many bitcoiners apparently like it.
Subsequently, when I say usability, I’m using it as an umbrella term. For me, it covers user experience, user interface, and real-life, full-cycle testing—from onboarding a brand new user to rolling out a new version of the app. And oh boy, our onboarding is so horrible. (“Hey wanna try bitcoin? Here’s an app that takes up to 4 minutes or more to get through, but wait, you’ll have to install a plugin, or wait I’ll send you an on-chain transaction…)
Take a look at the listings on Bitvocation, an excellent job board for Bitcoiners and related projects. You’ll quickly notice a pattern: almost no companies are hiring software testers. It’s marketing, more marketing, some sales, and of course, full-stack developers. But … No testers.
Because testing has become something that’s often skipped or automated in a hurry. Maybe the devs run a test locally to confirm that the feature they just built doesn’t crash outright. That’s it. And if testing does happen at a company, it’s usually shallow—focused only on the top five percent of critical bugs. The finer points that shape real user experience, like button placement, navigation flow, and responsiveness, are dumped on “the community.”
Which leads to some software being rushed out to production, and only then do teams discover how many problems exist in the real world. If there’s anyone left to care that is, since most teams are scattered all over the world and get paid by the hour by some VC firm on a small runway to a launch date.
This has real life consequences I’ve seen for myself with new users. Like a lightning wallet having a +5 minute onboarding time, and a fat on-screen error for the new users, or a hardware wallet stuck in an endless upgrade loop, just because nobody tested it on a device that was “old” (as in, one year old).
The result is clear: usability and experience testing are so low on the priority list, they may as well not exist. And that’s tragic, because the enthusiasm of new users gets crushed the moment they run into what I call Linux’plaining.
That’s when something obvious fails — like a lookup command that’s copied straight from their own help documentation but doesn’t work — and the answer you get as a user is something like: “Yeah, but first you have to…” followed by an explanation that isn’t mentioned anywhere in the interface or documentation. You were just supposed to know. No one updates the documentation, and no one cares. As most of the projects are very temporary or don’t really care if it succeeds or not, because they’re bitcoiners and bitcoin always wins. Just like PGP always was super cool and good, and users should just be smarter.
Lessons from the past usability disasters
We can always learn from the past especially when its precedents are still echoing through the systems we use today
So here goes, some examples from the legacy / fiat industry:
Lotus Notes, for example. Once a titan in enterprise communication software, which managed to capture about 145 million mailboxes. But its downfall is an example of what happens when you ignore and keep ignoring real-life user needs and fail to evolve with the market. Software like that doesn’t just fade, it collapses under the weight of its own inertia and bloat. If you think bitcoin can’t have that, yes… we’re of course not having a competitor in the market (hard money is hard money, not a mailbox or office software provider of course). But we can erode trust to the extent that it becomes LotusNotes’d.
Its archaic 1990s interface came with clunky navigation and a chaotic document management system. Users got frustrated fast—basic tasks took too long. Picture this: you're stuck in a cubicle, trying to find the calendar function in Lotus Notes while a giant office printer hisses and spits out stacks of paper behind you. The platform never made the leap to modern expectations. It failed to deliver proper mobile clients and clung to outdated tech like LotusScript and the Domino architecture, which made it vulnerable to security issues and incompatible with the web standards of the time. By 2012, IBM pulled the plug on the Lotus brand, as businesses moved en masse to cloud-based alternatives.
Another kind of usability failure has plagued PGP1 (and still does so after 34 years). PGP (Pretty Good Privacy) is a time-tested and rock-solid method for encryption and key exchange, but it’s riddled with usability problems, especially for anyone who isn’t technically inclined.
Its very nature and complexity are already steep hurdles (and yes, you can’t make it fully easy without compromising how it’s supposed to work—granted). But the real problem? Almost zero effort has gone into giving even the most eager new users a manageable learning curve. That neglect slowly killed off any real user base—except for the hardcore encryption folks who already know what they’re doing.
Ask anyone in a shopping street or the historic center of your city if they’ve heard of PGP. And on the off chance someone knows it’s not a trendy new fast-food joint called “Perfectly Grilled Poultry,” the odds of them having actually used it in the past six months are basically zero, unless you happen to bump into that one neckbeard guy in his 60s wearing a stained Star Wars T-shirt named Leonard.
The builders of PGP made one major mistake: they never treated usability as a serious design goal (that’s normal for people knee deep in encryption, I get that, it’s the way it is). PGP is fantastic on itself. Other companies and projects tried to build around it, but while they stumbled, tools like Signal and ProtonMail stepped in; offering the same core features of encryption and secure messaging, minus the headache. They delivered what PGP never could: powerful functionality wrapped in something regular people can actually use. Now, we’ve got encrypted communication flowing through apps like Signal, where all the complex tech is buried so deep in the background, the average user doesn’t even realize it’s there. ProtonMail went one step further even, integrating PGP so cleanly that users never need to exchange keys or understand the cryptography behind it all, yet still benefit from bulletproof encryption.
There’s no debate—this shift is a good thing. History shows that unusable software fades into irrelevance. Whether due to lack of interest, failure to reach critical mass, or a competitor swooping in to eat market share, clunky tools don’t survive. Now, to be clear, Bitcoin doesn’t have to worry about that kind of threat. There’s no real competition when it comes to hard money. Unless, of course, you genuinely believe that flashy shitcoins are a viable alternative—in which case, you might as well stop reading here and go get yourself scammed on the latest Solana airdrop or whatever hype train’s leaving the station today for the degens.
The main takeaway here is that Bitcoin must avoid becoming the next Lotus Notes, bloated with features but neglected by users—or the next PGP, sidelined by its own lack of usability. That kind of trajectory would erode trust, especially if usability and onboarding keep falling behind. And honestly, we’re already seeing signs of this in bitcoin. User adoption in Europe, especially in countries like Germany is noticeably lagging. The introduction of the EU’s MiCA regulations isn’t helping either. Most of the companies that were actually pushing adoption are now either shutting down, leaving the EU, or jumping through creative loopholes just to stay alive. And the last thing on anyone’s mind is improving UX. It takes time, effort, and specialized people to seriously think through how to build this properly, from the beginning, with this ease of use and onboarding in mind. That’s a luxury most teams can’t or won’t prioritize right now. Understandably when the lack of funds is still a major issue within the bitcoin space. (for people sitting on hard money, there’s surprisingly little money flowing into useful projects that aren’t hyped up empty boxes)
The number of nodes being set up by end users worldwide isn’t exactly skyrocketing either. Sure, there’s some growth but let’s not overstate it. Based on Bitnodes’ snapshots taken in March of each year, we’re looking at:

- 2022: around 10,500
- 2023: around 17,000
- 2024: around 18,500
- 2025: around 21,000

(I know there are different methods of measuring these, like read-only nodes; the % change is roughly the same nonetheless.)
In my opinion, if we had non-clunky software that was actually released with proper testing and usability in mind, we could’ve easily doubled those node numbers. A bad user experience with a wallet spreads fast—and brings in exactly zero new users. The same goes for people trying to set up a miner or spin up a node, only to give up after a few frustrating steps. Sure, there are good people out there making guides and videos2 to help mitigate those hurdles, and that helps. But let’s be honest: there’s still very little “wow” factor when average users interact with most Bitcoin software. Almost every time they walk away, it’s because of one of two things—usability issues or bugs.
For the record: if a user can’t set up a wallet because the interface is so rotten or poorly tested, so they don’t know where to click or how to even select a seed word from a list, then that’s a problem — that’s a bug. Argue all you want: sure, it’s not a code-level bug and no, it’s not a system crash. But it is a usability failure. Call it onboarding friction, UX flaw, whatever fits your spreadsheet or circus Maximus of failures in your ticketing system. Bottom line: if your software doesn’t help users accomplish its core purpose, it’s broken. It’s a bug. Pretending it’s something a copywriter or marketing team can fix is pure deflection. The solution isn’t to relabel the problem, 1990’s telecom-style, just to avoid dealing with it. It’s to actually sit down, think, collaborate, and go through the issue, and getting real solutions out. ”No it’s not an issue, that’s how it works” like someone from a failing (and by now defunct) wallet told me once, is not a solution.
You got 21 seconds
The user can’t be onboarded because your software has an “issue”? In my book, that’s a bug. The usual response when you report it? “Yeah, that’s not a priority.” Well, guess what? It actually is a priority. All these small annoyances, hurdles, and bits of BS still plague this industry, and they make the whole experience miserable for regular people trying it out for the first time. The first 21 seconds (yeah, you see what I did there) are the most important when someone opens new software. If it doesn’t click right away—if they’re fiddling with sats or dollar signs, or hunting for some hidden setting buried behind a tiny arrow—it’s game over. They’re annoyed. They’re gone.
And this is exactly why we’re seeing a flood of shitcoin apps sweeping new users off their feet with "faster apps" or "nicer designs" apps that somehow can afford the UI specialists and slick, centralized setups to spread their lies and scams.
I hate to say it, but the Phantom wallet for example, for the Solana network, loaded with fake airdrop schemes and the most blatant scams — has a far better UX than most Bitcoin wallets and Lightning Wallets. Learn from it. Download that **** and get to know what we do wrong and how we can learn from the enemy.
That’s a hard truth. So, instead of just screaming “Uh, shitcooooin!” (yes, we know it is), maybe we should start learning from it. Their apps are better than ours in terms of UI and UX. They attract more people 5x faster (we know that’s also because of the fast gains and retardation playing with the marketing) but we can’t keep ignoring that. Somehow these apps attract more than our trustworthiness, our steady, secure, decentralized hard money truth.
It’s like stepping into one of the best Italian restaurants in town—supposedly. But then the menu’s a mess, the staff is scrolling on their phones, and something smells burnt coming from the kitchen. So, what do you do? You walk out. You cross the street to the fast food joint and order a burger and fries. And as you’re walking out with your food, someone from the Italian place yells at you: “Fast food is bad!” ”Yeah man I know, I wanted a nice Spaghetti aglio e olio, but here I am, digesting a cheeseburger that felt rather spongy.” (the problem is so gone so deep now, that users just walk past that Italian restaurant, don’t even recognize it as a restaurant because it doesn’t have cheeseburgers).
Fear of the dark
Technical people, not marketeers, built bitcoin. It's built on hundreds of small building blocks that interacted over time to give us the bitcoin network and its ever-evolving value. At one point David Chaum cooked up eCash, using blind signatures to let people send digital money anonymously — except it was still stuck on clunky centralized servers. Go back even further, to the 1970s, when Diffie, Hellman, and Rivest introduced public-key cryptography—the magic sauce that gave us secure digital signatures and authentication, making sure your messages stayed private and tamper-proof.
Fast forward to the 1990s, where peer-to-peer started to take off, decentralized networks getting started. Adam Back’s Hashcash in ‘97 used proof-of-work to fight email spam, and the cypherpunks were all about sticking it to the man with privacy-first, the invention 199 Human-Readable 128-bit keys3, decentralized systems. We started to swap files over p2p networks and later, torrents.
All these parts—anonymous cash, encryption, and leaderless networks finally clicked into place when Satoshi Nakamoto poured them into a chain of blocks, built on an ingenious “time-stamping” system: the timechain, or blockchain if you prefer. And just like that, Bitcoin was born—a peer-to-peer money system that didn’t need middlemen and actually worked without any central servers.
So yes, it's only natural that Bitcoin and its many tools, born from math, obscurity, and cryptography, aren't exactly user-interface darlings. That's also part of the charm for me in any case, as the core is robust and valuable beyond belief. That's why we love to see more use, more adoption.
But that doesn’t mean we can’t squash critical “show-stopper” bugs before releasing bitcoin-related software. And it sure as hell doesn’t mean we should act like jerks when a user points out something’s broken, confusing, or just doesn’t meet expectations. We can’t be complacent either about our role as builders of the next generations, as the core is hard money, and it would be a fatal mistake for the world to see it being used only for some rockstars from Wall Street and their counterparts to store their debt laden fiat. We can free people, make them better, make them elevate themselves. And yet, the people we try to elevate, we often alienate. All because we don’t test our stuff well enough. We should be so good, we blow the banking apps away. (they’re blowing themselves out of the market luckily with fiat “features” and overly over the top use of “analytics” to measure your carbon footprint for example).
We should be so damn professional that someone using Bitcoin apps for a full year wouldn’t even notice any bugs, because there wouldn’t be much to get annoyed by.
So… we have to do better. I’ve seen it time and time again — on Lightning tipping apps, Nostr plugins, wallets, hardware wallets, even metal plates we can screw up somehow … you name it. “It works on my machine”, isn’t enough anymore! Those days are over.
Even apps built with solid funding and strong dev and test teams like fedi.xyz4 can miss the mark. While the idea was good and the app itself ran fine without too much hurdles and usual bugs. But usability failed on a different front: there was just nothing meaningful to do in the app beyond poking around, chatting a bit, and sending a few sats back and forth. The communities it’s supposed to connect, just aren’t there, or weren’t there “yet”.
It’s a beautifully designed application and a strong proof-of-concept for federated community funds. But then… nothing. No one I know uses it. Their last blogpost was from the beginning of October 2024, which doesn’t bode well, writing this more than 6 months later. That said, they got some great onboarding going, usually under 20 seconds, which proves it can be done right (even if it was all a front-end for a more complex backend).
As you can see “usability” is a broad terminology, covering technical aspects, user-interface, but also use-cases. Even if you have a cool app that works really well and is well thought-out users won’t use it if there’s no real substance. You can’t get that critical mass by waiting for customers to come in or communities to embrace it. They won’t, because most of the individuals already had past experiences with bitcoin apps or services, and there’s a reason for them not being on-board already.
A lot of bitcoin companies build tools for new people. Never for the lapsed people, the persons that came in, thought of it as an investment or “a coin”… then left because of a bad experience or the price going down in fiat. All the while we have some software that usually isn’t so kind to new people, or causes loss of funds and time. Even if they make one little “mistake” of not knowing the system beforehand.
Bitcoin’s Moby Dick
Bitcoin itself has a big issue here. The user base could grow faster, and more robustly, if there wasn't software that worked as a sort of repellent against users.
I especially see a younger and less tech-savvy audience absolutely disliking the software we have now. No matter if it’s Electrum’s desktop wallet (hardly the sexiest tool out there, although I like it myself, but it lacks some features), Sparrow, or any lightning wallet out there (save for WoS). I even saw people disliking Proton wallet, which I personally thought of as something really slick, well-made and polished. But even that doesn’t cut it for many people, as the “account” and “wallet” system wasn’t clear enough for them. (You see, we all have the same bias, because we know bitcoin, we look at it from a perspective of “facepalm, of course it’s a wallet named “account”, but when you sit next to a new user, it becomes clear that this is a hurdle. (please proton wallet: name a wallet a wallet, not “account”. But most users already in bitcoin, love what you’re doing)
Naturally disliking usability
The same technically brilliant people who maintain Bitcoin and build its apps haven’t quite tapped into their inner Steve Jobs—if that person even exists in the Bitcoin space. Let’s be honest: the next iOS-style wow moment, or the kind of frictionless usability seen in Spotify or Instagram, probably won’t come from hardcore Bitcoin devs alone. In fact, some builders in the space seem to actively disregard—or even look down on—discussions about usability. Just mention names like Wallet of Satoshi (yes, we all know it’s a custodial frontend) or the need for smoother interactions with Bitcoin, and you’ll get eye-rolls or defensive rants instead of curiosity or openness.
Moving more towards a better user interface for things like Sparrow or Bitcoin Core for example, would bring all kinds of “bad things” according to some, and on top of that, bring in new users (noobs) that ask questions like: “Do you burn all these sats when I make a transaction?” (Yes, that’s a real one.)
I get the “usability sucks” gripe — fear of losing key features, dumbing things down, or opening the door to unwanted changes (like BIP proposals real bitcoiners hate) that tweak bitcoin to suit any user’s whim. Close to no one in bitcoin (really in bitcoin!) wants that, including me.
That fear is however largely unfounded; because Bitcoin doesn’t change without consensus. Any change that would undermine its core use or value proposition simply won’t make it through. And let’s be honest: most of the users who crave these “faster,” centralized alternatives—those drawn to slick apps, one-click solutions, and dopamine-driven UI—will either stick with fiat, ape into the shitcoin-of-the-month, or praise the shiny new CBDC once it drops (“much fast, much cool”). These degen types, chasing fiat gains and jackpot dreams, aren’t relevant to this story, No matter what we build for bitcoin, they’ll always love the fiat-story and will always dislike bitcoin because it’s not a jackpot for them. (Honestly, why don’t they just gamble at a casino?)
People who fear that improving usability will somehow bring down the Bitcoin network are being a bit too paranoid—and honestly, they often don’t understand what usability or proper testing actually means.
They treat it like fluff, when in reality it's fundamental. Usability doesn't mean dumbing things down or compromising Bitcoin's core values; it means understanding why your fancy new app isn’t being used by anyone outside of your bubble. Testing is the beating heart of getting things out with confidence. Nothing more satisfying in software building than to proudly show even your beta versions to users, knowing it’s well tested. It’s much more than clicking a few buttons and tossing your code on GitHub. It's about asking real questions: can someone outside your Telegram group actually use this and will it they be using the software at all?
If you create a Nostr app that opens an in-app browser window and then tries to log you in with your NIPS05 or NIPS07 or whatever number it is that authenticates you, then you need to think about how it’s going to work in real life. Have people already visited this underlying website? Is that website using the exact same mechanism? Is it really working like we think it is in the real world? (Some notable good things are happening with the development of Keychat for example, I have the feeling they get it, it’s not all bad). And yes, there are still bugs and things to improve there, they’re just starting. (The browser section and nostr login need some work imho).
Guess what? You can test your stuff. But it takes time and effort. The kind of effort that, if skipped, gets multiplied across thousands of people. Thousands of people wasting their time trying to use your app, hitting errors, assuming they did something wrong, retrying, googling workarounds—only to eventually realize: it’s not them. It’s a bug. A bug you didn’t catch. Because you didn’t test. And now everyone loses. And guess what? Those users? They’re not coming back.
A good example (to stay positive here) is Fountain App, where the first versions were , eh… let’s say not so good, and then quickly evolved into a company and product that works really well, and also listens to their users and fixes their bugs. The interface can still be better in my opinion, but it’s getting there. And it’s super good now.
A bad example? Alby. (Sorry to say.) It still suffers from a bloated, clunky interface and an onboarding flow that utterly confuses new or returning users. It just doesn’t get the job done. Opinions may vary, sure, but hand this app to any non-technical user and ask them to get online and do a Nostr zap. Watch what happens. If they even manage to get through the initial setup, that is.
Another example? Bitkit. When I tried transferring funds from the "savings" to the "spending" account, the wallet silently opened a Lightning channel—no warning, no explanation—and suddenly my coins were locked up. To make things worse, the wallet still showed the full balance as spendable, even though part of it was now stuck in that channel. That was in November 2024, the last time I touched Bitkit. I wasted too much time trying to figure it out, I haven’t looked back (assuming the project is even still alive, I didn’t see them pop up anywhere).
Some metal BIP39 backup tools are great in theory but poorly executed. I bought one that didn’t even include a simple instruction on how to open it. The person I gave it to spent two hours trying to open it with a screwdriver and even attempted drilling. Turns out, it just slides open with some pressure. A simple instruction would’ve saved all that frustration.
Builders often assume users “just get it,” but a small guide could’ve prevented all the hassle. It’s a small step, but it’s crucial for better user experience. So why not avoid such situations and put a friggin cheap piece of paper in the box so people know how to open it? (The creators would probably facepalm if they read this, “how can users nòt see this?”). Yeah,… put a paper in there with instructions.
That’s natural, because as a creator you’re “in” it, you know. You don’t see how others would overlook something so obvious.
Bitcoiners are extremely bad on that front.
I’ll dive deeper into some examples in part 2 of this post.
By AVB
end of part 1
If you like to support independent thought and writings on bitcoin, follow this substack please: https://coinos.io/allesvoorbitcoin/receive

footnotes:
1 https://philzimmermann.com/EN/findpgp/
2 BTC sessions: set up a bitcoin node
-
@ 6ad08392:ea301584
2025-03-14 19:03:20In 2024, I was high as a kite on Nostr hopium and optimism. Early that year, my co-founder and I figured that we could use Nostr as a way to validate ambassadors on “Destination Bitcoin” - the germ of a travel app idea we had at the time that would turn into Satlantis. After some more digging and thinking, we realised that Nostr’s open social graph would be of major benefit, and in exploring that design space, the fuller idea of Satlantis formed: a new kind of social network for travel.
###### ^^2 slides from the original idea here
I still remember the call I had with @pablof7z in January. I was in Dubai pitching the AI idea I was working on at the time, but all I could think and talk about was Satlantis and Nostr.
That conversation made me bullish AF. I came back from the trip convinced we’d struck gold. I pivoted the old company, re-organised the team and booked us for the Sovereign Engineering cohort in Madeira. We put together a whole product roadmap, go to market strategy and cap raise around the use of Nostr. We were going to be the ‘next big Nostr app’.
A couple of events followed in which I announced this all to the world: Bitcoin Atlantis in March and BTC Prague in June being the two main ones. The feedback was incredible. So we doubled down. After being the major financial backer for the Nostr Booth in Prague, I decided to help organise the Nostr Booth initiative and back it financially for a series of Latin American conferences in November. I was convinced this was the biggest thing since bitcoin, so much so that I spent over $50,000 in 2024 on Nostr marketing initiatives. I was certainly high on something.
Sobering up
It’s March 2025 and I’ve sobered up. I now look at Nostr through a different lens. A more pragmatic one. I see Nostr as a tool, as an entrepreneur - who’s more interested in solving a problem, than fixating on the tool(s) being used - should.
A couple things changed for me. One was the sub-standard product we released in November. I was so focused on being a Nostr evangelist that I put our product second. Coupled with the extra technical debt we took on at Satlantis by making everything Nostr native, our product was crap. We traded usability & product stability for Nostr purism & evangelism.
We built a whole suite of features using native event kinds (location kinds, calendar kinds, etc) that we thought other Nostr apps would also use and therefore be interoperable. Turns out no serious players were doing any of that, so we spent a bunch of time over-engineering for no benefit 😂
The other wake up call for me was the Twitter ban in Brazil. Being one of the largest markets for Twitter, I really thought it would have a material impact on global Nostr adoption. When basically nothing happened, I began to question things.
Combined, these experiences sobered me up and brought me down from my high. I was reading “the cold start problem” by Andrew Chen (ex-Uber) at the time and doing a deep dive on network effects. I came to the following realisation:
Nostr’s network effect is going to take WAY longer than we all anticipated initially. This is going to be a long grind. And unlike bitcoin, winning is not inevitable. Bitcoin solves a much more important problem, and it’s the ONLY option. Nostr solves an important problem yes, but it’s far from the only approach. It’s just the implementation arguably in the lead right now.
This sobering up led us to take a different approach with Nostr. We now view it as another tool in the tech-stack, no different to the use of React Native on mobile or AWS for infrastructure. Nostr is something to use if it makes the product better, or avoid if it makes the product and user experience worse. I will share more on this below, including our simple decision making framework. I’ll also present a few more potentially unpopular opinions about Nostr. Four in total actually:
- Nostr is a tool, not a revolution
- Nostr doesn’t solve the multiple social accounts problem
- Nostr is not for censorship resistance
- Grants come with a price
Let’s begin…
Nostr is a tool, not a revolution
Nostr is full of Bitcoiners, and as much as we like to think we’re immune from shiny object syndrome, we are, somewhere deep down, afflicted by it like other humans. That’s normal & fine. But…while Bitcoiners have successfully suppressed this desire when it comes to shitcoins, it lies dormant, yearning for the least shitcoin-like thing to emerge that we can throw our guiltless support behind.
That thing arrived and it’s called Nostr.
As a result, we’ve come to project the same kind of purity and maximalism onto it as we do with Bitcoin, because it shares some attributes and it’s clearly not a grift.
The trouble is, in doing so, we’ve put it in the same class as Bitcoin - which is an error.
Nostr is important and in its own small way, revolutionary, but it pales in comparison to Bitcoin’s importance. Think of it this way: If Bitcoin fails, civilisation is fucked. If Nostr fails, we’ll engineer another rich-identity protocol. There is no need for the kind of immaculate conception and path dependence that was necessary for Bitcoin, whose genesis and success has been a once in a civilisation event. Equating Nostr and Bitcoin to the degree that it has been done is a significant category error. Nostr may ‘win’ or it may just be an experiment on the path to something better. And that’s ok!
I don’t say this to piss anyone off, to piss on Nostr or to piss on myself. I say it because I’d prefer Nostr not remain a place where a few thousand people speak to each other about how cool Nostr is. That’s cute in the short term, but in the grand scheme of things, it’s a waste of a great tool that can make a significant corner of the Internet great again.
By removing the emotional charge and hopium from our relationship to Nostr, we can take a more sober, objective view of it (and hopefully use it more effectively).
Instead of making everything about Nostr (the tool), we can go back to doing what great product people and businesses do: make everything about the customer.
Nobody’s going around marketing their app as a “react native product” - and while I understand that’s a false equivalence, in the sense that Nostr is a protocol while React is a framework - the reality is that it DOES NOT MATTER.
For 99.9999% of the world, what matters is the hole, not the drill. Maybe 1000 people on Earth REALLY care that something is built on Nostr, but for everyone else, what matters is what the app or product does and the problem it solves. Realigning our focus in this way, and looking at not only Nostr, but also Bitcoin as a tool in the toolkit, has transformed the way we’re building.
This inspired an essay I wrote a couple weeks ago called “As Nostr as Possible”. It covers our updated approach to using and building WITH Nostr (not just ‘on’ it). You can find that here:
https://futuresocial.substack.com/p/as-nostr-as-possible-anap
If you’re too busy to read it, don’t fret. The entire theory can be summarised by the diagram below. This is how we now decide what to make Nostr-native, and what to just build on our own. And - as stated in the ANAP essay - that doesn’t mean we’ll never make certain features Nostr-native. If the argument is that Nostr is not going anywhere, then we can always come back to that feature and Nostr-fy it later when resources and protocol stability permit.
Next…
The Nostr all in one approach is not all “positive”
Having one account accessible via many different apps might not be as positive as we initially thought.
If you have one unified presence online, across all of your socials, and you’re posting the same thing everywhere, then yes - being able to post content in one place and it being broadcast everywhere, is great. There’s a reason why people literally PAY for products like Hypefury, Buffer and Hootsuite (aside from scheduling).
BUT…..This is not always the case.
I’ve spoken to hundreds of creators and many have flagged this as a bug not a feature because they tend to have a different audience on different platforms and speak to them differently depending on the platform. We all know this. How you present yourself on LinkedIn is very different to how you do it on Instagram or X.
The story of Weishu (Tencent’s version of TikTok) comes to mind here. Tencent’s WeChat login worked against them because people didn’t want their social graph following them around. Users actually wanted freedom from their existing family & friends, so they chose Douyin (Chinese TikTok) instead.
Perhaps this is more relevant to something like WeChat because the social graph following you around is more personal, but we saw something similar with Instagram and Facebook. Despite over a decade of ownership, Facebook still keeps the social graphs separated.
All this to say that while having a different strategy & approach on different social apps is annoying, it allows users to tap into different markets because each silo has its own ‘flavour’. The people who just post the same thing everywhere are low-quality content creators anyway. The ones who actually care, are using each platform differently.
The ironic part here is that this is arguably more ‘decentralised’ than the protocol approach because these siloes form a ‘marketplace of communities’ which are all somewhat different.
We need to find a smart way of doing this with Nostr. Some way of catering to the appropriate audience where it matters most. Perhaps this will be handled by clients, or by relays. One solution I’ve heard from people in the Nostr space is to just ‘spin up another nPub’ for your different audience. While I have no problem with people doing that - I have multiple nPubs myself - it’s clearly NOT a solution to the underlying problem here.
We’re experimenting with something. Whether it’s a good idea or not remains to be seen. Satlantis users will be able to curate their profiles and remove (hide / delete) content on our app. We’ll implement this in two stages:
Stage 1: Simple
In the first iteration, we will not broadcast a delete request to relays. This means users can get a nicely curated profile page on Satlantis, but keep a record of their full profile elsewhere on other clients / relays.
Stage 2: More complex
Later on, we’ll try to give people an option to “delete on Satlantis only” or “delete everywhere”. The difference here is more control for the user. Whether we get this far remains to be seen. We’ll need to experiment with the UX and see whether this is something people really want.
I’m sure neither of these solutions are ‘ideal’ - but they’re what we’re going to try until we have more time & resources to think this through more.
Next…
Nostr is not for Censorship Resistance
I’m sorry to say, but this ship has sailed. At least for now. Maybe it’s a problem again in the future, but who knows when, and if it will ever be a big enough factor anyway.
The truth is, while WE all know that Nostr is superior because it’s a protocol, people do NOT care enough. They are more interested in what’s written ON the box, not what’s necessarily inside the box. 99% of people don’t know wtf a protocol is in the first place - let alone why it matters for censorship resistance to happen at that level, or more importantly, why they should trust Nostr to deliver on that promise.
Furthermore, the few people who did care about “free speech” are now placated enough with Rumble for Video, X for short form and Substack for long form. With Meta now paying lip-service to the movement, it’s game over for this narrative - at least for the foreseeable future.
The space in people’s minds for censorship resistance has been filled. Both the ‘censorship resistance’ and ‘free speech’ ships have sailed (even though they were fake), and the people who cared enough all boarded.
For the normies who never cared, they still don’t care - or they found their way to the anti-platforms, like Threads, BlueSky or Pornhub.
The small minority of us still here on Nostr…are well…still here. Which is great, but if the goal is to grow the network effect here and bring in more people, then we need to find a new angle. Something more compelling than “your account won’t be deleted.”
I’m not 100% sure what that is. My instinct is that a “network of interoperable applications”, that don’t necessarily or explicitly brand themselves as Nostr, but have it under the hood is the right direction. I think the open social graph and using it in novel ways is compelling. Trouble is, this needs more really well-built and novel apps for non-sovereignty minded people (especially content creators) and people who don’t necessarily care about the reasons Nostr was first built. Also requires us to move beyond just building clones of what already exists.
We’ve been trying to do this Satlantis thing for almost a year now and it’s coming along - albeit WAY slower than I would’ve liked. We’re experimenting our way into a whole new category of product. Something different to what exists today. We’ve made a whole bunch of mistakes and at times I feel like a LARP considering the state of non-delivery.
BUT…what’s on the horizon is very special, and I think that all of the pain, effort and heartache along the way will be 100% worth it. We are going to deliver a killer product that people love, that solves a whole host of travel-related problems and has Nostr under the hood (where nobody, except those who care, will know).
Grants come with a price
This one is less of an opinion and more of an observation. Not sure it really belongs in this essay, but I’ll make a small mention just as food for thought.
Grants are a double-edged sword.
I’m super grateful that OpenSats, et al, are supporting the protocol, and I don’t envy the job they have in trying to decipher what to support and what not to depending on what’s of benefit to the network versus what’s an end user product.
That being said, is the Nostr ecosystem too grant-dependent? This is not a criticism, but a question. Perhaps this is the right thing to do because of how young Nostr is. But I just can’t help but feel like there’s something a-miss.
Grants put the focus on Nostr, instead of the product or customer. Which is fine, if the work the grant covers is for Nostr protocol development or tooling. But when grants subsidise the development of end user products, it ties the builder / grant recipient to Nostr in a way that can misalign them to the customer’s needs. It’s a bit like getting a government grant to build something. Who’s the real customer??
Grants can therefore create an almost communist-like detachment from the market and false economic incentive. To reference the Nostr decision framework I showed you earlier, when you’ve been given a grant, you are focusing more on the X axis, not the Y. This is a trade-off, and all trade-offs have consequences.
Could grants be the reason Nostr is so full of hobbyists and experimental products, instead of serious products? Or is that just a function of how ambitious and early Nostr is?
I don’t know.
Nostr certainly needs better toolkits, SDKs, and infrastructure upon which app and product developers can build. I just hope the grant money finds its way there, and that it yields these tools. Otherwise app developers like us, won’t stick around and build on Nostr. We’ll swap it out with a better tool.
To be clear, this is not me pissing on Nostr or the Grantors. Jack, OpenSats and everyone who’s supported Nostr are incredible. I’m just asking the question.
Final thing I’ll leave this section with is a thought experiment: Would Nostr survive if OpenSats disappeared tomorrow?
Something to think about….
Coda
If you read this far, thank you. There’s a bunch here to digest, and like I said earlier, this is not about shitting on Nostr. It is just an enquiry mixed with a little classic Svetski-Sacred-Cow-Slaying.
I want to see Nostr succeed. Not only because I think it’s good for the world, but also because I think it is the best option. Which is why we’ve invested so much in it (something I’ll cover in an upcoming article: “Why we chose to build on Nostr”). I’m firmly of the belief that this is the right toolkit for an internet-native identity and open social graph. What I’m not so sure about is the echo chamber it’s become and the cult-like relationship people have with it.
I look forward to being witch-hunted and burnt at the stake by the Nostr purists for my heresy and blaspheming. I also look forward to some productive discussions as a result of reading this.
Thank you for your attention.
Until next time.
-
@ bbef5093:71228592
2025-05-06 16:11:35India aims to cut nuclear plant construction times to meet its ambitious nuclear targets
India wants to cut the delivery time of its nuclear power projects from the current 10 years to a "world-class" 6 years in order to reach its target of 100 GW of installed nuclear capacity by 2047.
According to a report by SBI Capital Markets (the investment banking subsidiary of the State Bank of India), this would help curb the cost overruns seen in the past and make the country more attractive to global investors.
The report says that with only about 8 GW of capacity in operation and just 7 GW under construction, a "significant acceleration" is needed to reach the targets.
The government has launched a "nuclear energy mission", setting aside roughly $2.3bn (€2bn) for R&D and for the deployment of at least five Bharat small modular reactors (BSMRs), but it must resolve further challenges to reach its goals.
Cutting construction times is key, but the report also recommends sweeping system-level reforms, including faster licensing, simpler land-acquisition rules, smaller exclusion zones around plants, and greater independence for the regulator, the Atomic Energy Regulatory Board.
Because of the nation's limited uranium reserves, the report says it is essential to diversify fuel sources through international agreements and to accelerate stages 2 and 3 of India's nuclear programme.
India's three-stage nuclear programme aims to establish a closed fuel cycle built on natural uranium, plutonium and, eventually, thorium. Stage 2 uses fast-neutron reactors, which extract more energy from uranium, require less mined uranium, and convert unused uranium into new fuel. Stage 3 will run advanced reactors based on India's vast thorium reserves.
In January 2025, the Nuclear Power Corporation of India (NPCIL) issued a tender for the deployment of Bharat SMRs, opening the nuclear sector to Indian private companies for the first time.
Until now, only the state-owned NPCIL has been allowed to build and operate commercial nuclear power plants in India.
Deployment of the Bharat SMRs ("Bharat" means India in Hindi) is part of the "Viksit Bharat" ("Developed India") programme.
Licensing process: "drawn-out and sequential"
Details of the Bharat reactor development remain unclear, but finance minister Nirmala Sitharaman said in July that it would be carried out as a joint venture between the state-owned National Thermal Power Corporation and Bharat Heavy Electricals Limited.
Sitharaman added that the government, together with the private sector, would set up a company called Bharat Small Reactors to handle research and development of SMRs and new nuclear technologies.
The SBI report says the SMR programme needs improvement, because the licensing process is currently "drawn-out and sequential" and places a disproportionate share of reactor-development risk on private players.
The programme is "strategically well positioned for success" because it sets strict entry requirements, so only serious and capable players can take part.
However, the government should introduce an indemnity clause protecting private companies against shortfalls in fuel and heavy-water supply, which fall under the remit of the Department of Atomic Energy (DAE).
According to the report, both fuel and heavy-water supply depend on the DAE, and "lack of access" could become a problem. Most of India's commercial nuclear plants are domestically developed pressurised heavy-water reactors.
The report states: "Addressing the existing regulatory gaps is critical if the private sector is to lead the development of 50% of the targeted 100 GW of nuclear capacity by 2047."
NPCIL recently said India intends to add another 18 reactors to the energy mix by 2031–32, taking the country's nuclear capacity to 22.4 GW.
According to International Atomic Energy Agency data, India has 21 reactors in commercial operation, which supplied about 3% of the country's electricity in 2023. Six units are under construction.
Rosatom sues over cancelled Hanhikivi-1 project in Finland
Russian state nuclear corporation Rosatom has filed a lawsuit in Moscow against the Finnish companies Fortum and Outokumpu, demanding RUB 227.8bn ($2.8bn, €2.4bn) in damages over the termination of the contract for the Hanhikivi-1 nuclear plant in Finland, according to court documents and a Rosatom statement.
Rosatom is seeking damages for the "unlawful termination of the engineering, procurement and construction (EPC) contract", breaches of the shareholder agreement and the fuel supply contract, and refusal to repay a loan.
Fortum told NucNet by email that it "has not received any official notification of a Russian lawsuit".
In its quarterly report of 29 April 2025, Fortum said international arbitration is under way over the Hanhikivi EPC contract between Rosatom's Finnish subsidiary Raos Project, Rosatom's international division JSC Rusatom Energy International, and Fennovoima (the Finnish consortium responsible for the Hanhikivi project).
In February 2025 the arbitration tribunal ruled that it has no jurisdiction over the claims against Fortum. "This decision was final, so Fortum is not a party to the arbitration proceedings," the company said.
Fortum became a minority owner in the Fennovoima project in 2015 but wrote off its entire stake in 2020.
The Fennovoima consortium, in which Rosatom held a 34% minority stake through Raos, terminated the contract for the construction of Hanhikivi-1 in May 2022, citing delays and increased risks caused by the war in Ukraine.
The project was to have used the Russian AES-2006 pressurised water reactor design.
In April 2021, Fennovoima said the project's total investment cost had risen from €6.5–7bn to €7–7.5bn.
In August 2022, Rosatom and Fennovoima filed multi-billion-euro damage claims against each other over the project's cancellation.
Fennovoima launched international arbitration to recover a €1.7bn advance payment. Rosatom filed a €3bn counterclaim. These cases are still pending before international tribunals.
South Korean delegation travels to Czech Republic to sign nuclear contract
A South Korean delegation travels to the Czech Republic on 6 May 2025 to attend the signing of a multi-billion-dollar contract for the construction of two new nuclear units at the Dukovany site, the South Korean ministry of trade, industry and energy said.
The delegation, which includes government and parliamentary officials, is making a two-day visit to Prague to attend the signing ceremony planned for Wednesday.
The delegation will also meet Czech prime minister Petr Fiala and senate president Milos Vystrcil to discuss the Dukovany project.
Fiala announced last week that Prague will sign the Dukovany contract with Korea Hydro & Nuclear Power (KHNP) on 7 May.
The Czech competition authority recently cleared the signing of the contract with KHNP after rejecting an appeal by France's EDF.
The authority's 24 April decision upheld an earlier ruling that EDF had challenged after losing the tender to KHNP in July 2024.
This clears the way for signing the contract for two South Korean APR1400 reactor units at Dukovany in the south of the Czech Republic. The contract is the largest energy investment in the country's history, worth at least CZK 400bn (€16bn, $18bn).
The contract was originally due to be signed in March, but slipped because of appeals by the losing bidders (EDF and Westinghouse), political uncertainty in South Korea, and localisation demands from Czech companies.
In January, KHNP settled its intellectual property dispute with Westinghouse, which had claimed that KHNP was using its technology in the APR1400 reactors.
The signing will mark South Korea's first overseas nuclear construction project since 2009, when KHNP built four APR1400 reactors at Barakah in the United Arab Emirates.
The Czech Republic operates six commercial reactors: four Russian VVER-440 units at Dukovany and two larger VVER-1000 units at Temelín. According to the IAEA, these units supply about 36.7% of Czech electricity generation.
US must build a new reactor "as soon as possible", DOE nominee tells Senate committee
The US must build a new nuclear power plant as soon as possible and advance the development, licensing and deployment of advanced reactors, the Senate energy committee heard.
Ted Garrish, appearing as the nominee for DOE assistant secretary for nuclear energy, said the country needs to deploy a new reactor, whether large, small modular or micro.
No commercial nuclear plant is currently under construction in the US; the last two, Vogtle-3 and Vogtle-4, came online in Georgia in 2023 and 2024 respectively.
"Nuclear energy is an exceptional opportunity to meet growing electricity demand reliably, affordably and safely," said Garrish, an experienced nuclear industry executive. He added that the US must also develop its domestic uranium enrichment industry for national security reasons.
The international market and the potential for intergovernmental agreements should be examined on behalf of US nuclear developers and supply chains, and the problem of spent fuel disposal must be solved.
In 1987, Congress designated Yucca Mountain in Nevada as the permanent repository for spent fuel, but the Obama administration halted the project in 2009.
Since the 1950s, the US has accumulated some 83,000 tonnes of radioactive waste, including spent fuel, currently stored in steel and concrete containers at plant sites.
Garrish previously served as DOE assistant secretary for international affairs (2018–2021) and currently chairs the board of the United Advanced Nuclear Energy Association.
Other news
Slovenia urges joint work with the US on nuclear energy:
US and Croatian officials discussed cooperation on diversifying energy supply in Central and South-Eastern Europe, with a particular focus on small modular reactors (SMRs). Croatia and Slovenia jointly own the Krško nuclear plant in Slovenia, whose single 696 MW pressurised water reactor supplies 16% of Croatia's and 20% of Slovenia's electricity consumption. Slovenia is considering a second unit, but postponed a referendum on it last year.
Malawi authorises restart of the Kayelekera uranium mine:
The Malawi Atomic Energy Authority has issued a radiation safety licence to Lotus (Africa) Limited, allowing the restart of the Kayelekera uranium mine, which has been idle for more than a decade, since 2014, because of falling uranium prices and safety concerns. 85% of the mine is owned by the local subsidiary of Australia's Lotus Resources. According to Lotus, the restart is fully funded, with about $43m (€37m) in capital.
Venezuela and Iran plan nuclear cooperation:
Venezuela and Iran have discussed cooperation in nuclear science and technology. According to Iranian state media, Mohammad Eslami, head of the Atomic Energy Organization of Iran, and Alberto Quintero, Venezuela's deputy minister for science, discussed launching university and research programmes. Venezuela has no commercial nuclear power plant, but in 2010 it signed an agreement with Russia on the possibility of new nuclear plants. Iran has one operating nuclear plant at Bushehr-1, with another under construction at the same site, both supplied by Russia. -
@ b6dcdddf:dfee5ee7
2025-05-06 15:58:23You can now fund projects on Geyser using Credit Cards, Apple Pay, Bank Transfers, and more.
The best part: 🧾 You pay in fiat and ⚡️ the creator receives Bitcoin.
You heard it right! Let's dive in 👇
First, how does it work? For contributors, it's easy! Once the project creator has verified their identity, anyone can contribute with fiat methods. Simply go through the usual contribution flow and select 'Pay with Fiat'. The first contribution is KYC-free.
Why does this matter?
1. Many Bitcoiners don't want to spend their Bitcoin: 👉 Number go up (NgU) 👉 Capital gains taxes. With fiat contributions, there's no longer an excuse not to support Bitcoin builders and creators!
2. Non-bitcoin holders want to support projects too. If someone loves your mission but only has a debit card, they used to be stuck. Now? They can back your Bitcoin project with familiar fiat tools. Now, they can do it all through Geyser!
So, why swap fiat into Bitcoin? Because Bitcoin is borderless. Fiat payouts are limited to certain countries, banks, and red tape. By auto-swapping fiat to Bitcoin, we ensure: 🌍 Instant payouts to creators all around the world ⚡️ No delays or restrictions 💥 Every contribution is also a silent Bitcoin buy
How to enable Fiat contributions
If you’re a creator, it’s easy:
- Go to your Dashboard → Wallet
- Click “Enable Fiat Contributions”
- Complete a quick ID verification (required by our payment provider)
✅ That’s it — your project is now open to global fiat supporters.
Supporting Bitcoin adoption At Geyser, our mission is to empower Bitcoin creators and builders. Adding fiat options amplifies our mission. It brings more people into the ecosystem while staying true to what we believe: ⚒️ Build on Bitcoin 🌱 Fund impactful initiatives 🌎 Enable global participation
**Support projects with fiat now!** We've compiled a list of projects that currently have fiat contributions enabled. If you've been on the fence about supporting them because you didn't want to spend your Bitcoin, now's the time to make your first contribution!
Education - Citadel Dispatch: https://geyser.fund/project/citadel - @FREEMadeiraOrg: https://geyser.fund/project/freemadeira - @MyfirstBitcoin_: https://geyser.fund/project/miprimerbitcoin
Circular Economies - @BitcoinEkasi: https://geyser.fund/project/bitcoinekasi - Madagascar Bitcoin: https://geyser.fund/project/madagasbit - @BitcoinChatt : https://geyser.fund/project/bitcoinchatt - Uganda Gayaza BTC Market: https://geyser.fund/project/gayazabtcmarket
Activism - Education Bitcoin Channel: https://geyser.fund/project/streamingsats
Sports - The Sats Fighter Journey: https://geyser.fund/project/thesatsfighterjourney
Culture - Bitcoin Tarot Cards: https://geyser.fund/project/bitcointarotcard
originally posted at https://stacker.news/items/973003
-
@ 04c915da:3dfbecc9
2025-03-12 15:30:46Recently we have seen a wave of high profile X accounts hacked. These attacks have exposed the fragility of the status quo security model used by modern social media platforms like X. Many users have asked if nostr fixes this, so let's dive in. How do these types of attacks translate into the world of nostr apps? For clarity, I will use X’s security model as representative of most big tech social platforms and compare it to nostr.
The Status Quo
On X, you never have full control of your account. Ultimately to use it requires permission from the company. They can suspend your account or limit your distribution. Theoretically they can even post from your account at will. An X account is tied to an email and password. Users can also opt into two factor authentication, which adds an extra layer of protection, a login code generated by an app. In theory, this setup works well, but it places a heavy burden on users. You need to create a strong, unique password and safeguard it. You also need to ensure your email account and phone number remain secure, as attackers can exploit these to reset your credentials and take over your account. Even if you do everything responsibly, there is another weak link in X infrastructure itself. The platform’s infrastructure allows accounts to be reset through its backend. This could happen maliciously by an employee or through an external attacker who compromises X’s backend. When an account is compromised, the legitimate user often gets locked out, unable to post or regain control without contacting X’s support team. That process can be slow, frustrating, and sometimes fruitless if support denies the request or cannot verify your identity. Often times support will require users to provide identification info in order to regain access, which represents a privacy risk. The centralized nature of X means you are ultimately at the mercy of the company’s systems and staff.
Nostr Requires Responsibility
Nostr flips this model radically. Users do not need permission from a company to access their account, they can generate as many accounts as they want, and cannot be easily censored. The key tradeoff here is that users have to take complete responsibility for their security. Instead of relying on a username, password, and corporate servers, nostr uses a private key as the sole credential for your account. Users generate this key and it is their responsibility to keep it safe. As long as you have your key, you can post. If someone else gets it, they can post too. It is that simple. This design has strong implications. Unlike X, there is no backend reset option. If your key is compromised or lost, there is no customer support to call. In a compromise scenario, both you and the attacker can post from the account simultaneously. Neither can lock the other out, since nostr relays simply accept whatever is signed with a valid key.
The benefit? No reliance on proprietary corporate infrastructure. The negative? Security rests entirely on how well you protect your key.
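To make that concrete, here is a minimal sketch of what “the key is the credential” looks like in code. It assumes the nostr-tools v2 API (generateSecretKey, getPublicKey, finalizeEvent, verifyEvent); the note content is arbitrary and no relay publishing is shown.

```typescript
// Sketch: on nostr the keypair *is* the account. Anyone holding the secret
// key can produce events that every relay and client will accept as "you".
import { generateSecretKey, getPublicKey, finalizeEvent, verifyEvent } from "nostr-tools/pure";

const sk = generateSecretKey();   // 32-byte secret key (keep this safe!)
const pk = getPublicKey(sk);      // hex public key = your identity

// Build and sign a simple text note (kind 1). No server granted permission;
// the signature alone is what makes the event valid.
const note = finalizeEvent(
  {
    kind: 1,
    created_at: Math.floor(Date.now() / 1000),
    tags: [],
    content: "hello from a key, not an account",
  },
  sk
);

console.log(verifyEvent(note));   // true: any node can check this for itself
// Losing sk means losing the account; leaking sk means sharing the account.
```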
Future Nostr Security Improvements
For many users, nostr’s standard security model, storing a private key on a phone with an encrypted cloud backup, will likely be sufficient. It is simple and reasonably secure. That said, nostr’s strength lies in its flexibility as an open protocol. Users will be able to choose between a range of security models, balancing convenience and protection based on need.
One promising option is a web of trust model for key rotation. Imagine pre-selecting a group of trusted friends. If your account is compromised, these people could collectively sign an event announcing the compromise to the network and designate a new key as your legitimate one. Apps could handle this process seamlessly in the background, notifying followers of the switch without much user interaction. This could become a popular choice for average users, but it is not without tradeoffs. It requires trust in your chosen web of trust, which might not suit power users or large organizations. It also has the issue that some apps may not recognize the key rotation properly and followers might get confused about which account is “real.”
For those needing higher security, there is the option of multisig using FROST (Flexible Round-Optimized Schnorr Threshold). In this setup, multiple keys must sign off on every action, including posting and updating a profile. A hacker with just one key could not do anything. This is likely overkill for most users due to complexity and inconvenience, but it could be a game changer for large organizations, companies, and governments. Imagine the White House nostr account requiring signatures from multiple people before a post goes live, that would be much more secure than the status quo big tech model.
Another option is hardware signers, similar to bitcoin hardware wallets. Private keys are kept on secure, offline devices, separate from the internet connected phone or computer you use to broadcast events. This drastically reduces the risk of remote hacks, as private keys never touch the internet. It can be used in combination with multisig setups for extra protection. This setup is much less convenient and probably overkill for most but could be ideal for governments, companies, or other high profile accounts.
Nostr’s security model is not perfect but is robust and versatile. Ultimately users are in control and security is their responsibility. Apps will give users multiple options to choose from and users will choose what best fits their need.
-
@ 2e8970de:63345c7a
2025-05-06 15:13:49https://www.epi.org/blog/wage-growth-since-1979-has-not-been-stagnant-but-it-has-definitely-been-suppressed/
originally posted at https://stacker.news/items/972959
-
@ 04c915da:3dfbecc9
2025-03-10 23:31:30Bitcoin has always been rooted in freedom and resistance to authority. I get that many of you are conflicted about the US Government stacking but by design we cannot stop anyone from using bitcoin. Many have asked me for my thoughts on the matter, so let’s rip it.
Concern
One of the most glaring issues with the strategic bitcoin reserve is its foundation, built on stolen bitcoin. For those of us who value private property this is an obvious betrayal of our core principles. Rather than proof of work, the bitcoin that seeds this reserve has been taken by force. The US Government should return the bitcoin stolen from Bitfinex and the Silk Road.
Using stolen bitcoin for the reserve creates a perverse incentive. If governments see bitcoin as a valuable asset, they will ramp up efforts to confiscate more bitcoin. The precedent is a major concern, and I stand strongly against it, but it should also be noted that governments were already seizing coin before the reserve so this is not really a change in policy.
Ideally all seized bitcoin should be burned, by law. This would align incentives properly and make it less likely for the government to actively increase coin seizures. Due to the truly scarce properties of bitcoin, all burned bitcoin helps existing holders through increased purchasing power regardless. This change would be unlikely but those of us in policy circles should push for it regardless. It would be best case scenario for American bitcoiners and would create a strong foundation for the next century of American leadership.
Optimism
The entire point of bitcoin is that we can spend or save it without permission. That said, it is a massive benefit to not have one of the strongest governments in human history actively trying to ruin our lives.
Since the beginning, bitcoiners have faced horrible regulatory trends. KYC, surveillance, and legal cases have made using bitcoin and building bitcoin businesses incredibly difficult. It is incredibly important to note that over the past year that trend has reversed for the first time in a decade. A strategic bitcoin reserve is a key driver of this shift. By holding bitcoin, the strongest government in the world has signaled that it is not just a fringe technology but rather truly valuable, legitimate, and worth stacking.
This alignment of incentives changes everything. The US Government stacking proves bitcoin’s worth. The resulting purchasing power appreciation helps all of us who are holding coin and as bitcoin succeeds our government receives direct benefit. A beautiful positive feedback loop.
Realism
We are trending in the right direction. A strategic bitcoin reserve is a sign that the state sees bitcoin as an asset worth embracing rather than destroying. That said, there is a lot of work left to be done. We cannot be lulled into complacency, the time to push forward is now, and we cannot take our foot off the gas. We have a seat at the table for the first time ever. Let's make it worth it.
We must protect the right to free usage of bitcoin and other digital technologies. Freedom in the digital age must be taken and defended, through both technical and political avenues. Multiple privacy focused developers are facing long jail sentences for building tools that protect our freedom. These cases are not just legal battles. They are attacks on the soul of bitcoin. We need to rally behind them, fight for their freedom, and ensure the ethos of bitcoin survives this new era of government interest. The strategic reserve is a step in the right direction, but it is up to us to hold the line and shape the future.
-
@ 40bdcc08:ad00fd2c
2025-05-06 14:24:22Introduction
Bitcoin’s `OP_RETURN` opcode, a mechanism for embedding small data in transactions, has ignited a significant debate within the Bitcoin community. Originally designed to support limited metadata while preserving Bitcoin’s role as a peer-to-peer electronic cash system, `OP_RETURN` is now at the center of proposals that could redefine Bitcoin’s identity. The immutable nature of Bitcoin’s timechain makes it an attractive platform for data storage, creating tension with those who prioritize its monetary function. This discussion, particularly around Bitcoin Core pull request #32406 (GitHub PR #32406), highlights a critical juncture for Bitcoin’s future.
What is `OP_RETURN`?
Introduced in 2014, `OP_RETURN` allows users to attach up to 80 bytes of data to a Bitcoin transaction. Unlike other transaction outputs, `OP_RETURN` outputs are provably unspendable, meaning they don’t burden the Unspent Transaction Output (UTXO) set—a critical database for Bitcoin nodes. This feature was a compromise to provide a standardized, less harmful way to include metadata, addressing earlier practices that embedded data in ways that bloated the UTXO set. The 80-byte limit and restriction to one `OP_RETURN` output per transaction are part of Bitcoin Core’s standardness rules, which guide transaction relay and mining but are not enforced by the network’s consensus rules (Bitcoin Stack Exchange).
Standardness vs. Consensus Rules
Standardness rules are Bitcoin Core’s default policies for relaying and mining transactions. They differ from consensus rules, which define what transactions are valid across the entire network. For `OP_RETURN`:
- Consensus Rules: Allow `OP_RETURN` outputs with data up to the maximum script size (approximately 10,000 bytes) and multiple outputs per transaction (Bitcoin Stack Exchange).
- Standardness Rules: Limit `OP_RETURN` data to 80 bytes and one output per transaction to discourage excessive data storage and maintain network efficiency.
Node operators can adjust these policies using settings like `-datacarrier` (enables/disables `OP_RETURN` relay) and `-datacarriersize` (sets the maximum data size, defaulting to 83 bytes to account for the `OP_RETURN` opcode and pushdata byte). These settings allow flexibility but reflect Bitcoin Core’s default stance on limiting data usage.
The Proposal: Pull Request #32406
Bitcoin Core pull request #32406, proposed by developer instagibbs, seeks to relax these standardness restrictions (GitHub PR #32406). Key changes include:
- Removing Default Size Limits: The default `-datacarriersize` would be uncapped, allowing larger `OP_RETURN` data without a predefined limit.
- Allowing Multiple Outputs: The restriction to one `OP_RETURN` output per transaction would be lifted, with the total data size across all outputs subject to a configurable limit.
- Deprecating Configuration Options: The `-datacarrier` and `-datacarriersize` settings are marked as deprecated, signaling potential removal in future releases, which could limit node operators’ ability to enforce custom restrictions.
This proposal does not alter consensus rules, meaning miners and nodes can already accept transactions with larger or multiple `OP_RETURN` outputs. Instead, it changes Bitcoin Core’s default relay policy to align with existing practices, such as miners accepting non-standard transactions via services like Marathon Digital’s Slipstream (CoinDesk).
Node Operator Flexibility
Currently, node operators can customize `OP_RETURN` handling:
- Default Settings: Relay transactions with one `OP_RETURN` output up to 80 bytes.
- Custom Settings: Operators can disable `OP_RETURN` relay (`-datacarrier=0`) or adjust the size limit (e.g., `-datacarriersize=100`).
These options remain in #32406 but are deprecated, suggesting that future Bitcoin Core versions might not support such customization, potentially standardizing the uncapped policy.
Arguments in Favor of Relaxing Limits
Supporters of pull request #32406 and similar proposals argue that the current restrictions are outdated and ineffective. Their key points include:
- Ineffective Limits: Developers bypass the 80-byte limit using methods like Inscriptions, which store data in other transaction parts, often at higher cost and inefficiency (BitcoinDev Mailing List). Relaxing `OP_RETURN` could channel data into a more efficient format.
- Preventing UTXO Bloat: By encouraging `OP_RETURN` use, which doesn’t affect the UTXO set, the proposal could reduce reliance on harmful alternatives like unspendable Taproot outputs used by projects like Citrea’s Clementine bridge.
- Supporting Innovation: Projects like Citrea require more data (e.g., 144 bytes) for security proofs, and relaxed limits could enable new Layer 2 solutions (CryptoSlate).
- Code Simplification: Developers like Peter Todd argue that these limits complicate Bitcoin Core’s codebase unnecessarily (CoinGeek).
- Aligning with Practice: Miners already process non-standard transactions, and uncapping defaults could improve fee estimation and reduce reliance on out-of-band services, as noted by ismaelsadeeq in the pull request discussion.
In the GitHub discussion, developers like Sjors and TheCharlatan expressed support (Concept ACK), citing these efficiency and innovation benefits.
Arguments Against Relaxing Limits
Opponents, including prominent developers and community members, raise significant concerns about the implications of these changes:
- Deviation from Bitcoin’s Purpose: Critics like Luke Dashjr, who called the proposal “utter insanity,” argue that Bitcoin’s base layer should prioritize peer-to-peer cash, not data storage (CoinDesk). Jason Hughes warned it could turn Bitcoin into a “worthless altcoin” (BeInCrypto).
- Blockchain Bloat: Additional data increases the storage and processing burden on full nodes, potentially making node operation cost-prohibitive and threatening decentralization (CryptoSlate).
- Network Congestion: Unrestricted data could lead to “spam” transactions, raising fees and hindering Bitcoin’s use for financial transactions.
- Risk of Illicit Content: The timechain’s immutability means data, including potentially illegal or objectionable content, is permanently stored on every node. The 80-byte limit acts as a practical barrier, and relaxing it could exacerbate this issue.
- Preserving Consensus: Developers like John Carvalho view the limits as a hard-won community agreement, not to be changed lightly.
In the pull request discussion, nsvrn and moth-oss expressed concerns about spam and centralization, advocating for gradual changes. Concept NACKs from developers like wizkid057 and Luke Dashjr reflect strong opposition.
Community Feedback
The GitHub discussion for pull request #32406 shows a divided community:
- Support (Concept ACK): Sjors, polespinasa, ismaelsadeeq, miketwenty1, TheCharlatan, Psifour.
- Opposition (Concept NACK): wizkid057, BitcoinMechanic, Retropex, nsvrn, moth-oss, Luke Dashjr.
- Other: Peter Todd provided a stale ACK, indicating partial or outdated support.
Additional discussions on the BitcoinDev mailing list and related pull requests (e.g., #32359 by Peter Todd) highlight similar arguments, with #32359 proposing a more aggressive removal of all `OP_RETURN` limits and configuration options (GitHub PR #32359).
| Feedback Type | Developers | Key Points |
|---------------|------------|------------|
| Concept ACK | Sjors, ismaelsadeeq, others | Improves efficiency, supports innovation, aligns with mining practices. |
| Concept NACK | Luke Dashjr, wizkid057, others | Risks bloat, spam, centralization, and deviation from Bitcoin’s purpose. |
| Stale ACK | Peter Todd | Acknowledges proposal but with reservations or outdated support. |
Workarounds and Their Implications
The existence of workarounds, such as Inscriptions, which exploit SegWit discounts to embed data, is a key argument for relaxing `OP_RETURN` limits. These methods are costlier and less efficient, often costing more than `OP_RETURN` for data under 143 bytes (BitcoinDev Mailing List). Supporters argue that formalizing larger `OP_RETURN` data could streamline these use cases. Critics, however, see workarounds as a reason to strengthen, not weaken, restrictions, emphasizing the need to address underlying incentives rather than accommodating bypasses.
Ecosystem Pressures
External factors influence the debate:
- Miners: Services like Marathon Digital’s Slipstream process non-standard transactions for a fee, showing that market incentives already bypass standardness rules.
- Layer 2 Projects: Citrea’s Clementine bridge, requiring more data for security proofs, exemplifies the demand for relaxed limits to support innovative applications.
- Community Dynamics: The debate echoes past controversies, like the Ordinals debate, where data storage via inscriptions raised similar concerns about Bitcoin’s purpose (CoinDesk).
Bitcoin’s Identity at Stake
The `OP_RETURN` debate is not merely technical but philosophical, questioning whether Bitcoin should remain a focused monetary system or evolve into a broader data platform. Supporters see relaxed limits as a pragmatic step toward efficiency and innovation, while opponents view them as a risk to Bitcoin’s decentralization, accessibility, and core mission. The community’s decision will have lasting implications, affecting node operators, miners, developers, and users.
Conclusion
As Bitcoin navigates this crossroads, the community must balance the potential benefits of relaxed `OP_RETURN` limits—such as improved efficiency and support for new applications—against the risks of blockchain bloat, network congestion, and deviation from its monetary roots. The ongoing discussion is accessible via pull request #32406 on GitHub (GitHub PR #32406). Readers are encouraged to explore the debate and contribute to ensuring that any changes align with Bitcoin’s long-term goals as a decentralized, secure, and reliable system.
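A small code footnote to the `OP_RETURN` discussion above: a sketch of how an `OP_RETURN` output script is built and measured against the default relay budget, assuming bitcoinjs-lib's `payments.embed` helper. The data string and the size check are illustrative; the 83-byte figure is Bitcoin Core's default `-datacarriersize` as described in the article, not a consensus limit.

```typescript
// Sketch: building an OP_RETURN output script and checking it against the
// default standardness budget (80 data bytes; 83 including the OP_RETURN
// opcode and pushdata byte). Assumes bitcoinjs-lib is available.
import { payments } from "bitcoinjs-lib";

const data = Buffer.from("hello, timechain", "utf8");

// payments.embed builds a provably unspendable output: OP_RETURN <push data>
const embed = payments.embed({ data: [data] });
const script = embed.output!;          // the raw scriptPubKey bytes

console.log("script size:", script.length, "bytes");

// Consensus would accept far larger scripts; relay policy is the gatekeeper.
const DEFAULT_DATACARRIER_SIZE = 83;   // Bitcoin Core default (-datacarriersize)
if (script.length > DEFAULT_DATACARRIER_SIZE) {
  console.log("non-standard under default policy: most nodes won't relay it");
} else {
  console.log("standard under default policy: one such output per transaction");
}
```
-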
@ 2b1964b8:851949fa
2025-03-02 19:00:56Routine Picture-in-Picture American Sign Language Interpretation in American Broadcasting
(PiP, ASL)
Picture-in-picture sign language interpretation is a standard feature in news broadcasts across the globe. Why hasn’t America become a leader in picture-in-picture implementation too?
Misconception.
There are prevalent misunderstandings about the necessity of ASL interpreters in the media and beyond. As recently as January 2025, an American influencer with ~10M social followers on Instagram and X combined referred to sign language interpreters during emergency briefings as a distraction.
Such views overlook the fact that, for many deaf individuals, American Sign Language is their primary language. It is wrongly assumed that deaf Americans know—or should know—English. American Sign Language differs in grammatical structure from English. Moreover, human interpreters are able to convey nuances that captions often miss, such as non-manual markers; facial expressions, body movements, head positions utilized in sign language to convey meaning. English is the native language for many hearing Americans, who have access to it throughout the United States without any additional expectation placed upon them.
A deeper understanding reveals that many nations have their own unique signed languages, reflecting their local deaf culture and community — Brazilian Sign Language, British Sign Language, Finnish Sign Language, French Sign Language, Japanese Sign Language, Mexican Sign Language, Nigerian Sign Language, and South African Sign Language, among numerous others.
Bottom Line: American Sign Language is the native language for many American-born deaf individuals, and English is the native language for many American-born hearing individuals. It is a one-for-one relationship; both are equal.
In an era where information dissemination is instantaneous, ensuring that mainstream broadcasts are accessible to all citizens is paramount.
Public Figures Including Language Access In Their Riders
What's a rider? A rider is an addendum or supplemental clause added to a contract that expands or adjusts the contract's terms. Riders are commonly used in agreements for public figures to specify additional requirements such as personal preferences or technical needs.
A Simple Yet Powerful Action
Public figures have a unique ability to shape industry standards, and by including language access in their riders, they can make a profound impact with minimal effort.
* On-site American Sign Language interpretation ensures that deaf and hard-of-hearing individuals can fully engage with speeches and live events.
* Open captions (burned-in captions) for all live and post-production interview segments guarantee accessibility across platforms, making spoken content instantly available to a wider audience.
These implements don’t just benefit deaf constituents—they also support language learners, individuals in sound-sensitive environments, and any person who relies on, or simply prefers, visual reinforcement to engage with spoken content.
For public figures, adding these 2 requests to a rider is one of the most efficient and immediate ways to promote accessibility. By normalizing language access as a standard expectation, you encourage event organizers, broadcasters, and production teams to adopt these practices universally.
As a result, there will be an industry shift from accessibility as an occasional accommodation to an industry norm, ensuring that future events, interviews, and media content are more accessible for all. Beyond immediate accessibility, the regular presence of interpreters in public spaces increases awareness of sign language. Seeing interpreters in mainstream media can spark interest among both deaf and hearing children to pursue careers in interpretation, expanding future language access and representation.
Year-Round Commitment to Accessibility
Too often, language access is only considered when an immediate demand arises, which leads to rushed or inadequate solutions. While some events may include interpretation or captioning, these efforts can fall short when they lack the expertise and coordination necessary for true disability justice. Thoughtful, proactive planning ensures that language access is seamlessly integrated into events, rather than being a reactive measure.
Best practices happen when all key players are involved from the start:
* Accessibility leads with combined production and linguistic knowledge who can ensure accessibility remains central to the purpose rather than allowing themselves to be caught up in the spectacle of an event.
* Language experts who ensure accuracy and cultural competency.
* Production professionals who understand event logistics.
By prioritizing accessibility year-round, organizations create spaces where disability justice is not just accommodated, but expected—ensuring that every audience member, regardless of language needs, has access to information and engagement.
-
@ 8f69ac99:4f92f5fd
2025-05-06 14:21:13The popular conception of "anarchy" often evokes chaos, collapse, and violence. But what if anarchy meant something else? What if it represented a world where people cooperate and coordinate without imposed authorities? What if it implied freedom, voluntary order, and resilience, without coercion?
Bitcoin is one of the rare working examples of anarchist principles in action. It has no CEO, no state, no central planner, and yet the system works. It enforces rules. It proposes a new model of governance and offers a concrete exploration of anarcho-capitalism.
To understand it, we have to shift perspective. Bitcoin is not just software or an investment vehicle; it is a living system: a spontaneous order.
Spontaneous Order, Game Theory, and the Role of Economic Incentives
In contemporary politics and economics, it is generally assumed that order must come from above. Governments, corporations, and bureaucracies are seen as essential for organizing society at scale.
But this belief does not always hold.
Markets emerge spontaneously from exchange. Language evolves without central supervision. Open source projects thrive on voluntary contributions. None of these systems needs a king, and yet they have structure and they work.
Bitcoin belongs to this tradition of emergent orders. It is not dictated by a single entity, but governed through code, user consensus, and economic incentives that reward cooperation and penalize dishonesty.
Code as Constitution
Bitcoin runs on a set of transparent, verifiable software rules. These rules determine who can add blocks, how often, what constitutes a valid transaction, and how new coins are created.
These rules are not enforced by armies or police. They are upheld by a decentralized network of thousands of nodes, each voluntarily running software that validates compliance with the rules. If someone tries to break them, the rest of the network simply rejects their version.
This is not majority rule; it is rule-based acceptance.
Each node operator chooses which version of the software to run. If a proposed change lacks sufficient consensus, it does not propagate. This is how the "block size wars" were resolved: not by voting, but by signalling what users were willing to accept.
This bottom-up governance model is voluntary, permissionless, and extraordinarily resilient. It represents a new paradigm of self-regulating systems.
Miners, Incentives, and Game-Theoretic Security
Bitcoin secures its network using game theory. Miners who follow the protocol are rewarded financially. Those who try to cheat, for example by rewriting blocks or double-spending, suffer financial losses and waste resources.
Acting honestly is more profitable.
Bitcoin's genius lies in aligning selfish incentives with the common good. It removes the need to trust administrators or hope for benevolence. Instead, it makes fraud economically irrational.
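To put "economically irrational" in rough numbers, here is a toy sketch in TypeScript. Every figure below (hashrate share, costs, the double-spend amount, the naive success probability) is an assumption chosen only to illustrate the incentive gap, not a model of real mining economics.

```typescript
// Toy expected-value comparison: honest mining vs. attempting a double-spend.
// Every number here is an assumption for illustration only.

const subsidy = 3.125;       // BTC reward per block
const share = 0.1;           // attacker's fraction of total hashrate
const costPerBlock = 0.25;   // BTC-equivalent energy/hardware cost per block interval
const doubleSpend = 5;       // BTC the attacker hopes to spend twice

// Honest strategy over two block intervals: win each block with probability `share`.
const honestEV = 2 * (share * subsidy - costPerBlock);

// Naive attack: secretly mine two blocks in a row to rewrite history.
// Succeeds roughly with probability share^2; failure burns the energy and forfeits the rewards.
const attackSuccess = share ** 2;
const attackEV = attackSuccess * (doubleSpend + 2 * subsidy) - 2 * costPerBlock;

console.log(`honest EV : ${honestEV.toFixed(4)} BTC`); // ≈ 0.1250
console.log(`attack EV : ${attackEV.toFixed(4)} BTC`); // ≈ -0.3875
```

Under these made-up numbers the honest strategy has a positive expected value while the attack loses money, which is the whole point of the design.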
This replaces the traditional model of "trust the leaders" with a more robust one: build systems where bad behaviour is discouraged by design.
This is anarchist security: not the absence of rules, but the absence of rulers.
Voluntary Association and Trust Built on Consensus
Anyone can use Bitcoin. There are no identity checks, no licences, no approval process. Just download the software and start transacting.
Even so, Bitcoin is not disorganized chaos. Users follow strict protocol rules. Why? Because it is consensus that gives the "coins" their value. Without it, the network fragments and fails.
This is where Bitcoin challenges conventional ideas about anarchy. It shows that voluntary systems can produce stability, not because people are altruistic, but because well-designed incentives make cooperation the rational choice.
Bitcoin is trustless, yet it fosters trust.
A Living Proof of Concept
Many believe that without central control society would collapse. Bitcoin proves that this is not necessarily true.
It is a global, permissionless monetary network capable of enforcing property rights, coordinating resources, and resisting censorship, all without a central authority. It relies only on rules, incentives, and voluntary participation.
Bitcoin is not a perfect system. It is a dynamic project in constant evolution. But that is part of what makes it so relevant: it is real, it is working, and it keeps improving.
Conclusion
Anarchy does not have to mean chaos. It can mean cooperation without coercion. Bitcoin proves it.
We are desperately searching for alternatives to failed, bloated, and corrupt institutions. Bitcoin offers more than digital money. It is living proof that we can build decentralized, efficient, and fair societies.
And that, by itself, is already revolutionary.
Photo by Floris Van Cauwelaert on Unsplash
-
@ 90c656ff:9383fd4e
2025-05-06 14:10:48Bitcoin has been gaining increasing acceptance as a means of payment, evolving from being just a digital investment asset to becoming a viable alternative to traditional currencies. Today, many companies around the world already accept Bitcoin, providing consumers with greater financial freedom and reducing reliance on traditional banking intermediaries.
- Global companies that accept Bitcoin
Over the years, several well-known companies have begun accepting Bitcoin, recognizing its benefits such as security, transparency, and low transaction fees. Among the most prominent are:
01 - Microsoft: The tech giant allows users to add funds to their Microsoft accounts using Bitcoin. This enables the purchase of digital content such as games, apps, and software available in the Microsoft Store.
02 - Overstock: One of the largest online retailers that accepts Bitcoin for the purchase of furniture, electronics, and home goods. Overstock was an early adopter, signaling a strong commitment to financial innovation.
03 - AT&T: The U.S. telecommunications company was the first in its industry to accept Bitcoin payments, giving customers the option to pay their bills with cryptocurrency through BitPay.
04 - Twitch: While Twitch does not natively support Bitcoin donations or payments, many streamers use third-party services like NOWPayments, Streamlabs (with Coinbase integration), or Plisio to accept crypto tips and donations. This opens a path for Bitcoin support through external platforms, especially within the content creator community.
05 - Namecheap: A leading domain registrar and web hosting provider that accepts Bitcoin for domain registration and hosting services, showcasing Bitcoin's usefulness in the digital economy.
- Small businesses and local commerce
Beyond large corporations, a growing number of small businesses and local merchants are embracing Bitcoin, particularly in cities that are becoming hubs for digital innovation.
01 - Restaurants and cafés: In cities like Lisbon, London, and New York, several cafés and eateries accept Bitcoin as payment, attracting tech-savvy customers.
02 - Hotels and tourism: Certain hotel chains and travel platforms now accept Bitcoin, simplifying bookings and removing the need for currency exchange for international travelers.
03 - Online stores: Many small e-commerce businesses offer Bitcoin as a payment option or even operate exclusively using cryptocurrency, benefiting from borderless, fast transactions.
- Advantages for businesses and consumers
The growing acceptance of Bitcoin is largely driven by its advantages:
01 - Lower transaction fees: Businesses can reduce costs associated with credit card fees and payment processors.
02 - No intermediaries: Direct peer-to-peer payments cut down on bureaucracy and reduce fraud risks.
03 - Global access: Bitcoin allows for cross-border payments without the need for currency exchange, ideal for international transactions.
In summary, the adoption of Bitcoin as a means of payment continues to expand, with companies of all sizes recognizing its strategic value. From large enterprises to independent creators and local shops, Bitcoin is gradually becoming a more practical and accepted financial tool. While challenges such as volatility and regulatory uncertainty remain, the broader trend points toward a future where paying with Bitcoin could be a common part of everyday life.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ 97c70a44:ad98e322
2025-01-30 17:15:37There was a slight dust up recently over a website someone runs removing a listing for an app someone built based on entirely arbitrary criteria. I'm not to going to attempt to speak for either wounded party, but I would like to share my own personal definition for what constitutes a "nostr app" in an effort to help clarify what might be an otherwise confusing and opaque purity test.
In this post, I will be committing the "no true Scotsman" fallacy, in which I start with the most liberal definition I can come up with, and gradually refine it until all that is left is the purest, gleamingest, most imaginary and unattainable nostr app imaginable. As I write this, I wonder if anything built yet will actually qualify. In any case, here we go.
It uses nostr
The lowest bar for what a "nostr app" might be is an app ("application" - i.e. software, not necessarily a native app of any kind) that has some nostr-specific code in it, but which doesn't take any advantage of what makes nostr distinctive as a protocol.
Examples might include a scraper of some kind which fulfills its charter by fetching data from relays (regardless of whether it validates or retains signatures). Another might be a regular web 2.0 app which provides an option to "log in with nostr" by requesting and storing the user's public key.
In either case, the fact that nostr is involved is entirely neutral. A scraper can scrape html, pdfs, jsonl, whatever data source - nostr relays are just another target. Likewise, a user's key in this scenario is treated merely as an opaque identifier, with no appreciation for the super powers it brings along.
In most cases, this kind of app only exists as a marketing ploy, or less cynically, because it wants to get in on the hype of being a "nostr app", without the developer quite understanding what that means, or having the budget to execute properly on the claim.
It leverages nostr
Some of you might be wondering, "isn't 'leverage' a synonym for 'use'?" And you would be right, but for one connotative difference. It's possible to "use" something improperly, but by definition leverage gives you a mechanical advantage that you wouldn't otherwise have. This is the second category of "nostr app".
This kind of app gets some benefit out of the nostr protocol and network, but in an entirely selfish fashion. The intention of this kind of app is not to augment the nostr network, but to augment its own UX by borrowing some nifty thing from the protocol without really contributing anything back.
Some examples might include:
- Using nostr signers to encrypt or sign data, and then store that data on a proprietary server.
- Using nostr relays as a kind of low-code backend, but using proprietary event payloads.
- Using nostr event kinds to represent data (why), but not leveraging the trustlessness that buys you.
An application in this category might even communicate to its users via nostr DMs - but this doesn't make it a "nostr app" any more than a website that emails you hot deals on herbal supplements is an "email app". These apps are purely parasitic on the nostr ecosystem.
In the long-term, that's not necessarily a bad thing. Email's ubiquity is self-reinforcing. But in the short term, this kind of "nostr app" can actually do damage to nostr's reputation by over-promising and under-delivering.
It complements nostr
Next up, we have apps that get some benefit out of nostr as above, but give back by providing a unique value proposition to nostr users as nostr users. This is a bit of a fine distinction, but for me this category is for apps which focus on solving problems that nostr isn't good at solving, leaving the nostr integration in a secondary or supporting role.
One example of this kind of app was Mutiny (RIP), which not only allowed users to sign in with nostr, but also pulled those users' social graphs so that users could send money to people they knew and trusted. Mutiny was doing a great job of leveraging nostr, as well as providing value to users with nostr identities - but it was still primarily a bitcoin wallet, not a "nostr app" in the purest sense.
Other examples are things like Nostr Nests and Zap.stream, whose core value proposition is streaming video or audio content. Both make great use of nostr identities, data formats, and relays, but they're primarily streaming apps. A good litmus test for things like this is: if you got rid of nostr, would it be the same product (even if inferior in certain ways)?
A similar category is infrastructure providers that benefit nostr by their existence (and may in fact be targeted explicitly at nostr users), but do things in a centralized, old-web way; for example: media hosts, DNS registrars, hosting providers, and CDNs.
To be clear here, I'm not casting aspersions (I don't even know what those are, or where to buy them). All the apps mentioned above use nostr to great effect, and are a real benefit to nostr users. But they are not True Scotsmen.
It embodies nostr
Ok, here we go. This is the crème de la crème, the top du top, the meilleur du meilleur, the bee's knees. The purest, holiest, most chaste category of nostr app out there. The apps which are, indeed, nostr indigitate.
This category of nostr app (see, no quotes this time) can be defined by the converse of the previous category. If nostr was removed from this type of application, would it be impossible to create the same product?
To tease this apart a bit, apps that leverage the technical aspects of nostr are dependent on nostr the protocol, while apps that benefit nostr exclusively via network effect are integrated into nostr the network. An app that does both things is working in symbiosis with nostr as a whole.
An app that embraces both nostr's protocol and its network becomes an organic extension of every other nostr app out there, multiplying both its competitive moat and its contribution to the ecosystem:
- In contrast to apps that only borrow from nostr on the technical level but continue to operate in their own silos, an application integrated into the nostr network comes pre-packaged with existing users, and is able to provide more value to those users because of other nostr products. On nostr, it's a good thing to advertise your competitors.
- In contrast to apps that only market themselves to nostr users without building out a deep integration on the protocol level, a deeply integrated app becomes an asset to every other nostr app by becoming an organic extension of them through interoperability. This results in increased traffic to the app as other developers and users refer people to it instead of solving their problem on their own. This is the "micro-apps" utopia we've all been waiting for.
Credible exit doesn't matter if there aren't alternative services. Interoperability is pointless if other applications don't offer something your app doesn't. Marketing to nostr users doesn't matter if you don't augment their agency as nostr users.
If I had to choose a single NIP that represents the mindset behind this kind of app, it would be NIP 89 A.K.A. "Recommended Application Handlers", which states:
Nostr's discoverability and transparent event interaction is one of its most interesting/novel mechanics. This NIP provides a simple way for clients to discover applications that handle events of a specific kind to ensure smooth cross-client and cross-kind interactions.
These handlers are the glue that holds nostr apps together. A single event, signed by the developer of an application (or by the application's own account) tells anyone who wants to know 1. what event kinds the app supports, 2. how to link to the app (if it's a client), and (if the pubkey also publishes a kind 10002), 3. which relays the app prefers.
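To make that concrete, here is a rough sketch of what such a handler event can look like. Kind 31990 is the handler information kind defined by NIP 89; everything below (pubkey, URLs, the supported kinds, the `d` identifier) is a placeholder rather than a real listing.

```typescript
// Sketch of a NIP 89 "handler information" event (kind 31990).
// All values are placeholders; only the shape matters here.
const handlerEvent = {
  kind: 31990,
  pubkey: "<app or developer pubkey>",
  created_at: 1700000000,
  content: "",
  tags: [
    ["d", "my-forms-client"],                             // stable identifier for this listing
    ["k", "30023"],                                       // event kinds the app can handle
    ["k", "1"],
    ["web", "https://example.app/e/<bech32>", "nevent"],  // how to deep-link into the app
    ["web", "https://example.app/p/<bech32>", "nprofile"],
  ],
  id: "<event id>",
  sig: "<signature>",
};
```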
As a sidenote, NIP 89 is currently focused more on clients, leaving DVMs, relays, signers, etc somewhat out in the cold. Updating 89 to include tailored listings for each kind of supporting app would be a huge improvement to the protocol. This, plus a good front end for navigating these listings (sorry nostrapp.link, close but no cigar) would obviate the evil centralized websites that curate apps based on arbitrary criteria.
Examples of this kind of app obviously include many kind 1 clients, as well as clients that attempt to bring the benefits of the nostr protocol and network to new use cases - whether long form content, video, image posts, music, emojis, recipes, project management, or any other "content type".
To drill down into one example, let's think for a moment about forms. What's so great about a forms app that is built on nostr? Well,
- There is a spec for forms and responses, which means that...
- Multiple clients can implement the same data format, allowing for credible exit and user choice, even of...
- Other products not focused on forms, which can still view, respond to, or embed forms, and which can send their users via NIP 89 to a client that does...
- Cryptographically sign forms and responses, which means they are self-authenticating and can be sent to...
- Multiple relays, which reduces the amount of trust necessary to be confident results haven't been deliberately "lost".
Show me a forms product that does all of those things, and isn't built on nostr. You can't, because it doesn't exist. Meanwhile, there are plenty of image hosts with APIs, streaming services, and bitcoin wallets which have basically the same levels of censorship resistance, interoperability, and network effect as if they weren't built on nostr.
It supports nostr
Notice I haven't said anything about whether relays, signers, blossom servers, software libraries, DVMs, and the accumulated addenda of the nostr ecosystem are nostr apps. Well, they are (usually).
This is the category of nostr app that gets none of the credit for doing all of the work. There's no question that they qualify as beautiful nostrcorns, because their value propositions are entirely meaningless outside of the context of nostr. Who needs a signer if you don't have a cryptographic identity you need to protect? DVMs are literally impossible to use without relays. How are you going to find the blossom server that will serve a given hash if you don't know which servers the publishing user has selected to store their content?
In addition to being entirely contextualized by nostr architecture, this type of nostr app is valuable because it does things "the nostr way". By that I mean that they don't simply try to replicate existing internet functionality into a nostr context; instead, they create entirely new ways of putting the basic building blocks of the internet back together.
A great example of this is how Nostr Connect, Nostr Wallet Connect, and DVMs all use relays as brokers, which allows service providers to avoid having to accept incoming network connections. This opens up really interesting possibilities all on its own.
So while I might hesitate to call many of these things "apps", they are certainly "nostr".
Appendix: it smells like a NINO
So, let's say you've created an app, but when you show it to people they politely smile, nod, and call it a NINO (Nostr In Name Only). What's a hacker to do? Well, here's your handy-dandy guide on how to wash that NINO stench off and Become a Nostr.
You app might be a NINO if:
- There's no NIP for your data format (or you're abusing NIP 78, 32, etc by inventing a sub-protocol inside an existing event kind)
- There's a NIP, but no one knows about it because it's in a text file on your hard drive (or buried in your project's repository)
- Your NIP imposes an incompatible/centralized/legacy web paradigm onto nostr
- Your NIP relies on trusted third (or first) parties
- There's only one implementation of your NIP (yours)
- Your core value proposition doesn't depend on relays, events, or nostr identities
- One or more relay urls are hard-coded into the source code
- Your app depends on a specific relay implementation to work (ahem, relay29)
- You don't validate event signatures (a minimal verification sketch follows this list)
- You don't publish events to relays you don't control
- You don't read events from relays you don't control
- You use legacy web services to solve problems, rather than nostr-native solutions
- You use nostr-native solutions, but you've hardcoded their pubkeys or URLs into your app
- You don't use NIP 89 to discover clients and services
- You haven't published a NIP 89 listing for your app
- You don't leverage your users' web of trust for filtering out spam
- You don't respect your users' mute lists
- You try to "own" your users' data
Now let me just re-iterate - it's ok to be a NINO. We need NINOs, because nostr can't (and shouldn't) tackle every problem. You just need to decide whether your app, as a NINO, is actually contributing to the nostr ecosystem, or whether you're just using buzzwords to whitewash a legacy web software product.
If you're in the former camp, great! If you're in the latter, what are you waiting for? Only you can fix your NINO problem. And there are lots of ways to do this, depending on your own unique situation:
- Drop nostr support if it's not doing anyone any good. If you want to build a normal company and make some money, that's perfectly fine.
- Build out your nostr integration - start taking advantage of webs of trust, self-authenticating data, event handlers, etc.
- Work around the problem. Think you need a special relay feature for your app to work? Guess again. Consider encryption, AUTH, DVMs, or better data formats.
- Think your idea is a good one? Talk to other devs or open a PR to the nips repo. No one can adopt your NIP if they don't know about it.
- Keep going. It can sometimes be hard to distinguish a research project from a NINO. New ideas have to be built out before they can be fully appreciated.
- Listen to advice. Nostr developers are friendly and happy to help. If you're not sure why you're getting traction, ask!
I sincerely hope this article is useful for all of you out there in NINO land. Maybe this made you feel better about not passing the totally optional nostr app purity test. Or maybe it gave you some actionable next steps towards making a great NINON (Nostr In Not Only Name) app. In either case, GM and PV.
-
@ c1e9ab3a:9cb56b43
2025-05-06 14:05:40If you're an engineer stepping into the Bitcoin space from the broader crypto ecosystem, you're probably carrying a mental model shaped by speed, flexibility, and rapid innovation. That makes sense—most blockchain platforms pride themselves on throughput, programmability, and dev agility.
But Bitcoin operates from a different set of first principles. It’s not competing to be the fastest network or the most expressive smart contract platform. It’s aiming to be the most credible, neutral, and globally accessible value layer in human history.
Here’s why that matters—and why Bitcoin is not just an alternative crypto asset, but a structural necessity in the global financial system.
1. Bitcoin Fixes the Triffin Dilemma—Not With Policy, But Protocol
The Triffin Dilemma shows us that any country issuing the global reserve currency must run persistent deficits to supply that currency to the world. That’s not a flaw of bad leadership—it’s an inherent contradiction. The U.S. must debase its own monetary integrity to meet global dollar demand. That’s a self-terminating system.
Bitcoin sidesteps this entirely by being:
- Non-sovereign – no single nation owns it
- Hard-capped – no central authority can inflate it
- Verifiable and neutral – anyone with a full node can enforce the rules
In other words, Bitcoin turns global liquidity into an engineering problem, not a political one. No other system, fiat or crypto, has achieved that.
2. Bitcoin’s “Ossification” Is Intentional—and It's a Feature
From the outside, Bitcoin development may look sluggish. Features are slow to roll out. Code changes are conservative. Consensus rules are treated as sacred.
That’s the point.
When you’re building the global monetary base layer, stability is not a weakness. It’s a prerequisite. Every other financial instrument, app, or protocol that builds on Bitcoin depends on one thing: assurance that the base layer won’t change underneath them without extreme scrutiny.
So-called “ossification” is just another term for predictability and integrity. And when the market does demand change (SegWit, Taproot), Bitcoin’s soft-fork governance process has proven capable of deploying it safely—without coercive central control.
3. Layered Architecture: Throughput Is Not a Base Layer Concern
You don’t scale settlement at the base layer. You build layered systems. Just as TCP/IP doesn't need to carry YouTube traffic directly, Bitcoin doesn’t need to process every microtransaction.
Instead, it anchors:
- Lightning (fast payments)
- Fedimint (community custody)
- Ark (privacy + UTXO compression)
- Statechains, sidechains, and covenants (coming evolution)
All of these inherit Bitcoin’s security and scarcity, while handling volume off-chain, in ways that maintain auditability and self-custody.
4. Universal Assayability Requires Minimalism at the Base Layer
A core design constraint of Bitcoin is that any participant, anywhere in the world, must be able to independently verify the validity of every transaction and block—past and present—without needing permission or relying on third parties.
This property is called assayability—the ability to “test” or verify the authenticity and integrity of received bitcoin, much like verifying the weight and purity of a gold coin.
To preserve this:
- The base layer must remain resource-light, so running a full node stays accessible on commodity hardware.
- Block sizes must remain small enough to prevent centralization of verification.
- Historical data must remain consistent and tamper-evident, enabling proof chains across time and jurisdiction.
Any base layer that scales by increasing throughput or complexity undermines this fundamental guarantee, making the network more dependent on trust and surveillance infrastructure.
Bitcoin prioritizes global verifiability over throughput—because trustless money requires that every user can check the money they receive.
5. Governance: Not Captured, Just Resistant to Coercion
The current controversy around OP_RETURN and proposals to limit inscriptions is instructive. Some prominent devs have advocated for changes to block content filtering. Others see it as overreach.

Here's what matters:
- No single dev, or team, can force changes into the network. Period.
- Bitcoin Core is not “the source of truth.” It’s one implementation. If it deviates from market consensus, it gets forked, sidelined, or replaced.
- The economic majority—miners, users, businesses—enforce Bitcoin’s rules, not GitHub maintainers.
In fact, recent community resistance to perceived Core overreach only reinforces Bitcoin’s resilience. Engineers who posture with narcissistic certainty, dismiss dissent, or attempt to capture influence are routinely neutralized by the market’s refusal to upgrade or adopt forks that undermine neutrality or openness.
This is governance via credible neutrality and negative feedback loops. Power doesn’t accumulate in one place. It’s constantly checked by the network’s distributed incentives.
6. Bitcoin Is Still in Its Infancy—And That’s a Good Thing
You’re not too late. The ecosystem around Bitcoin—especially L2 protocols, privacy tools, custody innovation, and zero-knowledge integrations—is just beginning.
If you're an engineer looking for:
- Systems with global scale constraints
- Architectures that optimize for integrity, not speed
- Consensus mechanisms that resist coercion
- A base layer with predictable monetary policy
Then Bitcoin is where serious systems engineers go when they’ve outgrown crypto theater.
Take-away
Under realistic, market-aware assumptions—where:
- Bitcoin’s ossification is seen as a stability feature, not inertia,
- Market forces can and do demand and implement change via tested, non-coercive mechanisms,
- Proof-of-work is recognized as the only consensus mechanism resistant to fiat capture,
- Wealth concentration is understood as a temporary distribution effect during early monetization,
- Low base layer throughput is a deliberate design constraint to preserve verifiability and neutrality,
- And innovation is layered by design, with the base chain providing integrity, not complexity...
Then Bitcoin is not a fragile or inflexible system—it is a deliberately minimal, modular, and resilient protocol.
Its governance is not leaderless chaos; it's a negative-feedback structure that minimizes the power of individuals or institutions to coerce change. The very fact that proposals—like controversial OP_RETURN restrictions—can be resisted, forked around, or ignored by the market without breaking the system is proof of decentralized control, not dysfunction.
Bitcoin is an adversarially robust monetary foundation. Its value lies not in how fast it changes, but in how reliably it doesn't—unless change is forced by real, bottom-up demand and implemented through consensus-tested soft forks.
In this framing, Bitcoin isn't a slower crypto. It's the engineering benchmark for systems that must endure, not entertain.
Final Word
Bitcoin isn’t moving slowly because it’s dying. It’s moving carefully because it’s winning. It’s not an app platform or a sandbox. It’s a protocol layer for the future of money.
If you're here because you want to help build that future, you’re in the right place.
nostr:nevent1qqswr7sla434duatjp4m89grvs3zanxug05pzj04asxmv4rngvyv04sppemhxue69uhkummn9ekx7mp0qgs9tc6ruevfqu7nzt72kvq8te95dqfkndj5t8hlx6n79lj03q9v6xcrqsqqqqqp0n8wc2
nostr:nevent1qqsd5hfkqgskpjjq5zlfyyv9nmmela5q67tgu9640v7r8t828u73rdqpr4mhxue69uhkymmnw3ezucnfw33k76tww3ux76m09e3k7mf0qgsvr6dt8ft292mv5jlt7382vje0mfq2ccc3azrt4p45v5sknj6kkscrqsqqqqqp02vjk5
nostr:nevent1qqstrszamvffh72wr20euhrwa0fhzd3hhpedm30ys4ct8dpelwz3nuqpr4mhxue69uhkymmnw3ezucnfw33k76tww3ux76m09e3k7mf0qgs8a474cw4lqmapcq8hr7res4nknar2ey34fsffk0k42cjsdyn7yqqrqsqqqqqpnn3znl
-
@ 90c656ff:9383fd4e
2025-05-06 13:41:50Bitcoin was created to offer a secure and decentralized alternative to traditional money, enabling financial transactions without the need for intermediaries. DeFi, on the other hand, emerged as an expansion of this concept, proposing decentralized financial services such as lending, exchanges, and yield generation. However, despite its promises of innovation, DeFi carries numerous risks, making it a dangerous bet for those who value the security of their Bitcoin.
What is DeFi?
DeFi refers to a set of financial applications that operate without the intermediation of banks or traditional institutions. These platforms use smart contracts to automate transactions, allowing anyone to access financial services without relying on third parties. In theory, DeFi promises greater financial freedom, but in practice it is full of risks, scams, and technical vulnerabilities that can compromise users' funds.
- The risks of DeFi for Bitcoin holders
Bitcoin is the most secure digital currency in the world, protected by a decentralized and censorship-resistant network. Unlike DeFi, which is still in an experimental phase and has already suffered numerous attacks, Bitcoin remains solid and reliable. When someone places Bitcoin in DeFi platforms, they give up the security of direct custody and trust weaker systems.
The main risks include:
01 - Hackers and code flaws: Smart contracts are written by programmers and may contain bugs that allow massive thefts. Over the years, billions of dollars have been lost due to vulnerabilities in DeFi platforms.
02 - Liquidation risks: Many DeFi applications operate on collateralization systems, where users lock Bitcoin to obtain loans. If the market becomes volatile, those Bitcoins can be liquidated at lower-than-expected prices, causing irreversible losses (a simple example of the math follows this list).
03 - Scams and rug pulls: DeFi is full of shady projects where creators vanish with users' funds. Without regulation and without guarantees, those who deposit Bitcoin in these platforms may never recover their funds.
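To see why sudden volatility hurts so much in the liquidation scenario above, here is a toy calculation. The 80% threshold and the dollar figures are assumptions for illustration, not any particular platform's terms.

```typescript
// Toy liquidation math for a BTC-collateralized loan.
// The loan-to-value threshold and all amounts are assumptions for illustration.

const collateralBtc = 1.0;      // BTC locked as collateral
const loanUsd = 40_000;         // stablecoin debt taken against it
const liquidationLtv = 0.8;     // position is liquidated when debt / collateral value reaches 80%

// Liquidation happens when: loanUsd / (collateralBtc * price) >= liquidationLtv
const liquidationPrice = loanUsd / (collateralBtc * liquidationLtv);

console.log(`liquidation price: $${liquidationPrice.toFixed(0)}`); // $50000
```

If the market gaps below that price faster than the borrower can top up collateral, the Bitcoin is sold automatically, often at the worst possible moment.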
- Keeping Bitcoin safe is the best choice
Bitcoin was created to be self-custodied, meaning each user should have direct control over their funds without relying on third parties. By sending Bitcoin to DeFi platforms, that security is lost and the asset is exposed to unnecessary risks. The best way to protect Bitcoin is to store it in a secure wallet, preferably offline (cold storage), avoiding any exposure to smart contracts or vulnerable systems.
In summary, DeFi may seem innovative, but the risks far outweigh the potential benefits—especially for those who value Bitcoin's security. Instead of risking losing funds on insecure platforms, the wisest choice is to keep Bitcoin safely stored, ensuring its long-term preservation. While Bitcoin continues to be the best digital store of value in the world, DeFi remains an unstable and dangerous environment where few win and many end up losing.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ 50809a53:e091f164
2025-01-20 22:30:01For starters, anyone who is interested in curating and managing "notes, lists, bookmarks, kind-1 events, or other stuff" should watch this video:
https://youtu.be/XRpHIa-2XCE
Now, assuming you have watched it, I will proceed assuming you are aware of many of the applications that exist for a very similar purpose. I'll break them down further, following a similar trajectory in order of how I came across them, and a bit about my own path on this journey.
We'll start way back in the early 2000s, before Bitcoin existed. We had https://zim-wiki.org/
It is tried and true, and to this day remains an option for people looking for a very simple solution to a potentially complex problem. Zim-Wiki works. But it is limited.
Let's step into the realm of proprietary. Obsidian, Joplin, and LogSeq. The first two are entirely cloud-operative applications, with more of a focus on the true benefit of being a paid service. I will assume anyone reading this is capable of exploring the marketing of these applications, or trying their freemium product, to get a feeling for what they are capable of.
I bring up Obsidian because it is very crucial to understand the market placement of publication. We know social media handles the 'hosting' problem of publishing notes "and other stuff" by harvesting data and making deals with advertisers. But- what Obsidian has evolved to offer is a full service known as 'publish'. This means users can stay in the proprietary pipeline, "from thought to web." all for $8/mo.
See: https://obsidian.md/publish
THIS IS NOSTR'S PRIMARY COMPETITION. WE ARE HERE TO DISRUPT THIS MARKET, WITH NOTES AND OTHER STUFF. WITH RELAYS. WITH THE PROTOCOL.
Now, on to Joplin. I have never used this, because I opted to study the FOSS market and stayed free of any reliance on a paid solution. Many people like Joplin, and I gather the reason is that it has stayed flexible, and the options that integrate with Joplin seem to provide good solutions for users who need that functionality. I see Nostr users recommending Joplin, so I felt it was worthwhile to mention as a case-study option. I myself need to investigate it more, but have found comfort in other solutions.
LogSeq - This is my "other solutions." It seems to be trapped in its proprietary web of funding and constraint. I use it because it turns my desktop into a power-house of note archival. But by using it- I AM TRAPPED TOO. This means LogSeq is by no means a working solution for Nostr users who want a long-term archival option.
But the trap is not a cage. It's merely a box. My notes can be exported to other applications with graphing and node-based information structure. Specifically, I can export these notes to:
- Text
- OPML
- HTML
- and, PNG, for whatever that is worth.
Let's try out the PNG option, just for fun. Here's an exported PNG of my "Games on Nostr" list, which has long been abandoned. I once decided to poll some CornyChat users to see what games they enjoyed- and I documented them in a LogSeq page for my own future reference. You can see it here:
https://i.postimg.cc/qMBPDTwr/image.png
This is a very simple example of how a single "page" or "list" in LogSeq can be multipurpose. It is a small list, with multiple "features" or variables at play. First, I have listed out a variety of complex games that might make sense with "multiplayer" identification that relies on our npubs or nip-05 addresses to aggregate user data. We can ALL imagine playing games like Tetris, Snake, or Catan together with our Nostr identities. But of course we are a long way from breaking into the video game market.
On a mostly irrelevant sidenote- you might notice in my example list, that I seem to be excited about a game called Dot.Hack. I discovered this small game on Itch.io and reached out to the developer on Twitter, in an attempt to purple-pill him, but moreso to inquire about his game. Unfortunately there was no response, even without mention of Nostr. Nonetheless, we pioneer on. You can try the game here: https://propuke.itch.io/planethack
So instead let's focus on the structure of "one working list." The middle section of this list is where I polled users, and simply listed out their suggestions. Of course we discussed these before I documented them, so it is not a direct result of a poll, but actually a working interaction of poll results! This is crucial because it separates my list from the aggregated data, and implies its relevance/importance.
The final section of this ONE list- is the beginnings of where I conceptually connect nostr with video game functionality. You can look at this as the beginning of a new graph, which would be "Video Game Operability With Nostr".
These three sections make up one concept within my brain. It exists in other users' brains too- but of course they are not as committed to the concept as myself- the one managing the communal discussion.
With LogSeq- I can grow and expand these lists. These lists can become graphs. Those graphs can become entire catalogues of information than can be shared across the web.
I can replicate this system with bookmarks, ideas, application design, shopping lists, LLM prompting, video/music playlists, friend lists, RELAY lists, the LIST goes ON forever!
So where does that lead us? I think it leads us to kind-1 events. We don't have much in the way of "kind-1 event managers" because most developers would agree that "storing kind-1 events locally" is.. at the very least, not so important. But it could be! If only a superapp existed that could interface seamlessly with nostr, yada yada.. we've heard it all before. We aren't getting a superapp before we have microapps. Basically this means frameworking the protocol before worrying about the all-in-one solution.
So this article will step away from the deep desire for a Nostr-enabled, Rust-built, FOSS, non-commercialized FREEDOM APP, that will exist one day, we hope.
Instead, we will focus on simple attempts of the past. I encourage others to chime in with their experience.
Zim-Wiki is foundational. The user constructs pages, and can then develop them into books.
LogSeq has the right idea- but is constrained in too many ways to prove to be a working solution at this time. However, it is very much worth experimenting with, and investigating, and modelling ourselves after.
https://workflowy.com/ is next on our list. This is great for users who think LogSeq is too complex. They "just want simple notes." Get a taste with WorkFlowy. You will understand why LogSeq is powerful if you see value in WF.
I am writing this article in favor of a redesign of LogSeq to be compatible with Nostr. I have been drafting the idea since before Nostr existed- and with Nostr I truly believe it will be possible. So, I will stop to thank everyone who has made Nostr what it is today. I wouldn't be publishing this without you!
One app I need to investigate more is Zettlr. I will mention it here for others to either discuss or investigate, as it is also mentioned some in the video I opened with. https://www.zettlr.com/
On my path to finding Nostr, before its inception, was a service called Deta.Space. This was an interesting project, not entirely unique or original, but completely fresh and very beginner-friendly. DETA WAS AN AWESOME CLOUD OS. And we could still design a form of Nostr ecosystem that is managed in this way. But, what we have now is excellent, and going forward I only see "additional" or supplemental.
Along the timeline, Deta sunsetted their Space service and launched https://deta.surf/
You might notice they advertise that "This is the future of bookmarks."
I have to wonder if perhaps I got through to them that bookmarking was what their ecosystem could empower. While I have not tried Surf, it looks interesting, but does not seem to address what I found most valuable about Deta.Space: https://webcrate.app/
WebCrate was an early bookmarking client for Deta.Space which was likely their most popular application. What was amazing about WebCrate was that it delivered "simple bookmarking." At one point I decided to migrate my bookmarks from other apps, like Pocket and WorkFlowy, into WebCrate.
This ended up being an awful decision, because WebCrate is no longer being developed. However, much to the credit of Deta.Space, my WebCrate instance is still running and completely functional. I have since migrated what I deem important into a local LogSeq graph, so my bookmarks are safe. But the development history of WebCrate is worth noting.
WebCrate did not provide a working directory of crates. All creates were contained within a single-level directory. Essentially there were no layers. Just collections of links. This isn't enough for any user to effectively manage their catalogue of notes. With some pressure, I did encourage the German developer to flesh out a form of tagging, which did alleviate the problem to some extent. But as we see with Surf, they have pioneered in another direction.
That brings us back to Nostr. Where can we look for the best solution? There simply isn't one yet. But, we can look at some other options for inspiration.
HedgeDoc: https://hedgedoc.org/
I am eager for someone to fork HedgeDoc and employ Nostr sign-in. This is a small step toward managing information together within the Nostr ecosystem. I will attempt this myself eventually, if no one else does, but I am prioritizing my development in this way:
- A nostr client that allows the cataloguing and management of relays locally.
- A LogSeq alternative with Nostr interoperability.
- HedgeDoc + Nostr is #3 on my list, despite being the easiest option.
Check out HedgeDoc 2.0 if you have any interest in a cooperative Markdown experience on Nostr: https://docs.hedgedoc.dev/
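For anyone tempted to pick up that HedgeDoc + Nostr idea, here is roughly what browser-side sign-in via NIP-07 looks like. It assumes the user has a NIP-07 signer extension installed (such as nos2x or Alby); the event kind and the challenge tag are illustrative choices on my part, not part of the NIP-07 spec itself.

```typescript
// Sketch: "log in with Nostr" via a NIP-07 browser extension (window.nostr).
// Assumes a NIP-07 extension is installed; server-side verification is up to you.

interface Nip07Signer {
  getPublicKey(): Promise<string>;
  signEvent(event: {
    kind: number;
    created_at: number;
    tags: string[][];
    content: string;
  }): Promise<{ id: string; pubkey: string; sig: string }>;
}

declare global {
  interface Window { nostr?: Nip07Signer; }
}

export async function loginWithNostr(challenge: string) {
  if (!window.nostr) throw new Error("No NIP-07 signer found");
  const pubkey = await window.nostr.getPublicKey();
  // Sign a throwaway event carrying the server's challenge; the server verifies the
  // signature and associates the session with this pubkey.
  const signed = await window.nostr.signEvent({
    kind: 27235, // illustrative; NIP-98-style HTTP auth is one possible convention
    created_at: Math.floor(Date.now() / 1000),
    tags: [["challenge", challenge]],
    content: "",
  });
  return { pubkey, signed };
}
```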
Now, this article should catch up all of my dearest followers, and idols, to where I stand with "bookmarking, note-taking, list-making, kind-1 event management, frameworking, and so on..."
Where it leads us to, is what's possible. Let's take a look at what's possible, once we forego ALL OF THE PROPRIETARY WEB'S BEST OPTIONS:
https://denizaydemir.org/
https://denizaydemir.org/graph/how-logseq-should-build-a-world-knowledge-graph/
https://subconscious.network/
Nostr is even inspired by much of the history that has gone into information management systems. nostr:npub1jlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qdjynqn I know looks up to Gordon Brander, just as I do. You can read his articles here: https://substack.com/@gordonbrander and they are very much worth reading! Also, I could note that the original version of Highlighter by nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft was also inspired partially by WorkFlowy.
About a year ago, I was mesmerized coming across SubText and thinking I had finally found the answer Nostr might even be looking for. But, for now I will just suggest that others read the Readme.md on the SubText Github, as well as articles by Brander.
Good luck everyone. I am here to work with ANYONE who is interested in these type of solution on Nostr.
My first order of business in this space is to spearhead a community of npubs who share this goal. Everyone who is interested in note-taking or list-making or bookmarking is welcome to join. I have created an INVITE-ONLY relay for this very purpose, and anyone is welcome to reach out if they wish to be added to the whitelist. It should be freely readable in the near future, if it is not already, but for now will remain a closed-to-post community to preemptively mitigate attack or spam. Please reach out to me if you wish to join the relay. https://logstr.mycelium.social/
With this article, I hope people will investigate and explore the options available. We have lots of ground to cover, but all of the right resources and manpower to do so. Godspeed, Nostr.
Nostr #Notes #OtherStuff #LogSeq #Joplin #Obsidian
-
@ b099870e:f3ba8f5d
A donkey that is tied to a post by a rope will keep walking around the post in an attempt to free itself, only to become more immobilized and attached to the post.
ikigai
-
@ 90c656ff:9383fd4e
2025-05-06 13:01:45Bitcoin has revolutionized the way people conduct financial transactions worldwide. As a decentralized digital currency, it offers new opportunities for e-commerce payments and international money transfers. Its speed, security, and low costs make it an efficient alternative to traditional methods by eliminating intermediaries and facilitating global transactions.
Bitcoin in e-commerce
E-commerce has grown exponentially, and Bitcoin has emerged as an innovative solution for online payments. Large retailers and small businesses are starting to accept Bitcoin as a form of payment, offering benefits to both merchants and consumers.
- Advantages of Bitcoin for e-commerce:
01 - Low transaction fees: Unlike credit cards and payment platforms that charge high fees, Bitcoin transactions generally have lower costs. This benefits merchants, who can reduce expenses and offer more competitive prices to customers.
02 - Elimination of chargebacks: In traditional systems, chargebacks (forced refunds by banks or card operators) are a concern for merchants. Since Bitcoin transactions are irreversible, merchants avoid fraud and disputes.
03 - Global access: Anyone with internet access can pay with Bitcoin, regardless of their location. This allows businesses to expand their market internationally without relying on banks or local payment systems.
04 - Privacy and security: Bitcoin transactions protect user identity, offering greater privacy compared to credit card payments or bank transfers. Additionally, since there's no need to share personal data, the risk of information theft is reduced.
- Challenges of using Bitcoin in e-commerce:
01 - Volatility: Bitcoin's price can fluctuate rapidly, making it difficult to set fixed prices for products and services. However, some merchants use payment processors that instantly convert Bitcoin into fiat currency, minimizing this risk.
02 - Limited adoption: Despite its growth, Bitcoin acceptance is not yet universal. Many stores and popular platforms have not adopted it, which can make daily purchases difficult.
03 - Confirmation time: Although Bitcoin is faster than traditional bank transfers, confirmation times may vary depending on the network fee paid. Some solutions, such as the Lightning Network, are being developed to enable instant payments.
Bitcoin in money remittances
Sending money abroad has long been a bureaucratic, costly, and time-consuming process. Traditional services like banks and money transfer companies charge high fees and can take days to complete a transaction. Bitcoin, on the other hand, offers an efficient alternative for global remittances, allowing anyone to send and receive money quickly and affordably.
- Benefits of Bitcoin for remittances:
01 - Reduced costs: While banks and companies like Western Union charge high fees for international transfers, Bitcoin allows money to be sent with minimal costs, regardless of the amount or destination.
02 - Transaction speed: International bank transfers can take several days to complete, especially in countries with limited financial infrastructure. With Bitcoin, money can be sent anywhere in the world within minutes or hours.
03 - Global accessibility: In regions where the banking system is restricted or inefficient, Bitcoin enables people to receive money without depending on banks. This is particularly useful in developing countries where international remittances are an essential source of income.
04 - Independence from intermediaries: Bitcoin operates in a decentralized manner, with no need for banks or transfer companies. This means people can send money directly to friends and family without intermediaries.
- Challenges of Bitcoin remittances:
01 - Conversion to local currency: Although Bitcoin can be received instantly, many people still need to convert it into local currency for everyday use. This may involve additional costs and depend on the availability of exchange services.
02 - Adoption and knowledge: Not everyone understands how Bitcoin works, which can hinder its widespread adoption for remittances. However, growing financial education on the subject can help overcome this barrier.
03 - Regulations and restrictions: Some governments impose restrictions on Bitcoin usage, making remittances more complicated. The evolution of regulations may affect ease of use in certain countries.
In summary, Bitcoin is transforming e-commerce and money remittances around the world. Its ability to eliminate intermediaries, reduce costs, and provide fast and secure payments makes it a viable alternative to traditional financial systems.
In e-commerce, it benefits both merchants and consumers by lowering fees and enhancing privacy. In the remittance sector, it facilitates money transfers to any part of the world, especially for those living in countries with inefficient banking systems.
Despite the challenges, Bitcoin adoption continues to grow, driven by innovative solutions and recognition of its potential as a global payment method. As more businesses and individuals embrace this technology, its presence in e-commerce and international remittances will become increasingly relevant.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ 0d97beae:c5274a14
2025-01-11 16:52:08This article hopes to complement the article by Lyn Alden on YouTube: https://www.youtube.com/watch?v=jk_HWmmwiAs
The reason why we have broken money
Before the invention of key technologies such as the printing press and electronic communications, even such as those as early as morse code transmitters, gold had won the competition for best medium of money around the world.
In fact, it was not just gold by itself that became money, rulers and world leaders developed coins in order to help the economy grow. Gold nuggets were not as easy to transact with as coins with specific imprints and denominated sizes.
However, these modern technologies created massive efficiencies that allowed us to communicate and perform services more efficiently and much faster, yet the medium of money could not benefit from these advancements. Gold was heavy, slow and expensive to move globally, even though requesting and performing services globally did not have this limitation anymore.
Banks took initiative and created derivatives of gold: paper and electronic money; these new currencies allowed the economy to continue to grow and evolve, but it was not without its dark side. Today, no currency is denominated in gold at all, money is backed by nothing and its inherent value, the paper it is printed on, is worthless too.
Banks and governments eventually transitioned from a money derivative to a system of debt that could be co-opted and controlled for political and personal reasons. Our money today is broken and is the cause of more expensive, poorer quality goods in the economy, a larger and ever growing wealth gap, and many of the follow-on problems that have come with it.
Bitcoin overcomes the "transfer of hard money" problem
Just like gold coins were created by man, Bitcoin too is a technology created by man. Bitcoin, however is a much more profound invention, possibly more of a discovery than an invention in fact. Bitcoin has proven to be unbreakable, incorruptible and has upheld its ability to keep its units scarce, inalienable and counterfeit proof through the nature of its own design.
Since Bitcoin is a digital technology, it can be transferred across international borders almost as quickly as information itself. It therefore severely reduces the need for a derivative to be used to represent money to facilitate digital trade. This means that as the currency we use today continues to fare poorly for many people, bitcoin will continue to stand out as hard money, that just so happens to work as well, functionally, along side it.
Bitcoin will also always be available to anyone who wishes to earn it directly; even China is unable to restrict its citizens from accessing it. The dollar has traditionally become the currency for people who discover that their local currency is unsustainable. Even when the dollar has become illegal to use, it is simply used privately and unofficially. However, because bitcoin does not require you to trade it at a bank in order to use it across borders and across the web, Bitcoin will continue to be a viable escape hatch until we one day hit some critical mass where the world has simply adopted Bitcoin globally and everyone else must adopt it to survive.
Bitcoin has not yet proven that it can support the world at scale. However it can only be tested through real adoption, and just as gold coins were developed to help gold scale, tools will be developed to help overcome problems as they arise; ideally without the need for another derivative, but if necessary, hopefully with one that is more neutral and less corruptible than the derivatives used to represent gold.
Bitcoin blurs the line between commodity and technology
Bitcoin is a technology, it is a tool that requires human involvement to function, however it surprisingly does not allow for any concentration of power. Anyone can help to facilitate Bitcoin's operations, but no one can take control of its behaviour, its reach, or its prioritisation, as it operates autonomously based on a pre-determined, neutral set of rules.
At the same time, its built-in incentive mechanism ensures that people do not have to operate bitcoin out of the good of their heart. Even though the system cannot be co-opted holistically, it will not stop operating while there are people motivated to trade their time and resources to keep it running and earn from others' transaction fees. Although it requires humans to operate it, it remains both neutral and sustainable.
Never before have we developed or discovered a technology that could not be co-opted and used by one person or faction against another. Due to this nature, Bitcoin's units are often described as a commodity; they cannot be usurped or virtually cloned, and they cannot be affected by political biases.
The dangers of derivatives
A derivative is something created, designed or developed to represent another thing in order to solve a particular complication or problem. For example, paper and electronic money was once a derivative of gold.
In the case of Bitcoin, if you cannot link your units of bitcoin to an "address" that you personally hold a cryptographically secure key to, then you very likely have a derivative of bitcoin, not bitcoin itself. If you buy bitcoin on an online exchange and do not withdraw the bitcoin to a wallet that you control, then you legally own an electronic derivative of bitcoin.
Bitcoin is a new technology. It will have a learning curve and it will take time for humanity to learn how to comprehend, authenticate and take control of bitcoin collectively. Having said that, many people all over the world are already using and relying on Bitcoin natively. For many, it will take finding the need or a desire for a neutral money like bitcoin, and being burned by derivatives of it, before they start to understand the difference between the two. Eventually, it will become an essential part of what we regard as common sense.
Learn for yourself
If you wish to learn more about how to handle bitcoin and avoid derivatives, you can start by searching online for tutorials about "Bitcoin self custody".
There are many options available, some more practical for you, and some more practical for others. Don't spend too much time trying to find the perfect solution; practice and learn. You may make mistakes along the way, so be careful not to experiment with large amounts of your bitcoin as you explore new ideas and technologies along the way. This is similar to learning anything, like riding a bicycle; you are sure to fall a few times, scuff the frame, so don't buy a high performance racing bike while you're still learning to balance.
-
@ 90c656ff:9383fd4e
2025-05-06 12:18:09Digital wallets are important tools for storing and managing Bitcoin. They allow people to keep their private keys, access their funds, and make transactions in a practical and secure way. However, with several types of wallets available and the risks of incorrect use, it is essential to understand their features and follow good security practices.
What is a digital wallet?
A digital wallet is software or a device that stores the private and public keys linked to Bitcoin. Simply put, it doesn’t “store” Bitcoin itself but provides secure access to the network to verify and sign transactions.
Private keys work like a secret password that allows spending Bitcoins, while public keys are like account numbers that can be shared to receive payments. Keeping the private key secure is very important, as whoever has access to it controls the funds.
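To make the relationship between these keys concrete, here is a minimal sketch in Python of how a private key leads to a public key and then to the hash that addresses are built from. It assumes the third-party `ecdsa` package; real wallets add standards such as BIP-32/BIP-39 derivation, a version byte, and Base58Check encoding on top of this.

```
import hashlib
import ecdsa  # assumption: the third-party 'ecdsa' package is installed

# The private key: a random secret that only the owner should ever hold
private_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)

# The public key is derived from the private key (never the other way around)
public_key = b"\x04" + private_key.get_verifying_key().to_string()

# Addresses are built from a hash of the public key (SHA-256 then RIPEMD-160)
pubkey_hash = hashlib.new("ripemd160", hashlib.sha256(public_key).digest()).digest()

print("public key hash:", pubkey_hash.hex())
# Anyone who knows the private key can sign transactions spending from the
# corresponding address, which is why it must never be shared.
```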
- Types of Digital Wallets
There are different types of digital wallets, each with specific features that meet various needs, whether for daily use or long-term storage.
- Hot Wallets
Wallets connected to the internet, designed for frequent use. Examples: Mobile apps, desktop wallets, online wallets.
Advantages:
01 - Accessible and easy to use
02 - Ideal for daily and quick transactions
Disadvantages:
01 - More exposed to cyberattacks such as phishing or hacking
- Cold Wallets
Wallets that keep private keys offline, increasing security. Examples: Hardware wallets, paper wallets, dedicated USB devices.
Advantages:
01 - High protection against hackers since they are not online
02 - Ideal for large amounts of Bitcoin or long-term storage
Disadvantages:
01 - Less practical for daily use
02 - Can be physically damaged or lost if not handled carefully
- Hardware Wallets
Physical devices, like Ledger or Trezor, that store private keys offline.
Advantages:
01 - Easy-to-use and secure interface
02 - Resistant to viruses and online attacks
Disadvantages:
01 - Higher initial cost
02 - Require care to avoid physical damage
- Paper Wallets
Involve printing or writing down private and public keys on a piece of paper.
Advantages:
01 - Completely offline and immune to digital attacks
02 - Simple and low-cost
Disadvantages:
01 - Vulnerable to physical damage such as water, fire, or loss
02 - Difficult to recover if lost
- Security in Digital Wallets
Protecting a digital wallet is essential to safeguard your Bitcoins from loss or theft.
Below are important practices to improve security:
- Private Key Protection
01 - Never share your private key with anyone
02 - Keep backup copies of the private key or recovery phrase in safe places
- Use of Recovery Phrases
The seed phrase is a sequence of 12 to 24 words that helps recover funds if the wallet is lost.
01 - Store the seed phrase offline and avoid taking pictures or saving it on internet-connected devices
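To illustrate why the seed phrase deserves this much care, the sketch below generates a 12-word phrase and turns it into the seed from which a wallet derives all of its keys. It assumes the third-party `mnemonic` (BIP-39) package; the library choice is an illustration, not a recommendation from the original text.

```
from mnemonic import Mnemonic  # assumption: the BIP-39 'mnemonic' package

bip39 = Mnemonic("english")

# 128 bits of entropy correspond to a 12-word recovery phrase
words = bip39.generate(strength=128)
print(words)

# The same words always produce the same seed, and from that seed a wallet
# deterministically derives every private key it will ever use.
seed = bip39.to_seed(words, passphrase="")
print(seed.hex()[:32], "...")
# Anyone holding these words can rebuild the wallet, so keep them offline.
```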
- Two-Factor Authentication (2FA)
01 - Whenever possible, enable 2FA to protect accounts linked to online wallets or exchanges
02 - This adds an extra layer of security by requiring a second code to log in
- Updates and Maintenance
Keep the wallet software up to date to ensure protection against vulnerabilities
01 - Use only wallets from trustworthy and reputable developers
- Choosing the Right Wallet According to Need
01 - For frequent transactions, choose hot wallets but keep only small amounts
02 - For large amounts, use cold wallets like hardware or paper wallets, which are more secure
Risks and How to Avoid Them
- Hacker Attacks
Risk: Unauthorized access to hot wallets connected to the internet
Prevention: Use cold wallets for storing large amounts and avoid clicking on suspicious links
- Loss of Access
Risk: Loss of private keys or the recovery phrase, making funds unrecoverable
Prevention: Regularly make backups and store information in secure places
- Social Engineering and Phishing
Risk: Hackers trick people into giving up their private keys or personal information
Prevention: Be suspicious of messages or websites that request your private keys. Never share sensitive data
- Physical Failures
Risk: Damage to devices or loss of paper wallets
Prevention: Store backups in locations resistant to water, fire, and other threats
In summary, digital wallets are essential for the security and use of Bitcoin. Choosing the right wallet and following good security practices are key steps to protecting your assets.
Hot wallets offer convenience for daily use, while cold wallets provide strong security for long-term storage. Regardless of the type you choose, taking care of your private keys and recovery phrase is fundamental to ensure your Bitcoin remains under your control.
By understanding the types of wallets and implementing appropriate security measures, users can safely and efficiently take advantage of Bitcoin, maximizing the benefits of this digital revolution.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ dd664d5e:5633d319
2025-01-09 21:39:15Instructions
- Place 2 medium-sized, boiled potatoes and a handful of sliced leeks in a pot.
- Fill the pot with water or vegetable broth, to cover the potatoes twice over.
- Add a splash of white wine, if you like, and some bouillon powder, if you went with water instead of broth.
- Bring the soup to a boil and then simmer for 15 minutes.
- Puree the soup, in the pot, with a hand mixer. It shouldn't be completely smooth, when you're done, but rather have small bits and pieces of the veggies floating around.
- Bring the soup to a boil, again, and stir in one container (200-250 mL) of heavy cream.
- Thicken the soup, as needed, and then simmer for 5 more minutes.
- Garnish with croutons and veggies (here I used sliced green onions and radishes) and serve.
Guten Appetit!
-
@ 3f770d65:7a745b24
2025-01-05 18:56:33New Year’s resolutions often feel boring and repetitive. Most revolve around getting in shape, eating healthier, or giving up alcohol. While the idea is interesting—using the start of a new calendar year as a catalyst for change—it also seems unnecessary. Why wait for a specific date to make a change? If you want to improve something in your life, you can just do it. You don’t need an excuse.
That’s why I’ve never been drawn to the idea of making a list of resolutions. If I wanted a change, I’d make it happen, without worrying about the calendar. At least, that’s how I felt until now—when, for once, the timing actually gave me a real reason to embrace the idea of New Year’s resolutions.
Enter Olas.
If you're a visual creator, you've likely experienced the relentless grind of building a following on platforms like Instagram—endless doomscrolling, ever-changing algorithms, and the constant pressure to stay relevant. But what if there was a better way? Olas is a Nostr-powered alternative to Instagram that prioritizes community, creativity, and value-for-value exchanges. It's a game changer.
Instagram’s failings are well-known. Its algorithm often dictates whose content gets seen, leaving creators frustrated and powerless. Monetization hurdles further alienate creators who are forced to meet arbitrary follower thresholds before earning anything. Additionally, the platform’s design fosters endless comparisons and exposure to negativity, which can take a significant toll on mental health.
Instagram’s algorithms are notorious for keeping users hooked, often at the cost of their mental health. I've spoken about this extensively, most recently at Nostr Valley, explaining how legacy social media is bad for you. You might find yourself scrolling through content that leaves you feeling anxious or drained. Olas takes a fresh approach, replacing "doomscrolling" with "bloomscrolling." This is a common theme across the Nostr ecosystem. The lack of addictive rage algorithms allows the focus to shift to uplifting, positive content that inspires rather than exhausts.
Monetization is another area where Olas will set itself apart. On Instagram, creators face arbitrary barriers to earning—needing thousands of followers and adhering to restrictive platform rules. Olas eliminates these hurdles by leveraging the Nostr protocol, enabling creators to earn directly through value-for-value exchanges. Fans can support their favorite artists instantly, with no delays or approvals required. The plan is to enable a brand new Olas account that can get paid instantly, with zero followers - that's wild.
Olas addresses these issues head-on. Operating on the open Nostr protocol, it removes centralized control over one's content’s reach or one's ability to monetize. With transparent, configurable algorithms, and a community that thrives on mutual support, Olas creates an environment where creators can grow and succeed without unnecessary barriers.
Join me on my New Year's resolution. Join me on Olas and take part in the #Olas365 challenge! It’s a simple yet exciting way to share your content. The challenge is straightforward: post at least one photo per day on Olas (though you’re welcome to share more!).
Download on Android or download via Zapstore.
Let's make waves together.
-
@ e6817453:b0ac3c39
2025-01-05 14:29:17The Rise of Graph RAGs and the Quest for Data Quality
As we enter a new year, it’s impossible to ignore the boom of retrieval-augmented generation (RAG) systems, particularly those leveraging graph-based approaches. The previous year saw a surge in advancements and discussions about Graph RAGs, driven by their potential to enhance large language models (LLMs), reduce hallucinations, and deliver more reliable outputs. Let’s dive into the trends, challenges, and strategies for making the most of Graph RAGs in artificial intelligence.
Booming Interest in Graph RAGs
Graph RAGs have dominated the conversation in AI circles. With new research papers and innovations emerging weekly, it’s clear that this approach is reshaping the landscape. These systems, especially those developed by tech giants like Microsoft, demonstrate how graphs can:
- Enhance LLM Outputs: By grounding responses in structured knowledge, graphs significantly reduce hallucinations.
- Support Complex Queries: Graphs excel at managing linked and connected data, making them ideal for intricate problem-solving.
Conferences on linked and connected data have increasingly focused on Graph RAGs, underscoring their central role in modern AI systems. However, the excitement around this technology has brought critical questions to the forefront: How do we ensure the quality of the graphs we’re building, and are they genuinely aligned with our needs?
Data Quality: The Foundation of Effective Graphs
A high-quality graph is the backbone of any successful RAG system. Constructing these graphs from unstructured data requires attention to detail and rigorous processes. Here’s why:
- Richness of Entities: Effective retrieval depends on graphs populated with rich, detailed entities.
- Freedom from Hallucinations: Poorly constructed graphs amplify inaccuracies rather than mitigating them.
Without robust data quality, even the most sophisticated Graph RAGs become ineffective. As a result, the focus must shift to refining the graph construction process. Improving data strategy and ensuring meticulous data preparation is essential to unlock the full potential of Graph RAGs.
Hybrid Graph RAGs and Variations
While standard Graph RAGs are already transformative, hybrid models offer additional flexibility and power. Hybrid RAGs combine structured graph data with other retrieval mechanisms, creating systems that:
- Handle diverse data sources with ease.
- Offer improved adaptability to complex queries.
Exploring these variations can open new avenues for AI systems, particularly in domains requiring structured and unstructured data processing.
Ontology: The Key to Graph Construction Quality
Ontology — defining how concepts relate within a knowledge domain — is critical for building effective graphs. While this might sound abstract, it’s a well-established field blending philosophy, engineering, and art. Ontology engineering provides the framework for:
- Defining Relationships: Clarifying how concepts connect within a domain.
- Validating Graph Structures: Ensuring constructed graphs are logically sound and align with domain-specific realities.
Traditionally, ontologists — experts in this discipline — have been integral to large enterprises and research teams. However, not every team has access to dedicated ontologists, leading to a significant challenge: How can teams without such expertise ensure the quality of their graphs?
How to Build Ontology Expertise in a Startup Team
For startups and smaller teams, developing ontology expertise may seem daunting, but it is achievable with the right approach:
- Assign a Knowledge Champion: Identify a team member with a strong analytical mindset and give them time and resources to learn ontology engineering.
- Provide Training: Invest in courses, workshops, or certifications in knowledge graph and ontology creation.
- Leverage Partnerships: Collaborate with academic institutions, domain experts, or consultants to build initial frameworks.
- Utilize Tools: Introduce ontology development tools like Protégé, OWL, or SHACL to simplify the creation and validation process.
- Iterate with Feedback: Continuously refine ontologies through collaboration with domain experts and iterative testing.
So, it is not always affordable for a startup to have a dedicated ontologist or knowledge engineer on the team, but you could involve consultants or build barefoot experts.
You could read about barefoot experts in my article :
Even startups can achieve robust and domain-specific ontology frameworks by fostering in-house expertise.
How to Find or Create Ontologies
For teams venturing into Graph RAGs, several strategies can help address the ontology gap:
- Leverage Existing Ontologies: Many industries and domains already have open ontologies. For instance:
  - Public Knowledge Graphs: Resources like Wikipedia’s graph offer a wealth of structured knowledge.
  - Industry Standards: Enterprises such as Siemens have invested in creating and sharing ontologies specific to their fields.
  - Business Framework Ontology (BFO): A valuable resource for enterprises looking to define business processes and structures.
- Build In-House Expertise: If budgets allow, consider hiring knowledge engineers or providing team members with the resources and time to develop expertise in ontology creation.
- Utilize LLMs for Ontology Construction: Interestingly, LLMs themselves can act as a starting point for ontology development (see the sketch after this list):
  - Prompt-Based Extraction: LLMs can generate draft ontologies by leveraging their extensive training on graph data.
  - Domain Expert Refinement: Combine LLM-generated structures with insights from domain experts to create tailored ontologies.
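As a rough illustration of prompt-based extraction, here is a minimal sketch that asks a chat model to draft a skeleton ontology from a short piece of text. The client library, model name, and prompt wording are assumptions for illustration only; any chat-completion API would do, and the draft still needs review by a domain expert.

```
from openai import OpenAI  # assumption: the openai>=1.0 Python client

client = OpenAI()

document = "Acme Corp supplies lithium cells to battery-pack assemblers across Europe."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[
        {"role": "system",
         "content": "You are an ontology engineer. Return candidate classes, "
                    "object properties, and their domains and ranges as JSON."},
        {"role": "user",
         "content": f"Draft a skeleton ontology for this text:\n{document}"},
    ],
)

# The output is only a draft: validate it with formal tools and domain experts.
print(response.choices[0].message.content)
```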
Parallel Ontology and Graph Extraction
An emerging approach involves extracting ontologies and graphs in parallel. While this can streamline the process, it presents challenges such as:
- Detecting Hallucinations: Differentiating between genuine insights and AI-generated inaccuracies.
- Ensuring Completeness: Ensuring no critical concepts are overlooked during extraction.
Teams must carefully validate outputs to ensure reliability and accuracy when employing this parallel method.
LLMs as Ontologists
While traditionally dependent on human expertise, ontology creation is increasingly supported by LLMs. These models, trained on vast amounts of data, possess inherent knowledge of many open ontologies and taxonomies. Teams can use LLMs to:
- Generate Skeleton Ontologies: Prompt LLMs with domain-specific information to draft initial ontology structures.
- Validate and Refine Ontologies: Collaborate with domain experts to refine these drafts, ensuring accuracy and relevance.
However, for validation and graph construction, formal tools such as OWL, SHACL, and RDF should be prioritized over LLMs to minimize hallucinations and ensure robust outcomes.
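On the formal side, here is a minimal sketch of what "defining relationships" looks like in practice, using the `rdflib` Python package to declare two classes and an object property with a domain and range. The namespace and class names are invented for illustration; a real ontology would be reviewed in Protégé and checked with SHACL shapes.

```
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/supply#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Two classes and one relationship between them
g.add((EX.Product, RDF.type, OWL.Class))
g.add((EX.Supplier, RDF.type, OWL.Class))
g.add((EX.suppliedBy, RDF.type, OWL.ObjectProperty))
g.add((EX.suppliedBy, RDFS.domain, EX.Product))
g.add((EX.suppliedBy, RDFS.range, EX.Supplier))

# Serialize to Turtle so the ontology can be loaded into Protégé or a triple store
print(g.serialize(format="turtle"))
```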
Final Thoughts: Unlocking the Power of Graph RAGs
The rise of Graph RAGs underscores a simple but crucial correlation: improving graph construction and data quality directly enhances retrieval systems. To truly harness this power, teams must invest in understanding ontologies, building quality graphs, and leveraging both human expertise and advanced AI tools.
As we move forward, the interplay between Graph RAGs and ontology engineering will continue to shape the future of AI. Whether through adopting existing frameworks or exploring innovative uses of LLMs, the path to success lies in a deep commitment to data quality and domain understanding.
Have you explored these technologies in your work? Share your experiences and insights — and stay tuned for more discussions on ontology extraction and its role in AI advancements. Cheers to a year of innovation!
-
@ 56cd780f:cbde8b29
2025-05-06 11:54:40Is it actually called “summary”?
-
@ a4a6b584:1e05b95b
2025-01-02 18:13:31The Four-Layer Framework
Layer 1: Zoom Out
Start by looking at the big picture. What’s the subject about, and why does it matter? Focus on the overarching ideas and how they fit together. Think of this as the 30,000-foot view—it’s about understanding the "why" and "how" before diving into the "what."
Example: If you’re learning programming, start by understanding that it’s about giving logical instructions to computers to solve problems.
- Tip: Keep it simple. Summarize the subject in one or two sentences and avoid getting bogged down in specifics at this stage.
Once you have the big picture in mind, it’s time to start breaking it down.
Layer 2: Categorize and Connect
Now it’s time to break the subject into categories—like creating branches on a tree. This helps your brain organize information logically and see connections between ideas.
Example: Studying biology? Group concepts into categories like cells, genetics, and ecosystems.
- Tip: Use headings or labels to group similar ideas. Jot these down in a list or simple diagram to keep track.
With your categories in place, you’re ready to dive into the details that bring them to life.
Layer 3: Master the Details
Once you’ve mapped out the main categories, you’re ready to dive deeper. This is where you learn the nuts and bolts—like formulas, specific techniques, or key terminology. These details make the subject practical and actionable.
Example: In programming, this might mean learning the syntax for loops, conditionals, or functions in your chosen language.
- Tip: Focus on details that clarify the categories from Layer 2. Skip anything that doesn’t add to your understanding.
Now that you’ve mastered the essentials, you can expand your knowledge to include extra material.
Layer 4: Expand Your Horizons
Finally, move on to the extra material—less critical facts, trivia, or edge cases. While these aren’t essential to mastering the subject, they can be useful in specialized discussions or exams.
Example: Learn about rare programming quirks or historical trivia about a language’s development.
- Tip: Spend minimal time here unless it’s necessary for your goals. It’s okay to skim if you’re short on time.
Pro Tips for Better Learning
1. Use Active Recall and Spaced Repetition
Test yourself without looking at notes. Review what you’ve learned at increasing intervals—like after a day, a week, and a month. This strengthens memory by forcing your brain to actively retrieve information.
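If it helps to see the idea concretely, here is a tiny sketch of an expanding review schedule; the multiplier is an arbitrary choice for illustration, not a prescribed value.

```
from datetime import date, timedelta

def review_dates(start: date, reviews: int = 5, factor: float = 2.5) -> list[date]:
    """Return review dates whose gaps grow roughly exponentially."""
    dates, gap = [], 1.0
    for _ in range(reviews):
        dates.append(start + timedelta(days=round(gap)))
        gap *= factor  # each gap is about 2.5x longer than the last
    return dates

# Reviews land roughly 1, 2, 6, 16, and 39 days after first studying a topic
print(review_dates(date.today()))
```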
2. Map It Out
Create visual aids like diagrams or concept maps to clarify relationships between ideas. These are particularly helpful for organizing categories in Layer 2.
3. Teach What You Learn
Explain the subject to someone else as if they’re hearing it for the first time. Teaching exposes any gaps in your understanding and helps reinforce the material.
4. Engage with LLMs and Discuss Concepts
Take advantage of tools like ChatGPT or similar large language models to explore your topic in greater depth. Use these tools to:
- Ask specific questions to clarify confusing points.
- Engage in discussions to simulate real-world applications of the subject.
- Generate examples or analogies that deepen your understanding.

Tip: Use LLMs as a study partner, but don’t rely solely on them. Combine these insights with your own critical thinking to develop a well-rounded perspective.
Get Started
Ready to try the Four-Layer Method? Take 15 minutes today to map out the big picture of a topic you’re curious about—what’s it all about, and why does it matter? By building your understanding step by step, you’ll master the subject with less stress and more confidence.
-
@ 56cd780f:cbde8b29
2025-05-06 11:54:39A few weeks ago, I ran into an old friend at a coffee shop. We hadn’t spoken in years, and within five minutes, she said something I’ve heard countless times:
“I just feel like I’m so behind.”
Behind who? Behind what?
There’s this idea—quiet, nagging, oddly universal—that we’re all somehow in a race we didn’t sign up for. That we’re supposed to have hit certain milestones by certain ages. That if we’re not married, promoted, rich, settled, happy (and photogenic) by 30 or 40 or pick your poison, then we’ve failed some invisible test.
Where did this come from?
Some of it’s cultural, obviously. Social media compresses timelines. You’re 27, doom-scrolling, and suddenly someone from high school just IPO’d their startup and got engaged in Rome. Another just bought a house with a kitchen island the size of a small country. You wonder if you missed a memo.
But beneath that, there’s something deeper. A belief that life is linear. That it should look like a staircase: school, job, marriage, house, kids, success. But real life? It’s a squiggle. A mess. A beautiful disaster.
Here’s the truth: You’re not behind. There’s no schedule. There’s only your path, and the courage it takes to stay on it—even when it looks wildly different from everyone else’s.
I say this as someone who has taken the “scenic route.” I changed careers in my 30s. I moved cities on a hunch. I dropped things that looked great on paper because they felt wrong in my gut. I’ve had seasons of momentum and seasons of stuckness. Both were necessary.
“Catching up” assumes there’s a fixed destination. But what if there isn’t? What if the point isn’t arrival, but presence? Progress that feels real, not performative?
If you need a permission slip to stop comparing, let this be it.
You’re not late. You’re not early.
You’re right on time. -
@ e4950c93:1b99eccd
2025-05-06 10:35:37What is a natural material? The question is up for debate, and everyone will favour their own criteria. Here is how materials are classified on this site. The list is updated regularly as products are added. Feel free to share your opinion!
✅ Natural materials
Materials of plant, animal, or mineral origin, without chemical processing that alters their molecular structure.
🌱 Main criteria:
- Biodegradability
- Non-toxicity
- Naturally occurring, requiring minimal processing
🔍 List of natural materials:
- Wood
- Regenerated cellulose (cupra, lyocell, modal, viscose)
- Hemp
- Cotton
- Leather
- Cork
- Linen
- Wool
- Natural latex, rubber
- Metal
- Silk
- Earth
- Glass
- … (Other materials)
⚠️ Although "natural", these materials can have negative impacts depending on how they are produced (pesticide pollution, excessive water consumption, chemical treatment, animal exploitation…). These impacts are noted on each material's page.
Organic versions of these materials (without chemical treatment, animal mistreatment, etc.) are given priority when listing products on this site, as indicated on each material's page (coming soon).
Conventional versions are only listed as long as no more sustainable alternative has yet been found for that product category.
🚫 Non-natural materials
Synthetic or heavily modified materials, often derived from petrochemicals.
📌 Main problems:
- Toxicity and microplastic emissions
- Dependence on fossil fuels
- Poor biodegradability
🔍 List of non-natural materials:
- Acrylic
- Elastane, lycra, spandex
- Polyamides, nylon
- Polyester
- Silicone
- … (Other materials)
⚠️ These materials are not accepted on the site. Nevertheless, they may be present in some listed products when:
- they are used as a removable accessory (e.g., elastics, buttons… usually not listed in the composition by the brand) that can be taken off for recycling or composting, and
- no 100% natural alternative has yet been identified for that product category.
In that case, a warning is displayed on the product page.
This article is published on origine-nature.com 🌐 See this article in English
-
@ 56cd780f:cbde8b29
2025-05-06 11:54:36There’s something sacred about morning air — the way it carries just enough chill to remind you you’re alive, without pushing you back inside. I’ve been starting my days on the balcony lately. Not because it’s glamorous (it isn’t), or because I have a routine (I don’t), but because it’s the only space in my apartment that feels both open and still.
This morning I made coffee with too much cinnamon and curled up with a blanket that’s seen better days. I watched the city slowly wake up — one barking dog, two joggers, and the clatter of a recycling truck below. It’s odd how these tiny patterns become a kind of comfort.
I used to think that slowing down meant falling behind. But here, perched on the third floor with my feet on cold concrete and the sky just starting to blush, I feel like I’m exactly where I’m supposed to be.
If you’re reading this, maybe you needed that reminder too.
— Natalie
-
@ 3f770d65:7a745b24
2024-12-31 17:03:46Here are my predictions for Nostr in 2025:
Decentralization: The outbox and inbox communication models, sometimes referred to as the Gossip model, will become the standard across the ecosystem. By the end of 2025, all major clients will support these models, providing seamless communication and enhanced decentralization. Clients that do not adopt outbox/inbox by then will be regarded as outdated or legacy systems.
Privacy Standards: Major clients such as Damus and Primal will move away from NIP-04 DMs, adopting more secure protocol possibilities like NIP-17 or NIP-104. These upgrades will ensure enhanced encryption and metadata protection. Additionally, NIP-104 MLS tools will drive the development of new clients and features, providing users with unprecedented control over the privacy of their communications.
Interoperability: Nostr's ecosystem will become even more interconnected. Platforms like the Olas image-sharing service will expand into prominent clients such as Primal, Damus, Coracle, and Snort, alongside existing integrations with Amethyst, Nostur, and Nostrudel. Similarly, audio and video tools like Nostr Nests and Zap.stream will gain seamless integration into major clients, enabling easy participation in live events across the ecosystem.
Adoption and Migration: Inspired by early pioneers like Fountain and Orange Pill App, more platforms will adopt Nostr for authentication, login, and social systems. In 2025, a significant migration from a high-profile application platform with hundreds of thousands of users will transpire, doubling Nostr’s daily activity and establishing it as a cornerstone of decentralized technologies.
-
@ 6e468422:15deee93
2024-12-21 19:25:26We didn't hear them land on earth, nor did we see them. The spores were not visible to the naked eye. Like dust particles, they softly fell, unhindered, through our atmosphere, covering the earth. It took us a while to realize that something extraordinary was happening on our planet. In most places, the mushrooms didn't grow at all. The conditions weren't right. In some places—mostly rocky places—they grew large enough to be noticeable. People all over the world posted pictures online. "White eggs," they called them. It took a bit until botanists and mycologists took note. Most didn't realize that we were dealing with a species unknown to us.
We aren't sure who sent them. We aren't even sure if there is a "who" behind the spores. But once the first portals opened up, we learned that these mushrooms aren't just a quirk of biology. The portals were small at first—minuscule, even. Like a pinhole camera, we were able to glimpse through, but we couldn't make out much. We were only able to see colors and textures if the conditions were right. We weren't sure what we were looking at.
We still don't understand why some mushrooms open up, and some don't. Most don't. What we do know is that they like colder climates and high elevations. What we also know is that the portals don't stay open for long. Like all mushrooms, the flush only lasts for a week or two. When a portal opens, it looks like the mushroom is eating a hole into itself at first. But the hole grows, and what starts as a shimmer behind a grey film turns into a clear picture as the egg ripens. When conditions are right, portals will remain stable for up to three days. Once the fruit withers, the portal closes, and the mushroom decays.
The eggs grew bigger year over year. And with it, the portals. Soon enough, the portals were big enough to stick your finger through. And that's when things started to get weird...
-
@ 20986fb8:cdac21b3
2024-12-18 03:19:36English
Introducing YakiHonne: Write Without Limits
YakiHonne is the ultimate text editor designed to help you express yourself creatively, no matter the language.
Features you'll love:
- 🌟 Rich Formatting: Add headings, bold, italics, and more.
- 🌏 Multilingual Support: Seamlessly write in English, Chinese, Arabic, and Japanese.
- 🔗 Interactive Links: Learn more about YakiHonne.

Benefits:
1. Easy to use.
2. Enhance readability with customizable styles.
3. Supports various complex formats, including LaTeX.

"YakiHonne is a game-changer for content creators."
-
@ fe32298e:20516265
2024-12-16 20:59:13Today I learned how to install NVapi to monitor my GPUs in Home Assistant.
NVApi is a lightweight API designed for monitoring NVIDIA GPU utilization and enabling automated power management. It provides real-time GPU metrics, supports integration with tools like Home Assistant, and offers flexible power management and PCIe link speed management based on workload and thermal conditions.
- GPU Utilization Monitoring: Utilization, memory usage, temperature, fan speed, and power consumption.
- Automated Power Limiting: Adjusts power limits dynamically based on temperature thresholds and total power caps, configurable per GPU or globally.
- Cross-GPU Coordination: Total power budget applies across multiple GPUs in the same system.
- PCIe Link Speed Management: Controls minimum and maximum PCIe link speeds with idle thresholds for power optimization.
- Home Assistant Integration: Uses the built-in RESTful platform and template sensors.
Getting the Data
```
sudo apt install golang-go
git clone https://github.com/sammcj/NVApi.git
cd NVapi
go run main.go -port 9999 -rate 1
curl http://localhost:9999/gpu
```
Response for a single GPU:
[ { "index": 0, "name": "NVIDIA GeForce RTX 4090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 16, "power_limit_watts": 450, "memory_total_gb": 23.99, "memory_used_gb": 0.46, "memory_free_gb": 23.52, "memory_usage_percent": 2, "temperature": 38, "processes": [], "pcie_link_state": "not managed" } ]
Response for multiple GPUs:
[ { "index": 0, "name": "NVIDIA GeForce RTX 3090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 14, "power_limit_watts": 350, "memory_total_gb": 24, "memory_used_gb": 0.43, "memory_free_gb": 23.57, "memory_usage_percent": 2, "temperature": 36, "processes": [], "pcie_link_state": "not managed" }, { "index": 1, "name": "NVIDIA RTX A4000", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 10, "power_limit_watts": 140, "memory_total_gb": 15.99, "memory_used_gb": 0.56, "memory_free_gb": 15.43, "memory_usage_percent": 3, "temperature": 41, "processes": [], "pcie_link_state": "not managed" } ]
Start at Boot
Create `/etc/systemd/system/nvapi.service`:

```
[Unit]
Description=Run NVapi
After=network.target

[Service]
Type=simple
Environment="GOPATH=/home/ansible/go"
WorkingDirectory=/home/ansible/NVapi
ExecStart=/usr/bin/go run main.go -port 9999 -rate 1
Restart=always
User=ansible
Environment="GPU_TEMP_CHECK_INTERVAL=5"
Environment="GPU_TOTAL_POWER_CAP=400"
Environment="GPU_0_LOW_TEMP=40"
Environment="GPU_0_MEDIUM_TEMP=70"
Environment="GPU_0_LOW_TEMP_LIMIT=135"
Environment="GPU_0_MEDIUM_TEMP_LIMIT=120"
Environment="GPU_0_HIGH_TEMP_LIMIT=100"
Environment="GPU_1_LOW_TEMP=45"
Environment="GPU_1_MEDIUM_TEMP=75"
Environment="GPU_1_LOW_TEMP_LIMIT=140"
Environment="GPU_1_MEDIUM_TEMP_LIMIT=125"
Environment="GPU_1_HIGH_TEMP_LIMIT=110"

[Install]
WantedBy=multi-user.target
```
Home Assistant
Add to Home Assistant `configuration.yaml` and restart HA (completely).

For a single GPU, this works:

```
sensor:
  - platform: rest
    name: MYPC GPU Information
    resource: http://mypc:9999
    method: GET
    headers:
      Content-Type: application/json
    value_template: "{{ value_json[0].index }}"
    json_attributes:
      - name
      - gpu_utilisation
      - memory_utilisation
      - power_watts
      - power_limit_watts
      - memory_total_gb
      - memory_used_gb
      - memory_free_gb
      - memory_usage_percent
      - temperature
    scan_interval: 1 # seconds

  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'temperature') }}"
        unit_of_measurement: "°C"
```
For multiple GPUs:

```
rest:
  scan_interval: 1
  resource: http://mypc:9999
  sensor:
    - name: "MYPC GPU0 Information"
      value_template: "{{ value_json[0].index }}"
      json_attributes_path: "$.0"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature
    - name: "MYPC GPU1 Information"
      value_template: "{{ value_json[1].index }}"
      json_attributes_path: "$.1"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature

sensor:
  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC GPU0 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC GPU0 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC GPU0 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC GPU0 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC GPU0 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'temperature') }}"
        unit_of_measurement: "C"
  - platform: template
    sensors:
      mypc_gpu_1_gpu:
        friendly_name: "MYPC GPU1 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_memory:
        friendly_name: "MYPC GPU1 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_power:
        friendly_name: "MYPC GPU1 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_power_limit:
        friendly_name: "MYPC GPU1 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_temperature:
        friendly_name: "MYPC GPU1 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'temperature') }}"
        unit_of_measurement: "C"
```
Basic entity card:
```
type: entities
entities:
  - entity: sensor.mypc_gpu_0_gpu
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_memory
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power_limit
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_temperature
    secondary_info: last-updated
```
Ansible Role
```
- name: install go
  become: true
  package:
    name: golang-go
    state: present

- name: git clone
  git:
    repo: "https://github.com/sammcj/NVApi.git"
    dest: "/home/ansible/NVapi"
    update: yes
    force: true
  # go run main.go -port 9999 -rate 1

- name: install systemd service
  become: true
  copy:
    src: nvapi.service
    dest: /etc/systemd/system/nvapi.service

- name: Reload systemd daemons, enable, and restart nvapi
  become: true
  systemd:
    name: nvapi
    daemon_reload: yes
    enabled: yes
    state: restarted
```
@ 90c656ff:9383fd4e
2025-05-06 10:32:35Bitcoin is a new form of digital money that offers financial freedom and access to a global economy without traditional intermediaries. To take full advantage of this technology, it's important to understand how to buy, store, and use it safely and efficiently. This guide covers the main steps and best practices to incorporate Bitcoin into your daily life, emphasizing how to protect your assets and get the most out of them.
Buying Bitcoin is the first step to participating in its decentralized network. There are several ways to acquire Bitcoin, depending on individual preferences and needs.
- Exchange Platforms:
01 - How it works: Exchanges are online platforms that allow users to buy Bitcoin using traditional currencies like dollars, euros, or reais.
02 - Process: Create an account, complete identity verification (KYC process), and deposit funds to start trading.
Tips: Choose reliable exchanges with strong security and good reputations.
- Bitcoin ATMs:
01 - How it works: Some ATMs allow users to buy Bitcoin with cash or credit cards.
02 - Use: Insert the desired amount, scan your digital wallet, and receive the Bitcoin immediately.
- Peer-to-Peer (P2P) Buying:
01 - How it works: P2P platforms connect buyers and sellers directly, allowing them to negotiate specific terms.
02 - Tips: Check the seller's reputation and use platforms that offer escrow services or transaction guarantees.
Security is crucial when handling Bitcoin. Proper storage protects your funds against loss, hacking, and unauthorized access.
- Digital Wallets:
01 - Definition: A digital wallet is software or a physical device that stores the private keys needed to access your Bitcoin.
Types of Wallets:
01 - Hot wallets: Connected to the internet; suitable for frequent use but more vulnerable to attacks (e.g., mobile apps and web wallets).
02 - Cold wallets: Keep Bitcoin offline; more secure for storing large amounts (e.g., hardware wallets and paper wallets).
- Hardware Wallets:
01 - How they work: Physical devices like Ledger or Trezor store your private keys offline.
02 - Advantages: High security against digital attacks and easy to transport.
- Paper Wallets:
01 - How they work: Involve printing or writing down your private keys on a piece of paper.
02 - Precautions: Store in a safe place, protected from moisture, fire, and unauthorized access.
- Backup and Recovery:
01 - Best practice: Regularly back up your wallet and store your recovery phrase (seed phrase) in a secure location.
02 - Warning: Never share your recovery phrase or private key with anyone.
Using Bitcoin goes beyond investment. It can be used for daily transactions, purchases, and transferring value efficiently.
- Transactions:
01 - How to send Bitcoin: Enter the recipient’s address, the amount to send, and confirm the transaction from your wallet.
02 - Fees: Transaction fees go to miners and may vary based on network demand.
- Purchasing Goods and Services:
01 - Merchants that accept Bitcoin: Many businesses, both physical and online, now accept Bitcoin. Look for the Bitcoin logo or consult updated lists of accepting merchants.
02 - How to pay: Scan the seller’s QR code and send the payment directly from your wallet.
International Transfers: Bitcoin enables fast global transfers, often with lower fees than banks or conventional remittance services.
Bill Payments: In some countries, it's already possible to pay for services and even taxes with Bitcoin, depending on local infrastructure.
- Tips for Using Bitcoin Safely:
01 - Choose trusted wallets and services: Only use well-known, reputable wallets and exchanges.
02 - Enable two-factor authentication (2FA): Activate 2FA to protect your accounts on exchanges and online services.
03 - Don’t leave funds on exchanges: After buying Bitcoin on an exchange, transfer your funds to a wallet you control to reduce the risk of loss from hacks.
04 - Educate yourself: Understanding the basics of Bitcoin and digital security is key to avoiding mistakes and fraud.
In summary, buying, storing, and using Bitcoin might seem complex at first, but it becomes simple and accessible with time. By following best security practices and learning the basics, anyone can benefit from this innovative technology.
Bitcoin is not just a financial option; it’s a powerful tool that supports economic freedom and access to a global economy. With the right knowledge, you can integrate Bitcoin into your life securely and effectively.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ 005bc4de:ef11e1a2
2025-05-06 11:54:14May 6 marks my "Nostr birthday." This means I've been on Nostr for two years now. See my initial "Running nostr" note timestamped and archived on the Hive blockchain at https://peakd.com/bitcoin/@crrdlx/running-nostr
Two years ago, I really had no idea what Nostr was. I was asking, "What is this Nostr thing?"
And I had no idea what I was doing back then while using the front-end clients. The clients were clunky, and the protocol was rather plastic (still kinda is). As evidence of my ignorance, the spinning wheels on Coracle.social just kept spinning. I didn't realize that since I was only following two people, one being myself, there was nothing to load from relays except my one "Running nostr" note. Hence, the Coracle wheels just spun in their mesmerizing manner. At least they're soothing to watch.
Yet, despite my ignorance, I had an inkling of a notion that Nostr was indeed something different, maybe special. Otherwise, I wouldn't have taken the time to capture an animated gif and make that Hive post to chronicle my first Nostr note.
For fun, I made another "Running nostr" note yesterday using Coracle.social. It still has those muted, earthy tones, but the wheels don't stick around for long anymore. Coracle, like Nostr, has come a long way in two years. It loads much faster now, which means less wheel spinning. I kind of miss the wheels for some reason; they build the drama and expectation of what might appear.
!HBIT
-
@ 6f6b50bb:a848e5a1
2024-12-15 15:09:52What would it mean to treat AI as a tool instead of a person?
Since the launch of ChatGPT, explorations in two directions have picked up speed.
The first direction concerns technical capabilities. How large a model can we train? How well can it answer SAT questions? How efficiently can we deploy it?
The second direction concerns interaction design. How do we communicate with a model? How can we use it for useful work? What metaphor do we use to reason about it?
The first direction is widely pursued and enormously funded, and for good reason: progress in technical capabilities underpins every possible application. But the second is just as crucial to the field and holds enormous unknowns. We are only a few years into the era of large models. What are the odds that we have already figured out the best ways to use them?
I propose a new mode of interaction, in which models play the role of computer applications (for example, phone apps): providing a graphical interface, interpreting user input, and updating their state. In this mode, instead of being an "agent" that uses a computer on a human's behalf, the AI can provide a richer and more powerful computing environment for us to use.
Metaphors for interaction
At the heart of an interaction is a metaphor that guides a user's expectations of a system. The early days of computing took metaphors like "desks", "typewriters", "spreadsheets", and "letters" and turned them into digital equivalents, allowing the user to reason about their behaviour. You can leave something on your desk and come back for it; you need an address to send a letter. As we developed a cultural knowledge of these devices, the need for these particular metaphors disappeared, and with them the skeuomorphic interface designs that reinforced them. Like a trash can or a pencil, a computer is now a metaphor for itself.
The dominant metaphor for large models today is model-as-person. It is an effective metaphor because people have broad capabilities that we know intuitively. It implies that we can have a conversation with a model and ask it questions; that the model can collaborate with us on a document or a piece of code; that we can give it a task to carry out on its own, and it will come back when it is finished.
However, treating a model as a person deeply limits how we think about interacting with it. Human interactions are inherently slow and linear, limited by bandwidth and by the turn-taking nature of verbal communication. As we have all experienced, communicating complex ideas in a conversation is difficult and lossy. When we want precision, we turn to tools instead, using direct manipulation and high-bandwidth visual interfaces to make diagrams, write code, and design CAD models. Because we conceive of models as people, we use them through slow conversations, even though they are perfectly capable of accepting fast, direct input and producing visual output. The metaphors we use limit the experiences we build, and the model-as-person metaphor keeps us from exploring the full potential of large models.
For many use cases, and especially for productive work, I believe the future lies in another metaphor: model-as-computer.
Using an AI as a computer
Under the model-as-computer metaphor, we will interact with large models following the intuitions we have about computer applications (whether on desktop, tablet, or phone). Note that this does not mean the model will be a traditional app any more than the Windows desktop was a literal desk. "Computer application" will be a way for a model to represent itself to us. Instead of acting like a person, the model will act like a computer.
Acting like a computer means producing a graphical interface. In place of the linear, teletype-style flow of text provided by ChatGPT, a model-as-computer system will generate something resembling the interface of a modern application: buttons, sliders, tabs, images, plots, and all the rest. This addresses key limitations of the standard model-as-person chat interface:
- Discovery. A good tool suggests its uses. When the only interface is an empty text box, it is up to the user to figure out what to do and to understand the system's limits. The Edit sidebar in Lightroom is a great way to learn photo editing because it doesn't just tell you what this application can do with a photo, but what you might want to do. In the same way, a model-as-computer interface for DALL-E could surface new possibilities for your image generations.
- Efficiency. Direct manipulation is faster than describing a request in words. To continue the Lightroom example, it would be unthinkable to edit a photo by telling a person which sliders to move and by how much. It would take a whole day to ask for slightly lower exposure and slightly higher vibrance, just to see how it would look. In the model-as-computer metaphor, the model can create tools that let you communicate what you want more efficiently, and therefore get things done faster.
Unlike a traditional app, this graphical interface is generated by the model on demand. This means that every part of the interface you see is relevant to what you are doing at that moment, including the specific contents of your work. It also means that if you want a broader or different interface, you can simply ask for it. You could ask DALL-E to produce some editable presets for its settings inspired by famous sketch artists. When you click the Leonardo da Vinci preset, it sets the sliders for highly detailed perspective drawings in black ink. If you click Charles Schulz, it selects low-detail 2D technicolor comics.
A protean bicycle for the mind
The model-as-person metaphor has a curious tendency to create distance between the user and the model, mirroring the communication gap between two people, which can be narrowed but never completely closed. Because of the difficulty and cost of communicating in words, people tend to split tasks between themselves into chunks that are large and as independent as possible. Model-as-person interfaces follow this pattern: it isn't worth telling a model to add a return statement to your function when it's faster to write it yourself. With that communication overhead, model-as-person systems are most useful when they can do an entire chunk of work on their own. They do things for you.
This contrasts with how we interact with computers or other tools. Tools produce visual feedback in real time and are controlled through direct manipulation. They have such low communication overhead that there is no need to specify an independent chunk of work. It makes more sense to keep the human in the loop and direct the tool moment by moment. Like seven-league boots, tools let you go further with each step, but you are still the one doing the work. They let you do things faster.
Consider the task of building a website using a large model. With today's interfaces, you might treat the model as a contractor or a collaborator. You would try to write out in words as much as possible about how you want the site to look, what you want it to say, and what features you want it to have. The model would generate a first draft, you would run it, and then you would give feedback. "Make the logo a bit bigger," you would say, and "centre that first hero image," and "there needs to be a login button in the header." To get exactly what you want, you will send a very long list of ever more minute requests.
An alternative model-as-computer interaction would be different: instead of building the website, the model would generate an interface for you to build it, where every user input to that interface queries the large model under the hood. Perhaps when you describe your needs it would create an interface with a sidebar and a preview window. At first the sidebar contains only a few layout sketches you can choose as a starting point. You can click on each of them, and the model writes the HTML for a web page using that layout and displays it in the preview window. Now that you have a page to work on, the sidebar gains additional options that affect the page globally, such as font pairings and colour schemes. The preview acts as a WYSIWYG editor, letting you grab elements and move them, edit their contents, and so on. Supporting all of this is the model, which sees these user actions and rewrites the page to match the changes made. Because the model can generate an interface to help you and it communicate more efficiently, you can exert more control over the final product in less time.
The model-as-computer metaphor encourages us to think of the model as a tool to interact with in real time rather than a collaborator to hand tasks to. Instead of replacing an intern or a tutor, it can be a kind of protean bicycle for the mind, one that is always custom-built exactly for you and the terrain you intend to cross.
A new paradigm for computing?
Models that can generate interfaces on demand are a completely new frontier in computing. They may be an entirely new paradigm, in the way they short-circuit the existing application model. Giving end users the power to create and modify apps on the fly fundamentally changes how we interact with computers. In place of a single static application built by a developer, a model will generate an application tailored to the user and their immediate needs. In place of business logic implemented in code, the model will interpret the user's inputs and update the user interface. It is even possible that this kind of generative interface will replace the operating system entirely, generating and managing interfaces and windows on the fly as needed.
At first, generative interfaces will be a toy, useful only for creative exploration and a few other niche applications. After all, nobody would want an email app that occasionally sends emails to your ex and lies about your inbox. But gradually the models will improve. Even as they push further into the space of entirely new experiences, they will slowly become reliable enough to be used for real work.
Small pieces of this future already exist. Years ago Jonas Degrave showed that ChatGPT could do a passable simulation of a Linux command line. Similarly, websim.ai uses an LLM to generate websites on demand as you browse them. Oasis, GameNGen, and DIAMOND train action-conditioned video models on individual video games, letting you play, for example, Doom inside a large model. And Genie 2 generates playable video games from text prompts. Generative interfaces may still sound like a crazy idea, but they're not that crazy.
There are enormous open questions about what all of this will look like. Where will generative interfaces first be useful? How will we share and distribute the experiences we create by collaborating with the model, if they exist only as the context of a large model? Would we even want to? What new kinds of experiences will be possible? How will all of this work in practice? Will models generate interfaces as code, or produce raw pixels directly?
I don't know the answers yet. We'll have to experiment and find out!
Scoperta. Un buon strumento suggerisce i suoi usi. Quando l’unica interfaccia è una casella di testo vuota, spetta all’utente capire cosa fare e comprendere i limiti del sistema. La barra laterale Modifica in Lightroom è un ottimo modo per imparare l’editing fotografico perché non si limita a dirti cosa può fare questa applicazione con una foto, ma cosa potresti voler fare. Allo stesso modo, un’interfaccia modello-come-computer per DALL-E potrebbe mostrare nuove possibilità per le tue generazioni di immagini.
Efficienza. La manipolazione diretta è più rapida che scrivere una richiesta a parole. Per continuare l’esempio di Lightroom, sarebbe impensabile modificare una foto dicendo a una persona quali cursori spostare e di quanto. Ci vorrebbe un giorno intero per chiedere un’esposizione leggermente più bassa e una vibranza leggermente più alta, solo per vedere come apparirebbe. Nella metafora modello-come-computer, il modello può creare strumenti che ti permettono di comunicare ciò che vuoi più efficientemente e quindi di fare le cose più rapidamente.
A differenza di un’app tradizionale, questa interfaccia grafica è generata dal modello su richiesta. Questo significa che ogni parte dell’interfaccia che vedi è rilevante per ciò che stai facendo in quel momento, inclusi i contenuti specifici del tuo lavoro. Significa anche che, se desideri un’interfaccia più ampia o diversa, puoi semplicemente richiederla. Potresti chiedere a DALL-E di produrre alcuni preset modificabili per le sue impostazioni ispirati da famosi artisti di schizzi. Quando clicchi sul preset Leonardo da Vinci, imposta i cursori per disegni prospettici altamente dettagliati in inchiostro nero. Se clicchi su Charles Schulz, seleziona fumetti tecnicolor 2D a basso dettaglio.
Una bicicletta della mente proteiforme
La metafora modello-come-persona ha una curiosa tendenza a creare distanza tra l’utente e il modello, rispecchiando il divario di comunicazione tra due persone che può essere ridotto ma mai completamente colmato. A causa della difficoltà e del costo di comunicare a parole, le persone tendono a suddividere i compiti tra loro in blocchi grandi e il più indipendenti possibile. Le interfacce modello-come-persona seguono questo schema: non vale la pena dire a un modello di aggiungere un return statement alla tua funzione quando è più veloce scriverlo da solo. Con il sovraccarico della comunicazione, i sistemi modello-come-persona sono più utili quando possono fare un intero blocco di lavoro da soli. Fanno le cose per te.
Questo contrasta con il modo in cui interagiamo con i computer o altri strumenti. Gli strumenti producono feedback visivi in tempo reale e sono controllati attraverso manipolazioni dirette. Hanno un overhead comunicativo così basso che non è necessario specificare un blocco di lavoro indipendente. Ha più senso mantenere l’umano nel loop e dirigere lo strumento momento per momento. Come stivali delle sette leghe, gli strumenti ti permettono di andare più lontano a ogni passo, ma sei ancora tu a fare il lavoro. Ti permettono di fare le cose più velocemente.
Considera il compito di costruire un sito web usando un grande modello. Con le interfacce di oggi, potresti trattare il modello come un appaltatore o un collaboratore. Cercheresti di scrivere a parole il più possibile su come vuoi che il sito appaia, cosa vuoi che dica e quali funzionalità vuoi che abbia. Il modello genererebbe una prima bozza, tu la eseguirai e poi fornirai un feedback. “Fai il logo un po’ più grande”, diresti, e “centra quella prima immagine principale”, e “deve esserci un pulsante di login nell’intestazione”. Per ottenere esattamente ciò che vuoi, invierai una lista molto lunga di richieste sempre più minuziose.
Un’interazione alternativa modello-come-computer sarebbe diversa: invece di costruire il sito web, il modello genererebbe un’interfaccia per te per costruirlo, dove ogni input dell’utente a quell’interfaccia interroga il grande modello sotto il cofano. Forse quando descrivi le tue necessità creerebbe un’interfaccia con una barra laterale e una finestra di anteprima. All’inizio la barra laterale contiene solo alcuni schizzi di layout che puoi scegliere come punto di partenza. Puoi cliccare su ciascuno di essi, e il modello scrive l’HTML per una pagina web usando quel layout e lo visualizza nella finestra di anteprima. Ora che hai una pagina su cui lavorare, la barra laterale guadagna opzioni aggiuntive che influenzano la pagina globalmente, come accoppiamenti di font e schemi di colore. L’anteprima funge da editor WYSIWYG, permettendoti di afferrare elementi e spostarli, modificarne i contenuti, ecc. A supportare tutto ciò è il modello, che vede queste azioni dell’utente e riscrive la pagina per corrispondere ai cambiamenti effettuati. Poiché il modello può generare un’interfaccia per aiutare te e lui a comunicare più efficientemente, puoi esercitare più controllo sul prodotto finale in meno tempo.
La metafora modello-come-computer ci incoraggia a pensare al modello come a uno strumento con cui interagire in tempo reale piuttosto che a un collaboratore a cui assegnare compiti. Invece di sostituire un tirocinante o un tutor, può essere una sorta di bicicletta proteiforme per la mente, una che è sempre costruita su misura esattamente per te e il terreno che intendi attraversare.
Un nuovo paradigma per l’informatica?
I modelli che possono generare interfacce su richiesta sono una frontiera completamente nuova nell’informatica. Potrebbero essere un paradigma del tutto nuovo, con il modo in cui cortocircuitano il modello di applicazione esistente. Dare agli utenti finali il potere di creare e modificare app al volo cambia fondamentalmente il modo in cui interagiamo con i computer. Al posto di una singola applicazione statica costruita da uno sviluppatore, un modello genererà un’applicazione su misura per l’utente e le sue esigenze immediate. Al posto della logica aziendale implementata nel codice, il modello interpreterà gli input dell’utente e aggiornerà l’interfaccia utente. È persino possibile che questo tipo di interfaccia generativa sostituisca completamente il sistema operativo, generando e gestendo interfacce e finestre al volo secondo necessità.
All’inizio, l’interfaccia generativa sarà un giocattolo, utile solo per l’esplorazione creativa e poche altre applicazioni di nicchia. Dopotutto, nessuno vorrebbe un’app di posta elettronica che occasionalmente invia email al tuo ex e mente sulla tua casella di posta. Ma gradualmente i modelli miglioreranno. Anche mentre si spingeranno ulteriormente nello spazio di esperienze completamente nuove, diventeranno lentamente abbastanza affidabili da essere utilizzati per un lavoro reale.
Piccoli pezzi di questo futuro esistono già. Anni fa Jonas Degrave ha dimostrato che ChatGPT poteva fare una buona simulazione di una riga di comando Linux. Allo stesso modo, websim.ai utilizza un LLM per generare siti web su richiesta mentre li navighi. Oasis, GameNGen e DIAMOND addestrano modelli video condizionati sull’azione su singoli videogiochi, permettendoti di giocare ad esempio a Doom dentro un grande modello. E Genie 2 genera videogiochi giocabili da prompt testuali. L’interfaccia generativa potrebbe ancora sembrare un’idea folle, ma non è così folle.
Ci sono enormi domande aperte su come apparirà tutto questo. Dove sarà inizialmente utile l’interfaccia generativa? Come condivideremo e distribuiremo le esperienze che creiamo collaborando con il modello, se esistono solo come contesto di un grande modello? Vorremmo davvero farlo? Quali nuovi tipi di esperienze saranno possibili? Come funzionerà tutto questo in pratica? I modelli genereranno interfacce come codice o produrranno direttamente pixel grezzi?
Non conosco ancora queste risposte. Dovremo sperimentare e scoprirlo!
Tradotto da: https://willwhitney.com/computing-inside-ai.html
-
-
@ 1b9fc4cd:1d6d4902
2025-05-06 11:06:40Music has always been dynamic, molding and reflecting cultural shifts across generations. From the smoky underground clubs of Northern England to the gritty, graffiti-laden walls of New York City's punk venues, and the rain-soaked streets of Seattle, the evolution of music is a testament to the ever-changing landscape of human expression. Daniel Siegel Alonso takes you on a witty and insightful journey through pivotal moments in music history: The Beatles at The Cavern Club, punk rock's birth at CBGBs, and the Seattle grunge explosion.
The Beatles do The Cavern
Close your eyes and imagine: It's 1961, and you're down in the basement of The Cavern Club in Liverpool; it's packed with sweat-drenched, eager faces, and the air thick, dripping with anticipation. On stage, four young lads who would soon become the most famous band in the world are tuning their guitars. The Beatles, with their mop-top haircuts and cheeky grins, are on the precipice of changing music for generations.
Before they were household names, John, Paul, George, and Ringo honed their craft in this humble, dimly lit venue. The Cavern Club was their proving ground, where they transitioned from covering American icons Chuck Berry and Little Richard to showcasing their original material. Here, they first captivated audiences with their infectious energy and groundbreaking sound.
The group's time at The Cavern Club was pivotal. It was where they caught the eye of Brian Epstein, who would become their manager, and later, record producer George Martin, aka the fifth Beatle. This tiny, subterranean venue was the launchpad for a nuclear cultural revolution. The Beatles didn't just play pop and rock music; they constructed an identity, a lifestyle, and, in hindsight, an era. They embodied the spirit of the Swinging 60s, melding rock 'n' roll with a bouncy pop sensibility that was both rowdy and charming.
Anarchy in the Big Apple
Daniel Siegel Alonso fast-forwards to the mid-70s, and we're in an entirely different world. Bankrupt Manhattan, in the bowels of a biker bar on the Bowery called CBGBs--a mouthful of an acronym standing for Country, Bluegrass, and Blues. The stage is dilapidated, and the sound system is a haphazard collection of amps and speakers at best. Here, the raw energy of punk rock was born, thrashing and pogoing its way into the mainstream.
CBGBs became the center of a musical revolt. Groups like The Ramones, Blondie, and Television took to the ramshackle stage, bringing with them a loud, fast, and unapologetically raw sound. Punk was a direct response to the bloated excesses of middle-of-the-road rock and bands like Yes, Chicago, and Fleetwood Mac; punk was do-it-yourself, back to basics, and in-your-face.
The Ramones epitomized this new angsty attitude with their black leather jackets and torn jeans. The songs they wrote were short, sharp, and shocking to audiences accustomed to indulgent guitar solos and elaborate stage productions. CBGBs was more than just a venue; it was a breeding ground for a cultural movement. It embraced the DIY ethic, encouraging emerging bands to play regardless of polish or professionalism. This sense of independence and defiance reverberated with a new generation of listeners disenchanted by the status quo.
The Last Great Rock Revolution
Siegel Alonso jumps ahead another decade to Seattle, a city known more for its rain than its rock-and-roll. Yet, over three decades ago, Seattle was the epicenter of grunge, a new genre that would once again redefine music. The core of this movement was a collection of venues like The Crocodile and The Off Ramp, where bands like Nirvana, Pearl Jam, and Soundgarden first made their mark.
Grunge was a gritty, angst-filled reaction to the over-produced pop and ostentatious hair metal of the 80s. It combined the raw energy of punk from the previous decade with heavy metal's strength, birthing a sound that was both abrasive and softly melodic. Grunge poster boy Kurt Cobain, with his ragged sweaters and unkempt wiry hair, became the reluctant voice of the last analog generation. Nirvana's breakout album, "Nevermind," was a seismic pop culture event, forcing grunge into the global mainstream.
Seattle's grunge scene was characterized by authenticity and a sense of community. Bands often collaborated and supported each other, creating a tight-knit musical ecosystem. The city's isolation from traditional music industry hubs allowed for a unique sound to develop, one that was untainted by commercial pressures.
Connecting the Dots
What ties these three musical moments together is their grassroots beginnings. The Beatles, the first wave of punk rock, and grunge all began in small, dingy venues, driven by pure passion and a craving to disrupt the status quo. Each musical chapter mirrored and influenced the cultural zeitgeist of its time, providing a soundtrack to their respective eras' social changes and attitudes.
The Cavern Club, CBGBs, and Seattle's grunge venues were more than places where bands performed; they were incubators of innovation and rebellion. They nurtured the raw, unpolished energy that would shape the future of popular music.
As Siegel Alonso reflects on these musical milestones, a pattern of evolution emerges driven by a handful of fundamental ingredients: authenticity, community, and a bold embrace of the unknown. Music's narrative is one of constant change, and as these examples depict, it's often in the most unexpected places that the next big thing begins to take shape.
-
@ dd664d5e:5633d319
2024-12-14 15:25:56Christmas season hasn't actually started, yet, in Roman #Catholic Germany. We're in Advent until the evening of the 24th of December, at which point Christmas begins (with the Nativity, at Vespers), and continues on for 40 days until Mariä Lichtmess (Presentation of Christ in the temple) on February 2nd.
It's 40 days because that's how long the post-partum isolation lasted before women were allowed back into the temple (after a ritual cleansing).
That is the day when we put away all of the Christmas decorations and bless the candles, for the next year. (Hence, the British name "Candlemas".) It used to also be when household staff would get paid their cash wages and could change employer. And it is the day precisely in the middle of winter.
Between Christmas Eve and Candlemas are many celebrations, concluding with the Twelfth Night called Epiphany or Theophany. This is the day some Orthodox celebrate Christ's baptism, so traditions rotate around blessing of waters.
The Monday after Epiphany was the start of the farming season, in England, so that Sunday all of the ploughs were blessed, but the practice has largely died out.
Our local tradition is for the altar servers to dress as the wise men and go door-to-door, carrying their star and looking for the Baby Jesus, who is rumored to be lying in a manger.
They collect cash gifts and chocolates, along the way, and leave the generous their powerful blessing, written over the door. The famous 20 * C + M + B * 25 blessing means "Christus mansionem benedicat" (Christ, bless this house), or "Caspar, Melchior, Balthasar" (the names of the three kings), depending upon who you ask.
They offer the cash to the Baby Jesus (once they find him in the church's Nativity scene), but eat the sweets, themselves. It is one of the biggest donation-collections in the world, called the "Sternsinger" (star singers). The money goes from the German children, to help children elsewhere, and they collect around €45 million in cash and coins, every year.
As an interesting aside:
The American "groundhog day" derives from one of the old farmers' sayings about Candlemas, brought over by the Pennsylvania Dutch. It says that if the badger comes out of his hole and sees his shadow, then it'll remain cold for 4 more weeks. When they moved to the USA, they didn't have any badgers around, so they switched to groundhogs, as they also hibernate in winter.
-
@ e4950c93:1b99eccd
2025-05-06 10:32:20Contenu à venir.
-
@ d0bf8568:4db3e76f
2024-12-12 21:26:52Depuis sa création en 2009, Bitcoin suscite débats passionnés. Pour certains, il s’agit d’une tentative utopique d’émancipation du pouvoir étatique, inspirée par une philosophie libertarienne radicale. Pour d’autres, c'est une réponse pragmatique à un système monétaire mondial jugé défaillant. Ces deux perspectives, souvent opposées, méritent d’être confrontées pour mieux comprendre les enjeux et les limites de cette révolution monétaire.
La critique économique de Bitcoin
David Cayla, dans ses dernières contributions, perçoit Bitcoin comme une tentative de réaliser une « utopie libertarienne ». Selon lui, les inventeurs des cryptomonnaies cherchent à créer une monnaie détachée de l’État et des rapports sociaux traditionnels, en s’inspirant de l’école autrichienne d’économie. Cette école, incarnée par des penseurs comme Friedrich Hayek, prône une monnaie privée et indépendante de l’intervention politique.
Cayla reproche à Bitcoin de nier la dimension intrinsèquement sociale et politique de la monnaie. Pour lui, toute monnaie repose sur la confiance collective et la régulation étatique. En rendant la monnaie exogène et dénuée de dette, Bitcoin chercherait à éviter les interactions sociales au profit d’une logique purement individuelle. Selon cette analyse, Bitcoin serait une illusion, incapable de remplacer les monnaies traditionnelles qui sont des « biens publics » soutenus par des institutions.
Cependant, cette critique semble minimiser la légitimité des motivations derrière Bitcoin. La crise financière de 2008, marquée par des abus systémiques et des sauvetages bancaires controversés, a largement érodé la confiance dans les systèmes monétaires centralisés. Bitcoin n’est pas qu’une réaction idéologique : c’est une tentative de créer un système monétaire transparent, immuable et résistant aux manipulations.
La perspective maximaliste
Pour les bitcoiners, Bitcoin n’est pas une utopie mais une réponse pragmatique à un problème réel : la centralisation excessive et les défaillances des monnaies fiat. Contrairement à ce que Cayla affirme, Bitcoin ne cherche pas à nier la société, mais à protéger les individus de la coercition étatique. En déplaçant la confiance des institutions vers un protocole neutre et transparent, Bitcoin redéfinit les bases des échanges monétaires.
L’accusation selon laquelle Bitcoin serait « asocial » ignore son écosystème communautaire mondial. Les bitcoiners collaborent activement à l’amélioration du réseau, organisent des événements éducatifs et développent des outils inclusifs. Bitcoin n’est pas un rejet des rapports sociaux, mais une tentative de les reconstruire sur des bases plus justes et transparentes.
Un maximaliste insisterait également sur l’émergence historique de la monnaie. Contrairement à l’idée que l’État aurait toujours créé la monnaie, de nombreux exemples montrent que celle-ci émergeait souvent du marché, avant d’être capturée par les gouvernements pour financer des guerres ou imposer des monopoles. Bitcoin réintroduit une monnaie à l’abri des manipulations politiques, similaire aux systèmes basés sur l’or, mais en mieux : numérique, immuable et accessible.
Utopie contre réalité
Si Cayla critique l'absence de dette dans Bitcoin, nous y voyons une vertu. La « monnaie-dette » actuelle repose sur un système de création monétaire inflationniste qui favorise les inégalités et l'érosion du pouvoir d'achat. En limitant son offre à 21 millions d'unités, Bitcoin offre une alternative saine et prévisible, loin des politiques monétaires arbitraires.
Cependant, les maximalistes ne nient pas que Bitcoin ait encore des défis à relever. La volatilité de sa valeur, sa complexité technique pour les nouveaux utilisateurs, et les tensions entre réglementation et liberté individuelle sont des questions ouvertes. Mais pour autant, ces épreuves ne remettent pas en cause sa pertinence. Au contraire, elles illustrent l’importance de réfléchir à des modèles alternatifs.
Conclusion : un débat essentiel
Bitcoin n’est pas une utopie libertarienne, mais une révolution monétaire sans précédent. Les critiques de Cayla, bien qu’intellectuellement stimulantes, manquent de saisir l’essence de Bitcoin : une monnaie qui libère les individus des abus systémiques des États et des institutions centralisées. Loin de simplement avoir un « potentiel », Bitcoin est déjà en train de redéfinir les rapports entre la monnaie, la société et la liberté. Bitcoin offre une expérience unique : celle d’une monnaie mondiale, neutre et décentralisée, qui redonne le pouvoir aux individus. Que l’on soit sceptique ou enthousiaste, il est clair que Bitcoin oblige à repenser les rapports entre monnaie, société et État.
En fin de compte, le débat sur Bitcoin n’est pas seulement une querelle sur sa légitimité, mais une interrogation sur la manière dont il redessine les rapports sociaux et sociétaux autour de la monnaie. En rendant le pouvoir monétaire à chaque individu, Bitcoin propose un modèle où les interactions économiques peuvent être réalisées sans coercition, renforçant ainsi la confiance mutuelle et les communautés globales. Cette discussion, loin d’être close, ne fait que commencer.
-
@ e6817453:b0ac3c39
2024-12-07 15:06:43I started a long series of articles about how to model different types of knowledge graphs in the relational model, which makes on-device memory models for AI agents possible.
We model directed graphs
Also, graphs of entities
We even model hypergraphs
Last time, we discussed why classical triple and simple knowledge graphs are insufficient for AI agents and complex memory, especially in the domain of time-aware or multi-model knowledge.
So why do we need metagraphs, and what kind of challenge could they help us to solve?
- complex and nested events, temporal context, and temporal relations as edges
- multi-modal and multilingual knowledge
- human-like memory for AI agents that has multiple contexts and relations between knowledge in neuron-like networks
MetaGraphs
A meta graph is a concept that extends the idea of a graph by allowing edges to become graphs. Meta edges connect a set of nodes, which could also be subgraphs. So, at some level, nodes and edges are pretty similar in properties but act in different roles depending on context.
Also, in some cases, edges could be referenced as nodes.
This approach enables the representation of more complex relationships and hierarchies than a traditional graph structure allows. Let's break down each term to better understand metagraphs and how they differ from hypergraphs and graphs.
Graph Basics
- A standard graph has a set of nodes (or vertices) and edges (connections between nodes).
- Edges are generally simple and typically represent a binary relationship between two nodes.
- For instance, an edge in a social network graph might indicate a “friend” relationship between two people (nodes).
Hypergraph
- A hypergraph extends the concept of an edge by allowing it to connect any number of nodes, not just two.
- Each connection, called a hyperedge, can link multiple nodes.
- This feature allows hypergraphs to model more complex relationships involving multiple entities simultaneously. For example, a hyperedge in a hypergraph could represent a project team, connecting all team members in a single relation.
- Despite its flexibility, a hypergraph doesn’t capture hierarchical or nested structures; it only generalizes the number of connections in an edge.
Metagraph
- A metagraph allows the edges to be graphs themselves. This means each edge can contain its own nodes and edges, creating nested, hierarchical structures.
- In a meta graph, an edge could represent a relationship defined by a graph. For instance, a meta graph could represent a network of organizations where each organization’s structure (departments and connections) is represented by its own internal graph and treated as an edge in the larger meta graph.
- This recursive structure allows metagraphs to model complex data with multiple layers of abstraction. They can capture multi-node relationships (as in hypergraphs) and detailed, structured information about each relationship.
Named Graphs and Graph of Graphs
As you can notice, the structure of a metagraph is quite complex and can be hard to model in relational and classical RDF setups. It could create a challenge due to the lack of tools and software solutions for your problem.
If you need to model nested graphs, you could use a much simpler model of Named graphs, which could take you quite far.
The concept of the named graph came from the RDF community, which needed to group some sets of triples. In this way, you form subgraphs inside an existing graph. You could refer to the subgraph as a regular node. This setup simplifies complex graphs, introduces hierarchies, and even adds features and properties of hypergraphs while keeping a directed nature.
It looks complex, but it is not so hard to model it with a slight modification of a directed graph.
So, the node could host graphs inside. Let's reflect this fact with a location for a node. If a node belongs to a main graph, we could set the location to null or introduce a main node; it is up to you. Nodes could have edges to nodes in different subgraphs. This structure allows any kind of nested graph. Edges stay location-free.
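A tiny sketch of that idea (illustrative names only, not a finished design): each node carries a location pointing at the node whose subgraph it lives in, while edges stay location-free.

```python
# location=None means the node lives in the main graph
nodes = {
    "berlin_data": {"location": None},           # this node hosts a subgraph
    "population":  {"location": "berlin_data"},  # lives inside that subgraph
    "germany":     {"location": None},
}
# edges are location-free and may cross subgraph boundaries
edges = [("germany", "population")]

def members(graph_node):
    return [n for n, attrs in nodes.items() if attrs["location"] == graph_node]

print(members("berlin_data"))  # ['population']
```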
Meta Graphs in Relational Model
Let’s try to make several attempts to model different meta-graphs with some constraints.
Directed Metagraph where edges are not used as nodes and could not contain subgraphs
In this case, the edge always points to two sets of nodes. This introduces an overhead of creating a node set even for a single node. In this model, empty node sets are possible, which could require application-level constraints to prevent such cases.
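Here is a minimal sketch of this first variant, with table and column names that are just assumptions for illustration: every edge references an in-set and an out-set of nodes, so even a single endpoint pays the cost of its own node set.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node     (id TEXT PRIMARY KEY);
CREATE TABLE node_set (id TEXT PRIMARY KEY);
CREATE TABLE node_set_member (
    set_id  TEXT REFERENCES node_set(id),
    node_id TEXT REFERENCES node(id),
    PRIMARY KEY (set_id, node_id)
);
CREATE TABLE edge (
    id      TEXT PRIMARY KEY,
    in_set  TEXT NOT NULL REFERENCES node_set(id),
    out_set TEXT NOT NULL REFERENCES node_set(id)
);
""")

conn.executemany("INSERT INTO node VALUES (?)", [("a",), ("b",), ("c",)])
conn.executemany("INSERT INTO node_set VALUES (?)", [("s_in",), ("s_out",)])
conn.executemany("INSERT INTO node_set_member VALUES (?, ?)",
                 [("s_in", "a"), ("s_out", "b"), ("s_out", "c")])
# one edge from {a} to {b, c}; preventing empty sets stays an application-level check
conn.execute("INSERT INTO edge VALUES (?, ?, ?)", ("e1", "s_in", "s_out"))
```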
Directed Metagraph where edges are not used as nodes and could contain subgraphs
Adding a node set that could model a subgraph located in an edge is easy, but it should be kept separate from the in-vertex and out-vertex sets.
I also do not see a direct need to include subgraphs in a node, as we could just use a node set interchangeably, but it could still be a case.
Directed Metagraph where edges are used as nodes and could contain subgraphs
As you can notice, we operate all the time with node sets. We could simply extend the node set to an element set that includes both node and edge IDs, but in this case, we need to use UUIDs or some other strategy to differentiate node IDs from edge IDs. We also get a collision of ephemeral edges or ephemeral nodes when we want to change the role and purpose of a node into an edge or vice versa.
A full-scale metagraph model is way too complex for a relational database.
So we need a better model.
Now, we have more flexibility but lose structural constraints. We cannot show that an element should have one in-vertex, one out-vertex, or both. This type of constraint has been moved to the application level. Also, the crucial question is about query and retrieval needs.
Any meta-graph model should be more focused on domain and needs and should be used in raw form. We did it for a pure theoretical purpose. -
@ bbef5093:71228592
2025-05-06 05:17:07*Holtec új atomenergetikai együttműködése Utah államban*
Az amerikai Holtec International stratégiai megállapodást kötött Utah állammal és a Hi Tech Solutions nukleáris szolgáltatóval, hogy elősegítse saját fejlesztésű, SMR-300 típusú kis moduláris reaktorainak (SMR) telepítését Utahban és az Egyesült Államok Mountain West régiójában. A megállapodás értelmében 2028-ig állandó képzési központot hoznak létre Utahban, amely az üzemeltetéshez, karbantartáshoz és az új generációs nukleáris technológiákhoz szükséges munkaerő képzését célozza, szoros együttműködésben helyi egyetemekkel, főiskolákkal és szakiskolákkal.
A projekt Utah állam energetikai stratégiájának („Operation Gigawatt” és „Built Here”) része, amely a következő tíz évben megduplázná az állam energiatermelését, kiemelt szerepet szánva a tiszta nukleáris energiának. A Holtec Utah-t választotta nyugati gyártóbázisának is, ezzel erősítve az amerikai nukleáris ellátási láncot és hosszú távú munkahelyeket teremtve. A cég a 2030-as években akár 4 GW összkapacitású SMR-300 telepítését tervezi főként Utahban és Wyomingban.
Az SMR-300 egy nyomottvizes reaktor, amely 300 MW villamos vagy 1050 MW hőteljesítményt biztosít, és a Holtec tervei szerint kulcsszerepet játszhat a tiszta, szén-dioxid-mentes energiatermelésben.
*Amerikai SMR-piac helyzete*
Az SMR-technológia az utóbbi években került a figyelem középpontjába, mivel gyorsabban és olcsóbban telepíthető, mint a hagyományos atomerőművek, és lehetőséget kínál magánbefektetők számára is. Ugyanakkor az első amerikai SMR-projekt, a NuScale és a Utah Associated Municipal Power Systems közös beruházása költségnövekedés miatt meghiúsult, ami rávilágított a technológia gazdasági kihívásaira.
Nemzetközi fejlemények és további hírek
- Az ausztrál Okapi Resources és az Urenco együttműködési megállapodást kötött új urán dúsítási technológia fejlesztésére, amely tisztább és olcsóbb dúsítási eljárást ígér.
- Az Argonne National Laboratory sikeresen kicserélt egy 30 éves kulcsfontosságú alkatrészt a nátriumhűtésű gyorsreaktor-tesztberendezésében, ami hozzájárulhat a gyorsreaktor-technológia fejlesztéséhez az Egyesült Államokban.
- A Tennessee Egyetem ösztöndíjprogramot indít a Fülöp-szigeteki nukleáris szakemberek képzésére, támogatva ezzel a délkelet-ázsiai ország nukleáris energiafejlesztési törekvéseit.
- Bulgáriában a Westinghouse Electric Company helyi beszállítókkal kötött megállapodásokat két új AP1000 típusú reaktor építéséhez a kozloduji atomerőműben.
-
@ e6817453:b0ac3c39
2024-12-07 15:03:06Hey folks! Today, let’s dive into the intriguing world of neurosymbolic approaches, retrieval-augmented generation (RAG), and personal knowledge graphs (PKGs). Together, these concepts hold much potential for bringing true reasoning capabilities to large language models (LLMs). So, let’s break down how symbolic logic, knowledge graphs, and modern AI can come together to empower future AI systems to reason like humans.
The Neurosymbolic Approach: What It Means?
Neurosymbolic AI combines two historically separate streams of artificial intelligence: symbolic reasoning and neural networks. Symbolic AI uses formal logic to process knowledge, similar to how we might solve problems or deduce information. On the other hand, neural networks, like those underlying GPT-4, focus on learning patterns from vast amounts of data — they are probabilistic statistical models that excel in generating human-like language and recognizing patterns but often lack deep, explicit reasoning.
While GPT-4 can produce impressive text, it’s still not very effective at reasoning in a truly logical way. Its foundation, transformers, allows it to excel in pattern recognition, but the models struggle with reasoning because, at their core, they rely on statistical probabilities rather than true symbolic logic. This is where neurosymbolic methods and knowledge graphs come in.
Symbolic Calculations and the Early Vision of AI
If we take a step back to the 1950s, the vision for artificial intelligence was very different. Early AI research was all about symbolic reasoning — where computers could perform logical calculations to derive new knowledge from a given set of rules and facts. Languages like Lisp emerged to support this vision, enabling programs to represent data and code as interchangeable symbols. Lisp was designed to be homoiconic, meaning it treated code as manipulatable data, making it capable of self-modification — a huge leap towards AI systems that could, in theory, understand and modify their own operations.
Lisp: The Earlier AI-Language
Lisp, short for “LISt Processor,” was developed by John McCarthy in 1958, and it became the cornerstone of early AI research. Lisp’s power lay in its flexibility and its use of symbolic expressions, which allowed developers to create programs that could manipulate symbols in ways that were very close to human reasoning. One of the most groundbreaking features of Lisp was its ability to treat code as data, known as homoiconicity, which meant that Lisp programs could introspect and transform themselves dynamically. This ability to adapt and modify its own structure gave Lisp an edge in tasks that required a form of self-awareness, which was key in the early days of AI when researchers were exploring what it meant for machines to “think.”
Lisp was not just a programming language—it represented the vision for artificial intelligence, where machines could evolve their understanding and rewrite their own programming. This idea formed the conceptual basis for many of the self-modifying and adaptive algorithms that are still explored today in AI research. Despite its decline in mainstream programming, Lisp’s influence can still be seen in the concepts used in modern machine learning and symbolic AI approaches.
Prolog: Formal Logic and Deductive Reasoning
In the 1970s, Prolog was developed—a language focused on formal logic and deductive reasoning. Unlike Lisp, based on lambda calculus, Prolog operates on formal logic rules, allowing it to perform deductive reasoning and solve logical puzzles. This made Prolog an ideal candidate for expert systems that needed to follow a sequence of logical steps, such as medical diagnostics or strategic planning.
Prolog, like Lisp, allowed symbols to be represented, understood, and used in calculations, creating another homoiconic language that allows reasoning. Prolog’s strength lies in its rule-based structure, which is well-suited for tasks that require logical inference and backtracking. These features made it a powerful tool for expert systems and AI research in the 1970s and 1980s.
The language is declarative in nature, meaning that you define the problem, and Prolog figures out how to solve it. By using formal logic and setting constraints, Prolog systems can derive conclusions from known facts, making it highly effective in fields requiring explicit logical frameworks, such as legal reasoning, diagnostics, and natural language understanding. These symbolic approaches were later overshadowed during the AI winter — but the ideas never really disappeared. They just evolved.
Solvers and Their Role in Complementing LLMs
One of the most powerful features of Prolog and similar logic-based systems is their use of solvers. Solvers are mechanisms that can take a set of rules and constraints and automatically find solutions that satisfy these conditions. This capability is incredibly useful when combined with LLMs, which excel at generating human-like language but need help with logical consistency and structured reasoning.
For instance, imagine a scenario where an LLM needs to answer a question involving multiple logical steps or a complex query that requires deducing facts from various pieces of information. In this case, a solver can derive valid conclusions based on a given set of logical rules, providing structured answers that the LLM can then articulate in natural language. This allows the LLM to retrieve information and ensure the logical integrity of its responses, leading to much more robust answers.
Solvers are also ideal for handling constraint satisfaction problems — situations where multiple conditions must be met simultaneously. In practical applications, this could include scheduling tasks, generating optimal recommendations, or even diagnosing issues where a set of symptoms must match possible diagnoses. Prolog’s solver capabilities and LLM’s natural language processing power can make these systems highly effective at providing intelligent, rule-compliant responses that traditional LLMs would struggle to produce alone.
By integrating neurosymbolic methods that utilize solvers, we can provide LLMs with a form of deductive reasoning that is missing from pure deep-learning approaches. This combination has the potential to significantly improve the quality of outputs for use-cases that require explicit, structured problem-solving, from legal queries to scientific research and beyond. Solvers give LLMs the backbone they need to not just generate answers but to do so in a way that respects logical rigor and complex constraints.
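As a toy sketch of that division of labor (my own illustration, not a production pattern): suppose the LLM has already translated a request into structured constraints, and a tiny brute-force solver finds the assignments the model can then verbalize.

```python
from itertools import permutations

# Constraints an LLM might extract from:
# "Schedule Ann, Bo and Cy into slots 1-3; Ann can't take slot 1, and Bo must come before Cy."
people = ["Ann", "Bo", "Cy"]

def satisfied(slots):
    return slots["Ann"] != 1 and slots["Bo"] < slots["Cy"]

solutions = [
    dict(zip(people, perm))
    for perm in permutations([1, 2, 3])
    if satisfied(dict(zip(people, perm)))
]
print(solutions)
# [{'Ann': 2, 'Bo': 1, 'Cy': 3}, {'Ann': 3, 'Bo': 1, 'Cy': 2}]
```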
Graph of Rules for Enhanced Reasoning
Another powerful concept that complements LLMs is using a graph of rules. A graph of rules is essentially a structured collection of logical rules that interconnect in a network-like structure, defining how various entities and their relationships interact. This structured network allows for complex reasoning and information retrieval, as well as the ability to model intricate relationships between different pieces of knowledge.
In a graph of rules, each node represents a rule, and the edges define relationships between those rules — such as dependencies or causal links. This structure can be used to enhance LLM capabilities by providing them with a formal set of rules and relationships to follow, which improves logical consistency and reasoning depth. When an LLM encounters a problem or a question that requires multiple logical steps, it can traverse this graph of rules to generate an answer that is not only linguistically fluent but also logically robust.
For example, in a healthcare application, a graph of rules might include nodes for medical symptoms, possible diagnoses, and recommended treatments. When an LLM receives a query regarding a patient’s symptoms, it can use the graph to traverse from symptoms to potential diagnoses and then to treatment options, ensuring that the response is coherent and medically sound. The graph of rules guides reasoning, enabling LLMs to handle complex, multi-step questions that involve chains of reasoning, rather than merely generating surface-level responses.
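A minimal sketch of that healthcare flow, with made-up rules rather than real medical logic: each entry acts as a rule node, and traversal chains symptoms to diagnoses to treatments before the LLM phrases the answer.

```python
# premise -> conclusions; together these entries form a small graph of rules
rules = {
    "fever":     ["flu", "infection"],   # symptom   -> candidate diagnoses
    "cough":     ["flu"],
    "flu":       ["rest", "fluids"],     # diagnosis -> recommended treatments
    "infection": ["antibiotics"],
}

def traverse(facts, depth=2):
    frontier, derived = set(facts), set()
    for _ in range(depth):
        frontier = {c for f in frontier for c in rules.get(f, [])}
        derived |= frontier
    return derived

print(sorted(traverse({"fever", "cough"})))
# ['antibiotics', 'flu', 'fluids', 'infection', 'rest']
```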
Graphs of rules also enable modular reasoning, where different sets of rules can be activated based on the context or the type of question being asked. This modularity is crucial for creating adaptive AI systems that can apply specific sets of logical frameworks to distinct problem domains, thereby greatly enhancing their versatility. The combination of neural fluency with rule-based structure gives LLMs the ability to conduct more advanced reasoning, ultimately making them more reliable and effective in domains where accuracy and logical consistency are critical.
By implementing a graph of rules, LLMs are empowered to perform deductive reasoning alongside their generative capabilities, creating responses that are not only compelling but also logically aligned with the structured knowledge available in the system. This further enhances their potential applications in fields such as law, engineering, finance, and scientific research — domains where logical consistency is as important as linguistic coherence.
Enhancing LLMs with Symbolic Reasoning
Now, with LLMs like GPT-4 being mainstream, there is an emerging need to add real reasoning capabilities to them. This is where neurosymbolic approaches shine. Instead of pitting neural networks against symbolic reasoning, these methods combine the best of both worlds. The neural aspect provides language fluency and recognition of complex patterns, while the symbolic side offers real reasoning power through formal logic and rule-based frameworks.
Personal Knowledge Graphs (PKGs) come into play here as well. Knowledge graphs are data structures that encode entities and their relationships — they’re essentially semantic networks that allow for structured information retrieval. When integrated with neurosymbolic approaches, LLMs can use these graphs to answer questions in a far more contextual and precise way. By retrieving relevant information from a knowledge graph, they can ground their responses in well-defined relationships, thus improving both the relevance and the logical consistency of their answers.
Imagine combining an LLM with a graph of rules that allow it to reason through the relationships encoded in a personal knowledge graph. This could involve using deductive databases to form a sophisticated way to represent and reason with symbolic data — essentially constructing a powerful hybrid system that uses LLM capabilities for language fluency and rule-based logic for structured problem-solving.
My Research on Deductive Databases and Knowledge Graphs
I recently did some research on modeling knowledge graphs using deductive databases, such as DataLog — which can be thought of as a limited, data-oriented version of Prolog. What I’ve found is that it’s possible to use formal logic to model knowledge graphs, ontologies, and complex relationships elegantly as rules in a deductive system. Unlike classical RDF or traditional ontology-based models, which sometimes struggle with complex or evolving relationships, a deductive approach is more flexible and can easily support dynamic rules and reasoning.
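To give the flavor of what that looks like (a hand-rolled toy rather than actual DataLog): facts are triples, and a single rule is applied until a fixed point is reached, which is the essence of the deductive approach.

```python
facts = {
    ("berlin", "located_in", "germany"),
    ("germany", "located_in", "europe"),
}

def closure(facts, pred="located_in"):
    # rule: located_in(X, Z) :- located_in(X, Y), located_in(Y, Z)
    derived = set(facts)
    while True:
        new = {
            (x, pred, z)
            for (x, p, y) in derived if p == pred
            for (y2, p2, z) in derived if p2 == pred and y2 == y
        } - derived
        if not new:
            return derived
        derived |= new

print(("berlin", "located_in", "europe") in closure(facts))  # True
```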
Prolog and similar logic-driven frameworks can complement LLMs by handling the parts of reasoning where explicit rule-following is required. LLMs can benefit from these rule-based systems for tasks like entity recognition, logical inferences, and constructing or traversing knowledge graphs. We can even create a graph of rules that governs how relationships are formed or how logical deductions can be performed.
The future is really about creating an AI that is capable of both deep contextual understanding (using the powerful generative capacity of LLMs) and true reasoning (through symbolic systems and knowledge graphs). With the neurosymbolic approach, these AIs could be equipped not just to generate information but to explain their reasoning, form logical conclusions, and even improve their own understanding over time — getting us a step closer to true artificial general intelligence.
Why It Matters for LLM Employment
Using neurosymbolic RAG (retrieval-augmented generation) in conjunction with personal knowledge graphs could revolutionize how LLMs work in real-world applications. Imagine an LLM that understands not just language but also the relationships between different concepts — one that can navigate, reason, and explain complex knowledge domains by actively engaging with a personalized set of facts and rules.
This could lead to practical applications in areas like healthcare, finance, legal reasoning, or even personal productivity — where LLMs can help users solve complex problems logically, providing relevant information and well-justified reasoning paths. The combination of neural fluency with symbolic accuracy and deductive power is precisely the bridge we need to move beyond purely predictive AI to truly intelligent systems.
Let's explore these ideas further if you’re as fascinated by this as I am. Feel free to reach out, follow my YouTube channel, or check out some articles I’ll link below. And if you’re working on anything in this field, I’d love to collaborate!
Until next time, folks. Stay curious, and keep pushing the boundaries of AI!
-
@ dc0173e0:5499dd3b
2025-05-06 03:33:23Texas’s
-
@ ffbcb706:b0574044
2025-05-06 09:29:41Markdown test italic bold in openletter Nostr https://openletter.earth/ Have a great day
-
@ c230edd3:8ad4a712
2025-05-06 02:12:57Chef's notes
This cake is not too sweet and very simple to make. The 3 flavors are mild and meld well with the light sweetness.
Details
- ⏲️ Prep time: 15 min
- 🍳 Cook time: 45 min
- 🍽️ Servings: 12
Ingredients
- 1 1/2 cups all-purpose flour
- 1/2 tsp salt
- 2 tsp baking soda
- 1 cup sugar
- 3 large eggs
- 1/2 cup full fat milk
- 3/4 cup unfiltered olive oil
- 2/3 cup finely chopped raw, unsalted almonds
- 2 tsp lavender
- 1 Tbsp powdered sugar
Directions
- Preheat oven to 350 degrees F. Lightly butter an 8 inch baking pan.
- In a small bowl, whisk together flour, salt, and baking soda.
- In a large bowl, beat eggs and sugar until light colored and fluffy. Add milk.
- Slowly pour and stir in olive oil.
- Fold dry ingredients into the wet ingredients.
- Stir in the almonds and the lavender, reserving some flowers for garnish.
- Pour into prepared pan and bake for 45 min, or until toothpick comes out clean.
- Cool on wire rack, dust with powdered sugar and top with reserved lavender.
-
@ d61f3bc5:0da6ef4a
2025-05-06 01:37:28I remember the first gathering of Nostr devs two years ago in Costa Rica. We were all psyched because Nostr appeared to solve the problem of self-sovereign online identity and decentralized publishing. The protocol seemed well-suited for textual content, but it wasn't really designed to handle binary files, like images or video.
The Problem
When I publish a note that contains an image link, the note itself is resilient thanks to Nostr, but if the hosting service disappears or takes my image down, my note will be broken forever. We need a way to publish binary data without relying on a single hosting provider.
We were discussing how there really was no reliable solution to this problem even outside of Nostr. Peer-to-peer attempts like IPFS simply didn't work; they were hopelessly slow and unreliable in practice. Torrents worked for popular files like movies, but couldn't be relied on for general file hosting.
Awesome Blossom
A year later, I attended the Sovereign Engineering demo day in Madeira, organized by Pablo and Gigi. Many projects were presented over a three hour demo session that day, but one really stood out for me.
Introduced by hzrd149 and Stu Bowman, Blossom blew my mind because it showed how we can solve complex problems easily by simply relying on the fact that Nostr exists. Having an open user directory, with the corresponding social graph and web of trust is an incredible building block.
Since we can easily look up any user on Nostr and read their profile metadata, we can just get them to simply tell us where their files are stored. This, combined with hash-based addressing (borrowed from IPFS), is all we need to solve our problem.
How Blossom Works
The Blossom protocol (Blobs Stored Simply on Mediaservers) is formally defined in a series of BUDs (Blossom Upgrade Documents). Yes, Blossom is the most well-branded protocol in the history of protocols. Feel free to refer to the spec for details, but I will provide a high level explanation here.
The main idea behind Blossom can be summarized in three points:
- Users specify which media server(s) they use via their public Blossom settings published on Nostr;
- All files are uniquely addressable via hashes;
- If an app fails to load a file from the original URL, it simply goes to get it from the server(s) specified in the user's Blossom settings.
Just like Nostr itself, the Blossom protocol is dead-simple and it works!
Let's use this image as an example:
If you look at the URL for this image, you will notice that it looks like this:
blossom.primal.net/c1aa63f983a44185d039092912bfb7f33adcf63ed3cae371ebe6905da5f688d0.jpg
All Blossom URLs follow this format:
[server]/[file-hash].[extension]
The file hash is important because it uniquely identifies the file in question. Apps can use it to verify that the file they received is exactly the file they requested. It also gives us the ability to reliably get the same file from a different server.
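Because the path component is the SHA-256 hash of the blob, a client can check what it received against what it asked for. A minimal sketch:

```python
import hashlib

def verify_blob(data: bytes, expected_hash: str) -> bool:
    # The Blossom URL path is the SHA-256 of the blob's bytes,
    # so integrity can be verified without trusting the server.
    return hashlib.sha256(data).hexdigest() == expected_hash.lower()

expected = "c1aa63f983a44185d039092912bfb7f33adcf63ed3cae371ebe6905da5f688d0"
# data = bytes downloaded from any server
# verify_blob(data, expected) is True only if it is exactly the requested file
```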
Nostr users declare which media server(s) they use by publishing their Blossom settings. If I store my files on Server A, and they get removed, I can simply upload them to Server B, update my public Blossom settings, and all Blossom-capable apps will be able to find them at the new location. All my existing notes will continue to display media content without any issues.
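That re-resolution step is easy to sketch. In the snippet below, author_servers stands in for whatever list the app has read from the user's published Blossom settings; the lookup itself is left out, and the helper is my own illustration, not Primal's code.

```python
import re
import requests

def resolve_blob(original_url: str, author_servers: list[str]) -> bytes | None:
    """Try the original URL first, then the author's Blossom servers, matched by hash."""
    m = re.search(r"([0-9a-f]{64})(\.\w+)?$", original_url)
    if not m:
        return None
    blob_hash, ext = m.group(1), m.group(2) or ""

    candidates = [original_url] + [
        f"{server.rstrip('/')}/{blob_hash}{ext}" for server in author_servers
    ]
    for url in candidates:
        try:
            resp = requests.get(url, timeout=10)
            if resp.ok:
                return resp.content
        except requests.RequestException:
            continue  # try the next candidate server
    return None
```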
Blossom Mirroring
Let's face it, re-uploading files to another server after they got removed from the original server is not the best user experience. Most people wouldn't have the backups of all the files, and/or the desire to do this work.
This is where Blossom's mirroring feature comes in handy. In addition to the primary media server, a Blossom user can set one or more mirror servers. Under this setup, every time a file is uploaded to the primary server, the Nostr app issues a mirror request to the primary server, directing it to copy the file to all the specified mirrors. This way there is always a copy of all content on multiple servers and in case the primary becomes unavailable, Blossom-capable apps will automatically start loading from the mirror.
Mirrors are really easy to set up (you can do it in two clicks in Primal) and this arrangement ensures robust media handling without any central points of failure. Note that you can use professional media hosting services side by side with self-hosted backup servers that anyone can run at home.
Using Blossom Within Primal
Blossom is natively integrated into the entire Primal stack and enabled by default. If you are using Primal 2.2 or later, you don't need to do anything to enable Blossom, all your media uploads are blossoming already.
To enhance user privacy, all Primal apps use the "/media" endpoint per BUD-05, which strips all metadata from uploaded files before they are saved and optionally mirrored to other Blossom servers, per user settings. You can use any Blossom server as your primary media server in Primal, as well as set up any number of mirrors.
Conclusion
For such a simple protocol, Blossom gives us three major benefits:
- Verifiable authenticity. All Nostr notes are always signed by the note author. With Blossom, the signed note includes a unique hash for each referenced media file, making it impossible to falsify.
- File hosting redundancy. Having multiple live copies of referenced media files (via Blossom mirroring) greatly increases the resiliency of media content published on Nostr.
- Censorship resistance. Blossom enables us to seamlessly switch media hosting providers in case of censorship.
Thanks for reading; and enjoy! 🌸
-
@ e83b66a8:b0526c2b
2025-05-06 09:17:39I’m going to talk about Ethereum, hear me out.
Ethereum is a Turing complete consensus blockchain tokenised by its own currency Ether.
This idea by Vitalik Buterin was incredibly compelling and still is today, even though few real world use cases have emerged.
For example, as a company, I could pay a carbon tax in Ether, locked into a smart contract, if the temperature rises by more than "n" degrees year on year, based on a known agreed external (blind) oracle, say a weather station located near my factory.
Fantastic, we now have an automatic climate tax.
In reality, few realistic applications exist, however the idea is very compelling and many flocked to Ethereum as a promise of the future. This inflated its utility token “Ether” into stratospherically high prices.
This, in turn, attracted speculative investors and traders only looking at the price signal of the token and no longer considering the utility. This created a bubble which has gradually deflated over time.
This is why we are seeing Bitcoin, which only attempts to be money, succeed relative to Ethereum.
As Ethereum fails, and Bitcoin development strides on, an opportunity arises to try to do what Ethereum and all the other related altcoins have so far failed to do. Computational utility. And to do this on Bitcoin, the most successful “Crypto”.
The first unintended hijack of Ethereum's utility is the JPEGs we are seeing on our blockchain.
This latest drive to make Bitcoin Turing complete is potentially the final destination for developers keen to explore the potential of Bitcoins eco-system.
Perhaps Bitcoin is going to absorb all the altcoins. Perhaps that is the goal of Bitcoin's developers.
I don’t comment whether this is good or bad, I’m just exploring whether this may be the agenda.
-
@ e6817453:b0ac3c39
2024-12-07 14:54:46Introduction: Personal Knowledge Graphs and Linked Data
We will explore the world of personal knowledge graphs and discuss how they can be used to model complex information structures. Personal knowledge graphs aren’t just abstract collections of nodes and edges—they encode meaningful relationships, contextualizing data in ways that enrich our understanding of it. While the core structure might be a directed graph, we layer semantic meaning on top, enabling nuanced connections between data points.
The origin of knowledge graphs is deeply tied to concepts from linked data and the semantic web, ideas that emerged to better link scattered pieces of information across the web. This approach created an infrastructure where data islands could connect — facilitating everything from more insightful AI to improved personal data management.
In this article, we will explore how these ideas have evolved into tools for modeling AI’s semantic memory and look at how knowledge graphs can serve as a flexible foundation for encoding rich data contexts. We’ll specifically discuss three major paradigms: RDF (Resource Description Framework), property graphs, and a third way of modeling entities as graphs of graphs. Let’s get started.
Intro to RDF
The Resource Description Framework (RDF) has been one of the fundamental standards for linked data and knowledge graphs. RDF allows data to be modeled as triples: subject, predicate, and object. Essentially, you can think of it as a structured way to describe relationships: “X has a Y called Z.” For instance, “Berlin has a population of 3.5 million.” This modeling approach is quite flexible because RDF uses unique identifiers — usually URIs — to point to data entities, making linking straightforward and coherent.
RDFS, or RDF Schema, extends RDF to provide a basic vocabulary to structure the data even more. This lets us describe not only individual nodes but also relationships among types of data entities, like defining a class hierarchy or setting properties. For example, you could say that “Berlin” is an instance of a “City” and that cities are types of “Geographical Entities.” This kind of organization helps establish semantic meaning within the graph.
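As a concrete illustration, here is how that Berlin example might look in Python with the rdflib library. The `http://example.org/` namespace and the property names are made up for the sketch; they are not part of any real vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # illustrative namespace, not a real vocabulary

g = Graph()

# "Berlin has a population of 3.5 million" as a subject-predicate-object triple
g.add((EX.Berlin, EX.population, Literal(3_500_000)))

# RDFS layering: Berlin is an instance of City, and City is a subclass of GeographicalEntity
g.add((EX.Berlin, RDF.type, EX.City))
g.add((EX.City, RDFS.subClassOf, EX.GeographicalEntity))

print(g.serialize(format="turtle"))
```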
RDF and Advanced Topics
Lists and Sets in RDF
RDF also provides tools to model more complex data structures such as lists and sets, enabling the grouping of nodes. This extension makes it easier to model more natural, human-like knowledge, for example, describing attributes of an entity that may have multiple values. By adding RDF Schema and OWL (Web Ontology Language), you gain even more expressive power — being able to define logical rules or even derive new relationships from existing data.
Graph of Graphs
A significant feature of RDF is the ability to form complex nested structures, often referred to as graphs of graphs. This allows you to create “named graphs,” essentially subgraphs that can be independently referenced. For example, you could create a named graph for a particular dataset describing Berlin and another for a different geographical area. Then, you could connect them, allowing for more modular and reusable knowledge modeling.
Property Graphs
While RDF provides a robust framework, it’s not always the easiest to work with due to its heavy reliance on linking everything explicitly. This is where property graphs come into play. Property graphs are less focused on linking everything through triples and allow more expressive properties directly within nodes and edges.
For example, instead of using triples to represent each detail, a property graph might let you store all properties about an entity (e.g., “Berlin”) directly in a single node. This makes property graphs more intuitive for many developers and engineers because they more closely resemble object-oriented structures: you have entities (nodes) that possess attributes (properties) and are connected to other entities through relationships (edges).
The significant benefit here is a condensed representation, which speeds up traversal and queries in some scenarios. However, this also introduces a trade-off: while property graphs are more straightforward to query and maintain, they lack some complex relationship modeling features RDF offers, particularly when connecting properties to each other.
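For contrast, here is a rough sketch of the same data in property-graph style. networkx is used purely as a stand-in for a property-graph database, and the node and edge attribute names are invented for the example.

```python
import networkx as nx

# Property-graph style: attributes live directly on the nodes and edges
g = nx.MultiDiGraph()

g.add_node("Berlin", kind="City", population=3_500_000)
g.add_node("Germany", kind="Country")

# Relationships (edges) can carry their own properties too
g.add_edge("Berlin", "Germany", relation="CAPITAL_OF", since=1990)

print(g.nodes["Berlin"])          # all of the entity's properties in one place
print(list(g.edges(data=True)))   # edges with their attribute dictionaries
```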
Graph of Graphs and Subgraphs for Entity Modeling
A third approach — which takes elements from RDF and property graphs — involves modeling entities using subgraphs or nested graphs. In this model, each entity can be represented as a graph. This allows for a detailed and flexible description of attributes without exploding every detail into individual triples or lumping them all together into properties.
For instance, consider a person entity with a complex employment history. Instead of representing every employment detail in one node (as in a property graph), or as several linked nodes (as in RDF), you can treat the employment history as a subgraph. This subgraph could then contain nodes for different jobs, each linked with specific properties and connections. This approach keeps the complexity where it belongs and provides better flexibility when new attributes or entities need to be added.
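A bare-bones sketch of that employment-history example, using plain Python dictionaries so the nesting stays visible; the identifiers and field names are assumptions made up for the illustration.

```python
# Each entity is itself a small graph; nested structure stays where it belongs.
person = {
    "id": "person:alice",                      # illustrative identifier
    "name": "Alice",
    "employment_history": {                    # a subgraph owned by the person entity
        "nodes": {
            "job:acme":   {"role": "Engineer", "employer": "ACME", "from": 2015, "to": 2019},
            "job:globex": {"role": "Lead Engineer", "employer": "Globex", "from": 2019, "to": None},
        },
        "edges": [
            ("job:acme", "job:globex", {"relation": "FOLLOWED_BY"}),
        ],
    },
}

# New attributes or jobs can be added inside the subgraph without touching the rest
person["employment_history"]["nodes"]["job:acme"]["location"] = "Berlin"
```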
Hypergraphs and Metagraphs
When discussing more advanced forms of graphs, we encounter hypergraphs and metagraphs. These take the idea of relationships to a new level. A hypergraph allows an edge to connect more than two nodes, which is extremely useful when modeling scenarios where relationships aren’t just pairwise. For example, a “Project” could connect multiple “People,” “Resources,” and “Outcomes,” all in a single edge. This way, hypergraphs help in reducing the complexity of modeling high-order relationships.
Metagraphs, on the other hand, enable nodes and edges to themselves be represented as graphs. This is an extremely powerful feature when we consider the needs of artificial intelligence, as it allows for the modeling of relationships between relationships, an essential aspect for any system that needs to capture not just facts, but their interdependencies and contexts.
Balancing Structure and Properties
One of the recurring challenges when modeling knowledge is finding the balance between structure and properties. With RDF, you get high flexibility and standardization, but complexity can quickly escalate as you decompose everything into triples. Property graphs simplify the representation by using attributes but lose out on the depth of connection modeling. Meanwhile, the graph-of-graphs approach and hypergraphs offer advanced modeling capabilities at the cost of increased computational complexity.
So, how do you decide which model to use? It comes down to your use case. RDF and nested graphs are strong contenders if you need deep linkage and are working with highly variable data. For more straightforward, engineer-friendly modeling, property graphs shine. And when dealing with very complex multi-way relationships or meta-level knowledge, hypergraphs and metagraphs provide the necessary tools.
The key takeaway is that no single approach is perfect. Instead, it’s all about the modeling goals: how do you want to query the graph, what relationships are meaningful, and how much complexity are you willing to manage?
Conclusion
Modeling AI semantic memory using knowledge graphs is a challenging but rewarding process. The different approaches — RDF, property graphs, and advanced graph modeling techniques like nested graphs and hypergraphs — each offer unique strengths and weaknesses. Whether you are building a personal knowledge graph or scaling up to AI that integrates multiple streams of linked data, it’s essential to understand the trade-offs each approach brings.
In the end, the choice of representation comes down to the nature of your data and your specific needs for querying and maintaining semantic relationships. The world of knowledge graphs is vast, with many tools and frameworks to explore. Stay connected and keep experimenting to find the balance that works for your projects.
-
@ 9c35fe6b:5977e45b
2025-05-06 08:05:26The Hapi V Nile Cruise offers an exceptional opportunity to connect with Egypt's timeless charm. With ETB Tours Egypt, you can experience this journey in comfort and style, traveling between Luxor and Aswan where history comes alive on the banks of the Nile.
A Journey Through Ancient Time
Step aboard the Hapi V and sail past iconic temples, ancient monuments, and vibrant Nubian villages. This Nile Cruise Luxor Aswan itinerary is carefully planned to showcase Egypt’s most legendary sights while giving travelers the comfort of modern amenities. Combine this voyage with Egypt vacation packages to enhance your stay.
Comfort Onboard Every Step of the Way
Hapi V provides elegant accommodations, fine dining, and attentive service, making your cruise both relaxing and enriching. Whether you're enjoying your cabin’s Nile view or lounging on the sundeck, every moment is designed for your comfort. With All inclusive Egypt vacations, guests enjoy meals, excursions, and guided tours included.
Ideal for Every Type of Traveler
From couples and families to solo adventurers, the Hapi V cruise caters to a variety of travelers. ETB Tours Egypt offers customizable options including Egypt private tours for those seeking a more intimate experience. For budget-conscious explorers, there are also Egypt budget tours that deliver excellent value without compromising the experience.
Seamless Planning with Expert Guidance
Whether you're a first-time visitor or returning to Egypt’s wonders, ETB Tours Egypt simplifies your planning with tailored Egypt travel packages. Add a few days in Cairo or the Red Sea to your itinerary to complete the adventure.
To Contact Us:
E-Mail: info@etbtours.com
Mobile & WhatsApp: +20 10 67569955 - +201021100873
Address: 4 El Lebeny Axis, Nazlet Al Batran, Al Haram, Giza, Egypt
-
@ e6817453:b0ac3c39
2024-12-07 14:52:47This piece is about temporal semantics and temporal, time-aware knowledge graphs. We have different memory models for artificial intelligence agents. We all try to mimic somehow how the brain works, or at least how the declarative memory of the brain works. We have the split of episodic memory and semantic memory. And we also have a lot of theories, right?
Declarative Memory of the Human Brain
How is semantic memory formed? We all know that our brain stores semantic memory quite close to the concept we have of personal knowledge graphs: it’s connected entities. They form connections with each other, and all those things. So far, so good. And then we have a lot of concepts for how episodic memory and our experiences get transmitted to the semantic memory:
- hippocampus indexing and retrieval
- semanticization of episodic memories
- episodic-semantic shift theory
They all give a different perspective on how different parts of declarative memory cooperate.
We know that episodic memories get semanticized over time. You have semantic knowledge without the notion of time, and probably, your episodic memory is just decayed.
But, you know, it’s still an open question:
do we want to mimic an AI agent’s memory as a human brain memory, or do we want to create something different?
It’s an open question to which we have no good answer. And if you go to the theory of neuroscience and check how episodic and semantic memory interfere, you will still find a lot of theories, yeah?
Some of them say that you have the hippocampus that keeps the indexes of the memory. Some others will say that you semanticize the episodic memory. Some others say that you have some separate process that digests the episodes and experiences into semantics. But all of them agree on the point that these are operationally two separate areas of memory, and even two separate regions of the brain, and the semantic one is, let’s say, more protected.
So it’s harder to forget the semantical facts than the episodes and everything. And what I’m thinking about for a long time, it’s this, you know, the semantic memory.
Temporal Semantics
It’s memory about the facts, but you somehow mix the time information with the semantics. I already described a lot of things, including how we could combine time with knowledge graphs and how people do it.
There are multiple ways we could persist such information, but we all hit the wall because the complexity of time and the semantics of time are highly complex concepts.
Time in a Semantic context is not a timestamp.
What I mean is that when you have a fact, and you just mention that I was there at this particular moment, like, I don’t know, 15:40 on Monday, it’s already vague because we don’t know which Monday, right? So you need to give the exact date, but usually, you do not have experiences like that.
You do not record your memories like that, except you do the journaling and all of the things. So, usually, you have no direct time references. What I mean is that you could say that I was there and it was some event, blah, blah, blah.
Somehow, we form a chain of events that connect with each other and maybe will be connected to some period of time if we are lucky enough. This means that we could not easily represent temporal-aware information as just a timestamp or validity and all of the things.
For sure, the validity of the knowledge graphs (a simple quintuple with start and end dates) is a big topic, and it could solve a lot of things. It could solve a lot of the time cases. It’s super simple because you give the start and end dates, and you are done, but it does not handle facts that have a relative time or only indirect time information. It could solve many use cases but struggles with facts in an indirect temporal context. I like the simplicity of this idea. But the problem with this approach is that in most cases, we simply don’t have these timestamps. We don’t have the timestamp where this information starts and ends. And it does not model many events in our life, especially if you have processes, ongoing activities, or recurrent events.
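A small sketch of the two situations just described, with field names invented for the illustration: one fact with explicit start/end validity, and one that only has a relative position in a chain of other facts.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Fact:
    fid: str
    subject: str
    predicate: str
    obj: str
    valid_from: Optional[str] = None          # explicit validity, when we are lucky enough to have it
    valid_to: Optional[str] = None
    after: List[str] = field(default_factory=list)  # otherwise, only a relative position in the chain

# The easy case: a quintuple-style fact with start and end dates
f1 = Fact("f1", "alice", "lives_in", "Berlin", valid_from="2019-03-01", valid_to="2023-07-15")

# The common case: no timestamps at all, only "this came after that"
f2 = Fact("f2", "alice", "moved_to", "Berlin")
f3 = Fact("f3", "alice", "started_job_at", "Globex", after=["f2"])
```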
I’m more about thinking about the time of semantics, where you have a time model as a hybrid clock or some global clock that does the partial ordering of the events. This means that you have the chain of experiences and the chain of facts that have different time contexts.
We could deduct the time from this chain of the events. But it’s a big, big topic for the research. But what I want to achieve, actually, it’s not separation on episodic and semantic memory. It’s having something in between.
Blockchain of connected events and facts
I call it temporal-aware semantics or time-aware knowledge graphs, where we could encode the semantic fact together with the time component. I doubt that time should be a simple timestamp or the region between two timestamps. For me, it is more a chain of facts that have a partial order and form a blockchain-like database, or a partially ordered acyclic graph of facts that are temporally connected. We could have some notion of time that is understandable to the agent, and a model that allows us to order the events, focus on what the agent knows, order this time knowledge, and create the chains of events.
Time anchors
We may have a particular time in the chain that allows us to arrange a more concrete time for the rest of the events. But it’s still an open topic for research. The temporal semantics gets split into a couple of domains. One domain is how to add time to the knowledge graphs. We already have many different solutions. I described them in my previous articles.
Another domain is the agent's memory and how the memory of the artificial intelligence treats time. This one is much more complex, because here we cannot operate with simple timestamps. We need a representation of time that is understandable by the model and by the agent that will work with this model. And this one is a way bigger topic for research.
-
@ 2b24a1fa:17750f64
2025-05-06 07:35:01A column by Michael Sailer, airing every first Friday on Radio München and available to read at sailersblog.de.
https://soundcloud.com/radiomuenchen/belastigungen-35-das-ist-nicht-meine-regierung?
-
@ 34f1ddab:2ca0cf7c
2025-05-05 22:59:38Losing access to your cryptocurrency can feel like losing a part of your future. Whether it’s a forgotten password, a damaged seed backup, or simply one wrong transfer, the stress can be overwhelming. But there’s a silver lining — crypptrcver.com is here to help! With our expert-led recovery services, you can reclaim your lost Bitcoin and other cryptos safely and swiftly.
Why Trust Crypt Recver? 🤝 🛠️ Expert Recovery Solutions At Crypt Recver, we specialize in resolving some of the most complex wallet-related issues. Our team of skilled engineers has the tools and expertise to tackle:
- Partially lost or forgotten seed phrases
- Extracting funds from outdated or invalid wallet addresses
- Recovering data from damaged hardware wallets
- Restoring coins from old or unsupported wallet formats

You’re not just getting a service; you’re gaining a partner in your cryptocurrency journey.
🚀 Fast and Efficient Recovery We understand that time is critical in crypto recovery. Our optimized systems ensure that you can regain access to your funds quickly, aiming for speed without sacrificing security. With a 90%+ success rate, you can trust us to fight against the clock on your behalf.
🔒 Privacy is Our Priority Your confidentiality matters. Every recovery session is handled with the utmost care, ensuring all processes are encrypted and confidential. You can rest easy, knowing your sensitive information stays private.
💻 Advanced Technology Our proprietary tools and brute-force optimization techniques allow for maximum efficiency in recovery. No matter how challenging your case may be, our technology is designed to give you the best chance at getting your crypto back.
Our Recovery Services Include: 📈
- Bitcoin Recovery: Lost access to your Bitcoin wallet? We help recover lost wallets, private keys, and passphrases.
- Transaction Recovery: Mistakes happen — whether it’s an incorrect wallet address or a lost password, let us handle the recovery.
- Cold Wallet Restoration: If your cold wallet is failing, we can safely extract your assets and migrate them into a secure, new wallet.
- Private Key Generation: Lost your private key? Don’t worry. Our experts can help you regain control using advanced methods — all while ensuring your privacy remains intact.

⚠️ What We Don’t Do
While we can handle many scenarios, there are some limitations. For example, we cannot recover funds stored in custodial wallets, or cases where there is a complete loss of four or more seed words without any partial info available. We’re transparent about what’s possible, so you know what to expect.
Don’t Let Lost Crypto Hold You Back! ⏳ Did you know that 3 to 3.4 million BTC — nearly 20% of the total supply — are estimated to be permanently lost? Don’t become part of that statistic! Whether it’s due to a forgotten password, sending funds to the wrong address, or damaged drives, we can help you navigate through it all.
🛡️ Real-Time Dust Attack Protection Protecting your privacy goes beyond just recovery. Our services include dust attack protection, which keeps your activity anonymous and your funds secure. Our suite will shield your identity from unwanted tracking, ransomware, and phishing attempts.
🎉 Start Your Recovery Journey Today! Are you ready to reclaim your lost crypto? Don’t wait until it’s too late!
👉 Request Wallet Recovery Help Now! crypptrcver.com
📞 Need Immediate Assistance? Connect with Us! For real-time support or questions, reach out to our dedicated team on:
✉️ Telegram: Chat with Us on Telegram @crypptrcver 💬 WhatsApp: Message Us on WhatsApp Crypt Recver is your trusted partner in the world of cryptocurrency recovery. Let us turn your challenges into victories. Don’t hesitate — your crypto future starts now! 🚀✨
Act fast and secure your digital assets with Crypt Recver!
-
@ a39d19ec:3d88f61e
2024-11-21 12:05:09A state-controlled money supply can influence the development of socialist policies and practices in various ways. Although the relationship is not deterministic, state control over the money supply can contribute to a larger role of the state in the economy and facilitate the implementation of socialist ideals.
Fiscal Policy Capabilities
When the state manages the money supply, it gains the ability to implement fiscal policies that can lead to an expansion of social programs and welfare initiatives. Funding these programs by creating money can enhance the state's influence over the economy and move it closer to a socialist model. The Soviet Union, for instance, had a centralized banking system that enabled the state to fund massive industrialization and social programs, significantly expanding the state's role in the economy.
Wealth Redistribution
Controlling the money supply can also allow the state to influence economic inequality through monetary policies, effectively redistributing wealth and reducing income disparities. By implementing low-interest loans or providing financial assistance to disadvantaged groups, the state can narrow the wealth gap and promote social equality, as seen in many European welfare states.
Central Planning
A state-controlled money supply can contribute to increased central planning, as the state gains more influence over the economy. Central banks, which are state-owned or heavily influenced by the state, play a crucial role in managing the money supply and facilitating central planning. This aligns with socialist principles that advocate for a planned economy where resources are allocated according to social needs rather than market forces.
Incentives for Staff
Staff members working in state institutions responsible for managing the money supply have various incentives to keep the system going. These incentives include job security, professional expertise and reputation, political alignment, regulatory capture, institutional inertia, and legal and administrative barriers. While these factors can differ among individuals, they can collectively contribute to the persistence of a state-controlled money supply system.
In conclusion, a state-controlled money supply can facilitate the development of socialist policies and practices by enabling fiscal policies, wealth redistribution, and central planning. The staff responsible for managing the money supply have diverse incentives to maintain the system, further ensuring its continuation. However, it is essential to note that many factors influence the trajectory of an economic system, and the relationship between state control over the money supply and socialism is not inevitable.
-
@ 57d1a264:69f1fee1
2025-05-06 06:00:25Album art didn’t always exist. In the early 1900s, recorded music was still a novelty, overshadowed by sales of sheet music. Early vinyl records were vastly different from what we think of today: discs were sold individually and could only hold up to four minutes of music per side. Sometimes, only one side of the record was used. One of the most popular records of 1910, for example, was “Come, Josephine, in My Flying Machine”: it clocked in at two minutes and 39 seconds.
The invention of album art can get lost in the story of technological mastery. But among all the factors that contributed to the rise of recorded music, it stands as one of the few that was wholly driven by creators themselves. Album art — first as marketing material, then as pure creative expression — turned an audio-only medium into a multi-sensory experience.
This is the story of the people who made music visible.
originally posted at https://stacker.news/items/972642
-
@ a39d19ec:3d88f61e
2024-11-17 10:48:56This week's functional 3d print is the "Dino Clip".
Dino Clip
I printed it some years ago for my son, so he would have his own clip for cereal bags.
Now it is used to hold a bag of dog food closed.
The design by "Sneaks" is a so-called "print in place". This means that the whole clip with moving parts is printed in one part, without the need for assembly after the print.
The clip is very strong, and I would print it again if I need a "heavy duty" clip for more rigid or big bags. Link to the file at Printables
-
@ c9badfea:610f861a
2025-05-05 20:16:29- Install PocketPal (it's free and open source)
- Launch the app, open the menu, and navigate to Models
- Download one or more models (e.g. Phi, Llama, Qwen)
- Once downloaded, tap Load to start chatting
ℹ️ Experiment with different models and their quantizations (Q4, Q6, Q8, etc.) to find the most suitable one
-
@ 57d1a264:69f1fee1
2025-05-06 05:49:01I don’t like garlic. It’s not a dislike for the taste in the moment, so much as an extreme dislike for the way it stays with you—sometimes for days—after a particularly garlicky meal.
Interestingly enough, both of my brothers love garlic. They roast it by itself and keep it at the ready so they can have a very strong garlic profile in their cooking. When I prepare a dish, I don’t even see garlic on the ingredient list. I’ve cut it out of my life so completely that my brain genuinely skips over it in recipes. While my brothers are looking for ways to sneak garlic into everything they make, I’m subconsciously avoiding it altogether.
A few years back, when I was digging intensely into how design systems mature, I stumbled on the concept of a design system origin story. There are two extreme origin stories and an infinite number of possibilities between. On one hand you have the grassroots system, where individuals working on digital products are simply trying to solve their own daily problems. They’re frustrated with having to go cut and paste elements from past designs or with recreating the same layouts over and over, so they start to work more systematically. On the other hand, you have the top down system, where leadership is directing teams to take a more systematic approach, often forming a small partially dedicated core team to tackle some centralized assets and guidelines for all to follow. The influences in those early days bias a design system in interesting and impactful ways.
We’ve established that there are a few types of bias that are either intentionally or unintentionally embedded into our design systems. Acknowledging this is a great first step. But, what’s the impact of this? Does it matter?
I believe there are a few impacts of design system bias, but there’s one that stands out. The bias in your design system makes some individuals feel the system is meant for them and others feel it’s not. This is a problem because a design system cannot live up to its expected value until it is broadly in use. If individuals feel your design system is not for them, they won’t use it. And, as you know, it doesn’t matter how good your design system is if nobody is using it.
originally posted at https://stacker.news/items/972641
-
@ 6a6be47b:3e74e3e1
2025-05-05 19:49:17Hi frens, you may or may not have noticed I’ve been MIA for about three weeks. I wanted to take a moment to explain where I’ve been, and why I stepped back for a bit.
At the start of the year, I was determined to make the most of it-and I still am. I’ve been pouring myself into several projects, especially my painting, which has really taken off. I’m so grateful for your support; none of this would be possible without you. But here’s where things got tricky. I started feeling like I had to “earn” every moment of rest, and that mindset snowballed. My health took a hit, and then one of my other projects completely derailed.
The stress piled up, and I felt like I was on the verge of burning out.
Don’t get me wrong-I’ve always welcomed change, even when it’s tough. And this time was no exception. The project I’d been working on for months took a 180, and all that effort seemed to go down the drain. I was frustrated, but I kept pushing, even though I was running on fumes. By the time I “earned” a break, I was too exhausted to enjoy it.
I managed to post a piece about Holy Week on my blog (which I’m really proud of-check it out if you haven’t!), but after that, I was done. I realized I needed to stop, not just for a day, but for a real break. When I finally took a few days off with my husband, it was clear how much I needed it. I was burned out, plain and simple. Now, I’m still putting myself back together. Mentally, I feel drained, but I’m stronger than I was a few weeks ago. I know I’m lucky and privileged to be able to take this time, and I’m grateful for the perspective it’s given me. I also realize that this isn’t just a “me” thing-most people have been here at some point. We keep going until there’s nothing left, then wonder why we feel like empty shells.
I’ll be back to drawing soon (knock on wood-don’t want to jinx it!). In the meantime, take care of yourselves, and godspeed.
-
@ c9badfea:610f861a
2025-05-05 19:34:45- Install Kiwix (it's free and open source)
- Download ZIM files from the Kiwix Library (you will find complete offline versions of Wikipedia, Stack Overflow, Bitcoin Wiki, DevDocs and many more)
- Open the downloaded ZIM files within the Kiwix app
ℹ️ You can also package any website using either Kiwix Zimit (online tool) or the Zimit Docker Container (for technical users)
ℹ️ `.zim` is the file format used for packaged websites
-
@ 57d1a264:69f1fee1
2025-05-06 05:37:29Design can’t be effective when squeezed into a decades-old process.
When the Agile Manifesto was inked in 2001, it was supposed to spark a revolution, and it did: by 2023, 71% of US companies were using Agile. The simple list of commitments to collaboration and adaptiveness branched into frameworks such as Scrum and Kanban.
“Agile” was about having a responsive mindset, not about which process you followed, but it became about which process you followed.
Agile was designed for engineering teams but spread to whole companies. Scaled frameworks emerged to coordinate Scrum teams, with a sprawling training and certification industry. In 2022, the enterprise Agile transformation industry was predicted to reach $142 billion by 2032.
originally posted at https://stacker.news/items/972640
-
@ 4ba8e86d:89d32de4
2024-11-14 09:17:14Tutorial made by nostr:nostr:npub1rc56x0ek0dd303eph523g3chm0wmrs5wdk6vs0ehd0m5fn8t7y4sqra3tk original post below:
Part 1: http://xh6liiypqffzwnu5734ucwps37tn2g6npthvugz3gdoqpikujju525yd.onion/263585/tutorial-debloat-de-celulares-android-via-adb-parte-1
Part 2: http://xh6liiypqffzwnu5734ucwps37tn2g6npthvugz3gdoqpikujju525yd.onion/index.php/263586/tutorial-debloat-de-celulares-android-via-adb-parte-2
When it comes to privacy on phones, one of the most commonly mentioned measures is the removal of bloatware from the device, also called debloating. The most effective way to do this is without a doubt switching the operating system. Custom ROMs such as LineageOS, GrapheneOS, Iodé, CalyxOS, etc. are already quite lean in this respect, especially when the G-Apps are not installed with the system. However, this practice can end up causing unwanted problems, such as the loss of device features and even incompatibility with banking apps, which makes that route more attractive for people who own more than one device and can dedicate one just to privacy. With that in mind, people who own only a single mobile device, who need those apps or features but at the same time share this pro-privacy outlook, look for a middle ground between keeping the stock ROM and not having their data collected by this bloatware. Fortunately, removing bloatware is possible and can be done via root or, in the way this article will cover, via adb.
What are bloatwares?
Bloatware is the combination of the words bloat + software, that is, a bloatware is basically a useless or easily replaceable program, placed on your device in advance by the manufacturer or carrier, that sits on your device just taking up storage space, consuming RAM and, worse, collecting your data and sending it to external servers, besides being one more point of vulnerability.
What is adb?
The Android Debug Bridge, or simply adb, is a tool that uses shell user permissions and allows commands to be sent from a computer to an Android device, requiring only that USB debugging is enabled. It can also be used directly on the phone from Android 11 onward, with Termux and wireless debugging (or wifi debugging). The tool works normally on devices without root, and it also works if the phone is in Recovery Mode.
Requirements:
For computers:
• USB debugging enabled on the phone; • A computer with adb; • A USB cable;
For phones:
• Wireless debugging (or wifi debugging) enabled on the phone; • Termux; • Android 11 or higher;
For both:
• The NetGuard firewall installed and configured on the phone; • A bloatware list for your device;
Enabling debugging:
To enable USB debugging on your device, look up how to enable the developer options for your model, and enable debugging there. For wireless debugging, it only needs to be enabled at the moment you connect the device to Termux.
Installing and configuring NetGuard
NetGuard can be installed through the Google Play Store itself, but preferably install it from F-Droid or GitHub to avoid telemetry.
F-Droid: https://f-droid.org/packages/eu.faircode.netguard/
Github: https://github.com/M66B/NetGuard/releases
Once installed, configure it as follows:
Settings → Defaults (white/blacklist) → enable the first 3 options (block wifi, block mobile data, and apply rules 'when screen is on');
Settings → Advanced options → enable the first two (manage system apps and log internet access);
With this, all apps will be blocked from accessing the internet, whether over wifi or mobile data, and on the app's main page you just allow network access for the apps you are going to use (if needed). Allow the app to run in the background without battery-optimization restrictions, so that it is already active when the phone starts.
Bloatware list
Not all bloatware is generic; there will be different bloatware depending on brand, model, Android version, and even region.
To get a bloatware list for your device, if your phone has already been around for a while, you will easily find ready-made lists just by searching for them. Suppose we have a Samsung Galaxy Note 10 Plus in hand; just search in your search engine for:
Samsung Galaxy Note 10 Plus bloatware list
These lists will probably already include all the bloatware from the various regions, saving you the work of looking for a more specific list.
If your phone is very recent and/or you cannot find a ready-made bloatware list, I must say you are in for a rough time, because it is a real pain to look up each application to learn what it does and whether it is essential to the system or easily replaceable.
Fair warning: later on, if you remove one of these applications that was essential to the system without knowing it, you may end up losing some important feature or, worse, the system may be broken after a reboot, forcing you to do a factory reset and repeat the whole process again.
Downloading adb on computers
To use the adb tool on computers, just download the package called SDK platform-tools, available through this link: https://developer.android.com/tools/releases/platform-tools. There you can get the download for Windows, Mac, and Linux.
Once downloaded, just extract the zipped file; inside it there is a folder called platform-tools, which only needs to be opened in the terminal to use adb.
Downloading adb on phones with Termux.
To use the adb tool directly on the phone, we first have to download the Termux app, which is a Linux terminal emulator and already has adb in its repository. You can find the app on the Google Play Store, but again I recommend downloading it from F-Droid or directly from the project's GitHub.
F-Droid: https://f-droid.org/en/packages/com.termux/
Github: https://github.com/termux/termux-app/releases
Debloat process
Before we start, it is important to make clear that you should not just go removing every possible bloatware right away without a second thought; after all, some of them need to be replaced first, may be essential to you for some activity or feature, or may even be irreplaceable.
Some examples of bloatware that must be replaced before removal are the launcher, since it is the system's graphical interface, and the keyboard, without which you can only type with an external keyboard. The launcher and keyboard can be replaced by any others; my personal recommendation is those that respect your privacy, such as Pie Launcher and Simple Launcher for the launcher, and OpenBoard and FlorisBoard for the keyboard, all open source and available on F-Droid.
Identify in the bloatware list which ones you like, need, or prefer not to replace; you are in no way obliged to remove every possible bloatware, so modify your system as you please. NetGuard lists all the phone's apps with their package names, so you can filter well which ones not to remove.
A clear example of irreplaceable bloatware, which therefore cannot be removed, is com.android.mtp, a protocol whose function is to assist communication between the device and a computer via USB, but which, for some reason, has network access and communicates frequently with external servers. For cases like this, the best solution is to block these bloatwares' network access with NetGuard.
MTP trying to communicate with external servers:
Running the adb shell
On the computer
Back up all your important files to external storage, and format your phone with a hard reset. After the reset, and with USB debugging enabled, connect your phone to the PC with a USB cable. Your device will most likely just start charging, so allow data transfer so that the computer can communicate normally with the phone.
On the PC, open the platform-tools folder in the terminal and run the following command:
./adb start-server
The result should be:
daemon not running; starting now at tcp:5037 daemon started successfully
And if nothing appears, run:
./adb kill-server
And start it again.
With adb connected to the phone, run:
./adb shell
so you can run commands directly on the device. In my case, my phone is a Redmi Note 8 Pro, codename Begonia.
So the result should be:
begonia:/ $
If an error like the following occurs:
adb: device unauthorized. This adb server’s $ADB_VENDOR_KEYS is not set Try ‘adb kill-server’ if that seems wrong. Otherwise check for a confirmation dialog on your device.
Check on the phone whether a prompt appeared asking you to authorize USB debugging; if so, authorize it and try again. If nothing appears, run the kill-server and repeat the process.
On the phone
After doing the same backup and hard reset process mentioned earlier, install Termux and, with it open, run the command:
pkg install android-tools
When the message “Do you want to continue? [Y/n]” appears, just press enter again to accept and finish the installation.
Now go to the developer options and enable wireless debugging. Inside the wireless debugging options there is a device-pairing option using a code, which will show you a pairing code along with an IP address and port, which will be used for the connection with Termux.
To make the process easier, I recommend opening both the settings and Termux at the same time and splitting the screen between the two apps, as shown below:
To pair Termux with the device, you do not need to type the IP shown; just replace it with “localhost”. The port and the pairing code, however, must be typed exactly as shown. Run:
adb pair localhost:port PairingCode
According to the image shown earlier, the command would be “adb pair localhost:41255 757495”.
With the device paired with Termux, you now just need to connect in order to run commands. To do this, run:
adb connect localhost:port
Note: the port you must provide in this command is not the same one shown with the pairing code, but the one shown on the main wireless debugging screen.
Done! Termux and adb successfully connected to the device; now just run the adb shell normally:
adb shell
Removal in practice
With the adb shell running, you are ready to remove the bloatware. In my case, I will only show the removal of one app (Google Maps), since the command is the same for any other, changing only the package name.
Inside NetGuard, checking the information for Google Maps:
We can see that even when not in use, and with the device's location turned off, the app is desperately trying to communicate with external servers and report who knows what. But no surprises so far; the important thing is that we can see that the Google Maps package name is com.google.android.apps.maps, and to remove it from the phone, just run:
pm uninstall --user 0 com.google.android.apps.maps
And done, bloatware removed! Now just repeat the process for the rest of the bloatware, changing only the package name.
To speed up the process, you can prepare a list of the commands in a notes file beforehand; when you paste it into the terminal, the commands will run one after another.
Example list:
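For illustration only, such a list might contain a handful of common Google packages; check your own device's bloatware list before removing anything, as these package names are examples, not a recommendation:
pm uninstall --user 0 com.google.android.apps.maps
pm uninstall --user 0 com.google.android.youtube
pm uninstall --user 0 com.google.android.apps.photos
pm uninstall --user 0 com.google.android.apps.docs
pm uninstall --user 0 com.google.android.gm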
If you have accidentally removed something you did not mean to, it is also possible to restore the package with the command:
cmd package install-existing package.name
Post-debloat
After cleaning your system as much as possible, restart the device. If it boots into recovery mode and cannot reboot normally, it means you removed some app that is “essential” to the system, and you will have to format the phone and repeat the whole removal again, this time removing only a few bloatwares at a time and rebooting the device until you find out which one cannot be removed. Yes, it is a lot of work... but who told you to want privacy?
If the device restarts normally after the removal, congratulations, now just use your phone as you see fit! Keep NetGuard always running so that the bloatware you could not remove will not communicate with external servers, start using open-source apps from F-Droid, and install other apps through the Aurora Store instead of the Google Play Store.
References: If you are an Australopithecus and found this guide difficult, here is a video lesson (3:14:40) by Anderson from the Ciberdef channel, going through the whole process: http://odysee.com/@zai:5/Como-remover-at%C3%A9-200-APLICATIVOS-que-colocam-a-sua-PRIVACIDADE-E-SEGURAN%C3%87A-em-risco.:4?lid=6d50f40314eee7e2f218536d9e5d300290931d23
Anderson's PDFs mentioned in the video lesson: credits to anon6837264 http://eternalcbrzpicytj4zyguygpmkjlkddxob7tptlr25cdipe5svyqoqd.onion/file/3863a834d29285d397b73a4af6fb1bbe67c888d72d30/t-05e63192d02ffd.pdf
Termux and adb installation process on the phone: https://youtu.be/APolZrPHSms
-
@ a10260a2:caa23e3e
2024-11-10 04:35:34nostr:npub1nkmta4dmsa7pj25762qxa6yqxvrhzn7ug0gz5frp9g7p3jdscnhsu049fn added support for classified listings (NIP-99) about a month ago and recently announced this update that allows for creating/editing listings and blog posts via the dashboard.
In other words, listings created on the website are also able to be viewed and edited on other Nostr apps like Amethyst and Shopstr. Interoperability FTW.
I took some screenshots to give you an idea of how things work.
The home page is clean with the ability to search for profiles by name, npub, or Nostr address (NIP-05).
Clicking login allows signing in with a browser extension.
The dashboard gives an overview of the amount of notes posted (both short and long form) and products listed.
Existing blog posts (i.e. long form notes) are synced.
Same for product listings. There’s a nice interface to create new ones and preview them before publishing.
That’s all for now. As you can see, super slick stuff!
Bullish on Cypher.
So much so I had to support the project and buy a subdomain. 😎
https://bullish.cypher.space
originally posted at https://stacker.news/items/760592
-
@ 35f3a26c:92ddf231
2024-11-09 17:10:57"Files" by Google new feature
"Files" by Google added a "feature"... "Smart Search", you can toggle it to OFF and it is highly recommended to do so.
Toggle the Smart Search to OFF, otherwise, google will search and index every picture, video and document in your device, no exceptions, anything you have ever photographed and you forgot, any document you have downloaded or article, etc...
How could this affect you?
Google is actively combating child abuse, and to that end it has built into its "AI" a very aggressive algorithm that searches for material it "THINKS" is related; as a result, the following content could be flagged:
- [ ] Pictures of you and your children in the beach
- [ ] Pictures or videos which are innocent in nature but the "AI" "thinks" are not
- [ ] Articles you may have save for research to write your next essay that have links to flagged information or sites
The results:
- [ ] Your google account will be canceled
- [ ] You will be flagged as a criminal across the digital world
You think this is nonsense? Think again: https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html
How to switch it off:
- Open files by Google
- Tap on Menu -> Settings
- Turn OFF Smart Search
But you can do more for your privacy and the security of your family
- Stop using Google apps; if possible, get rid of Google's OS and use GrapheneOS
- Go to Settings -> Apps
- Search for Files by Google
- Uninstall the app; if you can't, disable it
- Keep doing that with most Google apps that are not a must if you have not switched already to GrapheneOS
Remember, Google keeps advocating for privacy, but as many others have pointed out repeatedly, they are the first ones lobbying for the removal of your privacy by regulation and draconian laws, their hypocrisy knows no limits
Recommendation:
I would assume you have installed F-Droid on your Android, or Obtainium if you are more advanced. If so, consider "Simple File Manager Pro" by Tibor Kaputa; this dev has a suite of apps that cover basic needs, and the best feature in my opinion is that not one of his apps connects to the internet (contacts, gallery, files, phone, etc.).
Note: Like most people, we all love the convenience of technology; it makes our lives easier. However, our safety and our family's safety should come first. Between technology being misused and abused by corporations and cyber-criminals data mining and looking for easy targets to attack for profit, we need to keep our guard up. Learning is key: resist the use of new tech if you do not understand the privacy trade-offs, no matter how appealing and convenient it looks.
Please leave your comments with your favorite FOSS Files app!
-
@ 3bf0c63f:aefa459d
2024-11-07 14:56:17The case against edits
Direct edits are a centralizing force on Nostr, a slippery slope that should not be accepted.
Edits are fine in other, more specialized event kinds, but the `kind:1` space shouldn't be compromised with such a push towards centralization, because `kind:1` is the public square of Nostr, where all focus should be on decentralization and censorship-resistance.
- Why?
Edits introduce too much complexity. If edits are widespread, all clients now have to download dozens of extra events at the same time while users are browsing a big feed of notes which are already coming from dozens of different relays using complicated outbox-model-based querying, then for each event they have to open yet another subscription to these relays -- or perform some other complicated batching of subscriptions which then requires more complexity on the event handling side and then when associating these edits with the original events. I can only imagine this will hurt apps performance, but it definitely raises the barrier to entry and thus necessarily decreases Nostr decentralization.
Some clients may be implemented in a way such that they download tons of events and then store them in local databases, from which they then construct the feed that users see. Such clients may make edits potentially easier to deal with -- but this is hardly an answer to the point above, since such clients are already more complex to implement in the first place.
- What do you have against complex clients?
The point is not to say that all clients should be simple, but that it should be simple to write a client -- or at least as simple as physically possible.
You may not be thinking about it, but if you believe in the promise of Nostr then we should expect to see Nostr feeds in many other contexts other than on a big super app in a phone -- we should see Nostr notes being referenced from and injected in unrelated webpages, unrelated apps, hardware devices, comment sections and so on. All these micro-clients will have to implement some complicated edit-fetching logic now?
- But aren't we already fetching likes and zaps and other things, why not fetch edits too?
Likes, zaps and other similar things are optional. It's perfectly fine to use Nostr without seeing likes and/or zaps -- and, believe me, it does happen quite a lot. The point is basically that likes or zaps don't affect the content of the main post at all, while edits do.
- But edits are optional!
No, they are not optional. If edits become widespread they necessarily become mandatory. Any client that doesn't implement edits will be displaying false information to its users and their experience will be completely broken.
- That's fine, as people will just move to clients that support edits!
Exactly, that is what I expect to happen too, and this is why I am saying edits are a centralizing force that we should be fighting against, not embracing.
If you understand that edits are a centralizing force, then you must automatically agree that they aren't a desirable feature, given that if you are reading this now, with Nostr being so small, there is a 100% chance you care about decentralization and you're not just some kind of lazy influencer that is only doing this for money.
- All other social networks support editing!
This is not true at all. Bluesky has 10x more users than Nostr and doesn't support edits. Instagram doesn't support editing pictures after they're posted, and doesn't support editing comments. Tiktok doesn't support editing videos or comments after they're posted. YouTube doesn't support editing videos after they're posted. Most famously, email, the most widely used and widespread "social app" out there, does not support edits of any kind. Twitter didn't support edits for the first 15 years of its life, and, although some people complained, it didn't hurt the platform at all -- arguably it benefitted it.
If edits are such a straightforward feature to add that won't hurt performance, that won't introduce complexity, and also that is such an essential feature users could never live without them, then why don't these centralized platforms have edits on everything already? There must be something there.
- Eventually someone will implement edits anyway, so why bother to oppose edits now?
Once Nostr becomes big enough, maybe it will be already shielded from such centralizing forces by its sheer volume of users and quantity of clients, maybe not, we will see. All I'm saying is that we shouldn't just push for bad things now just because of a potential future in which they might come.
- The market will decide what is better.
The market has decided for Facebook, Instagram, Twitter and TikTok. If we were to follow what the market had decided we wouldn't be here, and you wouldn't be reading this post.
- OK, you have convinced me, edits are not good for the protocol. But what do we do about the users who just want to fix their typos?
There are many ways. The annotations spec, for example, provides a simple way to append things to a note without being a full-blown edit, and they fall back gracefully to normal replies in clients that don't implement the full annotations spec.
Eventually we could have annotations that are expressed in form of simple (human-readable?) diffs that can be applied directly to the post, but fall back, again, to comments.
Besides these, a very simple idea that hasn't been tried on Nostr yet is one that has been tried for emails and seems to work very well: delaying a post after the "submit" button is clicked and giving the user the opportunity to cancel and edit it again before it is actually posted.
Ultimately, if edits are so necessary, then maybe we could come up with a way to implement edits that is truly optional and falls back cleanly for clients that don't support them directly and don't hurt the protocol very much. Let's think about it and not rush towards defeat.
-
@ dd3548d4:cedd4a2c
2025-05-06 05:27:25Twelve Grounds | Dvādaśa Bhūmayaḥ | द्वादश भूमय
1 प्रस्थानी | Prasthānī | The Stage of Setting Out | A
2 विचारणी | Vicāraṇī | The Stage of Exploration | B
3 परिणीता | Pariṇītā | The Stage of Culmination | C
4 सुदुर्मेधा | Sudurmedhā | The Stage of Profound Wisdom | D
5 अभिनिष्क्रमणी | Abhiniṣkramaṇī | The Stage of Ascension | E
6 अभिमुखी | Abhimukhī | The Stage of Direct Approach | F
7 दुर्निवारणी | Durnivāraṇī | The Stage of Irresistibility | AA
8 अचला | Acalā | The Stage of Immovability | BB
9 साधुमती | Sādhumatī | The Stage of Pure Wisdom | CC
10 धर्ममेघा | Dharmameghā | The Stage of the Dharma Cloud | DD
11 निश्चयावस्था | Niścayāvasthā | The Stage of Certainty | EE
12 सर्वार्थसिद्धि | Sarvārthasiddhi | The Stage of Perfect Fulfillment | FF
each 12 1/2 x 16 1/8 [ inches ] | Saunders Waterford 300g/sq m
Homage to unconfined vastness, the primordial completeness of the three kayas.
-
@ 6e64b83c:94102ee8
2025-05-05 16:50:13Nostr-static is a powerful static site generator that transforms long-form Nostr content into beautiful, standalone websites. It makes your content accessible to everyone, even those not using Nostr clients. For more information check out my previous blog post How to Create a Blog Out of Nostr Long-Form Articles
What's New in Version 0.7?
RSS and Atom Feeds
Version 0.7 brings comprehensive feed support with both RSS and Atom formats. The system automatically generates feeds for your main content, individual profiles, and tag-specific pages. These feeds are seamlessly integrated into your site's header, making them easily discoverable by feed readers and content aggregators.
This feature bridges the gap between Nostr and traditional web publishing, allowing your content to reach readers who prefer feed readers or automated content distribution systems.
Smart Content Discovery
The new tag discovery system enhances your readers' experience by automatically finding and recommending relevant articles from the Nostr network. It works by:
- Analyzing the tags in your articles
- Fetching popular articles from Nostr that share these tags
- Using configurable weights to rank these articles based on:
- Engagement metrics (reactions, reposts, replies)
- Zap statistics (amount, unique zappers, average zap size)
- Content quality signals (report penalties)
This creates a dynamic "Recommended Articles" section that helps readers discover more content they might be interested in, all while staying within the Nostr ecosystem.
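As a rough illustration of how such configurable weights could combine into a single ranking score, here is a sketch. The field names and the formula are assumptions for illustration, not nostr-static's actual configuration keys; check the project's documentation for the real ones.

```typescript
// Illustrative scoring sketch for the tag-based recommendations described above.
interface ArticleStats {
  reactions: number;
  reposts: number;
  replies: number;
  zapAmountSats: number;
  uniqueZappers: number;
  reports: number;
}

// Assumed weight names; the real config keys may differ.
const weights = {
  reaction: 1,
  repost: 2,
  reply: 1.5,
  zapPerThousandSats: 3,
  uniqueZapper: 2,
  reportPenalty: -10,
};

function score(a: ArticleStats): number {
  return (
    a.reactions * weights.reaction +
    a.reposts * weights.repost +
    a.replies * weights.reply +
    (a.zapAmountSats / 1000) * weights.zapPerThousandSats +
    a.uniqueZappers * weights.uniqueZapper +
    a.reports * weights.reportPenalty
  );
}
```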
See the new features for yourself by visiting our demo at: https://blog.nostrize.me
-
@ e0a8cbd7:f642d154
2025-05-06 03:29:12I tried out "Nostr Web Bookmark Trend," a service for viewing and writing web bookmarks on the decentralized protocol Nostr.
Web bookmarking on Nostr is defined in "NIP-B0 Web Bookmarking · nostr-protocol/nips · GitHub".
I logged in with browser-extension authentication (NIP-07).
Opening "create new web bookmark" looks like this.
In the URL input field, the https:// prefix sits outside the text box, so you have to delete the https:// part of the URL before entering it, which is a bit tedious. ↓
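That URL handling likely follows from how NIP-B0 identifies bookmarks: as far as I understand the spec, the identifying d tag stores the URL without its scheme, which would explain why the client keeps https:// outside the input field. Below is a rough sketch of what such an event might look like; the kind number and the optional tags are illustrative and should be verified against the NIP itself.

```typescript
// Hypothetical shape of a NIP-B0 web bookmark event (unsigned, illustrative only).
const webBookmark = {
  kind: 39701, // addressable "web bookmark" kind as I understand NIP-B0 (verify against the spec)
  created_at: Math.floor(Date.now() / 1000),
  content: "A short comment about why this page is worth keeping.",
  tags: [
    ["d", "example.com/some/article"], // URL without the https:// scheme
    ["t", "nostr"],                    // optional topic tag, illustrative
  ],
};

console.log(JSON.stringify(webBookmark, null, 2));
```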
I posted one bookmark as a test.
Clicking an account name displays the list of bookmarks that user has registered.
That's my quick look at Nostr Web Bookmark Trend.
Note: this article was posted as a test of the "Nostr NIP-23 Markdown editor" and has the same content as "NostrでWeb bookmark - あたしンちのおとうさんの独り言". -
@ a367f9eb:0633efea
2024-11-05 08:48:41Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The reporting purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what they call “ChatBIT”.
The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I’m doubtful that today’s existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (cue up the computer-simulated war from the Star Trek episode “A Taste of Armageddon”), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOMB, Falcon, and Mistral available for download. In fact, there are over one million open-source LLMs available as of writing this post. With some decent hardware, every global citizen can download these codebases and run them on their computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are potentially trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American infrastructure, data, and yes, your credit history?
As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much many times.
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
Open-source matters
When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.
The various licensing schemes – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, many of you run the Linux operating system as the base for your Macbook or any other computer and use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also open-source, which has allowed thousands of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
Open-source is for your friends, and enemies
In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to Hugging Face, download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put export controls on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see The Crypto Wars). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
Limiting open-source threatens our own advancement
If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “Hundred Year Marathon” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with trillions of dollars’ worth of investments that span far beyond AI chatbots and skip logic protocols.
The theft of intellectual property at factories in Shenzhen, or in US courts by third-party litigation funding coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries shift to a war footing.
But these are separate issues from the ability of free people to create and share open-source code which we can all benefit from. In fact, if we want to continue our way of life and keep adding to global productivity and growth, we must defend open-source.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
Originally published on the website of the Consumer Choice Center.
-
@ a296b972:e5a7a2e8
2025-05-05 22:45:01When the Federal Republic of Germany was founded, in the aftermath of the Second World War, a democratic basic order was drawn up on part of the territory of the former German Reich (not the Third Reich!) at the initiative of the Western Allies, led by the USA as the strongest power. We have come to know and appreciate it as the Basic Law for the Federal Republic of Germany. Since people at that time, unlike today, were still very precise with language, the word "for" carries more weight than it is given today. Had the rump of the German Reich under Western Allied occupation been able to draw up a constitution on its own, it would not have been called a Basic Law (which by definition has a provisional character) but a constitution. And had that constitution been drawn up independently, it would have been called: Constitution of the Federal Republic of Germany.
Compare, for example: Costituzione della Repubblica Italiana, that is, the Constitution of the Italian Republic, and not Costituzione per la Repubblica Italiana.
It is understandable that the Western Allies' reservations, given the Nazi era, were so great that the "Germans" were not trusted to draw up a constitution on their own.
As a preventive safeguard, to make it impossible for any regime ever again to be in a position to seize power, the Office for the Protection of the Constitution (Verfassungsschutz) was founded as a control body. It is bound by the instructions of the Ministry of the Interior. The recent statement, made in the final stretch of Interior Minister Faeser's term, that the Verfassungsschutz is independent is a manipulative description meant to distract from the fact that the Ministry of the Interior is very much its superior. The word "independent" is meant to suggest autonomy, but it has no significance whatsoever in the hierarchy.
In 1949 a different zeitgeist prevailed. Values such as honesty, integrity, and decency still meant something different than they do today. Politicians were of a different stripe and largely held themselves to deciding and acting for the good of the people. These values persisted at least into the tenure of Chancellor Helmut Schmidt.
No one at the time could therefore imagine that this Verfassungsschutz, conceived as a control body, could one day be misused by politicians to try to eliminate opposition forces, as has happened with the classification of the AfD as confirmed right-wing extremist. Legally this has no consequences yet, but the primary aim is to damage the AfD's image in order to prevent further growth in its support. This kind of cunning did not yet occur in the thinking and the sense of honor of the politicians responsible back then.
The former people's parties, one could also say the legacy parties, have watched their prospects slipping away for some time now. The opposition has currently overtaken one former people's party in approval and has even become the strongest force. It currently represents around 10 million voters, and the trend is rising. And the implication that those voters consequently also voted in a confirmed right-wing extremist way, or are even confirmed right-wing extremists themselves, will probably not flatter them much.
In parallel, the legacy parties have captured the media landscape and are trying to silence unwelcome voices, by restricting freedom of expression whenever criticism is directed at them and by self-appointed rulings on what is truth and what is a lie, in order to stay in power at all costs.
This approach contradicts the democratic understanding that emerges from the, albeit "only," Basic Law rather than a constitution, and that has influenced and democratically shaped the post-war generations in the best sense.
From this perspective, the activities of the legacy parties can only be seen as an attack on democracy as these generations understand it.
Hence every attack by the legacy parties on democracy leads to the opposition gaining more and more votes, and it will probably continue to do so.
It is hard to see why the legacy parties do not arrive at the simplest conceivable solution for regaining trust in their policies: making policy that corresponds to the will of the citizens. By doing the opposite, the people's representatives turn themselves into representatives without the backing of the people, and one has to ask whose interests they are really representing at the moment. At best their own, at worst those of the globally operating deep state whispering in their ears what they have to do.
With every sensible decision that corresponded to the will of the sovereign, they would progressively weaken the opposition. Since this is not happening, one can only conclude that self-destructive, suicidal forces have taken hold here as well. It is like an addiction one can no longer let go of.
As long as the legacy parties are unable to perceive and take seriously the dissatisfaction among the population, they will strengthen the opposition and have to resort to ever more rigid measures to preserve their power, thereby moving further and further away from democratic conditions, and precisely in the direction the legacy parties, in their ideological confusion, warn against.
On the part of the opposition, taken as a whole, there are no signs that democracy is to be abolished; on the contrary, it argues for more citizen participation, which is a sure sign of democratic intentions.
From the perspective of the legacy parties, the firewall makes sense, because it protects them from their own loss of power. The fall of the Berlin Wall really ought to be a warning to them.
In fairness, it must not be concealed that there are some misguided individuals within the opposition, though it would be interesting to learn which of them were planted as informants of the Verfassungsschutz. To use them, however, as a pretext for placing the opposition under general suspicion is diametrically opposed to democratic conduct.
In this way the Basic Law is not being protected but bent almost to its breaking point.
The classification of the opposition as confirmed right-wing extremist rests on a reportedly 1,000-page document that apparently only a select circle is to be allowed to see. That circle does not include the population, which, once again, is evidently not to be unsettled. And it certainly does not include those whom it concerns, namely the opposition.
The clear questionability of the Verfassungsschutz's activities would be harder to establish if there were, at the same time, parties classified as confirmed left-wing extremist, or at least as suspected cases of left-wing extremism. Not entirely unjustified grounds for this could certainly be found, if the political will were there.
The strangely controversial installation of the president of the Federal Office for the Protection of the Constitution (strictly speaking, for the protection of the Basic Law) also leaves questions open.
In general, there ought to be an independent review of whether the separation of powers in Germany is still guaranteed, since the apparent interplay within the judiciary, the legislature, and the fourth estate, the media, gives cause for doubt.
These doubts do not endanger democracy; on the contrary, it is a democratic duty to keep a critical eye on those responsible and on whether decisions and actions are taken in the interest of the sovereign. Such doubts could be dispelled by clearly proving that everything is in order.
That would primarily be the task of the legacy media, which are currently distinguishing themselves by total failure, because everything is connected to everything else, everyone knows everyone, and over the years things have been arranged so that people prefer to keep to themselves and hop from one comfortable post to another across camps.
Perhaps it is even necessary, for an independent review, to call the Allies, including Russia, back onto the scene once more after roughly 80 years, to draw an interim balance, so to speak, of how far the once-established democratic system has proven itself and whether it is currently still being implemented and lived in its original sense. It can be assumed that an enormous potential for improvement would come to light here.
Many citizens in Germany wish to live once again in a democracy that actually deserves the name. They want to be able to speak their minds freely again, on any subject, to discuss things with one another, even to talk nonsense now and then, without the block warden immediately reporting them on a denunciation portal, without running the risk of having their bank account closed, and without getting a visit at six in the morning that does not even bring fresh bread rolls.
This article was written with the Pareto client.
(Image from Pixabay)
-
@ b154080c:00027cc7
2025-05-06 03:01:47Introduction
In the ancient times of Israel, masculinity found its true embodiment in the courageous story of Daniel. Amidst the foreign land of Babylon, Daniel stood firm in his convictions, showcasing strength, and dedication to his beliefs.
Despite living in a culture that sought to diminish his faith, Daniel refused to bow before idols or false deities. His defiance challenged societal expectations, revealing a masculinity that transcended worldly norms. Rooted in his unshakable belief in the one true God, Daniel's resolve remained unyielding. Facing the wrath of the king, Daniel fearlessly stood before Nebuchadnezzar, humbly declaring his allegiance to God alone. Cast into a blazing furnace as punishment, Daniel emerged unharmed. God's angel shielded him from the scorching flames, proving that his faith made him invincible. Witnessing this display of masculinity, Nebuchadnezzar acknowledged the greatness of Daniel's God, bringing about a profound transformation.
Daniel's story serves as a testament to the essence of masculinity—a resolute dedication to one's convictions, the courage to defy societal expectations, and a commitment to truth. His faith and devotion inspire generations, exemplifying the power of masculine conviction.
There have been countless instances throughout history where acts of courage have taken place on a spectrum. Although both men and women can display such acts, history has shown that resolve, courage, and bravery have predominantly resided within the realm of masculinity. The Apostle Paul himself concluded the book of 1 Corinthians by saying, "Be watchful, stand firm in the faith, act like men, be strong. Let all that you do be done in love" (16:13-14). By combining this passage with the numerous accounts of provision, battle, sacrifice, and honor, it becomes evident that God has designed inherent and very important differences within the male gender.
The Bible presents us with inspiring examples of both courageous women, such as Deborah, Rahab, and Esther, and valiant men, including Joshua, Gideon, Samson, David, Jonathan, Nehemiah, the Prophets, the twelve Apostles, and above all, Jesus Himself. While these accounts acknowledge the remarkable contributions of women, they predominantly highlight the male figures who exemplify strength, boldness, courage, and a resolute sense of responsibility. Throughout its pages, the Bible paints a vivid picture of masculinity's profound impact and enduring significance which we must embrace.
Jesus’ Masculinity
Jesus exhibited remarkable courage throughout many of his acts, and it is through his expression of masculinity that this courage shines even brighter. Jesus' masculinity played a crucial role in enabling him to display great bravery and determination in fulfilling his mission. However, it's important to note that Jesus redefines masculinity beyond physical strength or dominance, embracing resilience, self-sacrifice, and unyielding conviction as its defining qualities.
Jesus' courage stemmed from his deep understanding of his purpose and his unshakable faith in his Father's plan. He fearlessly challenged the religious authorities of his time, calling out hypocrisy and speaking truth to power. Despite facing opposition and hostility, Jesus stood firm in his convictions, undeterred by the threats and ridicule he encountered. Jesus' embodiment of masculinity highlights the transformative power it can have when rooted in love and compassion.
Modern Culture Poisoning the Church
It is important to realize the true masculinity of Jesus and the example that he has set for us in this regard. Unfortunately, I often see a tendency nowadays to downplay Jesus' masculinity and instead depict Him in a more feminized manner.
In both our culture and the modern church, there is a tendency to present a version of Jesus that deviates from the biblical portrayal. Perhaps you've come across people who refer to Jesus as their "best friend" or even draw comparisons between their relationship with Him and that of a "boyfriend.” This is in fact very unbiblical. The Bible never presents our love for God using such romantic or erotic language. While the men depicted in Scripture certainly loved God, they were never portrayed as being desperate for Him or romantically in love with Him. People are often taught a very shallow and weak portrayal of Him.
In the United States, particularly in the context of flourishing Protestantism, the shift from considering the community as a whole to focusing on the individual has led to a rise in strong individualistic beliefs which has resulted in a diminished sense of community within the Catholic Church. When the focal point of Catholicism becomes "Jesus and me," it opens the door to a mindset of being "spiritual" rather than "religious.” Attending church becomes a matter of personal choice, and faith no longer necessarily influences or intersects with areas such as business or politics. The sole emphasis becomes on one's personal relationship with Christ, prioritizing individual salvation over communal or global redemption. The vision of the kingdom of God taking shape on earth also becomes less urgent, as the emphasis shifts towards a faith centered on transcendence, emotions, and sentiment, rather than tangible actions.
The perception of Jesus' masculinity has been negatively impacted by the trend of feminizing Him, which has contributed to a decline in the courage displayed by men today. This shift can be attributed to various factors that have influenced societal perspectives.
In contrast to the promises of Jesus, which include suffering, trials, and pain, it is often only presented to them that Christianity is the solution to these hardships. Instead of acknowledging the reality of challenges, the contemporary portrayal of Christianity tends to market it as the antidote to suffering and pain. It is important to recognize and reflect upon the significant difference between how Jesus called His disciples and the prevailing emphasis on personal relationships with Him today. Instead of inviting them to have a personal connection, He simply said, "Follow me." Understanding this distinction is crucial in our understanding of Jesus' call to discipleship. "Follow me" implies a sense of purpose, a shared mission or goal to pursue. This contrast highlights the divergence between the original intent of discipleship and the way it is often portrayed around me nowadays.
I want to emphasize that I am by no means denying the significance of having a personal relationship with Christ. On the contrary, I am simply highlighting the importance of recognizing that personal relationships, including our relationship with Christ, require more than just superficial connections. They demand a deep sense of faith, trust, and communion with Him. Drawing inspiration from the courageous example of Jesus, who fearlessly confronted societal norms and spoke truth to power, our relationship with Him can empower us to embrace courage in our own lives. Just as Jesus fearlessly faced opposition, persecution, and ultimately sacrificed Himself for the sake of others, our connection with Him can embolden us to stand up for what is right, to live out our faith boldly, and to face life's challenges with strength. It is not a casual or complacent association but a courageous and transformative bond that empowers us to live out our faith with conviction and to impact the world around us positively.
As modern sermons take center stage, it's become apparent that there is a tendency to downplay the contrasts found in the teachings of the Bible. As mentions of heaven and hell, sin and life, grace and justice, as well as the analogies involving battles and soldiers for Christ have always been very prevalent, they have become way less common nowadays. We hear fewer calls for Catholics to embrace their crosses and passionately commit themselves to the cause of the gospel and the well-being of others. Instead, the spotlight has shifted towards how the gospel can serve as a tool for personal growth and fulfillment, focusing on self-realization. The gospel is often presented as a therapeutic treatment rather than a heroic challenge. The emphasis lies on the rewards rather than the obstacles, creating the idea of all gain, no pain (lol).
The rise of praise and worship music has also brought about significant changes in people's perception of Christ. While traditional hymns focused on singing about God, emphasizing His greatness, power, and distinctiveness, praise and worship music takes a different approach. It presents God as a close companion, an intimate presence by our side, emphasizing His love and care for us. This shift in emphasis, while not inherently negative, certainly plays a substantial role in shaping our understanding of Christ's nature and relationship with us.
Jesus is the Epitome of Masculinity
I believe Jesus stands as the epitome of masculinity, offering an unrivaled example for men to emulate. Through His life and teachings, He reveals the true essence of what it means to be a man. He leads with courage, facing challenges head-on without hesitation. His fearlessness shines through as He confronts opposition and stands firm in His convictions. Moreover, His love is not self-serving but sacrificial, displayed vividly through His ultimate act of giving His own life for the sake of others. And in the face of adversity, His resolve remains unshakeable, inspiring men to stand strong in their beliefs and principles. Jesus, in His entirety, embodies the essence of true masculinity, setting an unparalleled standard that us men must aspire to.
Around me, I’m often seeing a tendency to shy away from addressing challenging subjects with resolute conviction. Rather than speaking with clarity and certainty, there is a preference for using vague language and ambiguous statements to navigate sensitive issues. In stark contrast, Jesus stood firmly and fearlessly, fearlessly proclaiming His truth. His words shook the foundations of societal norms, demanding radical commitment from His followers. True boldness lies in the courage to speak truth, even when faced with opposition and adversity.
Boldness is a very masculine characteristic. While some may argue that boldness is not exclusive to gender, the Bible primarily associates this characteristic with men. On the other hand, the beauty of women is highlighted through the importance of a gentle and quiet spirit, which also holds great value in the eyes of God. 1 Peter 3:4, addressing women and wives, says, "Let your adorning be the hidden person of the heart with the imperishable beauty of a gentle and quiet spirit, which in God's sight is very precious." This reminds us that inner qualities such as a gentle and tranquil demeanor also hold significant worth and are highly esteemed.
Jesus exemplified true boldness, courageously speaking God's truth regardless of the consequences. His courage serves as the ultimate model of masculinity, inspiring men to fearlessly pursue God's will. Jesus exemplified bravery, rooted in His deep reverence for God. Unlike the fear of man, which arises from sin, Jesus' bravery stemmed from His love for God. His resolute posture and authoritative responses to godless men demonstrated a masculinity untainted by timidity. Jesus taught us to lead with courage, grounded in reverence for God and faith in His sovereignty.
In a culture where love is often misrepresented, Jesus' sacrificial love stands as the true definition. Love, as demonstrated by Jesus, goes beyond superficial feelings; it entails sacrificial commitment. Jesus willingly laid down His life for His bride, the Church, showcasing the essence of true masculinity. Men are called to sacrificially love their wives, mirroring Christ's example. This selfless love forms the foundation for men to protect, nurture, and fight for those entrusted to their care.
Christ's resolve was the driving force behind the cross, demonstrating His commitment to fulfill His mission. His choice to embrace the cross, knowing the suffering and wrath He would endure, showcases resolute masculinity. In history, heroic moments of perseverance are predominantly marked by male resolve. The biological advantage provided by testosterone further supports men's capacity for enduring resolve. Jesus' resolve to save His people from sin teaches men to stand firm in the face of challenges, abiding in their commitment to their calling.
Restoring the Church's Boldness and Reclaiming Biblical Masculinity and Femininity
The landscape of the Church has undergone a significant transformation, moving away from its historic expression of Christianity. We've witnessed a shift from powerful, convicting sermons to soft, TED-talk style infotainment. Classic hymns highlighting doctrine, sacrifice, and piety have been replaced by emotionally driven love songs that resemble romantic ballads. It's clear that the local church has undergone a real emasculation.
This departure from biblical foundations has contributed to a great deal of confusion within the Church, particularly concerning the understanding of biblical manhood and womanhood. The increasing push for egalitarianism has led to women fighting for leadership roles, while men find themselves adrift without clear guidance regarding their responsibilities in marriage, the church, and the family. I believe this confusion and distortion of gender roles to be the enemy's central strategy for our generation. By infiltrating the Church with a heightened emphasis on feminine emotion, the enemy has left us unprepared for moments requiring masculine boldness, fearlessness, sacrifice, and resolve.
We must acknowledge that there is a difference between a surface-level expression of faith and the profound conviction displayed by those facing intense trials. The challenges and hardships that people face in the midst of adversity provide a profound glimpse into the strength and genuineness of their faith. These trials are a powerful testimony to their commitment and courage. Throughout history, numerous Christians have faced unimaginable suffering, even enduring torture, dismemberment, and martyrdom, all because of their devotion to Christ. Their remarkable sacrifices inspire us and remind us of the immense cost of following Jesus. Yet, the trend of timidity displayed by the present-day Church, yielding to government overreach or even complying with laws that endorse sexual sin contrary to biblical teachings, will come at a significant cost.
It is high time for the Church to reclaim its boldness and restore the biblical understanding of masculinity and femininity. We must reject the watered-down version of Christianity that has spread throughout our culture and embrace a faith rooted in conviction and sacrifice. By understanding and embracing the unique roles and responsibilities of men and women as outlined in Scripture, we can restore clarity and purpose to our families, churches, and communities. Let us rise above the societal pressures, rekindle the fire of biblical truth, and stand firm in our commitment to Christ, no matter the cost.
As we progress, it becomes clear that the importance of strong, virtuous Catholic men is growing. This should not catch us off guard. The feminist movement of the 21st century is truly toxic. It goes way beyond advocating for the rightful appreciation of women; it seeks to establish female dominance. Moreover, its influence knows no boundaries. Like the LGBTQ community, its aim is to permeate every aspect of public, personal, and spiritual life. We must not only be alarmed by this trend but also prepare ourselves to stand firmly against it. We need biblically grounded shepherds and faithful women who can discern the subtle infiltration of an effeminate culture and guard against it.
Let us not forget that Catholicism is not egalitarian. While men and women are equally valued before the cross, our roles and responsibilities differ. In marriage, Christianity follows a complementarian model, where the husband leads with sacrificial love, and the wife respects and supports him. People are too sensitive about the word “patriarchy” nowadays. In terms of leadership, the Church holds a patriarchal stance. At the same time, patriarchy, like any other system, is not immune to the potential for sinful expressions. However, when approached with sacrificial love, adherence to biblical order, and a commitment to honoring God, the structure of patriarchy - as well as areas such as marriage, fatherhood, and heterosexuality - can yield to way more goodness. We should strive for a church culture that aligns with the gender-culture outlined in God’s Word: gentle, safe, and encouraging, while also strong, bold, and committed to upholding biblical order and fulfilling the mission entrusted to us. This balance allows the church to fully embody the presence of Christ, enabling His people to confidently advance alongside our great Lord.
We must prepare ourselves for an increasing need for men who embrace biblical masculinity and women who faithfully embody femininity. It is crucial not to overlook the pervasive influence of an effeminate culture and the agenda of distorted ideologies. By embracing the distinct roles and responsibilities that God has given to both men and women, we cultivate a church culture that mirrors the beauty of Christ and empowers His people to wholeheartedly pursue His mission with courage. We must recognize the urgency to embrace and embody biblical masculinity in the face of cultural challenges and shifting ideologies. Equipped with the truth of God's Word, we can navigate the complexities of the world and fulfill our God-given roles with great faith. Let us rise as men who boldly embrace our calling, standing firm in the face of challenges, and wholeheartedly pursuing lives of holiness and service to God and His Church. May we stand united, guided by His Word, and ready to face the battles ahead with strength, grace, and resolved faith.
From Nashville with love,
Suhail Saqan
This was inspired by The Imitation of Christ. Read here.
-
@ 9349d012:d3e98946
2024-11-05 00:42:37Chef's notes
- 2 cups pureed pumpkin
- 2 cups white sugar
- 3 eggs, beaten
- 1/2 cup olive oil
- 1 tablespoon cinnamon
- 1 1/2 teaspoons baking powder
- 1 teaspoon baking soda
- 1/2 teaspoon salt
- 2 1/4 cups flour
- 1/4 cup chopped pecans
Preheat oven to 350 degrees Fahrenheit. Coat a bread pan with olive oil. Mix all ingredients except the nuts, leaving the flour for last and adding it in 2 parts. Stir until combined. Pour the batter into the bread pan and sprinkle the chopped nuts down the middle of the batter, lengthwise. Bake for an hour and ten minutes or until a knife inserted in the middle of the loaf comes out clean. Allow the bread to cool in the pan for five minutes, then use a knife to loosen the edges of the loaf and pop it out of the pan. Rest on a rack or plate until cool enough to slice. Spread bread slices with Plugra butter and serve.
https://cookeatloveshare.com/pumpkin-bread/
Details
- ⏲️ Prep time: 30 minutes
- 🍳 Cook time: 1 hour and ten minutes
- 🍽️ Servings: 6
Ingredients
- See Chef's Notes
Directions
- See Chef's Notes
-
@ c9badfea:610f861a
2025-05-05 22:36:34- Install SherpaTTS (it's free and open source)
- Launch the app and download the first AI model for your language (see recommendations below)
- Tap the + icon below the language selection to add more models
- Enjoy offline text-to-speech synthesis
Model Recommendations
- English: en_US-ryan-medium
- Chinese: zh-CN-huayan-medium
- German: de_DE-thorsten-medium
- Spanish: es_ES-sharvard-medium
- Portuguese: pt_BR-faber-medium
- French: fr_FR-tom-medium
- Dutch: nl_BE-rdh-medium
- Russian: ru_RU-dmitri-medium
- Arabic: ar-JO-kareem-medium
- Romanian: ro_RO-mihai-medium
- Bulgarian: bg-cv
- Turkish: tr_TR-fahrettin-medium
ℹ️ An internet connection is only needed for the initial download
ℹ️ You can use TTS Util to read text and files aloud
-
@ 6538925e:571e55c3
2025-05-05 20:00:48It’s been a little while since we released a major design update, so we’re really excited to get this new version of the app into your hands. Here’s a breakdown of all the main updates included in Fountain 1.2:
#### Library Design Update
- New content-type filters at the top of the page make it easier to navigate between podcasts and music in your library.
- Recently Played is now the default view in your library, so it’s easier to jump back into podcasts you’ve already started.
- The Music filter now makes it easier to find saved tracks and albums, and it also gives you a list of all the artists whose music you’ve saved.
- We’ve refreshed the design of the content cards to make it easier to see how much time is remaining on episodes you’ve already started.
#### Content Pages Design Update
- All of the different content pages have undergone an extensive redesign, including shows, episodes, artists, albums, tracks, clips and playlists.
- We’ve replaced the tab layout we were using on the content pages with one scrollable page, making it easier to access features like chapters and tracklists.
- We’ve sanitised the formatting of show notes too, and if there is no activity for a given episode, we now display the expanded show notes.
#### Episode Summaries
Ever looked at a 4-hour Lex Fridman episode and wished you could just read a high-level summary? We certainly have, so we did something about it.
- Every episode page now has a Summary button above the show notes.
- Simply pay 500 sats to unlock a summary, or upgrade to Fountain Premium for $2.99/month to enjoy unlimited summaries.
- Summaries and transcripts now come as a bundle — two for the price of one!
- Thanks to major improvements, they’re now faster, cheaper, and more accurate than ever before.
#### Playback Improvements
We’ve completely rebuilt our audio engine from the ground up. Playback is now more robust and reliable — especially for music. Here are some of the key enhancements in Fountain 1.2:
- Tracks now load and play instantly when tapped.
- When playing a collection of tracks (e.g. from an artist, album, or playlist), you can now skip seamlessly between them.
- We’ve replaced the scrollable player page with full-screen modals to make it easier to access show notes, comments, transcripts, chapters, tracklists, and your queue.
- The new Smart Resume feature rewinds the episode by 5 seconds when you hit pause, so you don’t miss a beat.
- You can now skip forward or backward by 60 seconds for faster navigation through episodes.
#### Other Bug Fixes & Improvements
- Rebuilt payment stats for more complete and reliable transaction records.
- Refreshed the design of the Settings pages for better usability.
- Added new episode notification preferences in Settings.
- Fixed several playback issues that were causing crashes or freezes.
- Updated lock screen display and controls for livestreams.
- Fixed issue where the next item in the queue paused unexpectedly.
- Resolved playback stuttering on Android during livestreams.
- Fixed disappearing playback controls on the lock screen.
- Fixed playback speed not updating correctly.
- Resolved issue where played episodes couldn’t be replayed.
- Fixed playback not resuming correctly when listening in the car.
- Synced car playback position with the device.
- Fixed persistent car display refresh issue.
- Fixed volume control via car controls.
- Resolved issue with headphone controls after playing a transcript.
- Fixed disappearing metadata on the lock screen.
- Fixed bug where downloaded episodes stopped in airplane mode but showed as playing.
We would love to hear how you’re finding Fountain 1.2. Please submit your thoughts and feedback via the main menu in the app and we will take it on board as we continue to improve the app.
If you want to help test new features out before they get released, you can join Fountain Beta on Telegram. All iOS and Android users welcome.
-
@ 90c656ff:9383fd4e
2025-05-05 15:30:26In recent years, Bitcoin has often been compared to gold, earning the nickname “digital gold.” This comparison arises because both forms of value share key characteristics, such as scarcity, durability, and global acceptance. However, Bitcoin also represents a technological innovation that redefines the concept of money and investment, standing out as a modern and efficient alternative to physical gold.
One of the main reasons Bitcoin is compared to gold is its programmed scarcity. While gold is a naturally limited resource whose supply depends on mining, Bitcoin has a maximum cap of 21 million units, defined in its code. This cap protects Bitcoin from inflation, unlike traditional currencies that can be created without limit by central banks.
This scarcity gives Bitcoin lasting value, similar to gold, as the limited supply helps preserve purchasing power over time. As demand for Bitcoin grows, its reduced availability reinforces its role as a store of value.
Another feature that brings Bitcoin closer to gold is durability. While gold is resistant to corrosion and can be stored for centuries, Bitcoin is a digital asset protected by advanced cryptography and stored on the blockchain, an immutable and decentralized ledger.
Moreover, Bitcoin is far easier to transport than gold. Moving physical gold involves high costs and security risks, making transport particularly difficult for international transactions. Bitcoin, on the other hand, can be sent digitally anywhere in the world in minutes, with low fees and no intermediaries. This technological advantage makes Bitcoin more effective in a globalized and digital world.
Security is another trait that Bitcoin and gold share. Gold is difficult to counterfeit, making it a reliable store of value. Similarly, Bitcoin uses cryptographic protocols that ensure secure transactions and protect against fraud.
In addition, all Bitcoin transactions are recorded on the blockchain, offering a level of transparency that physical gold does not provide. Anyone can review transactions on the network, increasing trust and traceability.
Historically, gold has been used as a hedge against inflation and economic crises. During times of instability, investors turn to gold as a way to preserve their wealth. Bitcoin is emerging as a digital alternative with the same purpose.
In countries with high inflation or political instability, Bitcoin has been used as a safeguard against the devaluation of local currencies. Its decentralized nature prevents governments from directly confiscating or controlling the asset, providing greater financial freedom to users.
Despite its similarities with gold, Bitcoin still faces challenges. Its volatility is much higher, which can cause short-term uncertainty. However, many experts argue that this volatility is typical of new assets and tends to decrease over time as adoption grows and the market matures.
Another challenge is regulation. While gold is globally recognized as a financial asset, Bitcoin still faces resistance from governments and financial institutions, which seek ways to control and regulate it.
In summary, Bitcoin - often called "digital gold" - offers a new form of value that combines the best characteristics of gold with the efficiency and innovation of digital technology. Its programmed scarcity, cryptographic security, portability, and resistance to censorship make it a viable alternative for preserving wealth and conducting transactions in the modern world.
Despite its volatility, Bitcoin is establishing itself as both a store of value and a hedge against economic crises. As such, it represents not just an evolution of the financial system but also a symbol of the shift toward a decentralized and global digital economy.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ 09fbf8f3:fa3d60f0
2024-11-02 08:00:29
### Third-Party API Collection:
Disclaimer:
The OpenAI API keys recommended here are provided by third-party resellers, so we take no responsibility for the validity or security of these keys. You purchase and use them at your own risk.

| Provider | Notes | Proxy address | Link |
| --- | --- | --- | --- |
| AiHubMix | Uses the official OpenAI enterprise endpoint; all models on the site are priced at 86% of the official rate (including GPT-4) | https://aihubmix.com/v1 | Website |
| OpenAI-HK | OpenAI's official billing is based on the token length of each API request and response. Each model has its own pricing, charged per 1,000 tokens; 1,000 tokens is roughly 750 English words (about 400 Chinese characters) | https://api.openai-hk.com/ | Website |
| CloseAI | The largest commercial-grade OpenAI proxy platform in China and the first professional OpenAI relay service, aimed at enterprise customers. It provides a high-quality, stable relay for the official OpenAI API and is used by over a hundred companies and several research institutions. | https://api.openai-proxy.org | Website |
| OpenAI-SB | Requires Telegram to obtain an API key | https://api.openai-sb.com | Website |

Continuously updated...
Promotion:
If you cannot access OpenAI, you can buy a VPN from 低调云. Official site: https://didiaocloud.xyz
Invitation code: w9AjVJit
Prices start as low as 1 yuan.
-
@ 90c656ff:9383fd4e
2025-05-05 15:22:38Bitcoin has emerged as a modern option for storing value, often compared to traditional assets like gold. Its ability to resist inflation, combined with features such as scarcity, decentralization, and security, makes it a promising tool for preserving wealth in times of economic uncertainty.
A store of value is an asset that maintains its purchasing power over time, protecting wealth from devaluation. Historically, assets like gold and real estate have played this role because they are relatively scarce and in constant demand.
However, fiat currencies have proven less effective as stores of value due to inflation. Governments and central banks often increase the money supply, which can reduce the purchasing power of currencies. In this context, Bitcoin stands out as an alternative.
Bitcoin: Programmed Scarcity
The main feature that makes Bitcoin a potential store of value is its limited supply. Only 21 million bitcoins will ever be created, a cap established in its code. This programmed scarcity contrasts with fiat currencies, which can be printed without limits by governments, leading to inflation.
The creation of Bitcoin is also controlled by events known as halvings, which cut the mining reward in half approximately every four years. This makes Bitcoin increasingly scarce over time, boosting its potential for appreciation.
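To make the scarcity argument concrete, here is a minimal sketch (in Python, using the well-known consensus constants of a 50 BTC initial subsidy and a halving every 210,000 blocks) of how the roughly 21 million coin cap falls out of the halving schedule:

```python
# Minimal sketch: how Bitcoin's ~21 million coin cap emerges from the halving schedule.
# Assumes the consensus constants: 50 BTC initial subsidy, a halving every 210,000 blocks,
# and amounts tracked in satoshis (1 BTC = 100,000,000 sats) with integer (floor) division.

INITIAL_SUBSIDY_SATS = 50 * 100_000_000
BLOCKS_PER_HALVING = 210_000

def total_supply_sats() -> int:
    """Sum the block subsidies of every halving era until the subsidy rounds down to zero."""
    supply = 0
    subsidy = INITIAL_SUBSIDY_SATS
    while subsidy > 0:
        supply += subsidy * BLOCKS_PER_HALVING
        subsidy //= 2  # the halving: integer division, mirroring the consensus rule
    return supply

if __name__ == "__main__":
    sats = total_supply_sats()
    print(f"Maximum supply: {sats} sats = {sats / 100_000_000:,.8f} BTC")
    # Prints roughly 20,999,999.97690000 BTC - just under the famous 21 million.
```

Because the subsidy is halved in whole satoshis, the sum converges just below 21 million BTC, which is why the cap can never be exceeded without changing the consensus rules.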
Bitcoin offers a solution to the problem of inflation, as its fixed supply prevents governments or centralized institutions from manipulating its quantity.
01 - Decentralization and Immutability: Operating on a decentralized network, Bitcoin is immune to political decisions or central bank interventions. No authority can change the protocol to "print" more bitcoins.
02 - Transparency of Supply: All transactions and newly created bitcoins are recorded on the blockchain or timechain, ensuring full transparency.
03 - Purchasing Power Protection: With limited supply and growing demand, Bitcoin has shown a tendency to appreciate over time, acting as a hedge against inflation in several economies.
Bitcoin is often called “digital gold” due to its similarities to the precious metal as a store of value:
01 - Portability: Bitcoin is easier to transfer and store than gold, being digitally accessible anywhere in the world.
02 - Divisibility: Each bitcoin can be divided into up to 100 million units called satoshis, allowing for transactions of any value.
03 - Security: While gold requires physical storage and is subject to theft, Bitcoin can be stored in secure digital wallets.
These qualities make Bitcoin a more flexible and accessible option for storing value in an increasingly digital world.
Despite its promise, Bitcoin still faces barriers to being widely accepted as a store of value:
01 - Volatility: Bitcoin's price has historically shown large fluctuations, which can discourage investors seeking safety. However, many believe that as adoption grows, volatility will decrease.
02 - Regulation: Some governments have implemented measures to restrict or regulate Bitcoin use, which may impact its acceptance as a store of value.
03 - Cultural Adaptation: As a new and digital asset, Bitcoin still needs to earn the trust of people accustomed to physical stores of value like gold.
Bitcoin has proven to be a particularly useful store of value in economies facing financial crises or hyperinflation. Countries like Venezuela, Argentina, and Zimbabwe, which experienced sharp currency devaluation, have seen growing adoption of Bitcoin as a way to preserve purchasing power.
Moreover, its global accessibility allows people in countries without easy access to traditional financial markets to use Bitcoin as an alternative.
In summary, Bitcoin has unique characteristics that make it a promising candidate as a store of value in an increasingly digital world affected by fiat currency inflation. Its programmed scarcity, resistance to manipulation, and global accessibility offer a modern solution for preserving wealth.
Although challenges like volatility and regulation still need to be addressed, Bitcoin has already proven to be an effective tool for protecting assets, especially in economically unstable environments. Over time, and with growing adoption, Bitcoin may solidify its place as one of the primary stores of value in the 21st century.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ 06639a38:655f8f71
2024-11-01 22:32:51One year ago I wrote the article Why Nostr resonates in Dutch and English after I visited the Bitcoin Amsterdam 2023 conference and the Nostrdam event. It got published at bitcoinfocus.nl (translated in Dutch). The main reason I wrote that piece is that my gut feeling was telling me that Nostr is going to change many things on the web.
After the article was published, one of the first things I did was setting up this page on my website: https://sebastix.nl/nostr-research-and-development. The page contains this section (which I updated on 31-10-2024):
One metric I would like to highlight is the number of repositories on Github. Compared to a year ago, there are already more than 1130 repositories now on Github tagged with Nostr. Let's compare this number to other social media protocols and decentralized platforms (24-10-2024):
- Fediverse: 522
- ATProto: 159
- Scuttlebot: 49
- Farcaster: 202
- Mastodon: 1407
- ActivityPub: 444
Nostr is growing. FYI, there are many Nostr repositories not hosted on Github, so the total number of Nostr repositories is higher. I know that many devs are using their own Git servers to host them. We're even capable of setting up Nostr native Git repositories (for example, see https://gitworkshop.dev/repos). Eventually, Nostr will make Github (and other platforms) obsolete.
Let me continue summarizing my personal Nostr highlights of last year.
Organising Nostr meetups
This is me playing around with the NostrDebug tool showing how you can query data from Nostr relays. Jurjen is standing behind me. He is one of the people I've met this year who I'm sure I will have a long-term friendship with.
OpenSats grant for Nostr-PHP
In December 2023 I submitted my application for an OpenSats grant for the further development of the Nostr-PHP helper library. After some months I finally got the message that my application was approved... When I got the message I was really stoked and excited. It's a great form of appreciation for the work I had done so far, and with this grant I get the opportunity to take the work to a higher level. So please check out the work done so far:
Meeting Dries
One of my goosebumps moments I had in 2022 when I saw that the founder and tech lead of Drupal Dries Buytaert posted 'Nostr, love at first sight' on his blog. These types of moments are very rare moment where two different worlds merge where I wouldn't expect it. Later on I noticed that Dries would come to the yearly Dutch Drupal event. For me this was a perfect opportunity to meet him in person and have some Nostr talks. I admire the work he is doing for Drupal and the community. I hope we can bridge Nostr stuff in some way to Drupal. In general this applies for any FOSS project out there.
Here is my recap of that Drupal event.
Attending Nostriga
A conference where history is made and written. I felt it immediately at the first sessions I attended. I will never forget the days I had at Nostriga. I don't have the words to describe what it brought to me.
I also pushed myself out of my comfort zone by giving a keynote called 'POSSE with Nostr - how we pivot away from API's with one of Nostr superpowers'. I'm not sure if this is something I would do again, but I've learned a lot from it.
You can find the presentation here. It is recorded, but I'm not sure if and when it gets published.
Nostr billboard advertisement
This advertisement was shown on a billboard beside the A58 highway in The Netherlands from September 2nd till September 16th 2024. You can find all the assets and more footage of the billboard ad here: https://gitlab.com/sebastix-group/nostr/nostr-ads. My goal was to set an example of how we could promote Nostr in more traditional ways and inspire others to do the same. In Brazil a fundraiser was completed to do something similar there: https://geyser.fund/project/nostrifybrazil.
Volunteering at Nostr booths
This was such a great motivating experience. Attending as a volunteer at the Nostr booth during the Bitcoin Amsterdam 2024 conference. Please read my note with all the lessons I learned here.
The other stuff
- The Nostr related blog articles I wrote past year:
- Run a Nostr relay with your own policies (02-04-2024)
- Why social networks should be based on commons (03-01-2024)
- How could Drupal adopt Nostr? (30-12-2023)
- Nostr integration for CCHS.social (21-12-2023)
- https://ccns.nostrver.se
  CCNS stands for Community Curated Nostr Stuff. At the end of 2023 I started to build this project. I forked an existing Drupal project of mine (https://cchs.social) to create a link aggregation website inspired by stacker.news. At the beginning of 2024 I also joined the TopBuilder 2024 contest, which was a productive period for getting to know new people in the Bitcoin and Nostr space.
- https://nuxstr.nostrver.se
  PHP is not the only language I use to build stuff. As a fullstack webdeveloper I also work with Javascript. Many Nostr clients are made with Javascript frameworks or other more client-side focused tools. Vuejs is currently the Javascript framework I'm most comfortable with. With Vuejs I started to tinker around with Nuxt combined with NDK, and so I created a starter template for Vue / Nuxt developers.
- ZapLamp
  This is a neat DIY package from LNbits. Powered by an Arduino ESP32 dev board, it was running a 24/7 livestream on zap.stream at my office. It flashes when you send a zap to the npub of the ZapLamp.
- https://nosto.re
  Since the beginning, when the Blossom spec was published by @hzrd49 and @StuartBowman, I immediately took the opportunity to tinker with it. I'm also running a relay for transmitting Blossom Nostr events: wss://relay.nosto.re.
- Relays I maintain
  I really enjoy tinkering with different relay implementations. Relays are the fundamental base layer that lets Nostr work.
I'm still sharing my contributions on https://nostrver.se/ where I publish my weekly Nostr related stuff I worked on. This website is built with Drupal where I use the Nostr Simple Publish and Nostr long-form content NIP-23 modules to crosspost the notes and long-form content to the Nostr network (like this piece of content you're reading).
The Nostr is the people
Just like the web, the web is people: https://www.youtube.com/watch?v=WCgvkslCzTo
the people on nostr are some of the smartest and coolest i’ve ever got to know. who cares if it doesn’t take over the world. It’s done more than i could ever ask for. - @jb55
Here are some Nostriches who I'm happy to have met and who influenced my journey in Nostr in a positive way.
- Jurjen
- Bitpopart
- Arjen
- Jeroen
- Alex Gleason
- Arnold Lubach
- Nathan Day
- Constant
- fiatjaf
- Sync
Coming year
Generally I will continue doing what I've done last year. Besides the time I spent on Nostr stuff, I'm also very busy with Drupal related work for my customers. I hope I can get the opportunity to work on a paid client project related to Nostr. It will be even better when I can combine my Drupal expertise with Nostr for projects paid by customers.
Building a new Nostr application
When I look at my Nostr backlog, where I just put everything in as ideas and notes, there are quite some interesting concepts for building new Nostr applications. Filtering them down, I think these three are the most exciting ones:
- nEcho, a micro app for optimizing your reach via Nostr (NIP-65)
- Nostrides.cc platform where you can share Nostr activity events (NIP-113)
- A child-friendly video web app with parent-curated content (NIP-71)
Nostr & Drupal
When working out a new idea for a Nostr client, I'm trying to combine my expertises into one solution. That's why I also build and maintain some Nostr contrib modules for Drupal.
- Nostr Simple Publish
  Drupal module to cross-post notes from Drupal to Nostr
- Nostr long-form content NIP-23
  Drupal module to cross-post Markdown formatted content from Drupal to Nostr
- Nostr internet identifier NIP-05
  Drupal module to set up Nostr internet identifier addresses with Drupal (a minimal example of the identifier document involved is sketched below)
- Nostr NDK
  Includes the Javascript library Nostr Dev Kit (NDK) in a Drupal project.
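For readers unfamiliar with NIP-05, the sketch below shows the kind of document such a module ultimately needs to serve at https://<domain>/.well-known/nostr.json so that "name@domain" resolves to a public key. The name, relay, and key values are placeholders, not values taken from any of the modules above.

```python
# Minimal sketch of a NIP-05 identifier document, assuming placeholder values throughout.
import json

nostr_json = {
    "names": {
        # maps the local part of "sebastian@example.com" to a 64-character hex public key
        "sebastian": "<hex-public-key>",
    },
    # optional relay hints, keyed by the same hex public key
    "relays": {
        "<hex-public-key>": ["wss://relay.example.com"],
    },
}

# This JSON is what a client fetches (with ?name=sebastian) to verify the identifier.
print(json.dumps(nostr_json, indent=2))
```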
One of my (very) ambitious goals is to build a Drupal powered Nostr (website) package with the following main features:
- Able to login into Drupal with your Nostr keypair
- Cross-post content to the Nostr network
- Fetch your Nostr content from the Nostr content
- Serve as a content management system (CMS) for your Nostr events
- Serve as a framework to build a hybrid Nostr web application
- Run and maintain a Nostr relay with custom policies
- Usable as a feature rich progressive web app
- Use it as a remote signer
These are just some random ideas as my Nostr + Drupal backlog is way longer than this.
Nostr-PHP
With all the newly added and continuously updated NIPs in the protocol, this helper library will never be finished. As the sole maintainer of this library I would like to invite others to join as a maintainer or just be a contributor to the library. PHP is big on the web, but there are not many PHP developers active on Nostr yet. Also, PHP as a programming language is really pushing forward, keeping up with the latest innovations.
Grow Nostr outside the Bitcoin community
We are working out a submission to host a Nostr stand at FOSDEM 2025. If approved, it will be the first time (as far as I know) that Nostr could be present at a conference outside the context of Bitcoin. The audience at FOSDEM is mostly technical oriented, so I'm really curious what type of feedback we will receive.
Let's finish this article with some random Nostr photos from last year. Cheers!
-
@ 502ab02a:a2860397
2025-05-06 01:06:54เมื่อวานนี้เรารู้เรื่อง Meat Free Monday ของท่านเซอร์พอลกันแล้ว วันนี้เรามาต่อเนื่องกับแสงสีกันอีกนิดครับ
Anne Hathaway กับ The EVERY Company จากเจ้าหญิง The Princess Diaries สู่ราชินีโปรตีนไร้ไก่ ในยุคที่ความเปลี่ยนแปลงไม่เคาะประตู แต่มาถีบประตูบ้านเข้าใส่เลย บรรดาเซเล็บฮอลลีวูดก็ไม่ได้อยู่เฉยๆ แค่ใส่กระโปรงขึ้นพรมแดงหรือไปเล่นหนังรางวัลเท่านั้น แต่บางคน เช่น Anne Hathaway ก้าวเข้ามาอยู่เบื้องหลังระบบอาหารโลกในรูปแบบที่เงียบ…แต่เขย่าพื้นโลกได้เหมือนกัน
ใช่จ้ะ... Anne Hathaway คนเดียวกับที่เฮียเคยเห็นใน The Devil Wears Prada หรือเจ้าหญิง Mia ใน Princess Diaries ได้ลงทุนแบบเอาจริงกับบริษัทเทคโนโลยีอาหารสุดล้ำที่ชื่อว่า The EVERY Company ซึ่งพัฒนาโปรตีนจากไข่ที่ไม่ใช้ไก่ ไม่ต้องมีฟาร์ม ไม่ต้องฟัก และไม่ต้องฟูมฟักศีลธรรมให้สั่นไหวด้วยการฆ่าสัตว์เลยแม้แต่นิดเดียว
ทบทวนกันจากสัปดาห์ที่แล้วครับ The EVERY Company เดิมชื่อ Clara Foods ก่อตั้งในกลางยุค 2010s ณ ใจกลางซิลิคอนวัลเลย์ แหล่งรวมจินตนาการและเงิน VC ที่ไม่รู้จะเอาไปลงกับอะไรดี บริษัทนี้ไม่ได้เลี้ยงไก่ ไม่ได้ใช้ถาดไข่ แต่ใช้ “Precision Fermentation” ซึ่งก็คือการเอา DNA ที่เป็นรหัสของไข่ขาว (Ovalbumin) ไปใส่ในจุลินทรีย์สายพันธุ์เฉพาะ แล้วเลี้ยงด้วยน้ำตาลในถังหมัก สุดท้ายมันก็ผลิตโปรตีนออกมาที่มีโครงสร้าง “เหมือนของจริงเป๊ะ”
มันคือการทำไข่จาก “ยีสต์” ไม่ใช่จาก “แม่ไก่”
การลงทุนของ Anne ครั้งนี้ ถือเป็น B2B investment ครั้งแรกในชีวิต และดูจะไม่ใช่แค่ลงทุนแบบเซ็นเช็คเฉยๆ แล้วไปจิบไวน์ที่คานส์นะ เพราะเธอให้สัมภาษณ์ว่า
“It’s clear that our food system needs a change. EVERY is one part of a beautiful future.” (มันชัดเจนว่าระบบอาหารเราต้องเปลี่ยนแปลง EVERY คือส่วนหนึ่งของอนาคตที่งดงามนั้น)
แม้ไม่มีการเปิดเผยจำนวนเงินหรือสัดส่วนหุ้นอย่างเป็นทางการ แต่สื่ออุตสาหกรรมอาหารอย่าง Food Navigator USA ก็ชี้ว่า เธอเป็นอีกแรงที่ช่วยดันชื่อเสียงของบริษัทให้ “กลายเป็นไวรัล” ในหมู่นักลงทุน และอาจเปิดประตูสู่การยอมรับของตลาดผู้บริโภคได้ในวงกว้าง
The EVERY Company ไม่ได้หวังจะแค่ขาย “ไข่” ในซูเปอร์มาร์เก็ตแบบธรรมดาๆ แต่เริ่ม “แทรกซึม” จากเบื้องหลังในอุตสาหกรรมอาหารอย่างแยบยล จะเปลี่ยนระบบอาหารโดยใช้แนวทาง “B2B only” คือผลิตขายให้เชฟ ร้านอาหาร และโรงงานเท่านั้น (ยังไม่วางจำหน่ายปลีกทั่วไป เพราะบางตัวอย่าง EVERY Egg ยังรอรับรอง GRAS จาก FDA)
เคยมีการร่วมงานกับเชฟมิชลิน Dominique Crenn จากร้าน Atelier Crenn ในนครซานฟรานซิสโก ซึ่งเธอยืนยันว่า EVERY Egg เป็นตัวเปลี่ยนเกม ที่ทำให้เธอสามารถสร้างเมนูใหม่ที่ “ไร้สัตว์” ได้โดยไม่เสียความสร้างสรรค์หรือรสชาติ
แม้ Anne จะไม่มีตำแหน่งในบอร์ดบริหาร แต่การที่เธอลงทุนใน The EVERY Company ทำให้หลายองค์กรจับตามอง และส่งผลต่อภาพลักษณ์ของ “โปรตีนสังเคราะห์จากจุลินทรีย์” ให้ดูหรู ดูสะอาด และเหมาะกับสายสุขภาพ-มังสวิรัติ ทั้งที่จริงๆ แล้ว มันคือ “ผลิตภัณฑ์โรงงานที่ผ่านกระบวนการขั้นสูงมาก”
The EVERY Company ยังได้รับเงินลงทุนจากกลุ่มใหญ่อย่าง Temasek (กองทุนจากสิงคโปร์) และ Prosperity7 Ventures ที่เป็นแขนของ Saudi Aramco ซึ่งก็เป็นอีกประเด็นน่าคิด…ว่าพลังทุนที่อยู่เบื้องหลัง “ไข่ไม่ใช้ไก่” นี้ มาจากหลายทิศหลายทาง ทั้ง Silicon Valley, ตะวันออกกลาง และ Hollywood
การที่ Anne Hathaway ลงทุนใน EVERY ไม่ใช่เรื่องผิด แต่เป็นหมุดหมายที่น่าสนใจในยุคที่ “อาหาร” ไม่ใช่แค่เรื่องอิ่มท้อง แต่กลายเป็นการเมือง วิทยาศาสตร์ เศรษฐกิจ และการสร้างภาพลักษณ์ของคนดังในคราวเดียวกัน
ไข่ที่ไม่มีไก่ โปรตีนที่ไม่มีฟาร์ม…ดูเหมือนจะสวยงาม แต่ถ้าเรามองให้ลึก บางครั้งอาหารก็อาจไม่ใช่เรื่อง “ของกิน” อย่างเดียวอีกต่อไป
เรื่องบางเรื่อง คนในบางสังคม คนในบางกลุ่มเท่านั้น ที่จะมีโอกาสเลือกก่อน กี่ครั้งต่อกี่ครั้ง ก็แบบนี้ครับ #pirateketo #กูต้องรู้มั๊ย #ม้วนหางสิลูก #siamstr
-
@ 7460b7fd:4fc4e74b
2025-05-05 14:49:02PR 32359:取消 OP_RETURN 字节限制提案深入分析
提案概述及代码变更内容
提案背景与意图:比特币核心当前对交易中的 OP_RETURN 输出(数据载体输出)有严格限制:默认最多允许单个 OP_RETURN 输出,且其
scriptPubKey
大小不超过 83 字节(约80字节数据加上OP_RETURN和Pushdata前缀)groups.google.com。这一标准规则旨在轻度阻碍链上存储大量任意数据,鼓励将非金融数据以“更无害”的方式存入链上(比如用OP_RETURN而非可花费的UTXO输出)groups.google.com。然而随着时间推移,这一限制并未阻止用户将数据写入区块链,反而促使开发者设计各种变通方案绕过限制。例如,近期 Citrea Clementine 协议(闪电网络相关项目)因为OP_RETURN容量不足,而改用不可花费的Taproot输出来存储所需数据groups.google.com。这样的做法导致大量小额UTXO留存在UTXO集,对全节点造成负担,被视为比使用OP_RETURN更有害的副作用github.com。基于此背景,Bitcoin Core 开发者 Peter Todd(与 Chaincode 实验室的 Antoine Poinsot 等人)提出了 PR #32359,意在解除OP_RETURN的字节大小限制,以消除这种“适得其反”的限制策略groups.google.comgithub.com。**代码变更要点:**该PR主要修改了与标准交易校验和策略配置相关的代码,包括移除
script/standard.cpp
中对OP_RETURN输出大小和数量的检查,以及删除策略配置选项-datacarrier
和-datacarriersize
github.com。具体而言:-
取消OP_RETURN大小限制:删除了判断OP_RETURN数据长度是否超过 MAX_OP_RETURN_RELAY(83字节)的标准性检查。此后,交易中的OP_RETURN输出脚本长度将不再被固定上限限制,只要满足区块重量等共识规则即可(理论上可嵌入远大于83字节的数据)github.comgroups.google.com。PR说明中明确提到移除了这些限制的执行代码github.com。相应地,
-datacarriersize
配置参数被删除,因为其存在意义(设置OP_RETURN字节上限)已不复存在github.com。此前-datacarriersize
默认为83,当用户调高该值时节点可接受更大数据载体输出;而现在代码中已无此参数,节点将无条件接受任意大小的OP_RETURN输出。 -
移除OP_RETURN输出数量限制:原先比特币核心默认策略还规定每笔交易最多只有一个OP_RETURN输出是标准的,多于一个即视为非标准交易(拒绝中继)bitcoin.stackexchange.com。该PR同样意在取消此“任意”限制groups.google.com。修改中移除了对
nDataOut
(OP_RETURN输出计数)的检查,即允许一笔交易包含多个OP_RETURN输出而仍被视作标准交易。之前的代码若检测到nDataOut > 1
会返回“multi-op-return
”的拒绝原因github.com;PR删除了这一段逻辑,相应的功能测试也更新或移除了对“multi-op-return”非标准原因的断言github.com。 -
保留标准形式要求:值得注意的是,OP_RETURN输出的形式要求仍保留。PR描述中强调“数据载体输出的形式仍保持标准化:脚本以单个 OP_RETURN 开头,后跟任意数量的数据推字节;不允许非数据类的其他脚本操作码”github.com。也就是说,虽然大小和数量限制解除了,但OP_RETURN脚本内容只能是纯数据,不能夹带其他执行opcode。这保证了这些输出依然是“不可花费”的纯数据输出,不会改变它们对UTXO集的影响(不会增加UTXO)。
综上,PR #32359 的核心改动在策略层面放宽了对 OP_RETURN 的限制,删除了相关配置和检查,使节点默认接受任意大小、任意数量的 OP_RETURN 数据输出。同时维持其基本形式(OP_RETURN+数据)以确保此变更不会引入其它类型的非标准交易格式。
改动层级:策略规则 vs 共识规则
该提案属于策略层(policy-level)的更改,而非共识层规则的更改。也就是说,它影响的是节点对交易的_中继、存储和打包_策略,而不改变交易或区块在链上的有效性判定。OP_RETURN字节上限和数量限制从一开始就是标准性约束(Standardness),并非比特币共识协议的一部分groups.google.com。因此,移除这些限制不会导致旧节点与新节点产生区块共识分歧。具体理由如下:
-
无共识规则变动:原有的83字节上限只是节点默认_拒绝转发/挖矿_超限交易的规则,但如果矿工强行将超83字节的OP_RETURN交易打包进区块,所有遵循共识规则的节点(包括未升级的旧节点)依然会接受该区块。因为共识层并没有“OP_RETURN大小不得超过83字节”的规定github.com。正如开发者所指出的,现行的OP_RETURN限制属于“standardness rules”,其约束可以被轻易绕过,并不影响交易的最终有效性github.com。Peter Todd 在评论中强调,为真正禁止链上发布任意数据,必须修改比特币的共识协议,而这在现实中几乎不可能实施github.com。
-
**旧节点兼容性:**由于没有引入新的脚本opcode或共识验证规则,旧版本节点即使不升级,仍然会承认包含大OP_RETURN输出的区块为有效。换言之,不存在分叉风险。旧节点唯一的区别是仍会按照老策略拒绝中继此类交易,但一旦交易被打包进区块,它们仍会接受github.com。正因如此,这一提议不会引发硬分叉,只是改变默认策略。
-
**策略可自行定制:**另外,正如PR作者所言,这纯粹是默认策略的调整,用户依然可以选择运行修改版的软件继续实施先前的限制。例如,Peter Todd提到有替代实现(如 Bitcoin Knots)可以继续强制这些限制github.com。因此,这并非要“强制”所有节点解除限制,而是主流软件默认策略的演进。
需要澄清的是,有反对者担心解除限制可能扩大攻击面(下文详述),但这些都是针对节点资源和网络层面的影响,而非共识层安全性问题。总的来看,PR #32359 是策略层改进,与先前如RBF默认开启、逐渐弱化非标准交易限制等改变类似,其出发点在于网络行为而非协议规则本身。
对闪电网络节点和交易验证的影响
对链上验证的影响:由于这是策略层变更,交易和区块的验证规则并未改变,因此运行旧版本 Bitcoin Core 的闪电网络节点在共识上不会出现任何问题。闪电网络全节点通常依赖比特币全节点来跟踪链上交易,它们关心的是交易确认和共识有效性。解除OP_RETURN限制并不会使旧节点拒绝新区块,因而不会造成闪电通道关闭交易或HTLC交易在旧节点上验签失败等情况。换句话说,不升级Bitcoin Core软件的LN节点仍可正常参与链上共识,无需担心链上交易验证兼容性。
对节点中继和资源的影响:主要影响在于网络传播和资源占用。如果闪电网络节点所连接的Bitcoin Core没有升级,它将不会中继或存储那些含有超大OP_RETURN的未确认交易(因为旧版本视之为非标准交易)。这可能导致未升级节点的内存池与升级节点不一致:某些在新版节点中合法存在的交易,在旧版中被拒之门外。不过这通常不影响闪电网络的运行,因为闪电通道相关交易本身不会包含OP_RETURN数据输出。此外,当这些交易被矿工打包进区块后,旧节点依然会接收到区块并处理。所以,即便LN节点的后端Bitcoin Core未升级,最坏情形只是它在交易未打包时可能感知不到这些“大数据”交易,但这通常无碍于闪电网络功能(闪电网络主要关心的是通道交易的确认情况)。
升级的好处和必要性:从闪电网络生态来看,放宽OP_RETURN限制反而可能带来一些正面作用。正如前述,已有闪电网络周边项目因为83字节限制不足,转而使用不可花费输出存储数据groups.google.com。例如 Antoine Poinsot 在邮件列表中提到的 Clementine 协议,将某些watchtower挑战数据存进Taproot输出,因为OP_RETURN容量不够groups.google.com。解除限制后,此类应用完全可以改用更友好的OP_RETURN输出来存储数据,不再制造永久占据UTXO集的“垃圾”UTXOgithub.com。因此,闪电网络的watchtower、跨链桥等组件若需要在链上写入证据数据,将可直接利用更大的OP_RETURN输出,网络整体效率和健壮性都会提升。
需要注意的是,如果PR最终被合并并广泛部署,闪电网络节点运营者应该升级其Bitcoin Core后端以跟上新的默认策略。升级后,其节点将和大多数网络节点一样中继和接受大OP_RETURN交易,确保自己的内存池和网络同步,不会漏掉一些潜在相关交易(尽管目前来看,这些交易对LN通道本身并无直接关联)。总之,从兼容性看不升级没有致命问题,但从网络参与度和功能上看,升级是有益的。
潜在的间接影响:反对者提出,解除限制可能导致区块和内存池充斥更多任意数据,从而推高链上手续费、影响闪电通道关闭时所需的手续费估计。例如,如果大量大OP_RETURN交易占据区块空间,链上拥堵加剧,LN通道关闭需要支付更高费用才能及时确认。这其实是一般性拥堵问题,并非LN特有的兼容性问题。支持者则认为,这正是自由市场作用的体现,使用链上空间就该竞争付费github.com。无论如何,闪电网络作为二层方案,其优势在于减少链上交互频率,链上手续费市场的变化对LN有影响但不改变其运行逻辑。LN节点只需确保其Bitcoin Core正常运行、及时跟上链上状态即可。
开发者讨论焦点:支持与反对观点
PR #32359 在开发者社区引发了激烈讨论,支持者和反对者针锋相对,各自提出了有力的论据。以下总结双方主要观点:
-
支持方观点:
-
当前限制无效且适得其反:支持者强调83字节上限并未阻止人们在链上存数据,反而促使更有害的行为。Peter Todd指出,很多协议改用不可花费UTXO或在
scriptsig
中藏数据来绕过OP_RETURN限制,结果增加了UTXO集膨胀,这是限制OP_RETURN带来的反效果github.com。与其如此,不如移除限制,让数据都写入可被丢弃的OP_RETURN输出,避免UTXO污染github.comgithub.com。正如一位支持者所言:“与其让尘埃UTXO永远留在UTXO集合,不如使用可证明不可花费的输出(OP_RETURN)”github.com。 -
**限制易被绕过,增添维护负担:**由于有些矿工或服务商(如MARA Slipstream私有广播)本就接受大OP_RETURN交易,这一限制对有心者来说形同虚设github.com。同时,存在维护这个限制的代码和配置选项,增加了节点实现复杂度。Todd认为,与其让Bitcoin Core承担维护“低效甚至有害”的限制,不如干脆取消,有需要的人可以使用其他软件实现自己的政策github.com。他提到有替代的Bitcoin Knots节点可自行过滤“垃圾”交易,但没必要要求Bitcoin Core默认坚持这些无效限制github.com。
-
尊重自由市场,拥抱链上数据用例:部分支持者从理念上认为,比特币区块空间的使用应交由手续费市场决定,而不应由节点软件做人为限制。著名开发者 Jameson Lopp 表示,是时候承认“有人就是想用比特币做数据锚定”,我们应当提供更优方式满足这种需求,而不是一味阻碍github.com。他认为用户既然愿意付费存数据,就说明这种行为对他们有价值,矿工也有动力处理;网络层不应进行过度的“父爱”式管制github.com。对于反对者所称“大数据交易会挤占区块、抬高手续费”,Lopp直言“这本来就是区块空间市场运作方式”,愿付高费者得以优先确认,无可厚非github.com。
-
统一与简化策略:还有支持者指出,既然限制容易绕过且逐渐没人遵守,那保留它只会造成节点之间策略不一致,反而增加网络复杂性。通过取消限制,所有核心节点一致地接受任意大小OP_RETURN,可避免因为策略差异导致的网络孤块或中继不畅(尽管共识不受影响,但策略不一致会带来一些网络层问题)。同时删除相关配置项,意味着简化用户配置,减少困惑和误用。Peter Todd在回应保留配置选项的建议时提到,Bitcoin Core在Full-RBF功能上也曾移除过用户可选项,直接默认启用,因为现实证明矿工最终都会朝盈利的方向调整策略,节点自行设置反而无济于事github.com。他以RBF为例:在Core开启默认Full-RBF之前,矿工几乎已经100%自行采用了RBF策略,因此保留开关意义不大github.com。类比来看,数据交易也是如此:如果有利可图,矿工终会打包,无论节点是否选择不转发。
-
反对方观点:
-
去除限制会放松对垃圾交易的防线:反对者担心,一旦解除OP_RETURN限制,链上将出现更多纯粹存储数据的“垃圾”交易,给网络带来DoS攻击和资源消耗风险。开发者 BrazyDevelopment 详细描述了可能被加剧的攻击向量github.com:首先,“Flood-and-Loot”攻击——攻击者构造带有巨大OP_RETURN数据的低价值交易(符合共识规则,多笔交易可达数MB数据),疯狂填充各节点的内存池。github.com这样会占满节点内存和带宽,延迟正常交易的传播和确认,并推高手续费竞争。github.com虽然节点有
maxmempool
大小限制和最低中继费率等机制,但这些机制基于常规交易行为调校,面对异常海量的数据交易可能捉襟见肘github.com。其次,“RBF替换循环”攻击——攻击者可以利用无需额外费用的RBF替换,不断发布和替换包含大OP_RETURN的数据交易,在内存池中反复占据空间却不被确认,从而扰乱手续费市场和内存池秩序github.com。反对者认为,移除大小上限将使上述攻击更廉价、更容易实施github.com。他们主张即便要放宽,也应设定一个“高但合理”的上限(例如100KB),或在内存池压力大时动态调整限制,以保护较小资源节点的运行github.com。 -
用户丧失自定义策略的权利:一些开发者反对彻底删掉
-datacarrier
和-datacarriersize
选项。他们认为即使大势所趋是接受更多数据,也应保留用户自主选择的空间。正如开发者 BitcoinMechanic 所言:“矿工接受大数据交易不代表用户就不能选择自己的内存池装些什么”github.com。目前用户可以通过配置将-datacarrier
设为0(不中继OP_RETURN交易)或者调低-datacarriersize
来严格限制自己节点的策略。直接去除这些选项,会让那些出于各种考虑(如运营受限资源节点、防范垃圾数据)的用户失去控制权。从这个角度看,反对者认为限制应该由用户 opt-in 地解除,而不是一刀切放开。开发者 Retropex 也表示:“如果矿工想要更大的数据载体交易,他们完全可以自行调整这些设置…没有理由剥夺矿工和节点运营者做选择的权利”github.com。 -
此改动非必要且不符合部分用户利益:有反对意见认为当前83字节其实已经能覆盖绝大多数合理应用需求,更大的数据上链并非比特币设计初衷。他们担心放开限制会鼓励把比特币区块链当作任意数据存储层,偏离“点对点电子现金”主线,可能带来长期的链膨胀问题。这一阵营有人将此争议上升为理念之争:是坚持比特币作为金融交易为主,还是开放成为通用数据区块链?有评论形容这场拉锯“有点类似2017年的扩容之争”,虽然本质不同(一个是共识层区块大小辩论,一个是策略层数据使用辩论),但双方观点分歧同样明显99bitcoins.com99bitcoins.com。一些反对者(如Luke Dashjr等)长期主张减少非必要的数据上链,此次更是明确 Concept NACK。Luke-Jr 认为,其实完全可以通过引入地址格式变化等办法来识别并限制存数据的交易,而不需要动用共识层改动github.com(虽然他也承认这会非常激进和不现实,但以此反驳“除了改共识无计可施”的观点)。总之,反对者倾向于维持现状:代码里已有的限制无需移除,至少不应在无压倒性共识下贸然改变github.com。
-
社区共识不足:许多开发者在GitHub上给出了“Concept NACK”(概念上不支持)的评价。一位参与者感叹:“又来?两年前讨论过的理由现在依然适用”github.com。在PR的Review日志中,可以看到反对此提案的活跃贡献者数量明显多于支持者github.com。例如,反对阵营包括 Luke-Jr、BitcoinMechanic、CryptoGuida、1ma 等众多开发者和社区成员,而支持此提案的核心开发者相对少一些(包括Jameson Lopp、Sjöors、Sergio Demian Lerner等)github.com。这种意见分裂显示出社区对取消OP_RETURN限制尚未达成广泛共识。一些反对者还担忧这么大的改动可能引发社区矛盾,甚至有人夸张地提到可能出现新的链分叉风险99bitcoins.com99bitcoins.com(虽然实际上由于不涉及共识,硬分叉风险很小,但社区内部分歧确实存在)。
综上,支持者聚焦于提高链上效率、顺应实际需求和减轻UTXO负担,认为解除限制利大于弊;而反对者强调网络稳健、安全和用户自主,担心轻易放开会招致滥用和攻击。双方在GitHub上的讨论异常热烈,很多评论获得了数十个👍或👎表态,可见整个社区对此议题的关注度之高github.comgithub.com。
PR当前状态及后续展望
截至目前(2025年5月初),PR #32359 仍处于开放讨论阶段,并未被合并。鉴于该提案在概念上收到了众多 NACK,缺乏开发者间的明确共识,短期内合并的可能性不大。GitHub 上的自动统计显示,给予“Concept NACK”的评审者数量显著超过“Concept ACK”的数量github.com。这表明在Bitcoin Core维护者看来,社区对是否采纳此改动存在明显分歧。按照 Bitcoin Core 一贯的谨慎作风,当一个提案存在较大争议时,通常会被搁置或要求进一步修改、讨论,而不会仓促合并。
目前,该PR正等待进一步的评审和讨论。有开发者提出了替代方案或折中思路。例如,Bitcoin Core维护者 instagibbs 提交了相关的 PR #32406,提议仅取消默认的OP_RETURN大小上限(等效于将
-datacarriersize
默认提高到极大),但保留配置选项,从而在不牺牲用户选择权的情况下实现功能开放github.com。这表明部分反对者并非完全拒绝放宽限制,而是希望以更温和的方式推进。PR #32359 与这些提案互相冲突,需要协调出统一的方案github.com。另外,也有开发者建议在测试网上模拟大OP_RETURN交易的攻击场景,以评估风险、说服怀疑者github.com。审议状态总结:综合来看,PR #32359 尚未接近合并,更谈不上被正式接受进入下一个Bitcoin Core版本。它既没有被关闭(拒绝),也没有快速进入最终review/merge阶段,而是停留在激烈讨论中。目前Bitcoin Core的维护者并未给出明确的合并时间表,反而是在鼓励社区充分讨论其利弊。未来的走向可能有几种:要么提案经过修改(例如保留配置项、增加安全机制等)逐渐赢得共识后合并,要么维持搁置等待更明确的社区信号。此外,不排除开发者转而采用渐进路线——例如先在测试网络取消限制试验,或先提高上限值而非彻底移除,以观察效果。也有可能此提案最终会因共识不足而长期悬而不决。
总之,OP_RETURN字节限制之争体现了比特币开发中策略层决策的审慎和平衡:需要在创新开放与稳健保守之间找到折衷。PR #32359 所引发的讨论仍在持续,它的意义在于促使社区重新审视链上数据存储的策略取舍。无论最终结果如何,这一讨论本身对比特币的发展具有积极意义,因为它让开发者和社区更加清晰地权衡了比特币作为数据载体和价值载体的定位。我们将持续关注该提案的进展,以及围绕它所展开的进一步测试和论证。github.comgroups.google.com
引用来源:
-
Bitcoin Core PR #32359 提案内容github.comgithub.com及开发者讨论(Peter Todd评论github.comgithub.com等)
-
Bitcoin Dev 邮件列表讨论帖:《Relax OP_RETURN standardness restrictions》groups.google.comgroups.google.com
-
GitHub 开发者评论摘录:支持意见(Jameson Loppgithub.com等)与反对意见(BitcoinMechanicgithub.com、BrazyDevelopmentgithub.com等)
-
Bitcoin Core PR 评论自动统计(Concept ACK/NACK 汇总)github.com
-
-
@ 3f770d65:7a745b24
2024-10-29 17:38:20Amber
Amber is a Nostr event signer for Android that allows users to securely segregate their private key (nsec) within a single, dedicated application. Designed to function as a NIP-46 signing device, Amber ensures your smartphone can sign events without needing external servers or additional hardware, keeping your private key exposure to an absolute minimum. This approach aligns with the security rationale of NIP-46, which states that each additional system handling private keys increases potential vulnerability. With Amber, no longer do users need to enter their private key into various Nostr applications.
Amber is supported by a growing list of apps, including Amethyst, 0xChat, Voyage, Fountain, and Pokey, as well as any web application that supports NIP-46 NSEC bunkers, such as Nostr Nests, Coracle, Nostrudel, and more. With expanding support, Amber provides an easy solution for secure Nostr key management across numerous platforms.
Amber supports both native and web-based Nostr applications, aiming to eliminate the need for browser extensions or web servers. Key features include offline signing, multiple account support, NIP-46 compatibility, and a simple UI for granular permissions management. Amber is designed to support signing events in the background, enhancing flexibility when you select the "remember my choice" option and eliminating the need to constantly sign events for applications that you trust. You can download the app from its GitHub page, via Obtainium or Zap.store.
To log in with Amber, simply tap the "Login with Amber" button or icon in a supported application, or you can paste the NSEC bunker connection string directly into the login box. For example, use a connection string like this: bunker://npub1tj2dmc4udvgafxxxxxxxrtgne8j8l6rgrnaykzc8sys9mzfcz@relay.nsecbunker.com.
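As a rough illustration (not Amber's actual code), a client that accepts such a connection string only has to split it into the signer's npub and the relay host before opening the NIP-46 session. The sketch below assumes the npub@host form shown above; other NIP-46 clients may use different formats, such as bunker://<pubkey>?relay=...

```python
# Hedged sketch: parsing the nsecbunker-style connection string shown above.
def parse_bunker_string(uri: str) -> tuple[str, str]:
    if not uri.startswith("bunker://"):
        raise ValueError("not a bunker connection string")
    body = uri[len("bunker://"):]
    npub, _, relay_host = body.partition("@")
    if not npub or not relay_host:
        raise ValueError("expected bunker://<npub>@<relay-host>")
    return npub, relay_host

# Example with the (partially elided) string from the paragraph above.
npub, relay = parse_bunker_string(
    "bunker://npub1tj2dmc4udvgafxxxxxxxrtgne8j8l6rgrnaykzc8sys9mzfcz@relay.nsecbunker.com"
)
print(npub, relay)
```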
Citrine
Citrine is a Nostr relay built specifically for Android, allowing Nostr clients on Android devices to seamlessly send and receive events through a relay running directly on their smartphone. This mobile relay setup offers Nostr users enhanced flexibility, enabling them to manage, share, and back up all their Nostr data locally on their device. Citrine’s design supports independence and data security by keeping data accessible and under user control.
With features tailored to give users greater command over their data, Citrine allows easy export and import of the database, restoration of contact lists in case of client malfunctions, and detailed relay management options like port configuration, custom icons, user management, and on-demand relay start/stop. Users can even activate TOR access, letting others connect securely to their Nostr relay directly on their phone. Future updates will include automatic broadcasting when the device reconnects to the internet, along with content resolver support to expand its functionality.
Once you have your Citrine relay fully configured, simply add it to the Private and Local relay sections in Amethyst's relay configuration.
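To illustrate what a relay running directly on your device means in practice, here is a small sketch of a script querying a local relay with a standard NIP-01 REQ. The address and port are placeholders I chose for the example, so check Citrine's settings for the address it actually listens on.

```python
# Hedged sketch: query the ten most recent kind-1 notes from a local relay via NIP-01.
import asyncio
import json
import websockets  # pip install websockets

async def fetch_latest_notes(relay_url: str = "ws://127.0.0.1:4869"):  # placeholder address
    async with websockets.connect(relay_url) as ws:
        # REQ opens a subscription with a filter; the relay streams EVENT messages back.
        await ws.send(json.dumps(["REQ", "sub1", {"kinds": [1], "limit": 10}]))
        while True:
            msg = json.loads(await ws.recv())
            if msg[0] == "EVENT":
                print(msg[2]["content"])
            elif msg[0] == "EOSE":  # relay has sent all stored events for this filter
                break

asyncio.run(fetch_latest_notes())
```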
Pokey
Pokey for Android is a brand new, real-time notification tool for Nostr. Pokey allows users to receive live updates for their Nostr events and enables other apps to access and interact with them. Designed for seamless integration with a user's Nostr relays, Pokey lets users stay informed of activity as it happens, with speed and the flexibility to manage which events trigger notifications on their mobile device.
Pokey currently supports connections with Amber, offering granular notification settings so users can tailor alerts to their preferences. Planned features include broadcasting events to other apps, authenticating to relays, built-in Tor support, multi-account handling, and InBox relay management. These upcoming additions aim to make Pokey a fantastic tool for Nostr notifications across the ecosystem.
Zap.store
Zap.store is a permissionless app store powered by Nostr and your trusted social graph. Built to offer a decentralized approach to app recommendations, zap.store enables you to check if friends like Alice follow, endorse, or verify an app’s SHA256 hash. This trust-based, social proof model brings app discovery closer to real-world recommendations from friends and family, bypassing centralized app curation. Unlike conventional app stores and other third party app store solutions like Obtainium, zap.store empowers users to see which apps their contacts actively interact with, providing a higher level of confidence and transparency.
Currently available on Android, zap.store aims to expand to desktop, PWAs, and other platforms soon. You can get started by installing Zap.store on your favorite Android device, and install all of the applications mentioned above.
Android's openness goes hand in hand with Nostr's openness. Enjoy exploring both expanding ecosystems.
-
@ c9badfea:610f861a
2025-05-06 00:36:40- Install Image Toolbox (it's free and open source)
- Open the app, then go to the Tools tab
- Select Checksum Tools
- Navigate to the Compare tab
- Choose the SHA-256 algorithm
- Pick the file to verify
- Enter the expected hash into the Checksum To Compare field
- A "Match!" message confirms successful verification
-
@ c1e9ab3a:9cb56b43
2025-05-05 14:25:28Introduction: The Power of Fiction and the Shaping of Collective Morality
Stories define the moral landscape of a civilization. From the earliest mythologies to the modern spectacle of global cinema, the tales a society tells its youth shape the parameters of acceptable behavior, the cost of transgression, and the meaning of justice, power, and redemption. Among the most globally influential narratives of the past half-century is the Star Wars saga, a sprawling science fiction mythology that has transcended genre to become a cultural religion for many. Central to this mythos is the arc of Anakin Skywalker, the fallen Jedi Knight who becomes Darth Vader. In Star Wars: Episode III – Revenge of the Sith, Anakin commits what is arguably the most morally abhorrent act depicted in mainstream popular cinema: the mass murder of children. And yet, by the end of the saga, he is redeemed.
This chapter introduces the uninitiated to the events surrounding this narrative turn and explores the deep structural and ethical concerns it raises. We argue that the cultural treatment of Darth Vader as an anti-hero, even a role model, reveals a deep perversion in the collective moral grammar of the modern West. In doing so, we consider the implications this mythology may have on young adults navigating identity, masculinity, and agency in a world increasingly shaped by spectacle and symbolic narrative.
Part I: The Scene and Its Context
In Revenge of the Sith (2005), the third episode of the Star Wars prequel trilogy, the protagonist Anakin Skywalker succumbs to fear, ambition, and manipulation. Convinced that the Jedi Council is plotting against the Republic and desperate to save his pregnant wife from a vision of death, Anakin pledges allegiance to Chancellor Palpatine, secretly the Sith Lord Darth Sidious. Upon doing so, he is given a new name—Darth Vader—and tasked with a critical mission: to eliminate all Jedi in the temple, including its youngest members.
In one of the most harrowing scenes in the film, Anakin enters the Jedi Temple. A group of young children, known as "younglings," emerge from hiding and plead for help. One steps forward, calling him "Master Skywalker," and asks what they are to do. Anakin responds by igniting his lightsaber. The screen cuts away, but the implication is unambiguous. Later, it is confirmed through dialogue and visual allusion that he slaughtered them all.
There is no ambiguity in the storytelling. The man who will become the galaxy’s most feared enforcer begins his descent by murdering defenseless children.
Part II: A New Kind of Evil in Youth-Oriented Media
For decades, cinema avoided certain taboos. Even films depicting war, genocide, or psychological horror rarely crossed the line into showing children as victims of deliberate violence by the protagonist. When children were harmed, it was by monstrous antagonists, supernatural forces, or offscreen implications. The killing of children was culturally reserved for historical atrocities and horror tales.
In Revenge of the Sith, this boundary was broken. While the film does not show the violence explicitly, the implication is so clear and so central to the character arc that its omission from visual depiction does not blunt the narrative weight. What makes this scene especially jarring is the tonal dissonance between the gravity of the act and the broader cultural treatment of Star Wars as a family-friendly saga. The juxtaposition of child-targeted marketing with a central plot involving child murder is not accidental—it reflects a deeper narrative and commercial structure.
This scene was not a deviation from the arc. It was the intended turning point.
Part III: Masculinity, Militarism, and the Appeal of the Anti-Hero
Darth Vader has long been idolized as a masculine icon. His towering presence, emotionless control, and mechanical voice exude power and discipline. Military institutions have quoted him. He is celebrated in memes, posters, and merchandise. Within the cultural imagination, he embodies dominance, command, and strategic ruthlessness.
For many young men, particularly those struggling with identity, agency, and perceived weakness, Vader becomes more than a character. He becomes an archetype: the man who reclaims power by embracing discipline, forsaking emotion, and exacting vengeance against those who betrayed him. The emotional pain that leads to his fall mirrors the experiences of isolation and perceived emasculation that many young men internalize in a fractured society.
The symbolism becomes dangerous. Anakin's descent into mass murder is portrayed not as the outcome of unchecked cruelty, but as a tragic mistake rooted in love and desperation. The implication is that under enough pressure, even the most horrific act can be framed as a step toward a noble end.
Part IV: Redemption as Narrative Alchemy
By the end of the original trilogy (Return of the Jedi, 1983), Darth Vader kills the Emperor to save his son Luke and dies shortly thereafter. Luke mourns him, honors him, and burns his body in reverence. In the final scene, Vader's ghost appears alongside Obi-Wan Kenobi and Yoda—the very men who once considered him the greatest betrayal of their order. He is welcomed back.
There is no reckoning. No mention of the younglings. No memorial to the dead. No consequence beyond his own internal torment.
This model of redemption is not uncommon in Western storytelling. In Christian doctrine, the concept of grace allows for any sin to be forgiven if the sinner repents sincerely. But in the context of secular mass culture, such redemption without justice becomes deeply troubling. The cultural message is clear: even the worst crimes can be erased if one makes a grand enough gesture at the end. It is the erasure of moral debt by narrative fiat.
The implication is not only that evil can be undone by good, but that power and legacy matter more than the victims. Vader is not just forgiven—he is exalted.
Part V: Real-World Reflections and Dangerous Scripts
In recent decades, the rise of mass violence in schools and public places has revealed a disturbing pattern: young men who feel alienated, betrayed, or powerless adopt mythic narratives of vengeance and transformation. They often see themselves as tragic figures forced into violence by a cruel world. Some explicitly reference pop culture, quoting films, invoking fictional characters, or modeling their identities after cinematic anti-heroes.
It would be reductive to claim Star Wars causes such events. But it is equally naive to believe that such narratives play no role in shaping the symbolic frameworks through which vulnerable individuals understand their lives. The story of Anakin Skywalker offers a dangerous script:
- You are betrayed.
- You suffer.
- You kill.
- You become powerful.
- You are redeemed.
When combined with militarized masculinity, institutional failure, and cultural nihilism, this script can validate the darkest impulses. It becomes a myth of sacrificial violence, with the perpetrator as misunderstood hero.
Part VI: Cultural Responsibility and Narrative Ethics
The problem is not that Star Wars tells a tragic story. Tragedy is essential to moral understanding. The problem is how the culture treats that story. Darth Vader is not treated as a warning, a cautionary tale, or a fallen angel. He is merchandised, celebrated, and decontextualized.
By separating his image from his actions, society rebrands him as a figure of cool dominance rather than ethical failure. The younglings are forgotten. The victims vanish. Only the redemption remains. The merchandise continues to sell.
Cultural institutions bear responsibility for how such narratives are presented and consumed. Filmmakers may intend nuance, but marketing departments, military institutions, and fan cultures often reduce that nuance to symbol and slogan.
Conclusion: Reckoning with the Stories We Tell
The story of Anakin Skywalker is not morally neutral. It is a tale of systemic failure, emotional collapse, and unchecked violence. When presented in full, it can serve as a powerful warning. But when reduced to aesthetic dominance and easy redemption, it becomes a tool of moral decay.
The glorification of Darth Vader as a cultural icon—divorced from the horrific acts that define his transformation—is not just misguided. It is dangerous. It trains a generation to believe that power erases guilt, that violence is a path to recognition, and that final acts of loyalty can overwrite the deliberate murder of the innocent.
To the uninitiated, Star Wars may seem like harmless fantasy. But its deepest myth—the redemption of the child-killer through familial love and posthumous honor—deserves scrutiny. Not because fiction causes violence, but because fiction defines the possibilities of how we understand evil, forgiveness, and what it means to be a hero.
We must ask: What kind of redemption erases the cries of murdered children? And what kind of culture finds peace in that forgetting?
-
@ 1739d937:3e3136ef
2024-10-29 16:57:08This update marks a major milestone for the project. I know, with certainty, that MLS messaging over Nostr is going to work. That might sound a little crazy after so many months working on the project, and I was pretty confident, but until you’ve got running code, it’s all conjecture.
Late last week, I released a video of a working prototype of White Noise that shows the full flow; creating groups, inviting other users to join those groups, accepting invites, and sending messages back-and-forth. I’m thrilled that I’ve gotten this far but also appalled that it’s taken so long and disgusted at the state of the code in the app (I’ve been told I have unrelenting standards 😅).
If you missed the video last week...
nostr:note125cuk0zetc7sshw52v5zaq9apq3rq7e2x587tr2c96t7z7sjs59svwv0fj
What's Next?
In this update, I want to cover a few things about how I'm planning to proceed and how I’m splitting code out of the app into libraries that will help other developers implement MLS messaging in their own Nostr clients.
First off, many of you know that I've been building White Noise as a Rust app using the Tauri framework. The OpenMLS implementation is also written in Rust (with bindings for many other languages). So, when you hear me talking about library code, think Rust crates for now.
The first library, called openmls-nostr, is an extension/abstraction on top of the openmls implementation of the MLS spec that helps Nostr clients interact more easily with that implementation in a way that feels native to Nostr. Mostly this will be helping developers interact with MLS primitives and ensure that they’re creating, validating, and serializing these objects in the right way at the right times.
The second isn’t a new library but rather a big contribution to the already excellent rust-nostr library from nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet. The methods that will go into rust-nostr are highly abstracted and based specifically on the requirements of NIP-104. Mostly this will be helping developers take those MLS primitives and publish or query them as Nostr events at the right times and to/from the right relays.
Most of this code was originally written directly in the White Noise library so this week I've started to pull code for both of those libraries out and move it to its new home. While I’ve been at it, I've been writing some tests and trying to document things.
An unfortunate offshoot of this is that the usable builds of White Noise are going to take a touch longer. I promise it’s still a very high priority but at this point I need to clean a few things up based on what I've learned thus far.
Another thing that is slowing down release is that; behind the scenes of the dev work, I’ve been battling with Apple for nearly 2 months now to get a proper developer team set up so that we can publish the app via TestFlight for MacOS and iOS. I’ve also been recently learning the intricacies of Android publishing (oh my dear god there are so many devices, OS versions, etc.).
With that in mind, if you know anyone who can help get me up to speed on CI/CD, release pipelines, and multi-platform distribution please hit me up. I would love to learn more and hopefully shortcut some of the pain.
Thanks again so much for all the support over the last few months! It means a lot to me and is a huge part of what is keeping me going on this. 🙏
-
@ a19caaa8:88985eaf
2025-05-06 00:14:49 Will this end up in the timeline if I update it? I'd rather it didn't. (test)
-
@ 3bf0c63f:aefa459d
2024-10-26 14:18:23
kind:1 maximalism and the future of other stuff and Nostr decentralization
These two problems exist on Nostr today, and they look unrelated at first:
- People adding more stuff to kind:1 notes, such as making them editable, or adding special corky syntax that has to be parsed and rendered in complicated UIs;
Point 2 above has 3 different solutions:
- a. Just publish everything as
kind:1
notes; - b. Publish different things as different kinds, but make microblogging clients fetch all the event kinds from people you follow, then render them natively or use NIP-31, or NIP-89 to point users to other clients that would render them better;
- c. Publish different things as different kinds, and reference them in
kind:1
notes that would act as announcements to these other events, also relying on NIP-31 and NIP-89 for displaying references and recommending other clients.
Solution a is obviously very bad, so I won't address it.
For a while I have believed solution b was the correct one, and many others seem to tacitly agree with it, given that some clients have been fetching more and more event kinds and going out of their way to render them in the same feed where only
kind:1
notes were originally expected to be.I don't think clients doing that is necessarily bad, but I do think this have some centralizing effects on the protocol, as it pushes clients to become bigger and bigger, raising the barrier to entry into the
kind:1
realm. And also in the past I have talked about the fact that I disliked that some clients would display my long-form articles as if they were normalkind:1
notes and just dump them into the feeds of whoever was following me: nostr:nevent1qqsdk90k9k30vtzwpj6grxys9mvsegu5kkwd4jmpyhlmtjnxet2rvggprpmhxue69uhhyetvv9ujumn0wdmksetjv5hxxmmdqy8hwumn8ghj7mn0wd68ytnddaksygpm7rrrljungc6q0tuh5hj7ue863q73qlheu4vywtzwhx42a7j9n5hae35cThese and other reasons have made me switch my preference to solution c, as it gives the most flexibility to the publisher: whoever wants to announce stuff so it can be discovered can, whoever doesn't don't have to. And it allows microblogging clients the freedom to render just render tweets and having a straightforward barrier between what they can render and what is just a link to an external app or webapp (of course they can always opt to render the referenced content in-app if they want).
It also makes the case for microapps more evident. If all microblogging clients become superapps that can render recipe events perfectly why would anyone want to use a dedicated recipes app? I guess there are still reasons, but blurring the line between content kinds in superapps would definitely remove some of the reasons and eventually kill all the microapps.
That brings us back to point 1 above (the overcomplication of
kind:1
events): if solution c is what we're going to, that makeskind:1
events very special in Nostr, and not just another kind among others. Microblogging clients become the central plaza of Nostr, thus protecting their neutrality and decentralization much more important. Having a lot of clients with different userbases, doing things in slightly different ways, is essential for that decentralization.It's ok if Nostr ends up having just 2 recipe-sharing clients, but it must have dozens of microblogging clients -- and maybe not even full-blown microblogging clients, but other apps that somehow deal with
kind:1
events in multiple ways. It's ok if implementing a client for public audio-rooms is very hard and complicated, but at the same time it should be very simple to write a client that can render akind:1
note referencing an audio-room and linking to that dedicated client.I hope you got my point and agreed because this article is ended.
- People adding more stuff to
-
@ c9badfea:610f861a
2025-05-05 23:29:08- Install RTranslator (it's free and open source)
- Launch the app, allow notifications, and enter a random name (used only for Walkie-Talkie/Conversations)
- Wait for the translation AI models to download
- Enjoy offline translations
ℹ️ Internet is only needed for the initial download
ℹ️ The app uses the system's TTS engine (consider open source TTS engines like SherpaTTS for de-googled phones)
-
@ ec42c765:328c0600
2024-10-21 07:42:482024年3月
フィリピンのセブ島へ旅行。初海外。
Nostrに投稿したらこんなリプライが
nostr:nevent1qqsff87kdxh6szf9pe3egtruwfz2uw09rzwr6zwpe7nxwtngmagrhhqc2qwq5
nostr:nevent1qqs9c8fcsw0mcrfuwuzceeq9jqg4exuncvhas5lhrvzpedeqhh30qkcstfluj
(ビットコイン関係なく普通の旅行のつもりで行ってた。というか常にビットコインのこと考えてるわけではないんだけど…)
そういえばフィリピンでビットコイン決済できるお店って多いのかな?
海外でビットコイン決済ってなんかかっこいいな!
やりたい!
ビットコイン決済してみよう! in セブ島
BTCMap でビットコイン決済できるところを探す
本場はビットコインアイランドと言われてるボラカイ島みたいだけど
セブにもそれなりにあった!
なんでもいいからビットコイン決済したいだけなので近くて買いやすい店へ
いざタピオカミルクティー屋!
ちゃんとビットコインのステッカーが貼ってある!
つたない英語とGoogle翻訳を使ってビットコイン決済できるか店員に聞いたら
店員「ビットコインで支払いはできません」
(えーーーー、なんで…ステッカー貼ってあるやん…。)
まぁなんか知らんけどできないらしい。
店員に色々質問したかったけど質問する英語力もないのでする気が起きなかった
結局、せっかく店まで足を運んだので普通に現金でタピオカミルクティーを買った
タピオカミルクティー
話題になってた時も特に興味なくて飲んでなかったので、これが初タピオカミルクティーになった
法定通貨の味がした。
どこでもいいからなんでもいいから
海外でビットコイン決済してみたい
ビットコイン決済させてくれ! in ボラカイ島
ビットコインアイランドと呼ばれるボラカイ島はめちゃくちゃビットコイン決済できるとこが多いらしい
でもやめてしまった店も多いらしい
でも300もあったならいくつかはできるとこあるやろ!
nostr:nevent1qqsw0n6utldy6y970wcmc6tymk20fdjxt6055890nh8sfjzt64989cslrvd9l
行くしかねぇ!
ビットコインアイランドへ
フィリピンの国内線だぁ
``` 行き方: Mactan-Cebu International Airport ↓飛行機 Godofredo P. Ramos Airport (Caticlan International Airport, Boracay Airport) ↓バスなど Caticlan フェリーターミナル ↓船 ボラカイ島
料金: 飛行機(受託手荷物付き) 往復 21,000円くらい 空港~ボラカイ島のホテルまで(バス、船、諸経費) 往復 3,300円くらい (klookからSouthwest Toursを利用)
このページが色々詳しい https://smaryu.com/column/d/91761/ ```
空港おりたらSouthwestのバスに乗る
事前にネットで申し込みをしている場合は5番窓口へ
港!
船!(めっちゃ速い)
ボラカイついた!
ボラカイ島の移動手段
セブの移動はgrabタクシーが使えるがボラカイにはない。
ネットで検索するとトライシクルという三輪タクシーがおすすめされている。
(トライシクル:開放的で風がきもちいい)
トライシクルの欠点はふっかけられるので値切り交渉をしないといけないところ。
最初に300phpくらいを提示され、行き先によるけど150phpくらいまでは下げられる。
これはこれで楽しい値切り交渉だけど、個人的にはトライシクルよりバスの方が気楽。
Hop On Hop Off バス:
https://www.hohoboracay.com/pass.php
一日乗り放題250phpなので往復や途中でどこか立ち寄ったりを考えるとお得。
バスは現金が使えないので事前にどこかでカードを買うか車内で買う。
私は何も知らずに乗って車内で乗務員さんから現金でカードを買った。
バスは狭い島内を数本がグルグル巡回してるので20~30分に1本くらいは来るイメージ。
逆にトライシクルは待たなくても捕まえればすぐに乗れるところがいいところかもしれない。
現実
ボラカイ島 BTC Map
BTC決済できるとこめっちゃある
さっそく店に行く!
「bitcoin accepted here」のステッカーを見つける!
店員にビットコイン支払いできるか聞く!
できないと言われる!
もう一軒行く
「bitcoin accepted here」のステッカーを見つける
店員にビットコイン支払いできるか聞く
できないと言われる
5件くらいは回った
全部できない!
悲しい
で、ネットでビットコインアイランドで検索してみると
旅行日の一か月前くらいにアップロードされた動画があったので見てみた
要約 - ビットコイン決済はpouch.phというスタートアップ企業がボラカイ島の店にシステムを導入した - ビットコインアイランドとすることで観光客が10%~30%増加つまり数百~千人程度のビットコインユーザーが来ると考えた - しかし実際には3~5人だった - 結果的に200の店舗がビットコイン決済を導入しても使われたのはごく一部だった - ビットコイン決済があまり使われないので店員がやり方を忘れてしまった - 店は関心を失いpouchのアプリを消した
https://youtu.be/uaqx6794ipc?si=Afq58BowY1ZrkwaQ
なるほどね~
しゃあないわ
聖地巡礼
動画内でpouchのオフィスだったところが紹介されていた
これは半年以上前の画像らしい
現在はオフィスが閉鎖されビットコインの看板は色あせている
おもしろいからここに行ってみよう!となった
で行ってみた
看板の色、更に薄くなってね!?
記念撮影
これはこれで楽しかった
場所はこの辺
https://maps.app.goo.gl/WhpEV35xjmUw367A8
ボラカイ島の中心部の結構いいとこ
みんな~ビットコイン(の残骸)の聖地巡礼、行こうぜ!
最後の店
Nattoさんから情報が
なんかあんまりネットでも今年になってからの情報はないような…https://t.co/hiO2R28sfO
— Natto (@madeofsoya) March 22, 2024
ここは比較的最近…?https://t.co/CHLGZuUz04もうこれで最後だと思ってダメもとで行ってみた なんだろうアジア料理屋さん?
もはや信頼度0の「bitcoin accepted here」
ビットコイン払いできますか?
店員「できますよ」
え?ほんとに?ビットコイン払いできる?
店員「できます」
できる!!!!
なんかできるらしい。
適当に商品を注文して
印刷されたQRコードを出されたので読み取る
ここでスマートに決済できればよかったのだが結構慌てた
自分は英語がわからないし相手はビットコインがわからない
それにビットコイン決済は日本で1回したことがあるだけだった
どうもライトニングアドレスのようだ
送金額はこちらで指定しないといけない
店員はフィリピンペソ建ての金額しか教えてくれない
何sats送ればいいのか分からない
ここでめっちゃ混乱した
でもウォレットの設定変えればいいと気付いた
普段円建てにしているのをフィリピンペソ建てに変更すればいいだけだった
設定を変更したら相手が提示している金額を入力して送金
送金は2、3秒で完了した
やった!
海外でビットコイン決済したぞ!
ログ
PORK CHAR SIU BUN とかいうやつを買った
普通にめっちゃおいしかった
なんかビットコイン決済できることにビビッて焦って一品しか注文しなかったけどもっと頼めばよかった
ここです。みなさん行ってください。
Bunbun Boracay
https://maps.app.goo.gl/DX8UWM8Y6sEtzYyK6
めでたしめでたし
以下、普通の観光写真
セブ島
ジンベエザメと泳いだ
スミロン島でシュノーケリング
市場の路地裏のちょっとしたダウンタウン?スラム?をビビりながら歩いた
ボホール島
なんか変な山
メガネザル
現地の子供が飛び込みを披露してくれた
ボラカイ島
ビーチ
夕日
藻
ボラカイ島にはいくつかビーチがあって宿が多いところに近い南西のビーチ、ホワイトビーチは藻が多かった(時期によるかも)
北側のプカシェルビーチは全然藻もなく、水も綺麗でめちゃくちゃよかった
プカシェルビーチ
おわり!
-
@ 41959693:3888319c
2025-05-05 10:58:49Die Schnelllebigkeit der Moderne tilgt in unserer Wahrnehmung die zeitlichen Abstände von Ereignissen. Es bleibt kaum Gelegenheit ein Thema im Rückspiegel zu erfassen, zu durchdenken, zu rezipieren – schon buhlt der nächste Augenblick um Aufmerksamkeit.
Mögen an die Leipziger Buchmesse Ende März mittlerweile nur noch Visitenkarten und Selfies oder manch signiertes Exemplar im Buchregal erinnern; es lohnt sich doch, die größte deutsche Besuchermesse der Buchbranche nun rückblickend zu betrachten. Gerade da ihr Event-Charakter von Jahr zu Jahr zunimmt, ist die Frage reizvoll, welche beständigen Themen dort wie präsentiert wurden, in diesem Fall natürlich „Frieden“.
An allen vier Messetagen hielt ich die Augen auf, wo sich Autoren, Verlage und allgemeine die Institutionen der Branche dazu äußerten. Vorgreifend muss gesagt werden, dass Krieg und Frieden heute nicht mehr zwingend primäre Themen sind, sondern oft begleitend mitbehandelt werden. Für Zerstörung und Elend scheint dabei immer die große Bühne bereitet zu werden; einvernehmliche Koexistenz wird gewöhnlich mit leiser Klaviatur gespielt. Dennoch fand ich vier Bücher, welche direkt das Thema ansprachen:
- The Norwegian sociologist and writer Linn Stalsberg presented her book "War is contempt for life. An essay on peace" (ISBN: 978-8282262736, Res Publica, 2024), in which she turns to all the people who stand up for pacifism, non-violence, and anti-militarism. In her view we have countless accounts of war heroes, yet hardly any stories of peace heroes. The German edition is due out from the Swiss publisher Kommode-Verlag in September of this year.
- "Die Evolution der Gewalt. Warum wir Frieden wollen, aber Kriege führen" ("The Evolution of Violence. Why We Want Peace but Wage Wars"), written by Kai Michel, Harald Meller, and Carel van Schaik, was nominated for the Leipzig Book Fair Prize in the non-fiction/essay category. Among other things, the authors argue that war is not an essential social process, as some catchphrases and ideological pronouncements would suggest, and that the successes of the warmongers account for only 1% of world history (ISBN: 978-3423284387, dtv, 2024).
- With "Pazifismus – ein Irrweg?" ("Pacifism – a Wrong Turn?", ISBN: 978-3170434325, Kohlhammer, 2024), the writer and taz editor Pascal Beucker gives an overview of the history of pacifism, looks into the forgotten background of its various motivations and movements, and ventures forecasts about the prospects of non-violent struggle.
- Alongside the non-fiction there was also one representative of fiction: Rüdiger Heins and Michael Landgraf edited an anthology of peace texts, prose and poetry. "365 Tage Frieden" ("365 Days of Peace", ISBN: 978-3930758951, Edition Maya, 2025) is what the title promises, and for it numerous authors dreamed, remembered, admonished us, and keep on hoping, each in a very different way.
Beyond that there were many smaller event formats and publishers that wanted to add their mite but stayed very vague about it, covering, for instance in the Forum Offene Gesellschaft, topics such as open borders, tolerance, and inclusion. How far these themes, pushed rather strongly by the media in the recent past, actually probe and support the central question of peace is probably a matter of interpretation.
Our definition of peace may not be particularly sharp; everyone draws its boundaries differently and will act in any given situation according to their own taste. Still, a shared undertone moves us, a longing leads us in a higher, common direction. That insight is worth preserving through the course of history, through present-day testimony as much as through the thoughts of those who came before us.
I have decided to review these books for you, dear readers, one by one over the coming weeks.
This article was written with the Pareto client.
Not yet on Nostr and want the full experience? Easy onboarding via Start.
The articles can also be found on Telegram at:
-
@ 8947a945:9bfcf626
2024-10-17 08:06:55Hello everyone on Nostr, including my watchers and followers from Deviantart and other art platforms.
Since the start of 2024 I have been using AI to generate pictures of female anime characters and offering exclusive content for people who particularly enjoy my work.
I post all of my work on Deviantart and have been building a follower base there slowly and steadily. Everything has been growing at its own pace, and personally I regard it as one of my online business portfolios.
On 16 September 2024 a follower sent me a private message saying they loved my work and wanted to buy a piece, but only as an NFT, offering a very high price per piece. After that, the buyer and I carried on the conversation over email.
Here is a short summary of the negotiation.
(From here on I will call the buyer a scammer, because the cards are already face up: he is a fraudster.)
- The first scammer picked the piece he wanted and offered a very high price, but insisted the sale go through the one NFT marketplace website he specified, running on ERC20. I went to look at the site and it felt off. Anyone who wants to list work has to sign up with an email address first before they can connect a wallet such as MetaMask, and once a wallet is connected it cannot be changed. At the time I was using a wallet that was not linked to my hardware wallet; I tried switching wallets back and forth and it would not let me, and even after logging out the same wallet address was still stuck there. That was odd thing number one. On top of that, the site's minting fee was 0.15 - 0.2 ETH … converted to baht, that is outrageously expensive.
- The first scammer kept trying to coax and sweet-talk me: come on, he would buy the piece right away, just mint it and tell him the moment it was done and he would rush to purchase it; once it sold I would get the gas fee back plus a profit, so there was nothing to lose, right? Luckily I had no spare funds left to buy ETH at the time, so I countered with these offers:
- I proposed sending him a low-resolution version of the work first, in exchange for him transferring the ETH for the minting fee; once I had the ETH I would upscale the work and email it to him, good faith traded for good faith ... he refused.
- I suggested he buy from my buymeacoffee online shop and pay in USD ... he refused.
- I suggested trading through a PPV lightning invoice, which I can issue as a creator on Creatr ... he refused.
- I told him that in that case he would have to wait until my salary came in. He said ok.
The following week a second scammer contacted me with a very similar approach but a different website, and an even higher offer than the first. This second site was even worse than the first: you must sign up with an email, you cannot connect MetaMask, and after registering you get one empty wallet into which I would first have to deposit ETH to cover the 0.2 ETH NFT minting fee.
I told the second scammer he would have to wait, because I was still in the middle of the deal with the first buyer and was waiting for money to buy ETH as working capital. He asked me to send him the first website so he could have a look, and not long after he warned me that the first site was a scam where you cannot withdraw your money. He even sent screenshots of chats with victims of the first site who could not withdraw, and then went further and trash-talked OpenSea, claiming customers there can sell their work but cannot withdraw the proceeds.
"You cannot withdraw from OpenSea": that was what set my alarm bells ringing loudly, because on OpenSea users connect their wallet directly to the marketplace, trades happen between them, and money flows straight in and out of each person's own wallet. OpenSea only collects a platform fee; it never holds customers' funds. Gas fees this year are also much cheaper than during the 2020 bull run cycle, currently around 0.0001 ETH (still more expensive than BTC, though).
So I took the story to Phi Bit for advice, but an admin talked with me instead. The admin said I was the first person ever to ask about this, but their view matched my hypothesis that it was most likely a scam. At the same time I asked on a Thai NFT community page and got clear confirmation that it was a scam and that quite a few people had been taken in. Once I knew what I was dealing with, I started a little war of nerves with both scammers to see whether they really were fraudsters.
So on 30 September I wound both of them up by minting the very pieces they had offered to buy onto OpenSea, then messaging them:
"I've minted it for you, but sorry, I really didn't have enough money, so I minted it on OpenSea instead. I'm broke, this is as far as I can get, only OpenSea. Hurry up and buy it, plenty of people have their eyes on my work. I'm not even charging a royalty fee, so you can resell it without sharing any profit with me."
That was all it took; the psychological war began, but he was cornered and had to swallow his own words. The best part went like this:
Him: Hey, I've been waiting patiently. I told my teammates we would definitely have the piece on Monday 30 September. They saw your work, it really is beautiful, so they set aside the full 9.3 ETH just for this (+ a screenshot of the balance).
Me: Oh really? ... Then show me the wallet address with that transaction in it.
Him: 2 ETH is 5000 dollars, you know.
Me: So what? Show me the wallet address holding that 9.3 ETH. You said the money was already set aside, so let me see when it went in ... Just the address, mind you; don't get cheeky and send me a seed phrase.
Him: (sends the same 9.3 ETH screenshot again)
Me: A screenshot means nothing; it can be photoshopped in no time. Show me the transaction hash. Weren't you the one who said 9.3 ETH was sitting there waiting and you were trembling to buy my work? Either show me the wallet address, or lend me 0.15 ETH to mint the piece first, then buy it for 2 ETH and I'll pay the 0.15 ETH back. Are you buying or not?
Him: Why do you want his address?
Me: I'm done, this is tiresome, I'm not selling to you anymore.
Him: 2 ETH = 5000 USD, you know.
Me: So what?
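For anyone wondering why I kept demanding a transaction hash rather than a screenshot: a hash can be looked up by anyone on a public node, which makes it unfakeable in a way a picture never is. Here is a minimal sketch of that check, assuming web3.py v6 naming; the RPC URL and hash below are placeholders, not real data.

```python
# Minimal sketch: look up a claimed transaction hash on-chain instead of trusting a screenshot.
# Assumptions: web3.py v6 (get_transaction / from_wei), placeholder RPC URL and hash.
from web3 import Web3

def inspect_tx(rpc_url: str, tx_hash: str) -> None:
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    tx = w3.eth.get_transaction(tx_hash)                # raises if the hash does not exist
    receipt = w3.eth.get_transaction_receipt(tx_hash)   # mined result, including status
    print("from :", tx["from"])
    print("to   :", tx["to"])
    print("value:", Web3.from_wei(tx["value"], "ether"), "ETH")
    print("block:", tx["blockNumber"], "| success:", receipt["status"] == 1)

# Example (placeholders):
# inspect_tx("https://<your-eth-rpc>", "0x<transaction hash the buyer claims to have>")
```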
I am writing this article to warn all my friends here, in case anyone else is building an online digital art business and gets "lucky" enough to meet a gem like this, the way I did.
Why am I confident this is a scam, and what does the scammer gain?
First, look at OpenSea, the NFT marketplace with the highest trading volume. It never holds the buyer's or the seller's money; funds move directly between their wallets and the platform only takes a fee, which is also much cheaper now than it was in 2020. So why would anyone list their work on some other NFT site whose fees are a hundred times higher?
I believe these scammers cheat artists out of their money by playing on greed and inexperience. The moment the artist transfers ETH into the site's wallet, or pays the minting fee, that money goes straight into the scammer's pocket, and the tricks are guaranteed to continue: withdrawals fail, purchases fail, you must send more money to "unlock the smart contract", and so on. These people prey on greed by dangling absurdly high offers ... and you can hardly blame the victims, because in the NFT world some images with no artistic merit at all have sold for 100 - 150 ETH, so an artist trying to make a name might well feel that a buyer offering 2 - 4 ETH per piece is plenty (honestly, it is shockingly high).
In the world of BTC you do not need to trust each other: money can be sent from one person to another and the books settled without trust.
In the world of ETH, "code is law": the smart contract is already written, so go and read it; it is not that hard to understand. Relying instead on a verbal promise from another human makes no sense.
I told these stories to the art community and got a mix of good and bad reactions. Some people insisted that none of this could ever touch them, because they are absolutely determined to keep their art out of the digital currency world, and I respect that view. But wouldn't it be better to keep our eyes and ears open and stay current with the technology, especially digital currency and blockchain? Get cheated once and you can lose everything even more easily than with fiat money.
I wanted to share this, and I would appreciate it if you passed it on to people you know so they can watch out for themselves.
Note
- Both cyber-security illustrations are my own work, made by me and sold on AdobeStock.
- My other account, "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw, is gradually bringing my work from the outside world onto Nostr. I intend to create art there so friends who like my work will not have to go looking anywhere else.
My work:
- Anime girl fanarts: HikariHarmony
- HikariHarmony on Nostr
- General art: KeshikiRakuen
- KeshikiRakuen may become my third Nostr account, if I can manage it.
-
@ 502ab02a:a2860397
2025-05-05 03:39:58The "Meat Free Monday" campaign began in 2009, started by Paul McCartney and his two daughters, with the goal of cutting meat consumption, for the sake of both human health and the health of the planet, at least one day a week, citing health and environmental reasons.
https://youtu.be/ulVFWJqXNg0?si=eMs-CxtPE1kjljLD Sir Paul McCartney produced a short film, "One Day a Week", to promote MFM, highlighting the impact of meat consumption on the environment and human health. The film features appearances by celebrities such as Woody Harrelson and Emma Stone.
MFM is supported by more than 3,000 schools in the United Kingdom, as well as local education authorities such as Edinburgh and Trafford, with information packs produced for schools to encourage pupils to become healthier, more globally responsible citizens. The idea is backed by organizations such as ProVeg International, which promotes plant-based food in schools through its "School Plates" program, offering menu consulting, nutrition advice, and plant-based cooking training, and it enjoys support from local authorities and organizations around the world, such as the city of Ghent in Belgium, which holds an official meat-free Thursday, and São Paulo in Brazil, supported by the Brazilian Vegetarian Society.
There are concerns, however, about these organizations' reach into the education system, especially in K-12 schools where plant-based food is served on Mondays on the grounds of health and environmental benefits. Measures like this can shape how children perceive meat and may lead to long-term changes in eating behavior.
There has also been criticism of the term "Meat Free", which can imply that something is missing; ProVeg UK recommends avoiding it in favor of wording that emphasizes friendliness to the environment.
While promoting plant-based eating has advantages for health and the environment, measures that amount to coercion or intervention in the education system, without complete and impartial information, can shade into indoctrination and change eating behavior without anyone noticing. Promotion of plant-based food should therefore be transparent, fully informed, and respectful of each person's right to choose, especially where children and young people are concerned.
ProVeg UK has proposed renaming "Meat-Free Monday" to "Planet-Friendly Days", arguing that "Meat-Free" suggests a lack of something and can make a menu look unappealing, whereas "Planet-Friendly" emphasizes environmental friendliness and frames plant-based eating more positively.
ProVeg UK also runs the "School Plates" program to help schools make their menus more sustainable, providing menu consulting, nutrition advice, and plant-based cooking training, plus extra activities such as poster design competitions and plant-based cooking classes, to promote learning about the food system and encourage children to try new dishes.
ProVeg International, for its part, has supported national programs such as "National School Meals Week" in 2020, advising schools on menus that reduce meat consumption and recommending wording that highlights a dish's flavor or main ingredients while avoiding terms like "meat-free" or "vegan", which can affect what diners choose. It also promotes plant-based eating through activities such as the "Veggie Challenge", a free online program that teaches plant-based cooking with daily tips and recipes to help participants change their eating habits for good.
All of this shows that ProVeg International works to promote plant-based eating through many channels, above all the education system and school meal programs, in support of long-term change in eating behavior.
The question is: why start with schools? Because children do not yet know how to "listen to their own bodies"; they still believe whatever teachers, parents, or people on TV tell them. Teach a child that "beef destroys the planet" and that "chicken and pork are demons", and that child will grow up seeing the real thing as strange and the fake thing as the hero.
Today it is no eating meat; tomorrow it may be no talking about meat; and one day... they may stop us producing real meat altogether.
Don't get me wrong, I am not against vegetables. I like vegetables that grow by themselves in nature; what I don't like are "vegetables that come with a policy" and "plates dictated by a hidden agenda".
Meat Free Monday may be only one day a week, but if we never ask questions, it could become a whole life designed for us in advance.
Another interesting point is that MFM is a non-profit operating under the umbrella of the Charities Aid Foundation (charity registration number 268369); however, "there is no public information on the amount of funding or the specific funding sources behind the project".
#pirateketo #กูต้องรู้มั๊ย #ม้วนหางสิลูก #siamstr
-
@ 069c4a92:ce7df3ed
2025-05-05 20:57:23The Flash USDT Generator is responsible for producing and storing Flash USDT tokens with a flexible daily limit. It ensures a consistent supply of tokens for transactions while managing distribution securely and efficiently. Our Flash USDT offers a unique, temporary cryptocurrency that lasts for 90 days before expiring. This innovative solution allows for seamless trading and transferring of funds. Notably, the Flash Bitcoin and USDT can be transferred to 12 different wallets, including prominent platforms like Binance and Trust Wallet.
Features
- Adjustable Daily Limit: Customizable settings allow for modifying the token generation cap.
- Automated Distribution: Ensures quick and secure token transfers to user wallets.
- Minimal Latency: Prevents blockchain congestion issues for seamless transfers.
- Real-Time Transaction Verification: Each transaction is immediately verified on the respective blockchain explorer to confirm accuracy and validity.
- Temporary Nature: One of the most distinctive characteristics of USDT FLASH Generator is that it generates flash coin that is designed to disappear from any wallet after a period of ninety days. This unique feature ensures that any assets received in USDT FLASH will not linger indefinitely, creating a dynamic and engaging transaction experience.
Kindly visit globalfashexperts.com for more information
Contact Information
For more information or to make a purchase, please contact us through the following channels:
- Email: globalfashexperts@gmail.com
- Flash USDT:https://globalfashexperts.com/product/flash-usdt/
- Flash USDT: https://globalfashexperts.com/product/flash-usdt-generator-software/