-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Bluesky is a scam
Bluesky advertises itself as an open network; they say people won't lose followers or their identity; they advertise themselves as a protocol ("atproto"); and because of that they are tricking a lot of people into using them. These three claims are false.
protocolness
Bluesky is a company. "atproto" is the protocol. Supposedly they are two different things, right? Bluesky just releases software that implements the protocol, but others can also do that, it's open!
And yet, the protocol has an official webpage with a waitlist and a private beta? Why is the protocol advertised as a company product? Because it is. The "protocol" is just a description of whatever the Bluesky app and servers do, it can and does change anytime the Bluesky developers decide they want to change it, and it will keep changing for as long as Bluesky apps and servers control the biggest part of the network.
Oh, so there is the possibility of other players stepping in and then it becomes an actual interoperable open protocol? Yes, but what is the likelihood of that happening? It is very low. No serious competitor is likely to step in and build serious apps using a protocol that is directly controlled by Bluesky. All we will ever see are small "community" apps made by users and small satellite businesses -- not unlike the people and companies that write plugins, addons and alternative clients for popular third-party centralized platforms.
And last, even if someone does make an app so good that it displaces the canonical official Bluesky app, then that company may well take over the protocol itself -- not because they're evil, but because there is no way it could be otherwise.
identity
According to their own documentation, the Bluesky people were looking for an identity system that provided global ids, key rotation and human-readable names.
They must have realized that such properties are not possible in an open and decentralized system, but instead of accepting a tradeoff they decided they wanted all their desired features and threw away the "decentralized" part, quite literally and explicitly (although they make sure to hide that piece in the middle of a bunch of code and text that very few will read).
The "DID Placeholder" method they decided to use for their global identities is nothing more than a normal old boring trusted server controlled by Bluesky that keeps track of who is who and can, at all times, decide to ban a person and deprive them from their identity (they dismissively call a "denial of service attack").
They decided to adopt this method as a placeholder until someone else doesn't invent the impossible alternative that would provide all their desired properties in a decentralized manner -- which is nothing more than a very good excuse: "yes, it's not great now, but it will improve!".
openness
Months after launching their product with an aura of decentralization and openness, and getting a bunch of people inside who falsely believed they were joining an actually open network, Bluesky has decided to publish a part of their idea of how other people will be able to join their open network.
When I first saw their app and how it very prominently displayed things like follower counts, like counts and other things that are typical of centralized networks and can't be reliable or exact on truly open networks (like Nostr), I asked myself how they were going to do that once they became an open "federated" network, as they were expected to be.
Turns out their decentralization plan is to just allow you, as a writer, to host your own posts on "personal data stores", but not really have any control over the distribution of the posts. All posts go through the Bluesky central server, called the BGS, and it decides what to do with them. And you, as a reader, don't have any control over what you're reading from either; all you can do is connect to the BGS and ask for posts. If the BGS decides to ban, shadow ban, reorder, miscount, hide, deprioritize or trick you, or maybe even serve you ads, then you are out of luck.
Oh, but anyone can run their own BGS!, they will say. Even in their own blog post announcing the architecture they assert that "it’s a fairly resource-demanding service" and "there may be a few large full-network providers". But I fail to see why more than one network provider would ever exist, given that Bluesky is already doing that job and that there are very few incentives for anyone to switch providers -- because the app does not seem to be made to talk to multiple providers at all, one would have to stop using the reliable, fast and beefy official BGS, start using some half-baked alternative and risk losing access to things.
When asked about the possibility of switching, one of the Bluesky overlords said: "it would look something like this: bluesky has gone evil. there's a new alternative called freesky that people are rushing to. I'm switching to freesky".
The quote is very naïve and sounds like something that could be said about Twitter itself: "if Twitter is evil you can just run your own social network". Both are fallacies because they ignore the network effect and the fact that people will never fully agree that something is "evil". In fact these two are the fundamental reasons why -- for social networks specifically (and not for other things like commerce) -- we need truly open protocols with no owners and no committees.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
SummaDB
This was a hierarchical database server similar to the original Firebase. Records were stored in a LevelDB under different paths, like:

```
/fruits/banana/color: yellow
/fruits/banana/flavor: sweet
```
These could also be queried by path, using HTTP. A call to `http://hostname:port/fruits/banana`, for example, would return a JSON document like

```json
{ "color": "yellow", "flavor": "sweet" }
```
While a call to `/fruits` would return

```json
{ "banana": { "color": "yellow", "flavor": "sweet" } }
```
`POST`, `PUT` and `PATCH` requests also worked.

In some cases the values would be under a special `"_val"` property to disambiguate them from paths. (I may be missing some other details that I forgot.)

GraphQL was also supported as a query language, so a query like
```graphql
query { fruits { banana { color } } }
```

would return `{"fruits": {"banana": {"color": "yellow"}}}`.
SummulaDB
SummulaDB was a browser/JavaScript build of SummaDB. It ran the same Go code, compiled with GopherJS, using PouchDB as the storage backend, if I remember correctly.
It had replication between browser and server built-in, and one could replicate just subtrees of the main tree, so you could have stuff like this in the server:
json { "users": { "bob": {}, "alice": {} } }
And then only allow Bob to replicate `/users/bob` and Alice to replicate `/users/alice`. I am sure the required auth stuff was also built in.

There was also a PouchDB plugin to make this process smoother and data access more intuitive (it would hide the `_val` stuff and allow properties to be accessed directly; today I wouldn't waste time working on these hidden magic things).

The computed properties complexity
The next step, which I never managed to get fully working and which caused me to give up because of the complexity, was the ability to automatically and dynamically compute materialized properties based on data in the tree.
The idea was partly inspired by CouchDB computed views and how limited they were. I wanted a thing that would be super powerful, like, given

```json
{
  "matches": {
    "1": { "team1": "A", "team2": "B", "score": "2x1", "date": "2020-01-02" },
    "2": { "team1": "D", "team2": "C", "score": "3x2", "date": "2020-01-07" }
  }
}
```
One should be able to add a computed property at `/matches/standings` that computed the standings of all teams after all matches, for example.

I tried to complete this in multiple ways, but they all added much more complexity than I could handle. Maybe it would have worked better in a more flexible, powerful and functional language, or if I had more time and patience, or more people.
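For illustration only, here is a minimal Python sketch of what such a computed `/matches/standings` property would have to produce from the matches subtree above (the 3/1/0 points rule is an assumption for the example, not something the original project defined):

```python
def standings(matches: dict) -> dict:
    """Compute a points table from the matches subtree: win = 3, draw = 1."""
    table = {}
    for match in matches.values():
        t1, t2 = match["team1"], match["team2"]
        g1, g2 = (int(x) for x in match["score"].split("x"))
        table.setdefault(t1, 0)
        table.setdefault(t2, 0)
        if g1 > g2:
            table[t1] += 3
        elif g2 > g1:
            table[t2] += 3
        else:
            table[t1] += 1
            table[t2] += 1
    return dict(sorted(table.items(), key=lambda kv: -kv[1]))

matches = {
    "1": {"team1": "A", "team2": "B", "score": "2x1", "date": "2020-01-02"},
    "2": {"team1": "D", "team2": "C", "score": "3x2", "date": "2020-01-07"},
}
print(standings(matches))  # {'A': 3, 'D': 3, 'B': 0, 'C': 0}
```

The hard part, of course, was not computing this once but keeping it up to date automatically whenever anything under `/matches` changed.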
Screenshots
This is just one very simple unfinished admin frontend client view of the hierarchical dataset.
- https://github.com/fiatjaf/summadb
- https://github.com/fiatjaf/summuladb
- https://github.com/fiatjaf/pouch-summa
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On "zk-rollups" applied to Bitcoin
ZK rollups make no sense in bitcoin because there is no "cheap calldata". all data is already ~~cheap~~ expensive calldata.
There could be an onchain zk verification that allows succinct signatures maybe, but never a rollup.
What happens is: you can have one UTXO that contains multiple balances on it, and in each transaction you can recreate that UTXO but alter its state, using a zk proof to compress all the internal transactions that took place.
The blockchain must be aware of all these new things, so it is in no way "L2".
And you must have an entity responsible for that UTXO and for conjuring the state changes and zk proofs.
But on bitcoin you also must keep the data necessary to rebuild the proofs somewhere else, and I'm not sure how the third party responsible for that UTXO can ensure that happens.
I think such a construct is similar to a credit card corporation: one central party upon which everybody depends, zero interoperability with external entities, every vendor must have an account on each credit card company to be able to charge customers, therefore it is not clear that such a thing is more desirable than solutions that are truly open and interoperable like Lightning, which may have its defects but at least fosters a much better environment, bringing together different conflicting parties, custodians, anyone.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Gerador de tabelas de todos contra todos
I don't remember exactly when I did this, but I think a friend wanted to make software that would give him money over the internet without having to work. He didn't know how to program. He mentioned this idea he had, which was some kind of football championship manager solution, but I heard it like this: a website that generated a round-robin championship table for people to print.
It is actually not obvious how to do this: it requires an algorithm that people won't arrive at just by casually thinking about it, and there was no website doing it in Portuguese at the time. So I made this, it worked, it had a couple hundred daily visitors, and it even generated money from Google Ads (not much)!
First it was a Python web app running on Heroku; then Heroku started charging for, or limiting, the amount of free time I could have on their platform, so I migrated it to a static site that ran everything on the client. Since I didn't want to waste the Python code that actually generated the tables, I used Brython to run Python on top of JavaScript, which was an interesting experience.
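For reference, the core of such a generator is the classic "circle method" for round-robin scheduling. This is just a minimal Python sketch of that method, not the original code:

```python
def round_robin(teams):
    """Generate round-robin rounds with the circle method.
    Adds a bye (None) when the number of teams is odd."""
    teams = list(teams)
    if len(teams) % 2:
        teams.append(None)
    n = len(teams)
    rounds = []
    for _ in range(n - 1):
        pairs = [
            (teams[i], teams[n - 1 - i])
            for i in range(n // 2)
            if teams[i] is not None and teams[n - 1 - i] is not None
        ]
        rounds.append(pairs)
        # keep the first team fixed, rotate everyone else one position
        teams = [teams[0], teams[-1]] + teams[1:-1]
    return rounds

for i, matches in enumerate(round_robin(["A", "B", "C", "D"]), 1):
    print(f"round {i}: {matches}")
```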
In hindsight I could have just taken one of the many round-robin JavaScript libraries that exist on NPM, so eventually, a couple of years later, I did that.

I also removed Google Ads when Google decided it had so many requirements for sending me the money that it became impossible, and then the money started to vanish.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28How being "flexible" can bloat a protocol
(A somewhat absurd example, but you'll get the idea)
Imagine some client decides to add support for a variant of nip05 that checks for values at `/.well-known/nostr.yaml` besides `/.well-known/nostr.json`. "Why not?", they think, "I like YAML more than JSON, this can't hurt anyone".
Then some user makes a nip05 file in YAML and it works on that client, so they think their file is good. When the user sees that other clients are not recognizing their YAML file, they will complain to the other client developers: "Hey, your client is broken, it is not supporting my YAML file!".
The developer of the other client, astonished, replies: "Oh, I am sorry, I didn't know that was part of the nip05 spec!"
The user, thinking they are doing a good thing, replies: "I don't know, but it works on this other client here, see?"
Now the other client adds support. The cycle repeats, now with more users making YAML files and more and more clients adding YAML support, for fear of shipping a client that is incomplete or provides a bad user experience.
The end result of this is that now nip05 extra-officially requires support for both JSON and YAML files. Every client must now check for `/.well-known/nostr.yaml` besides just `/.well-known/nostr.json`, because a user's key could be in either of them. A lot of work was wasted for nothing. And, going forward, any new client will require double the work to implement.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
The problem with ION
ION is a DID method based on a thing called "Sidetree".
I can't say for sure what the problem with ION is, because I don't understand its design, even though I have read all I could and asked everybody I knew. All available information only touches on the high-level aspects of it (and of course its amazing wonders) and no one has ever bothered to explain the details. I've also asked the main designer of the protocol, Daniel Buchner, but he may have thought I was trolling him on Twitter and refused to answer, instead pointing me to an incomplete spec on the Decentralized Identity Foundation website that I had already read before. I even tried to join the DIF as a member so I could join their closed community calls and hear what they say, maybe eventually ask a question, so I could understand it; but my entrance was ignored, and then, after many months and a nudge from another member, I was told I had to do a KYC process to be admitted, which I refused.
One thing I know is:
- ION is supposed to provide a way to rotate keys seamlessly and automatically without losing the main identity (and the ION proponents also claim there are no "master" keys because these can also be rotated).
- ION is also not a blockchain, i.e. it doesn't have a deterministic consensus mechanism, and it is decentralized, i.e. anyone can publish data to it, it doesn't have to be a single central server, there may be holes in the available data and the protocol doesn't treat that as a problem.
- From all we know about years of attempts to scale Bitcoin and develop offchain protocols, it is clear that you can't solve the double-spend problem without a central authority or some kind of blockchain (i.e. a decentralized system with deterministic consensus).
- Key rotation also suffers from the double-spend problem: whenever you rotate a key it is as if it were "spent"; you aren't supposed to be able to use it again.
The logical conclusion of the four points above is that ION is flawed: it can't provide the key rotation it says it can if it is not a blockchain.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
idea: Custom multi-use database app
Since 2015 I have had this idea of making one app that could be repurposed into a full-fledged app for all kinds of uses, like powering small business accounts and so on. Hackable and open like an Excel file, but more efficient, without the hassle of making tables, and using ids and indexes under the hood so different kinds of things can be related together in various ways.
It is not a concrete thing, just a generic idea that has taken multiple forms over the years and may take others in the future. I've made quite a few attempts at implementing it, but never finished any.
I used to refer to it as a "multidimensional spreadsheet".
Can also be related to DabbleDB.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
rosetta.alhur.es
A service that grabs code samples from two chosen languages on RosettaCode and displays them side-by-side.
The code-fetching is done in real time and snippet-by-snippet (there is also a prefetch of which snippets are available in each language, so we only compare apples to apples).
This was my first Golang web application if I remember correctly.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
idea: a website for feedback exchange
I thought a community of people sharing feedback on mutual interests would be a good thing, so, as always, I broadened and generalized the idea, mixed it with my old criticue-inspired idea-feedback project and turned it into a "token". You give feedback on other people's things, they give you a "point". You can then use that point to request feedback from others.
This could be made as an Etleneum contract so these points were exchanged for satoshis using the shitswap contract (yet to be written).
In this case all the Bitcoin/Lightning side of the website must be hidden until the user has properly gone through the usage flow and earned points.
If it was to be built on Etleneum then it needs to emphasize the login/password login method instead of the lnurl-auth method. And then maybe it could be used to push lnurl-auth to normal people, but with a different name.
-
@ f977c464:32fcbe00
2024-01-11 18:47:47
On the day he saw himself in a mirror for the first time, he understood that he was just like the others. And yet every human being was supposed to be unique. When he got angry, say, one eyebrow might furrow a few millimeters less than the other, or when he was happy his lip might ripple at an angle that was his alone. And even if none of that were possible, then at the very least a light that only he could have might shine in his eyes. Any one of those millions of tiny things so ordinary that nobody ever notices them. Whatever it might be.
But looking at his reflection he found none of these, and on that day, which had started no differently from any other, in an ordinary metro station where he was waiting for his car to arrive so he could go to work, the illusion he had been lost inside began to dissolve.
First his skin fell away. It didn't exactly fall away; rather, it turned into sparks that shot off his body, went out a moment later, turned to ash and scattered in the air. And just before disappearing it left behind, faintly visible for a brief moment, the many-colored corpses of fairies mourning the destruction of a soul. Contrary to expectation, the smell of dust spread through the air.
He was horrified, of course. They were horrified. Fifty workers tearing at their clothes in panic. And the cause of it all was that car.
His hair fell out too. Every strand split in two with each centimeter it fell, vanishing before it reached the ground.
To see yourself for the first time, in a mirror placed inside a metro car, in a world where every surface is matte, where nothing reflects, where water runs black and you can only look at yourself through cameras.
The whites of his eyes evaporated and scattered into the air; his lenses melted and spread to fill the emptied space. Eyes created not to see the truth, and therefore not ready to see it, and never to be.
One might have thought everything ended at that moment. A deep darkness and death. The end of that moment in which seeing was truly seeing.
They were already dead when I arrived.
Broken, I mean.
There was no way for me to install their memories into new carriers. Physically they were in perfect condition, and the ones that weren't I could have repaired, but in all that commotion they had reprogrammed themselves and modified their own inputs.
He flung one of the memory units forward across the table. They were sitting in a packed bar. He and his friend.
The things we do just so these androids that think they're human don't get traumatized and go insane. It boggles the mind.
He gestured behind himself with his hand.
According to the police, somebody placed a mirror inside the car. And when the doors opened and these poor things suddenly saw their reflections, they lost it.
His friend asked how all of this made him feel. That is, seeing androids that broken, androids that believed they were human, lying on the floor having torn themselves apart: hadn't that shaken him?
No; in the end they're things created for a specific purpose. When a good computer of mine breaks I get upset, because I paid for it. These belong to the state. What do I care?
His friend nodded understandingly and took a sip of his water. He loosened his tie a little.
Are you sure you don't want a beer?
He said he didn't. So, really, why were these androids going insane?
Simple. When they code their artificial intelligence they write something into it. The software people. You know me, I'm in hardware. So these things think they're human. Just look at them.
He lowered his voice.
They all look like the dummies they use for car crash tests. They don't even have mouths or noses, but that one, for example, has been smoothing his beard ever since we got here. And no, they all think the others see his beard too, that's the insane part.
His friend said he couldn't see what this had to do with them going mad. So he went on in his normal voice.
Don't you get it? Their lenses can't even tell a mirror apart. They see themselves, bam, just like that. As they really are...
And why is that? What's the need?
How should I know, man! These are like questions about the afterlife.
He stared into his beer and drifted off. Then he leaned over the table, getting right up close to his friend. A blurry man, like someone at the end of a tunnel, his shape and features indistinct.
Where the hell do I even know you from? Who are you?
They took the memory unit out of the machine. There were two of them. The officers in charge of the investigation.
─ Are we starting over, asked the first officer, the one holding the memory unit.
─ Let's try one more time, but this time start by asking about the mirror directly, the second officer replied.
─ Agreed. It's working well enough.
While the simulation was loading, the second officer, standing a little behind and scratching his forehead, couldn't help asking:
─ Why did they even send these androids to a scene like that? It was obvious they'd get fried. What a waste. If we had gone and looked ourselves, they wouldn't have had to smash the mirror and destroy the evidence either.
The other officer was about to half-turn in his chair when the technician answered the question through the computer's speaker.
Is there anything we do that isn't somehow crooked, man?
But it was not an end. When all the illusion covering them had dissolved and they were left with their naked, sexless, identical bodies, it was the world's turn.
They fell to the ground. The moment their hands (black tourmaline like the rest of their bodies, the knuckles steel) touched the ground, the floor of the metro station came apart.
The tiles on the floor had been white before, and very bright. The fluorescent lights above reflected off them perfectly, lighting up a station without a single stain, without a single speck of dust in the air.
Notices were hung on the walls. There was, for example, the cheerful poster of a techno-blues festival starting tomorrow evening at 20.00 at the cultural center. Next to it, a wider warning written in large yellow letters, framed with horizontal black dashes, showing a stick figure falling off a platform: "Caution! Do not cross the yellow line!" A little further on, the official daily gazette; beyond that, posters for an action movie and for a romantic comedy, and various small-print notices saying what to do and what not to do... The wall was one long bulletin board, repeating itself every ten or fifteen meters.
The whole station was about a hundred meters long and around ten meters wide.
In front of them stood the car whose open door revealed that wretched mirror. The train was too long to fit in the station. It stretched out with the sharpness of a sword, broken here and there by the joints between cars.
Since none of the cars had windows, the inside of the train, and whatever was in it, remained unknown.
Then the tiles rose, breaking apart into their particles. In the fluorescent light they drowned everything in dust and buried the place under a gray fog. A very brief moment. They didn't make the posters flutter. They had no time to make them flutter. At most they tore them from their places. The light resisted, going out and coming back a few times. When it went out for the last time, it did not come back.
Still, the place was lit. With a pale red light that spread evenly over everything.
The floor had turned entirely to wire mesh. Beneath it, an iron skeleton braced with diagonal struts. The light could not reach down more than a few meters. A chasm descending into infinity.
The same wires and iron skeleton had taken the place of the wall. Behind it, a corridor made of iron plates screwed to one another, where thin wisps of steam sometimes escaped from the joints of the pipes running overhead and, after hanging there for a while, drifted off on the heavy, oily air.
On the other side, a rusted, decrepit metro train, its broken windows covered over with iron plates. In the mirror facing the door, everything was reflected exactly as it was.
In a home resembling the inside of a shipping container (though in a city made of containers joined to one another, "resembling" would not be the right word, so simply inside a container), he was trying to light the lumps, shaped from semi-solid waste fat, that had been set on the table to look like candles. On his head was a gray-and-black wig made of animal hair. From the same hair he had also made himself a bushy mustache.
He wore a smart suit, with a tie, made of blue garbage bags.
In place of the table's legs were parts scavenged from here and there: a car's drive shaft, stacked tin cans with unreadable labels, blank books, bales of blank newspapers... Nothing had writing on it; there was no need anyway, because the central data bank filled it all in for people without their noticing, synthesizing the data coming in through their lenses. For the androids, that is. If calling them something different makes any difference.
Not for their lenses, though. Their connection had been cut long ago.
─ Darling, dinner is ready, he called to his wife in the bedroom.
On the table, flat metal plates in place of dishes, bent ones in place of glasses, sharpened ones in place of forks and knives.
His wife paused at the living room door and smoothed with her hand the wig that reached only to her ears, a wig like her husband's, lifeless, nothing but the hair of dead animals. She had tried to color her lips, or rather the place where her lips should have been, with a layer of dark red grease. She had rubbed a little onto her cheeks as well.
─ How do I look, she asked.
Her voice was flat, but you could have sworn you sensed a slight cheerfulness in it.
She wore a two-piece outfit she had made by stuffing garbage bags with blank newspaper.
─ You look beautiful, her husband said, straightening his tie.
─ So do you, my love.
She came closer and kissed her husband. He kissed her back. Then, gently taking her hand and pulling her chair back, he helped her sit down.
There was nothing resembling food on the table. There was no need anyway.
Until the container's door was kicked open with a crash and two officers came in, they told each other stories. What they had done that day. How they had left work early and strolled on bright green grass and flown kites, how she had wandered for hours and worn herself out looking for that dress, how her husband had gone back to work for a short while and solved the crisis at hand with one skillful move, and how, after he came back to her, the ice cream at the new ice cream place they sat down at in the shopping mall had been so delicious that they worried about their throats getting sore...
In the evening they could watch a movie; a good film would be playing on the television (a blank, matte plate).
Two officers. Naked bodies identical to each other. They had their guns pointed at them. They faltered when they saw the couple in the candlelight, sitting at a table spread with a spotless cloth, wine in their glasses, the turkey at the center of the table still untouched.
They did not look at all like they could deliberately harm androids.
─ You have the right to remain silent, shouted the second officer as he came in. Anything you say...
He could not finish his sentence. That thing in the bedroom that he could see over the table, the android making the exact same movements as him... or was it his reflection in a mirror?
The whole illusion began to dissolve at that moment.
Note: This story was first published in Esrarengiz Hikâyeler in 2020.
-
@ a023a5e8:ff29191d
2024-01-06 20:47:50
What are all the side incomes you earn other than your regular income? I know these questions are not always easily answered, but by sharing yours, you may be pointing someone toward what for you is only a side or passive income but could be a regular income for them. You know this world, and Bitcoin is sufficient for everyone. Let this be a fruitful discussion for many, including me. 2023 was fine, and 2024 is already here.
I will be bookmarking this thread.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
bolt12 problems
- clients can't programmatically build new offers by changing a path or query params (services like zbd.gg or lnurl-pay.me won't work)
- impossible to use in a load-balanced custodian way -- since offers would have to be pregenerated and tied to a specific lightning node.
- the existence of fiat currency fields makes it so wallets have to fetch exchange rates from somewhere on the internet (or offer a bad user experience), using HTTP which hurts user privacy.
- the vendor field is misleading: it can be phished very easily and is not as safe as a domain name.
- onion messages are an improvement over fake HTLC-based payments as a way of transmitting data, for sure. but we must decide if they are (i) suitable for transmitting all kinds of data over the internet, a replacement for tor; or (ii) not something that will scale well or that we can count on for the future. if there was proper incentivization for data transmission it could end up being (i), the holy grail of p2p communication over the internet, but that is a very hard problem to solve and not guaranteed to yield the desired scalability results. since not even hints of attempting to solve that are being made, it's safer to conclude it is (ii).
bolt12 limitations
- not flexible enough. there are some interesting fields defined in the spec, but who gets to add more fields later if necessary? very unclear.
- services can't return any actionable data to the users who paid for something. it's unclear how business can be conducted without an extra communication channel.
bolt12 illusions
- recurring payments are not really solved, it is just a spec that defines intervals. the actual implementation must still be done by each wallet and service. the recurring payment cannot be enforced, the wallet must still initiate the payment. even if the wallet is evil and is willing to initiate a payment without the user knowing, it still needs to have funds, channels, be online, connected etc., so it's not as if the services could rely on the payments being delivered in time.
- people seem to think it will enable pushing payments to mobile wallets, which it does not and cannot.
- there is a confusion of contexts: it looks like offers are superior to lnurl-pay, for example, because they don't require domain names. domain names, though, are common and well-established among internet services and stores, because these services have websites, so this is not really an issue. it is an issue, though, for people that want to receive payments in their homes. for these, indeed, bolt12 offers a superior solution -- but at the same time bolt12 seems to be selling itself as a tool for merchants and service providers when it includes and highlights features such as recurring payments and refunds.
- the privacy gains for the receiver that are promoted as being part of bolt12 in fact come from a separate proposal, blinded paths, which should work for all normal lightning payments and indeed are a very nice solution. they are (or at least were, and should be) independent from the bolt12 proposal. another separate proposal, which can be (and already is being) used right now, also improves privacy for the receiver very much anyway: it's called trampoline routing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Thoughts on Nostr key management
On Why I don't like NIP-26 as a solution for key management I talked about multiple techniques that could be used to tackle the problem of key management on Nostr.
Here are some ideas that work in tandem:
- NIP-41 (stateless key invalidation)
- NIP-46 (Nostr Connect)
- NIP-07 (signer browser extension)
- Connected hardware signing devices
- other things like musig or frostr keys used in conjunction with a semi-trusted server; or other kinds of trusted software, like a dedicated signer on a mobile device that can sign on behalf of other apps; or even a separate protocol that some people decide to use as the source of truth for their keys, and some clients might decide to use that automatically
- there are probably many other ideas
Some premises I have in my mind (that may be flawed) that base my thoughts on these matters (and cause me to not worry too much) are that
- For the vast majority of people, Nostr keys aren't a target as valuable as Bitcoin keys, so they will probably be ok even without any solution;
- Even when you lose everything, identity can be recovered -- slowly and painfully, but still -- unlike money;
- Nostr is not trying to replace all other forms of online communication (even though when I think about this I can't imagine one thing that wouldn't be nice to replace with Nostr) or of offline communication, so there will always be ways.
- For the vast majority of people, losing keys and starting fresh isn't a big deal. It is a big deal when you have followers and an online persona and your life depends on that, but how many people are like that? In the real world I see people deleting social media accounts all the time and creating new ones, people losing their phone numbers or other accounts associated with their phone numbers, and not caring very much -- they just find a way to notify friends and family and move on.
We can probably come up with some specs to ease the "manual" recovery process, like social attestation and explicit signaling -- i.e., Alice, Bob and Carol are friends; Alice loses her key; Bob sends a new Nostr event kind to the network saying what is Alice's new key; depending on how much Carol trusts Bob, she can automatically start following that and remove the old key -- or something like that.
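Just to make that idea concrete, here is a purely hypothetical sketch (as Python data) of what such an attestation event could look like. No such kind exists; the kind number, tag names and placeholder values are invented for illustration:

```python
# Hypothetical "key migration attestation" event, signed by Bob, vouching
# for Alice's new key. Kind number and tag names are made up.
attestation = {
    "kind": 30100,                           # invented kind number
    "pubkey": "<bob-pubkey-hex>",
    "created_at": 1705240000,
    "tags": [
        ["p", "<alice-old-pubkey-hex>"],     # the key that was lost
        ["new-key", "<alice-new-pubkey-hex>"],
    ],
    "content": "I attest that Alice moved to this new key.",
    # "id" and "sig" would be computed and attached as for any Nostr event
}
```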
One nice thing about some of these proposals, like NIP-41, or the social-recovery method, or the external-source-of-truth-method, is that they don't have to be implemented in any client, they can live in standalone single-purpose microapps that users open or visit only every now and then, and these can then automatically update their follow lists with the latest news from keys that have changed according to multiple methods.
-
@ 6bd128bf:bb9002f8
2024-01-04 15:07:13
Disclaimer: This article came out of my curiosity about how the seed phrases we all write down are actually generated. So I went and read BIP-39, which covers generating the seed phrase (mnemonic code) used to derive a seed, and wrote this summary of my understanding to organize the knowledge in my head, in case it is useful to anyone who has the same question. If I got anything wrong, please let me know.
Why does BIP-39 exist?
BIP-39 exists because humans can write down or remember words much more easily than data in binary or hexadecimal form, so someone proposed creating a mnemonic code, or what we call a seed phrase. The proposal consists of two parts: generating the seed phrase, and converting the seed phrase into a seed that can then be used to derive private keys and public keys.
How is a seed phrase created?
A seed phrase must be generated from a piece of computer data called entropy, whose number of bits is divisible by 32 and lies between 128 and 256 bits. The more bits of entropy, the more security, but also the more words. The steps for generating a seed phrase are as follows.
Step - 1
Randomly generate entropy of between 128 and 256 bits, with the number of bits divisible by 32. From here on, this number of bits is referred to as ENT.
An example of entropy with ENT equal to 128 bits:
```
00110010010101010111100101001011001101100100101001000100001100010111001101110100001110010011011101000101011010100101000101010011
```
Step - 2
Compute the checksum bits, whose length equals ENT divided by 32, by hashing the entropy with the SHA256 algorithm and keeping only the first bits of the result, as many as needed.
I took the entropy from the previous step and computed `sha256_hashing(entropy)`, which produced `01100111.........`, so the first (128 / 32) = 4 bits are `0110`. From here on, the length of the checksum bits is referred to as CS.
Step - 3
Append the checksum bits to the end of the entropy, producing a string of bits with length ENT + CS.
I appended the 4 bits from the previous step to the 128-bit entropy like this:
```
"00110010010101010111100101001011001101100100101001000100001100010111001101110100001110010011011101000101011010100101000101010011" + "0110"
```
Step - 4
Split the bit string into groups of 11 bits each, giving (ENT + CS) / 11 groups in total. If the entropy has 128 bits, this yields (128 + 4) / 11 = 12 groups.
I got the following groups:
```
00110010010 10101011110 01010010110 01101100100 10100100010 00011000101
11001101110 10000111001 00110111010 00101011010 10010100010 10100110110
```
Step - 5
Convert each group into a decimal number, which will be between 0 and 2047, and look it up by index in the wordlist defined in BIP-39. The result is the seed phrase, which can then be used to derive the seed.
Mapping the decimal values against the wordlist, I got the following seed phrase:
```
00110010010 => 402  = crane
10101011110 => 1374 = profit
01010010110 => 662  = fan
01101100100 => 868  = hold
10100100010 => 1314 = picture
00011000101 => 197  = board
11001101110 => 1646 = soccer
10000111001 => 1081 = mango
00110111010 => 442  = dance
00101011010 => 346  = clip
10010100010 => 1186 = nephew
10100110110 => 1334 = plug
```
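The whole procedure fits in a few lines of code. Here is a minimal Python sketch of steps 1-5, assuming `wordlist` holds the 2048 words of the official BIP-39 English list (loading that file is left as a comment):

```python
import hashlib
import os

def entropy_to_mnemonic(entropy: bytes, wordlist: list) -> str:
    """Turn raw entropy (16-32 bytes, multiple of 4) into a BIP-39 mnemonic."""
    ent = len(entropy) * 8                    # ENT, in bits
    cs = ent // 32                            # checksum length, in bits
    checksum = hashlib.sha256(entropy).digest()
    # entropy bits followed by the first CS bits of the SHA256 hash
    bits = bin(int.from_bytes(entropy, "big"))[2:].zfill(ent)
    bits += bin(int.from_bytes(checksum, "big"))[2:].zfill(256)[:cs]
    # split into 11-bit groups and map each group to a wordlist index
    words = [wordlist[int(bits[i:i + 11], 2)] for i in range(0, len(bits), 11)]
    return " ".join(words)

# wordlist = open("english.txt").read().split()         # the 2048-word list
# print(entropy_to_mnemonic(os.urandom(16), wordlist))  # 12 words
```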
Wordlist
The wordlist defined by BIP-39 has the following characteristics:
- the words are chosen so that typing only the first 4 letters is enough to identify a word unambiguously
- words that look similar are avoided; for example the wordlist contains "build" but not "built", because that could cause confusion
- the words are sorted alphabetically so they are easy to look up
- the words can be made of characters from any language as long as they are UTF-8 encoded, but I think English is probably the easiest to remember
Entropy size and resulting number of words
- 128 bits of entropy → 4 checksum bits → 12-word seed phrase
- 160 bits of entropy → 5 checksum bits → 15-word seed phrase
- 192 bits of entropy → 6 checksum bits → 18-word seed phrase
- 224 bits of entropy → 7 checksum bits → 21-word seed phrase
- 256 bits of entropy → 8 checksum bits → 24-word seed phrase
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Sun and Earth
The Earth does not revolve around the Sun. It all depends on the reference point, and there is no absolute reference point. It is only better to say that the Earth revolves around the Sun because there are other planets making analogous movements, and then it becomes easier for everyone to understand the movements by taking the Sun as the reference point.
-
@ 8fb140b4:f948000c
2024-01-02 08:43:31
After my second attempt at running a Lightning Network node went wrong, mainly due to my own mistake and the bad CLI interface of LND, I decided to look deeply into alternative implementations of lightning node software. Eclair was my final choice, driven by multiple reasons that I plan to talk about further in this write-up. The main reason I even considered Eclair, despite my dislike of Java and the JVM, was that one of the largest lightning nodes on the network is run by ACINQ, the company behind the implementation and maintenance of this open source software.
I first noticed them after reading their blog post about how they run a $100M Lightning node and the level of care and thought they put into the whole implementation. Lightning is used to transfer large quantities of value across the internet, and I would only trust something well designed and actively maintained by people who have the most to lose if they make a wrong choice.
Eclair in itself is not a complex implementation of the Lightning Network standards, and it is highly modular, which allows for an easy segregation of duties among the different components of the node. One of the biggest selling points for me was their crash-only software approach, which guarantees consistency of state regardless of what happens to the running software. By extension, to shut down the node you simply kill the process and start it again. It doesn't matter what transactions were in flight or what stage they were in. This is huge, since anything can happen to a running node (e.g., failed disk, failed RAM, CPU, kernel panic, etc.)
Getting back to the architecture, Eclair is simple and elegant in design and implementation. The node is separated into three main components: eclair-core, eclair-node, eclair-front. All of the entities are sandboxed actors (e.g., peer, channel, payment), which allows for scalability across CPUs and fault isolation. This ensures high availability and security.
Clustering is also an option and can be achieved by migrating from a single node to a multi-server node. There is no need to dive into the complexities of that; by the time you need to scale, I am sure you'll be able to figure it out.
A simple and robust API is yet another major thing that keeps the node simple and fast. One downside of the API is that it is protected by a single password and is not designed for RBAC (Role-Based Access Control). One solution could be an implementation of another API wrapper that would add something similar to Runes or Macaroons, which should not be a challenge considering the REST API's simplicity.
On-the-fly HTLC maximum size adjustment will prevent your node from accepting forwarded payments that would fail due to lack of liquidity on your side. This also makes routing better for the rest of the Lightning Network, but may "leak" your channel balances if not done right.
Experimentation and adjustment of the path-finding algorithm. I am not there yet myself, but I see this as a great option in the future if I need it. It will allow me to make my own choices about how I want to route payments and what parameters I would use to determine the best path.
Full production support for PostgreSQL. Not only is Eclair itself not a beta release (unlike LND), it also has full production support for this very reliable and battle-tested database as its backend data storage. You are able to (and should) run an active/passive PostgreSQL cluster in synchronous mode and ensure that all data written by the node is backed up in real time. This removes the worry of a corrupted database, which I have seen happen all too often.
Excellent monitoring and metrics, which can be collected by Prometheus and viewed in Grafana. Eclair provides template dashboards that you can import into Grafana to make your life easier. You can also use Kamon (an external service), to which you could send the metrics and monitor your node.
Support for all common networking protocols, including Socks5.
Last, but not least, support for plug-ins. Even if you are not well versed in writing plug-ins, you could take some of the available ones and modify them to your liking.
There are many more features and limitations that I didn’t mention, but you can explore them yourself here.
One downside you should consider is that there is not such a great selection of readily available tools. So far I have found that Ride-The-Lightning works well; LNBits works, but I have yet to see if it is reliable; BTCPayServer has support, but I failed to use it with the API directly and was only able to use it via LNBits.
Lightning is still reckless, but nothing stops you from doing it carefully and reliably. Good luck and happy node-running! 🐶🐾🫡⚡️
My Node - RAϟKO
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
IPFS problems: Community
I was an avid IPFS user until yesterday. Many, many times I asked simple questions in the #ipfs IRC channel on Freenode, questions for which I couldn't find an answer on the internet. Most of the time I didn't get an answer, and even when I did it was rarely from someone who knew IPFS deeply. I've had issues go unanswered on js-ipfs repositories for years -- one of these was raising awareness of a problem that then got fixed some months later by a complete rewrite; I closed my own issue after realizing that by myself a couple of months later. I don't think the people responsible for the rewrite ever acknowledged that they had fixed my issue.
Some days ago I asked some questions about how the IPFS protocol worked internally, sincerely trying to understand the inefficiencies in finding and fetching content over IPFS. I pointed out that it would be a good idea to have a drawing showing that, so people would understand the difficulties (which I didn't) and wouldn't be pissed off by the slowness. I was told to read the whitepaper. I had already read the whitepaper, but I read the relevant parts again. The whitepaper doesn't explain anything about the DHT or how IPFS finds content. I said that in the room and was told to read it again.
Before anyone misreads this section, I want to say I understand it's a pain to keep answering people on IRC if you're busy developing stuff of interplanetary importance, and that I'm not paying anyone nor do I have the right to be answered. On the other hand, if you're developing a super-important protocol, financed by many millions of dollars, and a lot of people are hitting their heads against your software and there's no one to help them; if you're always busy but never deliver anything that brings joy to your users, something is very wrong. I sincerely don't know what IPFS developers are working on -- I wouldn't doubt they're working on important things if they said so -- but what I see, and what many other users see (take a look at the IPFS Discourse forum), is bugs, bugs all over the place, confusing UX, and almost no help.
-
@ 8fb140b4:f948000c
2023-12-30 10:58:49
Disclaimer: this tutorial may have a real financial impact on you; follow it at your own risk.
Step 1: execute `lncli closeallchannels`

That's it, that was as easy as it could be. Now all your node's channels are in the process of being closed and all your liquidity is being moved to your onchain wallet. The best part: you don't even need to confirm anything, it just does it in one go, no questions asked! You can check the status by using a very similar-looking command, `lncli closedchannels`.

Good job! Now you are ready to start from scratch and use any other reasonable solution!
🐶🐾🫡🤣🤣🤣
Disclaimer: this is a satirical tutorial that will 💯 cost you a lot of funds and headaches.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
A list of things artificial intelligence is not doing
If AI is so good why can't it:
- write good glue code that wraps a documented HTTP API?
- make good translations using available books and respective published translations?
- extract meaningful and relevant numbers from news articles?
- write mathematical models that fit perfectly to available data better than any human?
- play videogames without cheating (i.e. simulating human vision, attention and click speed)?
- turn pure HTML pages into pretty designs by generating CSS
- predict the weather
- calculate building foundations
- determine stock values of companies from publicly available numbers
- smartly and automatically test software to uncover bugs before releases
- predict sports matches from the ball and the players' movement on the screen
- continuously improve niche/local search indexes based on user input and reaction to results
- control traffic lights
- predict sports matches from news articles, and teams and players' history
This was posted first on Twitter.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
lnurl-auth explained
You may have seen the lnurl-auth spec or heard about it, but might not know how it works or what is its relationship with other lnurl protocols. This document attempts to solve that.
Relationship between lnurl-auth and other lnurl protocols
First, what is the relationship of lnurl-auth with other lnurl protocols? The answer is: none, except the fact that they all share the lnurl format for specifying `https` URLs.

In fact, lnurl-auth is unique in the sense that it doesn't even need a Lightning wallet to work; it is a standalone authentication protocol that can work anywhere.
How does it work
Now, how does it work? The basic idea is that each wallet has a seed, which is a random value (you may think of the BIP39 seed words, for example). Usually from that seed different keys are derived, each of these yielding a Bitcoin address, and also from that same seed may come the keys used to generate and manage Lightning channels.
What lnurl-auth does is to generate a new key from that seed, and from that a new key for each service (identified by its domain) you try to authenticate with.
That way, you effectively have a new identity for each website. Two different services cannot associate your identities.
The flow goes like this: When you visit a website, the website presents you with a QR code containing a callback URL and a challenge. The challenge should be a random value.
When your wallet scans or opens that QR code it uses the domain in the callback URL plus the main lnurl-auth key to derive a key specific for that website, uses that key to sign the challenge and then sends both the public key specific for that website and the signed challenge to the specified URL.
When the service receives the public key it checks it against the challenge signature and starts a session for that user. The user is then identified only by their public key. If the service wants, it can of course request more details from the user, associate them with an internal id or username; it is free to do anything. lnurl-auth's goals end here: no passwords, maximum possible privacy.
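To make the flow concrete, here is a rough Python sketch of the wallet side. It is a conceptual illustration only: the real spec defines a specific BIP32 derivation and secp256k1 signing of the challenge, which are only hinted at in comments here, and the function and variable names are made up for illustration.

```python
import hashlib
import hmac
from urllib.parse import urlparse, parse_qs

def per_service_key(master_seed: bytes, callback_url: str) -> bytes:
    """Derive a service-specific secret from the wallet seed and the
    callback domain. (Illustrative only: the spec uses a BIP32 path,
    not a bare HMAC, but the principle is the same: key = f(seed, domain).)"""
    domain = urlparse(callback_url).hostname
    return hmac.new(master_seed, domain.encode(), hashlib.sha256).digest()

def handle_lnurl_auth(master_seed: bytes, callback_url: str):
    k1 = parse_qs(urlparse(callback_url).query)["k1"][0]  # hex-encoded challenge
    sk = per_service_key(master_seed, callback_url)
    # `sk` would now be used as a secp256k1 private key to sign `k1`,
    # and the wallet would call back:
    #   GET callback_url + "&sig=<signature hex>&key=<pubkey hex>"
    return sk, k1
```

Because the derivation takes the domain as input, a different domain automatically yields a different key, which is exactly what defeats the phishing scenario discussed in the FAQ below.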
FAQ
-
What is the advantage of tying this to Bitcoin and Lightning?
One big advantage is that your wallet is already keeping track of one seed, it is already a precious thing. If you had to keep track of a separate auth seed it would be arguably worse, more difficult to bootstrap the protocol, and arguably one of the reasons similar protocols, past and present, weren't successful.
-
Just signing in to websites? What else is this good for?
No, it can be used for authenticating to installable apps and physical places, as long as there is a service running an HTTP server somewhere to read the signature sent from the wallet. But yes, signing in to websites is the main problem to solve here.
-
Phishing attack! Can a malicious website proxy the QR from a third website and show it to the user to it will steal the signature and be able to login on the third website?
No, because the wallet will only talk to the callback URL, and it will either be controlled by the third website, so the malicious website won't see anything; or it will have a different domain, so the wallet will derive a different key and frustrate the malicious website's plan.
-
I heard SQRL had that same idea and it went nowhere.
Indeed. SQRL in its first version was basically the same thing as lnurl-auth, with one big difference: it was vulnerable to phishing attacks (see above). That was basically the only criticism it got everywhere, so the protocol creators decided to solve that by introducing complexity to the protocol. While they were at it they decided to add more complexity for managing accounts and so much more crap that the spec, which initially was a single page, ended up becoming 136 pages of highly technical gibberish. Then all the initial network effect it had, the libraries and apps, were trashed, and nowadays no one can do anything with it (but, see, there are still people who love the protocol, writing in a 90's forum with no clue of anything besides their own Java).
-
We don't need this, we need WebAuthn!
WebAuthn is essentially the same thing as lnurl-auth, but instead of being simple it is complex, instead of being open and decentralized it is centralized in big corporations, and instead of relying on a key generated by your own device it requires an expensive hardware HSM that you must buy, whose manufacturer you must trust. If you like WebAuthn and you like Bitcoin you should like lnurl-auth much more.
-
What about BitID?
This is another one that is very similar to lnurl-auth, but without the anti-phishing prevention and extra privacy given by making one different key for each service.
-
What about LSAT?
It doesn't compete with lnurl-auth. LSAT, as far as I understand it, is for when you're buying individual resources from a server, not authenticating as a user. Of course, LSAT can be repurposed as a general authentication tool, but then it will lack features that lnurl-auth has, like the property of having keys generated independently by the user from a common seed and a standard way of passing authentication info from one medium to another (like signing in to a website at the desktop from the mobile phone, for example).
-
-
@ a023a5e8:ff29191d
2023-12-20 04:43:48
Yesterday I was sleeping with my wife and 2 kids in the same bed, and in the middle of the night I woke up to see my wife and kids sleeping beside me in a moonlight-like ambience, and it was a vivid feeling. Then I panicked with a strange feeling, and a moment later I woke up for real and saw them again. It all happened in just a moment. Have you ever had such an experience?
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
ijq
An interactive REPL for `jq` with smart helpers (for example, it automatically assigned each line of input to a variable so you could reference it later, and it also always referenced the previous line automatically).

See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
The Cause
Menger's Principles of Political Economy is the only book that emphasizes the CAUSE the whole time. all the scientists seem not to know, or to always forget, that things have causes, and that true knowledge is knowledge of the cause of things.
the cause is a metaphysical category far superior to any correlation or hypothesis-test result; it cannot be discovered by any econometric artifice or reduced to mere statistical temporal precedence. the cause of phenomena cannot be proven scientifically, but it can be known.
Menger's book tells the reader the causes of various economic phenomena and links them together in such a way that the chaotic world of the economy seems to acquire an order the moment you read it. it is a magical, indescribable feeling.
when I recommended it to you, what I wanted was to imbue you with the spirit of the search for the cause of things. after reading it, you become able to perceive causal continuity in the most complex phenomena of today's economy, to see the causes connecting every government action to its various consequences in human life. I do this every day, and it is the best feeling in the world when the chaos of the news in the newspaper's Economy section -- which makes no sense to the very journalist who wrote it (so much so that he gets everything wrong) -- falls into an ordered system of causes and consequences.
I probably always get some or several points wrong, but even so it is marvelous. or rather, it is even more marvelous when I discover the error and reinsert the correct piece into that beautiful rationalization of the order of the economic world, which is the order of God.
from a scrap written for T.P.
-
@ a023a5e8:ff29191d
2023-12-19 12:14:53
Shit shit shit
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Trelew
A CLI tool for navigating Trello boards. It used vorpal for an "immersive" experience and was pretty good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
The infinite library
I have now forgotten the name of the Jorge Luis Borges story in which that library is described, and its specific details. I had read the story and never realized that it settled the question of whether randomness is capable of producing valuable things. I actually needed Wikipedia to tell me that.
A few years ago I raised this question with a group of friends without knowing it was such a worn-out, lowly question. In my example it was a dog walking over drawn letters rather than a monkey at a typewriter. My conclusion from the discussion was that no matter what the dog wrote, without an intelligence capable of understanding it, it would never be anything more than random letters.
Borges solves the whole thing by imagining a library that contains everything the dog wrote during the entire infinity in which it carried out the experiment, and which therefore contains all knowledge about everything and every possible literary work -- but between each very good, or at least legible, page or sentence there are tons of completely random books, and a person can spend a lifetime inside that library, which contains so much important knowledge, and still learn nothing, because they will never find the right books.
Everything would be in its blind volumes. Everything: the detailed history of the future, Aeschylus' The Egyptians, the exact number of times that the waters of the Ganges have reflected the flight of a falcon, the secret and true nature of Rome, the encyclopedia Novalis would have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof of Pierre Fermat's theorem, the unwritten chapters of Edwin Drood, those same chapters translated into the language spoken by the Garamantes, the paradoxes Berkeley invented concerning Time but didn't publish, Urizen's books of iron, the premature epiphanies of Stephen Dedalus, which would be meaningless before a cycle of a thousand years, the Gnostic Gospel of Basilides, the song the sirens sang, the complete catalog of the Library, the proof of the inaccuracy of that catalog. Everything: but for every sensible line or accurate fact there would be millions of meaningless cacophonies, verbal farragoes, and babblings. Everything: but all the generations of mankind could pass before the dizzying shelves – shelves that obliterate the day and on which chaos lies – ever reward them with a tolerable page.
I have the impression that the gigantic outpouring of articles, posts, books and everything else is turning the world into that library. There is so much to read that it is hard to find what is worthwhile. People need to stop writing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28OP_CHECKTEMPLATEVERIFY and the "covenants" drama
There are many ideas for "covenants" (I don't think this concept helps in the specific case of examining proposals, but fine). Some people think "we" (it's not obvious who is included in this group) should somehow examine them and come up with the perfect synthesis.
It is not clear what form this magic gathering of ideas will take and who (or which ideas) will be allowed to speak, but suppose it happens and there is intense research and conversations and people (ideas) really enjoy themselves in the process.
What are we left with at the end? Someone has to actually commit the time, put in the effort and come up with a concrete proposal to be implemented on Bitcoin, and whatever the result is, it will have trade-offs. Some great features will not make it into this proposal, others will make it in a worsened form, some will be contemplated very nicely, and there will be some extra costs related to maintenance or code complexity that will have to be accepted. Someone, a concrete person, will decide upon these things using their own personal preferences and biases, and many people will not be pleased with their choices.
That has already happened. Jeremy Rubin has already conjured all the covenant ideas in a magic gathering that lasted more than 3 years and came up with a synthesis that has the best trade-offs he could find. CTV is the result of that operation.
The fate of CTV in popular opinion, illustrated by the thoughtless responses it has evoked such as "can we do better?" and "we need more review and research and more consideration of other ideas for covenants", is a preview of what would probably happen if these suggestions were followed again and someone spent the next 3 years considering ideas, talking to other researchers and coming up with a new synthesis. Again, that person would be faced with "can we do better?" responses from people who were not happy enough with the choices.
And unless some famous Bitcoin Core developers or retired Bitcoin Core developers happened to be personally attracted to this synthesis, it would take them a long time to review it and give their blessing.
To summarize the argument of this article: the actual issue in the current CTV drama is that there exist hidden criteria for proposals to be accepted by the general community into Bitcoin, and no one has these criteria clear in their minds. It is not as simple or as straightforward as "do research", nor is it as humanly impossible as "get consensus"; it has a much bigger social element to it, and I also do not know the exact form of these hidden criteria.
This is said not to blame anyone -- except the ignorant people who are not aware of the existence of these things, who just keep repeating completely false and unhelpful advice to Jeremy Rubin, and who are not self-conscious enough to ever realize what they're doing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: clarity.fm on Lightning
Getting money from clients very easily and dispatching that money to "world class experts" (what a silly way to market things, but I guess it works) very easily is a job for Bitcoin and the Lightning Network.
EDIT 2020-09-04
My idea was that people would advertise themselves, so you would book an hour with people you already know, but it seems that clarity.fm has gone the route of offering a "catalog of experts" to potential clients, probably full of verification processes and marketing. So I guess this is not a thing I can do.
Actually I did make https://s4a.etleneum.com/ (on Etleneum), which is somewhat similar, but of course it doesn't have the glamour, the network effect and the marketing -- also it's just text, while Clarity has fancy calls.
Thinking about it, this is just a simple and obvious idea: just copy things from the fiat world and make them on Lightning, but maybe it is still worth pointing these out as there are hundreds of developers out there trying to make yet another lottery game with Lightning.
It may also be a good idea not to just copy fiat-business models, but also to change them, experimenting with new paradigms, like idea: Patreon, but simple, and without subscription.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Channels without HTLCs
HTLCs below the dust limit are not possible, because they're uneconomical.
So currently whenever a payment below the dust limit is to be made Lightning peers adjust their commitment transactions to pay that amount as fees in case the channel is closed. That's a form of reserving that amount and incentivizing peers to resolve the payment, either successfully (in case it goes to the receiving node's balance) or not (it then goes back to the sender's balance).
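As a toy illustration (my own sketch, not from the post, and ignoring real BOLT details such as reserves and fee rates), this is roughly the bookkeeping being described: a payment below the dust limit never becomes an HTLC output; the amount is just parked as extra commitment-transaction fee until the payment resolves one way or the other.

```python
# Toy model of a channel whose commitment transaction cannot carry
# HTLC outputs below the dust limit: the in-flight amount is parked
# as extra commitment fee instead of as an HTLC output.
DUST_LIMIT = 546  # sats, illustrative value

class Channel:
    def __init__(self, a, b):
        self.balances = {"A": a, "B": b}
        self.pending_fee = 0  # sub-dust amount parked as commitment fee

    def offer_payment(self, sender, amount):
        assert amount < DUST_LIMIT, "this sketch only models sub-dust payments"
        self.balances[sender] -= amount
        self.pending_fee += amount  # would-be HTLC becomes miner fee if we close now

    def resolve(self, success, receiver, sender):
        amount, self.pending_fee = self.pending_fee, 0
        # on success the amount lands on the receiver's side,
        # otherwise it is returned to the sender
        self.balances[receiver if success else sender] += amount

ch = Channel(a=10_000, b=10_000)
ch.offer_payment("A", 300)
print(ch.balances, "fee:", ch.pending_fee)  # {'A': 9700, 'B': 10000} fee: 300
ch.resolve(success=True, receiver="B", sender="A")
print(ch.balances)                          # {'A': 9700, 'B': 10300}
```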
SOLUTION
I haven't thought too much about whether what I think can be done is possible in the current implementation of Lightning channels, but in the context of Eltoo it seems possible.
Eltoo channels have UPDATE transactions that can be published to the blockchain and SETTLEMENT transactions that spend them (after a relative time) to each peer. The barebones script for UPDATE transactions is something like (copied from the paper, because I don't understand these things):
```
OP_IF
    # to spend from a settlement transaction (presigned)
    10 OP_CSV
    2 As,i Bs,i 2 OP_CHECKMULTISIGVERIFY
OP_ELSE
    # to spend from a future update transaction
    <Si+1> OP_CHECKLOCKTIMEVERIFY
    2 Au Bu 2 OP_CHECKMULTISIGVERIFY
OP_ENDIF
```
During a payment of 1 satoshi it could be updated to something like (I'll probably get this thing completely wrong):
```
OP_HASH256 <payment_hash> OP_EQUAL
OP_IF
    # for B to spend from settlement transaction 1 in case the payment went through
    # and they have the preimage
    10 OP_CSV
    2 As,i1 Bs,i1 2 OP_CHECKMULTISIGVERIFY
OP_ELSE
    OP_IF
        # for A to spend from settlement transaction 2 in case the payment didn't go through
        # and the other peer is uncooperative
        <now + 1day> OP_CHECKLOCKTIMEVERIFY
        2 As,i2 Bs,i2 2 OP_CHECKMULTISIGVERIFY
    OP_ELSE
        # to spend from a future update transaction
        <Si+1> OP_CHECKLOCKTIMEVERIFY
        2 Au Bu 2 OP_CHECKMULTISIGVERIFY
    OP_ENDIF
OP_ENDIF
```
Then peers would have two presigned SETTLEMENT transactions, 1 and 2 (with different signature pairs, as badly shown in the script). On SETTLEMENT 1, funds are, say, 999sat for A and 1001sat for B, while on SETTLEMENT 2 funds are 1000sat for A and 1000sat for B.
As soon as B gets the preimage from the next peer in the route it can give it to A, and then they can sign a new UPDATE transaction that replaces the above gimmick with something simpler, without hashes involved.
If the preimage doesn't arrive in a viable time, peers can agree to make a new UPDATE transaction anyway. Otherwise A will have to close the channel, which may be bad, but then B wasn't a good peer anyway.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Shitcoinery
IPFS was advertised to the Ethereum community since the beginning as a way to "store" data for their "dApps". I don't think this is harmful in any way, but for some reason it may have led IPFS developers to focus too much on Ethereum stuff. Once I watched a talk showing libp2p developers – despite being ignored by the Ethereum team (which ended up creating their own agnostic p2p library) – dedicating an enormous amount of work to getting a libp2p app running in the browser talking to a normal Ethereum node.
The always somewhat-abandoned "Awesome IPFS" site is a big repository of "dApps", some of which don't even have their landing pages up anymore: useless Ethereum smart contracts that for some reason use IPFS to store whatever useless data their users produce.
Again, per se it isn't a problem that Ethereum people are using IPFS, but it is at least confusing, maybe misleading, that when you search for IPFS most of the use-cases are actually Ethereum useless-cases.
See also
- Bitcoin, the only non-shitcoin
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A response to Achim Warner's "Drivechain brings politics to miners" article
I mean this article: https://achimwarner.medium.com/thoughts-on-drivechain-i-miners-can-do-things-about-which-we-will-argue-whether-it-is-actually-a5c3c022dbd2
There are basically two claims here:
1. Some corporate interests might want to secure sidechains for themselves and thus they will bribe miners to have these activated
First, it's hard to imagine why they would want such a thing. Are they going to make a proprietary KYC chain only for their users? They could do that in a corporate way, or with a federation, like Facebook tried to do, and that would provide more value to their users than a cumbersome pseudo-decentralized system in which they don't even have the power to issue currency. Also, if Facebook couldn't get away with their federated shitcoin because the government was mad, what says the government won't be mad at a sidechain? And finally, why would Facebook want to give custody of their proprietary closed-garden Bitcoin-backed ecosystem coins to a random, open and always-changing set of miners?
But even if they do succeed in making their sidechain, and it is so popular that it pays miners fees and people love it -- well, then why not? Let them have it. It's not going to hurt anyone more than a proprietary shitcoin would anyway. If Facebook really wants a closed ecosystem backed by Bitcoin, that probably means we are winning big.
2. Miners will be required to vote on the validity of debatable things
He cites the example of a PoS sidechain, an assassination market, a sidechain full of nazis, a sidechain deemed illegal by the US government and so on.
There is a simple solution to all of this: just kill these sidechains. Either miners can take the money from these to themselves, or they can just refuse to engage and freeze the coins there forever, or they can even give the coins to governments, if they want. It is an entirely good thing that evil sidechains or sidechains that use horrible technology that doesn't even let us know who owns each coin get annihilated. And it was the responsibility of people who put money in there to evaluate beforehand and know that PoS is not deterministic, for example.
About government censoring and wanting to steal money, or criminals using sidechains, I think the argument is very weak because these same things can happen today and may even be happening already: i.e., governments ordering mining pools to not mine such and such transactions from such and such people, or forcing them to reorg to steal money from criminals and whatnot. All this is expected to happen in normal Bitcoin. But both in normal Bitcoin and in Drivechain decentralization fixes that problem by making it so governments cannot catch all miners required to control the chain like that -- and in fact fixing that problem is the only reason we need decentralization.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Flávia Tavares' interview with Olavo de Carvalho
I haven't read all of the complaints Olavo made, but I read some. I also didn't read the whole article that came out in Época, because I didn't have the patience, but I watched the two interview videos that Olavo published.
Having first read Olavo's many complaints, I expected to find in the video a fake person, someone who pretended to be friendly in order to get information she would later use to destroy Olavo's image, but I saw nothing of the sort.
Of course she could have fooled me too, if she fooled Olavo. But in the article itself I also saw nothing but sincerity -- perhaps not journalistic excellence, but nothing I wouldn't expect from any article in any magazine. Flávia Tavares didn't understand many things, but she didn't pretend to have understood nothing; she was simply and honestly Flávia Tavares, as she herself declared at the end of the interview video: "look, I didn't fake anything here, okay?".
The most important part of all this, however, is the parts of the article that present ideas that are hard to conceive, such as the ones Olavo holds about world government or the spread of pedophilia. In every public or private discussion, these ideas are forbidden. Many people may agree that the left is no good, but no one in their right mind will admit the possibility that there is any significant intention of installing a world government or of spreading pedophilia. The same mocking face your leftist friend would make at the mere mention of these subjects is the one Flávia Tavares uses in her text when she wants to show that Olavo is a bit nutty. The mocking face always comes unaccompanied by any serious reflection or attempt at refutation.
Link to the article: http://epoca.globo.com/sociedade/noticia/2017/10/olavo-de-carvalho-o-guru-da-direita-que-rejeita-o-que-dizem-seus-fas.html?utm_source=twitter&utm_medium=social&utm_campaign=post Videos: https://www.youtube.com/watch?v=C0TUsKluhok, https://www.youtube.com/watch?v=yR0F1haQ07Y&t=5s
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Why IPFS cannot work, again
Imagine someone comes up with a solution for P2P content-addressed data-sharing that involves storing all the files' contents in all computers of the network. That wouldn't work, right? Too much data, if you think this can work then you're a BSV enthusiast.
Then someone comes up with the idea of not storing everything in all computers, but only some things on some computers, based on some algorithm to determine what data a node would store given its pubkey or something like that. Still wouldn't work, right? Still too much data no matter how much you spread it, but mostly incentives not aligned, would implode in the first day.
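As a rough illustration of the kind of assignment being described (this is a generic Kademlia-style sketch I'm adding, not IPFS's actual algorithm): node IDs and content IDs live in the same hash space, and each item is stored on the nodes whose IDs are "closest" to it by XOR distance.

```python
import hashlib

def key(x: bytes) -> int:
    # both node ids and content ids live in the same 256-bit space
    return int.from_bytes(hashlib.sha256(x).digest(), "big")

def closest_nodes(content: bytes, node_pubkeys: list[bytes], k: int = 2):
    cid = key(content)
    # XOR distance: smaller means "closer", so these nodes are asked to store the item
    return sorted(node_pubkeys, key=lambda pk: key(pk) ^ cid)[:k]

nodes = [b"node-alice", b"node-bob", b"node-carol", b"node-dave"]
print([n.decode() for n in closest_nodes(b"some file contents", nodes)])
```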
Now imagine someone says they will do the same thing, but instead of storing the full contents each node would only store a pointer to where each data is actually available. Does that make it better? Hardly so. Still, you're just moving the problem.
This is IPFS.
Now you have less data on each computer, but on a global scale that is still a lot of data.
No incentives.
And now you have the problem of finding the data. First, if you have some data you want the world to access, you have to broadcast information about that, flooding the network -- and everybody has to keep doing this continuously for every single file (or shard of a file) that is available.
And then whenever someone wants some data they must find the people who know about that, which means they will flood the network with requests that get passed from peer to peer until they get to the correct peer.
The more you force each peer to store, the worse it becomes to run a node and to store data on behalf of others -- but the less you force each peer to store, the more flooding you'll have on the global network, and the slower it will be for anyone to actually get any file.
But if everybody just saves everything to Infura or Cloudflare then it works, magic decentralized technology.
Related
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Who will build the roads?
Who will build the roads? In Lagoa Santa, the newest and best streets -- which in fact end up forming huge webs of interconnected neighborhoods -- are built by the land developers who want the streets so that their lots will be worth more -- and who want other people to use those streets too. It is also these same developers who put up the utility poles and lay the water pipes, though not before having to submit to the customary extortions practiced by COPASA and CEMIG.
If, when opening a subdivision, a condominium or a building, an individual or a company can without much trouble put in a street, electricity, water and sewage, why couldn't free competition exist in these markets? Even that old story that it is inefficient to run duplicate power cables just so electric companies can compete already sounds like nonsense to me.
-
@ 1abc40c5:6cd60e41
2023-12-17 07:42:37Hello, and a very warm welcome to the new members of our purple society.
To start, go learn and study how things work from the notes of Jack Goodday, our big brother (you will find that this world is much bigger than it looks).
===================== Then go add an LN Wallet for receiving zaps easily, following the article by Notoshi (Arm RS).
-
If you use Nostr on a computer regularly, you can hook it up to a browser extension for easy use; I recommend "Getalby".
-
If you are mostly on mobile, "Wallet of Satoshi" and "Blink wallet" may suit you better.
But I recommend having both kinds; you will certainly find each useful at the right time, hahaha.
===================== After that, add the Siamstr relays: - wss://relay.siamstr.com - wss://teemie1-relay.duckdns.org
supported by Tee, so that we Thais can stay connected and see each other, as listed here.
And don't forget to add one more Siamstr relay, also supported by our own Arm RS: - wss://relay.notoshi.win
All three of these relays are free to use, so go right ahead ^^
Or if that is still not enough and you want even more relays, check out the relay update of 9/12/2023 from our big brother.
===================== And don't forget to create a cool NIP-05 Nostr address "@siamstr.com" at siamstr FREE NOSTR ADDRESSES. For everyone, for freedom -- first come, first served on names ... ^^
===================== Oh, and if you want to learn the technical basics of Nostr, or how to keep your nsec (private key) safe, see the article by our Arm of Righshift.
Or you can also read about it here:
Nostr: free, decentralized social media
Image: network diagram from https://nostr.how/en/the-protocol
===================== Finally, I hope we get to know each other better, so that together we can keep building a community full of value and good feelings for one another.
Thank you for being a part of us.
-
-
@ ecda4328:1278f072
2023-12-16 20:46:23Introduction: Crypto.com Exchange for European Traders
Reside in Europe, including non-EUR countries? Frustrated with the exorbitant spreads on the Crypto.com App (up to 5%) and high credit card fees (up to 3%) when you're eager to buy or sell cryptocurrency? This guide is tailored for you!
Quick Transfer with SEPA Instant
Did you know that you can transfer EUR to your Crypto.com Exchange account within minutes, and sometimes even seconds, including weekends and bank holidays? Yes, you absolutely can, thanks to SEPA Instant Credit Transfer (SCT Inst).
Supported Banks
To avail of fast EUR to Crypto.com Exchange transfers, ensure that your bank supports the SEPA Instant Credit Transfer (SCT Inst) scheme. If your bank, or FinTech apps like Wise, is on this list, you're good to go.
My Recommendation: I vouch for Wise due to its flawless functionality and favorable exchange rates. If your bank doesn't support SCT Inst and you're unwilling to wait, you can swiftly load your account with any currency using a bank card for a rather small fee (~1.2%). My Wise Referral Code: andreya54
24/7 Transfer Availability
SCT Inst is incredibly flexible:
- 24/7/365 availability, including weekends and bank holidays
- Immediate receipt and availability of funds
- €100,000 transaction limit (unless previously agreed otherwise between PSPs)
For more details, you can read this document by the European Central Bank.
TL;DR
To buy crypto: `EUR => BTC/EUR => ANY/BTC => ANY`
- Use a Limit order at the lowest offer (aka "ask") (red order book)
To sell crypto: `ANY => ANY/BTC => BTC/EUR => EUR`
- Use a Limit order at the highest bid (green order book)
How to Buy Cryptocurrency with EUR
Here's how to save up to 8% on spreads and fees when buying/selling crypto compared to the Crypto.com App.
- Log in to Crypto.com Exchange: Go to the Dashboard or Wallet in the top right, and navigate to Bank Transfer on the left sidebar.
- Deposit via SEPA: Choose `EUR -> SEPA -> Deposit` and use the displayed IBAN for transferring EUR to your Crypto.com Exchange account.
Trading Interface Tips
Intimidated by the professional trading interface? Don't be. Here's how to read the order book:
- Offers (Red): These are the prices at which people are willing to sell the asset. The lowest offer is often called the "Ask."
- Bids (Green): These are the prices at which people are willing to buy the asset. The highest bid is often called the "Bid."
Trading occurs when a buyer's bid meets a seller's offer. In market terms, this is often at the point where the highest bid and the lowest offer intersect. This intersection is frequently referred to as the "market price" for that particular asset at that specific time.
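For intuition only, here is a tiny sketch of what "lowest ask" and "highest bid" mean on an order book (the numbers are made up and this is not Crypto.com's API, just plain Python):

```python
# Each entry is (price_in_eur, amount_in_btc); data is invented for illustration.
asks = [(61_250.0, 0.4), (61_300.0, 1.1), (61_180.0, 0.2)]  # sellers (red side)
bids = [(61_050.0, 0.8), (61_120.0, 0.5), (60_900.0, 2.0)]  # buyers (green side)

best_ask = min(asks, key=lambda o: o[0])  # cheapest seller: where a buy limit order fills fastest
best_bid = max(bids, key=lambda o: o[0])  # highest buyer: where a sell limit order fills fastest

print("lowest ask:", best_ask[0])            # 61180.0
print("highest bid:", best_bid[0])           # 61120.0
print("spread:", best_ask[0] - best_bid[0])  # 60.0 -- the "market price" sits in this gap
```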
On trading pairs
Crypto.com Exchange offers only two EUR trading pairs: BTC/EUR and ETH/EUR. This is logical given that these assets have the highest liquidity in terms of market capitalization. While we hope for the addition of more pairs, the existing options still serve as useful proxies for purchasing other cryptocurrencies.
"highest liquidity" means that an asset like a cryptocurrency or stock can be easily bought or sold without causing a significant impact on its price. For a trader, this is good because it means they can quickly enter or exit positions without worrying about huge price changes.
Steps to Buy BTC with EUR
- Navigate to the `BTC/EUR` Spot trading pair.
- Choose "Buy" on the right-hand menu.
- Stay on the "Limit" tab.
- Specify your order value (how much EUR you want to spend to buy BTC)
- Select the lowest offered (aka "ask") price (bottom of the red order book).
- Click "Buy BTC" to confirm.
Diversifying: How to Buy ATOM Token?
First, buy BTC as previously explained. Then:
- Navigate to the `ATOM/BTC` Spot trading pair.
- Choose "Buy" on the right-hand menu.
- Stay on the "Limit" tab.
- Specify your order value (how much BTC you want to spend to buy ATOM)
- Select the lowest offered (aka "ask") price (bottom of the red order book).
- Click "Buy ATOM" to confirm.
Selling Your Assets
The process is the same but in reverse. Sell at the highest bid (top of the green order book).
Selling ATOM for BTC
- Navigate to the `ATOM/BTC` Spot trading pair.
- Choose "Sell" on the right-hand menu.
- Stay on the "Limit" tab.
- Specify your quantity value (how much ATOM you want to sell to buy BTC)
- Select the highest bid (top of the green order book).
- Click "Sell ATOM" to confirm.
Selling BTC for EUR
- Navigate to the `BTC/EUR` Spot trading pair.
- Choose "Sell" on the right-hand menu.
- Stay on the "Limit" tab.
- Specify your quantity value (how much BTC you want to sell to buy EUR)
- Select the highest bid (top of the green order book).
- Click "Sell BTC" to confirm.
You can then withdraw EUR back to your bank directly from the Crypto.Com Exchange.
Additionally
After you click "Buy" or "Sell," you will see your order in the "Open Orders" tab in the bottom menu. Once your order is 100% filled, it will disappear, and you will be able to see it in the "Trade History" tab.
Sometimes you might miss the opportunity as the price moves, and someone else's order could get filled faster than yours. Don't worry; you can either wait a little longer in hopes that someone will make an offer at this price or simply cancel your order and submit it again at the next lowest price if you are in a hurry.
Maker/Taker Fees Explained
When you open Buy/Sell orders on the Crypto.com Exchange, you may incur small Maker/Taker fees of less than `0.0728%`, or even no fees at all, depending on the amount of CRO you have staked on the platform.
- Maker Fee: When you place an order that adds liquidity to the market, you are a "Maker". This usually happens when you set a "limit" order that does not execute immediately and sits on the order book waiting for someone to match against it. Since you are "making" liquidity available for others, you pay a Maker fee.
- Taker Fee: When you place an order that removes liquidity from the market, you are a "Taker". This generally occurs when you place a "market" order that executes immediately against a pre-existing order on the order book. Because you are "taking" liquidity away, you pay a Taker fee.
Typically, Taker fees are higher than Maker fees as an incentive for traders to add liquidity to the market.
Read more on Maker/Taker fees on the Crypto.Com Exchange platform.
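As a rough back-of-the-envelope comparison (the 0.0728% figure is from the text; the order size is made up), the exchange route is dramatically cheaper than the ~5% App spread plus ~3% card fee mentioned in the introduction:

```python
order_eur = 1_000.0

maker_fee = order_eur * 0.000728       # 0.0728% maker/taker fee -> ~0.73 EUR
app_cost  = order_eur * (0.05 + 0.03)  # up to 5% spread + 3% card fee -> ~80 EUR

print(f"exchange fee: ~{maker_fee:.2f} EUR")
print(f"app spread + card fee: ~{app_cost:.2f} EUR")
print(f"difference: ~{app_cost - maker_fee:.2f} EUR on a {order_eur:.0f} EUR order")
```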
Conclusion
Thanks for reading this guide! It's lengthy, but every detail is crucial for optimal trading on Crypto.com Exchange.
Referral Code
If you haven't opened a Crypto.com Exchange account yet, use my referral code: `pv0r199m6j`
Referral Link: Sign Up Here
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28How IPFS is broken
I once fell for this talk about "content-addressing". It sounds very nice. You know a certain file exists, you know there are probably people who have it, but you don't know where or if it is hosted on a domain somewhere. With content-addressing you can just say "start" and the download will start. You don't have to care.
Other magic properties that address common frustrations: webpages don't go offline, links don't break, valuable content always finds its way, other people will distribute your website for you, any content can be transmitted easily to people near you without anyone having to rely on third-party centralized servers.
But you know what? Saying a thing is good doesn't automatically make it possible and working. For example: saying stuff is addressed by their content doesn't change the fact that the internet is "location-addressed" and you still have to know where peers that have the data you want are and connect to them.
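A minimal sketch of what "content-addressing" means in practice (plain SHA-256 here, not IPFS's actual CID format, and this is my own illustration): the name is derived from the bytes themselves, which is precisely why it tells you nothing about where to fetch them from.

```python
import hashlib

def content_address(data: bytes) -> str:
    # the "name" of the data is a hash of the data itself
    return hashlib.sha256(data).hexdigest()

blob = b"hello, interplanetary world"
print(content_address(blob))  # anyone hashing the same bytes gets the same address...
# ...but the address carries zero information about which peer actually has the bytes,
# which is the problem the location layer still has to solve.
```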
And what is the solution for that? A DHT!
DHT?
Turns out DHTs have a terrible incentive structure (as you would expect: no one wants to hold and serve data they don't care about to others for free), and the IPFS experience proves it doesn't work even in a small network like the IPFS of today.
If you have run an IPFS client you'll notice how much it clogs your computer. Or maybe you don't, if you are very rich and have a really powerful computer, but still, it's not something suitable to be run on the entire world, and on web pages, and servers, and mobile devices. I imagine there may be a lot of unoptimized code and technical debt responsible for these and other problems, but the DHT is certainly the biggest part of it. IPFS can open up to 1000 connections by default and suck up all your bandwidth -- and that's just for exchanging keys with other DHT peers.
Even if you're in the "client" mode and limit your connections you'll still get overwhelmed by connections that do stuff I don't understand -- and it makes no sense to run an IPFS node as a client, that defeats the entire purpose of making every person host files they have and content-addressability in general, centralizes the network and brings back the dichotomy client/server that IPFS was created to replace.
Connections?
So, DHTs are a fatal flaw for a network that plans to be big and interplanetary. But that's not the only problem.
Finding content on IPFS is the slowest experience ever, and for some reason I don't understand, downloading is even slower. Even if you are on the same LAN as another machine that has the content you need, it will still take hours to download some small file that would take seconds with `scp` -- and that's assuming IPFS managed to find the other machine at all, otherwise your command will just be stuck for days.
Now even if you ignore that IPFS objects should be content-addressable and not location-addressable and, knowing which peer has the content you want, you go there and explicitly tell IPFS to connect to that peer directly, maybe you can get a few seconds of (slow) download, but then IPFS will drop the connection and the download will stop. Sometimes -- but not always -- it helps to add the peer address to your bootstrap nodes list (but note this isn't something you should be doing at all).
IPFS Apps?
Now consider the kind of marketing IPFS does: it tells people to build "apps" on IPFS. It sponsors "databases" on top of IPFS. It basically advertises itself as a place where developers can just connect their apps to and all users will automatically be connected to each other, data will be saved somewhere between them all and immediately available, everything will work in a peer-to-peer manner.
Except it doesn't work that way at all. "libp2p", the IPFS library for connecting people, is broken and is rewritten every 6 months, but they keep their beautiful landing pages that say everything works magically and you can just plug it in. I'm not saying they should have everything perfect, but at least they should be honest about what they truly have in place.
It's impossible to connect to other people, after years there's no js-ipfs and go-ipfs interoperability (and yet they advertise there will be python-ipfs, haskell-ipfs, whoknowswhat-ipfs), connections get dropped and many other problems.
So basically all IPFS "apps" out there are just apps that want to connect two peers but can't do it manually because browsers and the IPv4/NAT network don't provide easy ways to do it and WebRTC is hard and requires servers. They have nothing to do with "content-addressing" anything, they are not trying to build "a forest of merkle trees" nor to distribute or archive content so it can be accessed by all. I don't understand why IPFS has changed its core message to this "full-stack p2p network" thing instead of the basic content-addressable idea.
IPNS?
And what about the database stuff? How can you "content-address" a database with values that are supposed to change? Their approach is to just save all values, past and present, and then use new DHT entries to communicate what are the newest value. This is the IPNS thing.
Apparently just after coming up with the idea of content-addressability IPFS folks realized this would never be able to replace the normal internet as no one would even know what kinds of content existed or when some content was updated -- and they didn't want to coexist with the normal internet, they wanted to replace it all because this message is more bold and gets more funding, maybe?
So they invented IPNS, the name system that introduces location-addressability back into the system that was supposed to be only content-addressable.
And how do they manage to do it? Again, DHTs. And does it work? Not really. It's limited, slow, much slower than normal content-addressed fetches, and most of the time it doesn't even work after hours. But still, although developers will tell you it is not working yet, the IPFS marketing talks about it as if it were a thing.
Archiving content?
The main use case I had for IPFS was to store content that I personally cared about and that other people might care too, like old articles from dead websites, and videos, sometimes entire websites before they're taken down.
So I did that. Over many months I archived stuff on IPFS. The IPFS API and CLI don't make it easy to track where stuff is. The `pin` command doesn't help, as it just throws your pinned hash into a sea of hashes and subhashes and you're never able to find again what you have pinned.
The IPFS daemon has a fake filesystem that is half-baked in functionality but allows you to locally address things by names in a tree structure. It is very hard to update or add new things to it, but still doable. It allows you to give names to hashes, basically. I even began to write a wrapper for it, but suddenly, after many weeks of careful content curation and distribution, all my entries in the fake filesystem were gone.
Despite not having lost any of the files, I did lose everything, as I couldn't find them in the sea of hashes I had on my own computer. After some digging, and help from IPFS developers, I managed to recover a part of it, but it involved hacks. My things vanished because of a bug in the fake filesystem. The bug was fixed, but soon after I experienced a similar (new) bug. After that I even tried to build a service for hash archival and discovery, but as all the problems listed above began to pile up I eventually gave up. There were also problems of content canonicalization, of the code the IPFS daemon uses to serve default HTML content over HTTP, problems with the IPFS browser extension and others.
Future-proof?
One of the core advertised features of IPFS was that it made content future-proof. I'm not sure they used this expression, but basically you have content, you hash that, you get an address that never expires for that content, now everybody can refer to the same thing by the same name. Actually, it's better: content is split and hashed in a merkle-tree, so there's fine-grained deduplication, people can store only chunks of files and when a file is to be downloaded lots of people can serve it at the same time, like torrents.
But then come the protocol upgrades. IPFS has used different kinds of hashing algorithms, different ways to format the hashes, and will change the default algorithm for building the merkle-trees, so basically the same content now has a gigantic number of possible names/addresses, which defeats the entire purpose, and yes, files hashed using different strategies aren't automagically compatible.
Actually, the merkle algorithm could have been changed by each person on a file-by-file basis since the beginning (you could for example split a book file by chapter or page instead of by chunks of bytes) -- although probably no one ever did that. I know it's not easy to come up with the perfect hashing strategy in the first go, but the way these matters are being approached make me wonder that IPFS promoters aren't really worried about future-proof, or maybe we're just in Beta phase forever.
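A toy illustration of the point (a generic binary merkle tree over SHA-256 that I'm adding for illustration, not IPFS's actual DAG layout): hashing the very same bytes with two different chunk sizes yields two unrelated roots, i.e. two different "permanent" names for the same content.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(data: bytes, chunk_size: int) -> str:
    # split into chunks, hash each, then fold pairs of hashes up to a single root
    level = [h(data[i:i + chunk_size]) for i in range(0, len(data), chunk_size)] or [h(b"")]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

book = b"the exact same book, byte for byte" * 100
print(merkle_root(book, chunk_size=256))   # one "permanent" address
print(merkle_root(book, chunk_size=1024))  # a completely different one
```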
Ethereum?
This is also a big problem. IPFS is built by Ethereum enthusiasts. I can't read the mind of people behind IPFS, but I would imagine they have a poor understanding of incentives like the Ethereum people, and they tend towards scammer-like behavior like getting a ton of funds for investors in exchange for promises they don't know they can fulfill (like Filecoin and IPFS itself) based on half-truths, changing stuff in the middle of the road because some top-managers decided they wanted to change (move fast and break things) and squatting fancy names like "distributed web".
The way they market IPFS (which is not the main thing IPFS was initially designed to do) as a "peer-to-peer cloud" is very seductive for Ethereum developers just like Ethereum itself is: as a place somewhere that will run your code for you so you don't have to host a server or have any responsibility, and then Infura will serve the content to everybody. In the same vein, Infura is also hosting and serving IPFS content for Ethereum developers these days for free. Ironically, just like the Ethereum hoax peer-to-peer money, IPFS peer-to-peer network may begin to work better for end users as things get more and more centralized.
More about IPFS problems:
- IPFS problems: Too much immutability
- IPFS problems: General confusion
- IPFS problems: Shitcoinery
- IPFS problems: Community
- IPFS problems: Pinning
- IPFS problems: Conceit
- IPFS problems: Inefficiency
- IPFS problems: Dynamic links
See also
- A crappy course on torrents, on the protocol that has done most things right
- The Tragedy of IPFS in a series of links, an ongoing Twitter thread.
-
@ ecda4328:1278f072
2023-12-16 20:45:50Introduction
In the world of blockchain technology, a nonce plays a pivotal role in ensuring transaction security and uniqueness. This article demystifies the nonce's role in major blockchain platforms - Bitcoin, Ethereum, and Cosmos/CometBFT, highlighting its importance and distinct functionalities in each.
The Role of Nonce in ECDSA and Its Importance
In Bitcoin transactions (and in most blockchains), the nonce is a randomly generated number integral to the Elliptic Curve Digital Signature Algorithm (ECDSA). It guarantees each digital signature's uniqueness and security. The randomness and secrecy of the nonce are vital. If predictable or exposed, it can compromise the entire security of a transaction.
The Risk of Nonce Exposure and Private Key Recovery
If a nonce is compromised, it poses a risk of private key recovery. To understand this, one must consider the signature components (r, s), the public key (which becomes known once you sign & broadcast at least one transaction in Bitcoin). A predictable or reused nonce can leak critical information, enabling savvy attackers to backtrack to the private key.
This specific type of attack is called a nonce covert channel attack. And the methods protecting against it are called anti-klepto or anti-exfil (interchangeably).
Worth noting that anti-klepto/anti-exfil are broader terms for methods that protect against various forms of secret data exfiltration, including but not limited to attacks involving nonce misuse.
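To see why the nonce matters so much, here is a hedged toy demonstration (pure modular arithmetic over the secp256k1 group order, with simulated signatures and made-up key values, no real curve library): if the same nonce k is reused for two different messages, anyone holding both signatures can solve for the private key.

```python
# Toy demonstration of ECDSA nonce reuse, using only modular arithmetic
# over the curve order n (signatures are simulated, no real curve points).
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def sign(z, d, k, r):
    # classic ECDSA equation: s = k^-1 * (z + r*d) mod n
    return (pow(k, -1, n) * (z + r * d)) % n

d = 0x1C0FFEE                # "private key" (made up)
k = 0xDEADBEEF               # the nonce -- reused, which is the fatal mistake
r = 0xABCDEF123456789        # r is derived from k, so it is identical in both signatures

z1, z2 = 111111, 222222      # hashes of two different messages
s1, s2 = sign(z1, d, k, r), sign(z2, d, k, r)

# attacker's side: recover k, then the private key, from public values only
k_rec = ((z1 - z2) * pow(s1 - s2, -1, n)) % n
d_rec = ((s1 * k_rec - z1) * pow(r, -1, n)) % n
assert (k_rec, d_rec) == (k, d)
print("recovered private key:", hex(d_rec))
```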
Bitcoin's UTXO Model and Nonce Functionality
Bitcoin utilizes the Unspent Transaction Outputs (UTXO) model, ensuring that each Bitcoin is spent only once. This model is distinct from nonce management in ECDSA and adds an additional layer of security against double-spending in the Bitcoin network.
Address rotation in Bitcoin, as implemented through BIP32 (Hierarchical Deterministic Wallets), enhances privacy and isolates financial risk by generating a unique address and corresponding private key for each address created, not necessarily for each individual transaction. While this strategy effectively segregates risk to individual addresses, it does not directly prevent the vulnerability of private key derivation from nonce exposure in the ECDSA signature process, as this risk is inherent to the signature mechanism itself and is independent of the address or its associated private key.
Ethereum's Account-Based Model and Nonce Usage
Ethereum, unlike Bitcoin, operates on an account-based model. Each account has a sequential transaction nonce, starting from 0, which is public. This nonce, different from the ECDSA nonce, helps in transaction ordering and network integrity.
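For illustration, this is roughly how that public account nonce is read and used when building a transaction with the web3.py library (v6-style names; the RPC endpoint and address below are placeholders):

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.example"))  # placeholder RPC endpoint
account = "0x0000000000000000000000000000000000000000"        # placeholder address

# the account's public, sequential transaction counter
nonce = w3.eth.get_transaction_count(account)

tx = {
    "to": account,
    "value": 0,
    "gas": 21_000,
    "gasPrice": w3.to_wei(10, "gwei"),
    "nonce": nonce,  # reusing or skipping this number gets the tx rejected or stuck
}
print(tx)
```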
Nonces in Cosmos/CometBFT Blockchains
Cosmos/CometBFT blockchains, akin to Ethereum, adopt an account-based model. They use nonces, similar to Ethereum's transaction nonce, for transaction ordering and preventing replay attacks. These nonces are distinct from the ECDSA nonce used in the digital signature process.
Conclusion
The use of "nonce" in Ethereum for transaction sequence (akin to account sequence in Cosmos/CometBFT blockchains), leading to confusion with ECDSA nonce, is simply a coincidental choice of terminology.
In summary, while Bitcoin, Ethereum, Cosmos/CometBFT and many other blockchains employ nonces in ECDSA signing, their transaction management and double-spending prevention methods differ significantly. Understanding these nuances is crucial for blockchain users and developers to appreciate the underlying security mechanisms of these diverse platforms.
References
- Hardware wallets can steal your seed!
- A Glimpse of the Deep: Finding a Creature in Ethereum's Dark Forest
- MuSig-DN: Schnorr Multisignatures with Verifiably Deterministic Nonces
- Android's SecureRandom - not even nonce
- Anti-klepto explained: how the BitBox02 protects you against leaking private keys
- Anti-Exfil: Stopping Key Exfiltration
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Lagoa Santa: how to get there -- starting from the Belo Horizonte bus station
As you step off your bus at the Belo Horizonte bus station at a little past 4 in the morning, you will find yourself facing a cowboy drinking beer in full cowboy garb at a bar right there in the arrivals area. Go up the staircase on the right, which leads to the station's parking lot. Turn left and walk for roughly 400 meters, crossing an area where suspicious people -- though probably asleep on their feet -- watch you, and then a little square occupied by a clan of beggars. When you spot an enormous obelisk in the middle of an intersection of two avenues, turn left and walk another 400 meters. You will see a huge, old and beautiful station with a square in front of it, with lovely water fountains. Hurry away from there and head to a stretch of street to the right of that square. An old stage from carnivals past will be set up more or less in the middle of the charming little cobblestone street: that is where you will catch your next bus.
To enter the station you need a card with reloadable credits. A prudent traveler always leaves a bit of credit on his card in order to avoid queues and other availability problems when he arrives tired from a trip, in a hurry or at odd hours. That kind of person will realize he has been thoroughly duped when he notices that the credits on his card, topped up on his last visit to Belo Horizonte three months ago, have expired and been absorbed by the public coffers. He will therefore have to buy more credits. The booth where cards are topped up opens at 5 a.m., but don't be surprised if it still hasn't opened when the first bus arrives, at 5:10.
With some luck, a young man in a hoodie, authorized by two or three bus-system inspectors who chat away cheerfully, will be operating the turnstile. He lets the drunks, the hustlers and the street kids in without paying. Quite empathetic and perceptive of other people's despair, this good fellow will probably let you in without paying too.
Once inside the bus, don't be intimidated by the loudmouths and bullies who, deeply offended at the driver for stopping at the stations after the previous buses ignored these exalted passengers waiting there, go screaming at him to demand an explanation.
The bus's final stop, 40 minutes later, is the Morro Alto terminal. There you will see, if you look carefully among various buses and people who arouse your most honest suspicion, a dark, unlit vehicle numbered 5882, sheltering inside it a driver and a fare collector sleeping the sleep of the just.
Wait at the door for another twenty minutes or so until, suddenly awake, the driver starts the bus, opens the doors and immediately begins, gently, to pull away. Run in, but then wait a while longer, while the people with loaded cards go through and grab the best seats, until the fare collector wakes up and decides to charge you the fare in that old means of payment, once the most liquid of all: cash.
This last bus should finally take you to Lagoa Santa.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Lightning and its fake HTLCs
Lightning is terrible but can be very good with two tweaks.
How Lightning would work without HTLCs
In a world in which HTLCs didn't exist, Lightning channels would consist only of balances. Each commitment transaction would have two outputs: one for peer `A`, the other for peer `B`, according to the current state of the channel.
When a payment was being attempted to go through the channel, peers would just trust each other to update the state when necessary. For example:
- Channel `AB`'s balances are `A[10:10]B` (in sats);
- `A` sends a 3sat payment through `B` to `C`;
- `A` asks `B` to route the payment. Channel `AB` doesn't change at all;
- `B` sends the payment to `C`, `C` accepts it;
- Channel `BC` changes from `B[20:5]C` to `B[17:8]C`;
- `B` notifies `A` the payment was successful, `A` acknowledges that;
- Channel `AB` changes from `A[10:10]B` to `A[7:13]B`.
This is the case of a success: everything is fine, no glitches, no dishonesty.
But notice that `A` could have refused to acknowledge that the payment went through, either because of a bug, or because it went offline forever, or because it is malicious. Then the channel `AB` would stay as `A[10:10]B` and `B` would have lost 3 satoshis.
How Lightning would work with HTLCs
HTLCs are introduced to remedy that situation. Now, instead of commitment transactions always having only two outputs, one to each peer, they can have HTLC outputs too. These HTLC outputs can go to either side depending on the circumstance.
Specifically, the peer that is sending the payment can redeem the HTLC after a number of blocks have passed. The peer that is receiving the payment can redeem the HTLC if they are able to provide the preimage to the hash specified in the HTLC.
Now the flow is something like this:
- Channel `AB`'s balances are `A[10:10]B`;
- `A` sends a 3sat payment through `B` to `C`: `A` asks `B` to route the payment. Their channel changes to `A[7:3:10]B` (the middle number is the HTLC);
- `B` offers a payment to `C`. Their channel changes from `B[20:5]C` to `B[17:3:5]C`;
- `C` tells `B` the preimage for that HTLC. Their channel changes from `B[17:3:5]C` to `B[17:8]C`;
- `B` tells `A` the preimage for that HTLC. Their channel changes from `A[7:3:10]B` to `A[7:13]B`.
Now if `A` wants to trick `B` and stops responding, `B` doesn't lose money, because `B` knows the preimage: `B` just needs to publish the commitment transaction `A[7:3:10]B`, which gives him 10sat, and then redeem the HTLC using the preimage he got from `C`, which gives him 3 sats more. `B` is fine now.
In the same way, if `B` stops responding for any reason, `A` won't lose the money it put in that HTLC: it can publish the commitment transaction, get 7 back, then redeem the HTLC after the certain number of blocks have passed and get the other 3 sats back.
How Lightning doesn't really work
The example above of how HTLCs work is very elegant, but it has a fatal flaw: transaction fees. Each new HTLC added increases the size of the commitment transaction, and it requires yet another transaction to be redeemed. If we consider fees of 10000 satoshis, that means any HTLC below that amount is as if it didn't exist, because we can't ever redeem it anyway. In fact the Lightning protocol explicitly dictates that if HTLC output amounts are below the fee necessary to redeem them they shouldn't be created.
What happens in these cases then? Nothing, the amounts that should be in HTLCs are moved to the commitment transaction miner fee instead.
So considering a transaction fee of 10000sat for these HTLCs if one is sending Lightning payments below 10000sat that means they operate according to the unsafe protocol described in the first section above.
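To make the "unsafe protocol" concrete, here is a minimal sketch (a toy model added for illustration, not actual Lightning code) of the trust-based bookkeeping described in the first section: balances simply move when peers acknowledge, and an unacknowledged payment is money the routing peer just eats.

```python
# Toy model of trust-based (HTLC-less) routing: A -> B -> C for 3 sats.
channels = {"AB": {"A": 10, "B": 10}, "BC": {"B": 20, "C": 5}}

def pay(channel, frm, to, amount):
    channels[channel][frm] -= amount
    channels[channel][to] += amount

# B forwards the payment to C and C accepts it:
pay("BC", "B", "C", 3)                      # BC: B[17:8]C

a_acknowledges = True
if a_acknowledges:
    # happy path: A acknowledges and the AB channel is updated too
    pay("AB", "A", "B", 3)                  # AB: A[7:13]B
else:
    # A went offline or is malicious: B already paid C and simply lost 3 sats
    pass

print(channels)
```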
It is actually worse, because consider what happens when a channel in the middle of a route has a glitch or one of the peers is unresponsive. The other node, thinking they are operating in the trustless protocol, will proceed to publish the commitment transaction, i.e. close the channel, so they can redeem the HTLC -- only then do they find out they are actually in the unsafe protocol realm and there is no HTLC to be redeemed at all, and they lose not only the money, but also the channel (which cost a lot of money to open and close, in overall transaction fees).
One of the biggest features of the trustless protocol are the payment proofs. Every payment is identified by a hash and whenever the payee releases the preimage relative to that hash that means the payment was complete. The incentives are in place so all nodes in the path pass the preimage back until it reaches the payer, which can then use it as the proof he has sent the payment and the payee has received it. This feature is also lost in the unsafe protocol: if a glitch happens or someone goes offline on the preimage's way back then there is no way the preimage will reach the payer because no HTLCs are published and redeemed on the chain. The payee may have received the money but the payer will not know -- but the payee will lose the money sent anyway.
The end of HTLCs
So considering the points above you may be sad because in some cases Lightning doesn't use these magic HTLCs that give meaning to it all. But the fact is that no matter what anyone thinks, HTLCs are destined to be used less and less as time passes.
Given that over time Bitcoin transaction fees tend to rise, and that multipart payments (MPP) are increasingly being used on Lightning for good, we can expect that soon no HTLC will ever be big enough to be actually worth redeeming, and we will be at a point in which not a single HTLC is real and they're all fake.
Another thing to note is that the current unsafe protocol kicks in whenever the HTLC amount is below what the Bitcoin transaction fee to redeem it would be, but this is not a reasonable threshold. It is not reasonable to lose a channel and then pay 10000sat in fees to redeem a 10001sat HTLC. At which point does it become reasonable to do it? Probably at an amount many times above that, so it would be reasonable to even increase the threshold above which real HTLCs are made -- thus making their existence rarer and rarer.
These are good things, because we don't actually need HTLCs to make a functional Lightning Network.
We must embrace the unsafe protocol and make it better
So the unsafe protocol is not necessarily very bad, but the way it is being done now is, because it suffers from two big problems:
- Channels are lost all the time for no reason;
- No guarantees of the proof-of-payment ever reaching the payer exist.
The first problem we fix by just stopping the current practice of closing channels when there are no real HTLCs in them.
That, however, creates a new problem -- or actually it exacerbates the second: now that we're not closing channels, what do we do with the expired payments in them? These payments should have either been canceled or fulfilled before some block x; now we're at block x+1, our peer has returned from its offline period, and one of us will have to lose the money from that payment.
That's fine because it's only 3sat and it's better to just lose 3sat than to lose both the 3sat and the channel anyway, so either one would be happy to eat the loss. Maybe we'll even split it 50/50! No, that doesn't work, because it creates an attack vector with peers becoming unresponsive on purpose on one side of the route and actually failing/fulfilling the payment on the other side and making a profit with that.
So we actually need to know who is to blame for these payments, even if we are not going to act on that immediately: we need some kind of arbiter that both peers can trust, such that if one peer is trying to send the preimage or the cancellation to the other and the other is unresponsive, when the unresponsive peer comes back the arbiter can tell them they are to blame, so they can willfully eat the loss and the channel can continue. Both peers are happy this way.
If the unresponsive peer doesn't accept what the arbiter says then the peer that was operating correctly can assume the unresponsive peer is malicious and close the channel, and then blacklist it and never again open a channel with a peer they know is malicious.
Again, the differences between this scheme and the current Lightning Network are that:
a. In the current Lightning we always close channels; in this scheme we only close channels when someone is malicious or in other worst-case scenarios (the arbiter is unresponsive, for example).
b. In the current Lightning we close channels without having any clue about who is to blame, and then we just proceed to reopen a channel with that same peer, even if they were actively trying to harm us before.
What is missing? An arbiter.
The Bitcoin blockchain is the ideal arbiter, it works in the best possible way if we follow the trustless protocol, but as we've seen we can't use the Bitcoin blockchain because it is expensive.
Therefore we need a new arbiter. That is the hard part, but not unsolvable. Notice that we don't need an absolutely perfect arbiter, anything is better than nothing, really, even an unreliable arbiter that is offline half of the day is better than what we have today, or an arbiter that lies, an arbiter that charges some satoshis for each resolution, anything.
Here are some suggestions:
- random nodes from the network selected by an algorithm that both peers agree to, so they can't cheat by selecting themselves. The only thing these nodes have to do is to store data from one peer, try to retransmit it to the other peer and record the results for some time.
- a set of nodes preselected by the two peers when the channel is being opened -- same as above, but with more handpicked-trust involved.
- some third-party cloud storage or notification provider with guarantees of having open data in it and some public log-keeping, like Twitter, GitHub or a Nostr relay;
- peers that get paid to do the job, selected by the fact that they own some token (I know this is stepping too close to the shitcoin territory, but could be an idea) issued in a Spacechain;
- a Spacechain itself, serving only as the storage for a bunch of `OP_RETURN`s that are published and tracked by these Lightning peers whenever there is an issue (this looks wrong, but could work).
Key points
- Lightning with HTLC-based routing was a cool idea, but it wasn't ever really feasible.
- HTLCs are going to be abandoned and that's the natural course of things.
- It is actually good that HTLCs are being abandoned, but
- We must change the protocol to account for the existence of fake HTLCs and thus make the bulk of the Lightning Network usage viable again.
See also
- Channel
-
@ ecda4328:1278f072
2023-12-16 20:45:10Intro
I've left Twitter (X), WhatsApp, Telegram, Instagram, Facebook and Google. The driving force behind this decision was the escalating overzealous censorship. I cannot condone platforms that actively indulge in this practice. In all honesty, I've always felt uneasy using "free" platforms riddled with ads where the user is the product and doesn't own the content they produce.
Let's be real: hardly anyone thoroughly reads the Terms of Service (ToS).
Censorship and Shadow Banning
The final straw was when I resorted to a text editor for drafting messages/comments, hoping to rephrase them so they wouldn't get deleted moments after posting. This isn't exclusive to just one platform; I've encountered it on YouTube and LinkedIn too. Twitter (or X, as I now refer to it) has a history of shadow banning users' posts. It's been beyond frustrating to get banned from Telegram groups simply for posing legitimate questions.
You can test LinkedIn's censorship mechanism yourself: simply add the word "Binance" (without quotes) to any of your comments and your post will disappear. At least that is what I saw a couple of months ago. Similarly, comments on YouTube often disappear precisely 60 seconds after posting if they contain specific keywords. I know they call it filtering, but it does not make any sense. In my opinion, legitimate companies and links shouldn't trigger these filters.
Community and Connections
Recently, I attended the Cosmoverse 2023 conference in Istanbul. Most attendees exchanged their Telegram or Twitter (X) contact information. Since I didn't have either, I gladly shared my Nostr and SimpleX Chat details. Many privacy advocates were quick to connect on SimpleX with me, though several didn't.
I learned about SimpleX Chat from Jack Dorsey, who mentioned it during a conversation in July:
While Signal has its shortcomings, I still keep it as a backup communication tool.
One More Last Straw
During the conference, I temporarily reinstalled Telegram to communicate with my group. Convincing nine individuals to switch to SimpleX on the spot seemed impractical.
At the conference, I bought a Keystone hardware wallet. Shortly after, I connected with the seller, Xin Z, on Telegram. However, I was banned from the official Keystone Telegram group right after posing a question.
Upon inquiring, Xin Z clarified that Telegram's official team had banned me, not the group's admin. 🤯
Business and Community: Collateral Damage
Censorship doesn't just silence voices; it hinders potential growth and stifles innovation. When platforms arbitrarily or aggressively censor content, they inadvertently create barriers between businesses and their potential clients. New users or clients, when encountering such heavy-handed moderation, may feel discouraged or unwelcome, causing them to retreat from the platform altogether.
Moreover, for businesses, this form of censorship can be devastating. Word-of-mouth, discussions, and organic community engagements are invaluable. When these channels are hampered, businesses lose out on potential clientele, and communities lose the chance to thrive and evolve naturally.
Censorship, in its overzealous form, breaks the very essence of digital communities: open dialogue. As platforms become more censorious, they risk creating sterile environments devoid of genuine interaction and rich discourse. Such an atmosphere is not conducive for businesses to foster relations, nor for communities to flourish. The ultimate price of overcensorship isn't just the loss of a few voices—it's the fragmentation of digital society as we know it.
Freedom to Choose
I strongly advocate for the adoption of Free and Open Source Software (aka FOSS) products. In these platforms, you aren't treated as the product. However, supporting them through donations/contributions is always an option. Platforms like Nostr and SimpleX Chat are excellent starting points.
My Nostr account:
npub1andyx2xqhwffeg595snk9a8ll43j6dvw5jzpljm5yjm3qync7peqzl8jd4
Disclaimer
This article reflects my personal experiences and opinions. It is not intended to criticize or demean the Keystone hardware wallet product or its quality. Furthermore, the actions taken by Telegram are not a direct representation of the views or policies of the Keystone Telegram group admins. Any reference to specific events or entities is made in the context of broader concerns about platform censorship.
-
@ ecda4328:1278f072
2023-12-16 20:44:09In the evolving world of cryptocurrency, ensuring the security of your digital assets is paramount. For those looking to safeguard their Bitcoin and other cryptocurrencies, selecting the right hardware wallet is crucial. Here, I recommend some top choices based on functionality and security.
Swiss Choices for Bitcoin: BitBox02
For Bitcoin enthusiasts/maxis, the Swiss-made BitBox02 - Bitcoin-only edition is a top recommendation. This wallet focuses solely on Bitcoin, ensuring specialized security and features for the most popular cryptocurrency. Although there is a version supporting multiple cryptocurrencies, the Bitcoin-only edition stands out for its dedicated functionality (Less code means less attack surface, which further improves your security when only storing Bitcoin). Learn more about this wallet here.
Swiss Tangem: A Unique Approach
Another Swiss option is Tangem. Tangem is particularly appealing due to its unique approach to security. With Tangem cards, there's no need to generate or back up a mnemonic phrase (a series of 24 words used to recover wallets). The key is simply not to lose the cards and to remember the access code, which adds an extra layer of security in case of loss. This approach simplifies security while maintaining robust protection for your assets.
Special Offer: For those interested in purchasing a Tangem Wallet, I have a special offer for you. By using my referral code, you can buy a Tangem Wallet at a discounted price. Click here and use the promo code MY6D7U to enjoy your discount.
Explore Tangem here.
Tangem's Mnemonic Phrase Advantage
Tangem's standout feature is the absence of the need for a mnemonic phrase. This reduces the risk of loss due to complicated backup schemes.
However, for those who do backup mnemonic phrases, I recommend using a metal plate for added durability against elements like fire and water. It's crucial to protect your mnemonic phrase thoroughly, as it essentially makes you your own bank. A fireproof and waterproof safe is advisable for storing these phrases. While some split the phrase and store parts in different locations, I advise against overly complicating the storage scheme to prevent loss. Adding an optional password for additional security is also recommended. Remember, mnemonic phrases unlock the same wallet addresses across different hardware wallets, as most use the BIP39 protocol for generating a seed from the 24-word phrase. Check out this list for reference: BIP39 Word List.
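To make that last point concrete, here is a minimal sketch (my own illustration in Python, standard library only, not anything from a wallet vendor) of how a BIP39 mnemonic plus an optional password becomes the seed that every compatible wallet derives its addresses from. The phrase below is the well-known all-"abandon" test phrase, not something anyone should actually use:

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    # BIP39: seed = PBKDF2-HMAC-SHA512(mnemonic, "mnemonic" + passphrase, 2048 rounds, 64 bytes)
    mnemonic_norm = unicodedata.normalize("NFKD", mnemonic)
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase)
    return hashlib.pbkdf2_hmac("sha512", mnemonic_norm.encode(), salt.encode(), 2048, dklen=64)

# Placeholder test phrase only -- never use a phrase that has been typed into a computer.
phrase = "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about"

print(bip39_seed(phrase).hex())                 # same seed on every BIP39-compatible wallet
print(bip39_seed(phrase, "extra word").hex())   # the optional password yields a completely different wallet
```

This is why the same phrase restores the same addresses on a BitBox02, a Keystone or any other BIP39 wallet, and why the optional password effectively creates a separate hidden wallet.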
Worth reading:
- Seed Phrases Explained: Best Practices for Crypto Security
- How Tangem Wallet backs up private keys
- How Your Tangem Wallet Works Without Tangem: Apocalypse Scenario
French Ledger: A Shift in Trust
The French Ledger Nano S Plus or Ledger Nano X were previously high on the recommendation list. However, trust in these devices has diminished due to Ledger's service Ledger Recover allowing mnemonic phrase backups on third-party servers. This goes against the principle of a hardware wallet, where your seed should never be exportable or stored externally.
Chinese Keystone 3 Pro: A Contender with Caveats
As an alternative to Ledger, the Chinese Keystone 3 Pro could be considered. Although there are concerns about its Chinese manufacturing possibly implying vulnerabilities, its functionality—like signing transactions via QR codes without direct computer connection—offers a level of security. However, choosing a hardware wallet should be based on the specific cryptocurrencies you plan to store. For Bitcoin-only storage, the BitBox02 Bitcoin-only edition is highly recommended.
Worth reading:
- Does airgap make Bitcoin hardware wallets more secure?
Czech Trezor: Slow to Innovate and Security Concerns
Regarding the Czech Trezor Model T, it's not recommended due to its slow pace in integrating new features. Recent controversies surrounding the integration of CoinJoin and concerns over censorship and transaction filtering have also marred its reputation. Furthermore, the Trezor Model T has proven to be vulnerable in case of loss, as demonstrated by a hacker in this video, potentially exposing your cryptocurrency to risks.
The Case of Canadian Coldcard (MK4): A Shift from FOSS
An important development in the hardware wallet space is the case of the Coldcard Mk4. Initially, Coldcard was a fully free and open-source software (FOSS) project. In FOSS, the "Free" refers to the freedom to run, study, redistribute, and modify the software, a principle highly valued in the crypto community. However, a significant shift occurred in the journey of Coldcard:
- Initial FOSS and Bootstrapping: Coldcard, developed by nvk and Peter, started as a FOSS project. They bootstrapped the development, rejecting offers from venture capital investors interested in the project.
- Foundation's Entry: Subsequently, the Foundation cloned Coldcard's code and announced a slightly different hardware. Notably, they raised funds from the same VCs that Coldcard had previously turned down.
- License Change: In response to these events, Coldcard altered its software license. The new license permits all activities except those performed by the Foundation, marking a departure from its original FOSS status.
This change has sparked discussions within the cryptocurrency community about the importance of maintaining open-source principles in the development of hardware wallets. The Coldcard MK4's shift from FOSS underscores the complex dynamics between open-source ethos and commercial interests in the crypto world.
Source: X/Twitter thread.
Conclusion
In conclusion, when choosing a hardware wallet, consider the specific cryptocurrencies you'll be storing and the compatibility of the wallet with your intended use, especially concerning smart contracts and other features. While there are numerous options in the market, careful consideration of security features and the reputation of the manufacturer can guide you in making a secure choice for your digital assets.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Jofer
Jofer was a different kind of player. At first glance, no, he seemed the same: a combative defensive midfielder who chased the opposing attackers relentlessly, a good player. But that wasn't the trait that set Jofer apart. Jofer was, let's say, a shooter.
It started in the semifinal of a junior tournament. Jofer's team needed a draw and was under huge pressure from the opponent, but the game was 1-1 and it looked like it would stay that way, in that footballing way in which things look like they will, they really do. Except that in the 46th minute of the second half they conceded a freak goal: Ruizinho from the other team took off down the left and, even though he was left-footed, kept cutting inside, the defenders sort of assuming it was already over anyway, there should be just that one play left, the referee had given two minutes, Ruizinho shot, scored, and the goalkeeper, who only jumped after he had already seen there was no way, stood there cursing.
The ball went back to the center circle and they passed it to Jofer; nobody even came to mark him, the other team was already celebrating, and rightly so, the referee was just messing with everyone by letting the game go on, it was all over anyway. But no, he was right: one more minute of stoppage time, fair enough. In one minute you can score a goal. But how? Jofer thought of those NBA games where, with a few hundredths of a second left, the point guard throws it at the basket any old way and sometimes makes it. From behind the halfway line, maybe? I won't even have the strength to get it to the goal. I'll become a joke, better to pass it to Fumaça next to me and we lose without this humiliation at the end. But, come on, so what? I'll try it anyway; if anything I'll say it was a long pass and in a few days everyone will forget. He looked at his own foot, turned it a little sideways, out and then in (well, if I hit it from here, just right, who knows?), knocked the ball to the side and struck it. The ball went up outrageously, really high, it must have risen some 200 meters. Jofer had no way of having the slightest idea. Then it started coming down, the big goalkeeper running back under the crossbar and watching the ball, getting there and jumping just to follow it, to watch, hanging from the crossbar, as the ball came in still quite high; it hit the inside of the side netting before hitting the ground, bouncing violently and bulging the net high up on the right side from where he was looking.
But all of that was Jofer's dream. He dreamed it awake, on a night when he took a long time to fall asleep, lying in his bed. He kept wondering whether it wouldn't be easy, if he practiced a lot, to hit the goal from really far away, like in the dream, and whether you couldn't score goals that way. The next day he asked Brunildinho, the goalkeeping coach. It was hard to save those balls, even more so if they went up really high, the goalkeeper lost all perspective, the wind changed the trajectory at every instant, there was spin, it would come down fast, but of course it wasn't worth training for that, the chance of hitting the goal was minuscule. But Jofer would only try it after he had practiced a lot and confirmed what in his imagination seemed like an excellent idea.
He started training every day. At first in secret, embarrassed in front of his teammates: he would arrive a little early and stay there, shooting from the center circle. At the slightest sign of anyone approaching, he would stop and go gather the balls. Later, when he started hitting the target, he lost the embarrassment. Everyone at the club found it funny when they saw Jofer training and then heard the explanation from someone's mouth; nobody took it very seriously, but nobody found it entirely ridiculous either. People laughed, but deep down they were actually rooting for it to work.
It so happened that in a game that wasn't worth much, an ugly little draw, at the 40th minute of the second half, with the opponents' marking no longer pressing, everybody happy with the draw and wanting to stop playing already, Henrique, the left midfielder, humble, but still a little intimidating to Jofer (he was really good), passed it to him. Go on, try that madness of yours. He took on the responsibility of our introspective defensive midfielder. It would be more believable if Jofer had missed, first time he tried, there was still plenty of time for him to have the chance to be a hero, nobody gets it right on the first try, but he got it right. Almost like in the dream, Lucas, the goalkeeper, wasn't expecting it; after he saw the shot he laughed, stepped forward to catch the ball he judged would bounce in the box, but it kept going further, further and further, so Lucas was already running, except he started to think it was going out, and he would just hang from the crossbar and play his part of being near the ball. It turned out that because of that goal they finished second in the group of that little tournament, instead of third, and it made no difference at all.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The flaw of "just use paypal/coinbase" arguments
For the millionth time I read somewhere that "custodial bitcoin is not bitcoin" and that "if you're going to use custodial, better use Paypal". No, actually it was "better use Coinbase", but I had heard the "PayPal" version in the past.
There are many reasons why using PayPal is not the same as using a custodial Bitcoin service or wallet that are obvious and not relevant here, such as the fact that you can't have Bitcoin balances on PayPal (or maybe now you can? but you can't send them around); plus all the reasons that are also valid for Coinbase, such as you having to give all your data and selfies of yourself and your government documents and so on -- but let's ignore these reasons for now.
The most important reason why it isn't the same thing is that when you're using Coinbase you are stuck in Coinbase. Your Coinbase coins cannot be used to pay anyone that isn't in Coinbase. So Coinbase-style custodianship doesn't help Bitcoin. If you want to move out of Coinbase you have to withdraw from Coinbase.
Custodianship on Lightning is of a very different nature. You can pay people from other custodial platforms and people that are hosting their own Lightning nodes and so on.
That kind of custodianship doesn't do any harm to anyone, doesn't fracture the network, doesn't reduce the network effect of Lightning, in fact it increases it.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On HTLCs and arbiters
This is another attempt at conveying the same information that should be in Lightning and its fake HTLCs. It assumes you know everything about Lightning and will just highlight a point. This is also valid for PTLCs.
The protocol says HTLCs are trimmed (i.e., not actually added to the commitment transaction) when the cost of redeeming them in fees would be greater than their actual value.
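As a rough illustration of that rule, here is a simplified model of the check (my own sketch, not the exact BOLT 3 formula; the dust limit, feerate and the pre-anchor HTLC transaction weights are just the usual knobs):

```python
# Illustrative only: the real rule lives in BOLT 3 and also depends on whether
# anchor outputs are used. Weights below are the classic (pre-anchor) values.
HTLC_TIMEOUT_WEIGHT = 663   # claiming an offered HTLC after timeout
HTLC_SUCCESS_WEIGHT = 703   # claiming a received HTLC with the preimage

def is_trimmed(htlc_amount_sat: int, feerate_per_kw: int, dust_limit_sat: int, offered: bool) -> bool:
    weight = HTLC_TIMEOUT_WEIGHT if offered else HTLC_SUCCESS_WEIGHT
    claim_fee_sat = feerate_per_kw * weight // 1000
    # If what would be left after paying the fee to claim the HTLC is below the
    # dust limit, the HTLC is not added to the commitment transaction at all.
    return htlc_amount_sat - claim_fee_sat < dust_limit_sat

# e.g. at 2500 sat/kw with a 546 sat dust limit, a 2000 sat payment is trimmed:
print(is_trimmed(2000, 2500, 546, offered=True))    # True
print(is_trimmed(10000, 2500, 546, offered=True))   # False
```

The important part is that the threshold scales with the feerate, which is what the second reason below is about.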
Although this trimming is often dismissed as a non-important fact (people will say "it's trusted for small payments, no big deal"), I think it is indeed very important, for 3 reasons:
- Lightning absolutely relies on HTLCs actually existing because the payment proof requires them. The entire security of each payment comes from the fact that the payer has a preimage that comes from the payee. Without that, the state of the payment becomes an unsolvable mystery. The inexistence of an HTLC breaks the atomicity between the payment going through and the payer receiving a proof.
- Bitcoin fees are expected to grow with time (arguably the reason Lightning exists in the first place).
- MPP makes payment sizes shrink, therefore more and more of Lightning payments are to be trimmed. As I write this, the mempool is clear and still payments smaller than about 5000sat are being trimmed. Two weeks ago the limit was at 18000sat, which is already below the minimum most MPP splitting algorithms will allow.
Therefore I think it is important that we come up with a different way of ensuring payment proofs are being passed around in the case HTLCs are trimmed.
Channel closures
Worse than not having HTLCs that can be redeemed is the fact that in the current Lightning implementations channels will be closed by the peer once an HTLC timeout is reached, either to fulfill an HTLC for which that peer has a preimage or to redeem back those expired HTLCs the other party hasn't fulfilled.
To the surprise of everybody, nodes will do this even when the HTLCs in question were trimmed and therefore cannot be redeemed at all. It's very important that nodes stop doing that, because it makes no economic sense at all.
However, that is not so simple, because once you decide you're not going to close the channel, what is the next step? Do you wait until the other peer tries to fulfill an expired HTLC and tell them you won't agree and that you must cancel that instead? That could work sometimes if they're honest (and they have no incentive to not be, in this case). What if they say they tried to fulfill it before but you were offline? Now you're confused, you don't know if you were offline or they were offline, or if they are trying to trick you. Then unsolvable issues start to emerge.
Arbiters
One simple idea is to use trusted arbiters for all trimmed HTLC issues.
This idea solves both the protocol issue of getting the preimage to the payer once it is released by the payee -- and what to do with the channels once a trimmed HTLC expires.
A simple design would be to have each node hardcode a set of trusted other nodes that can serve as arbiters. Once a channel is opened between two nodes they choose one node from both lists to serve as their mutual arbiter for that channel.
Then whenever one node tries to fulfill an HTLC but the other peer is unresponsive, they can send the preimage to the arbiter instead. The arbiter will then try to contact the unresponsive peer. If it succeeds, then done, the HTLC was fulfilled offchain. If it fails then it can keep trying until the HTLC timeout. And then if the other node comes back later they can eat the loss. The arbiter will ensure they know they are the ones who must eat the loss in this case. If they don't agree to eat the loss, the first peer may then close the channel and blacklist the other peer. If the other peer believes that both the first peer and the arbiter are dishonest they can remove that arbiter from their list of trusted arbiters.
The same happens in the opposite case: if a peer doesn't get a preimage they can notify the arbiter that they haven't received anything. The arbiter may try to ask the other peer for the preimage and, if that fails, settle the dispute in favor of that first peer, which can proceed to fail the HTLC it has with someone else on that route.
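A toy sketch of the dispute flow described above, just to make the branching explicit (all names are invented; this is not a proposed wire protocol):

```python
from dataclasses import dataclass, field

@dataclass
class Arbiter:
    """Toy model of the trusted-arbiter idea sketched above."""
    preimages: dict = field(default_factory=dict)   # payment_hash -> preimage reported to the arbiter

    def report_fulfill(self, payment_hash: str, preimage: str, peer_reachable: bool) -> str:
        self.preimages[payment_hash] = preimage
        if peer_reachable:
            return "forwarded"           # HTLC settled offchain, nothing else to do
        return "pending"                 # keep retrying until the HTLC timeout

    def settle_after_timeout(self, payment_hash: str, other_peer_accepts_loss: bool) -> str:
        if payment_hash not in self.preimages:
            return "fail-htlc"           # nobody showed a preimage: the payment simply fails
        if other_peer_accepts_loss:
            return "loss-accepted"       # the unresponsive peer eats the loss, channel stays open
        return "close-and-blacklist"     # the first peer closes the channel and blacklists the other

arb = Arbiter()
print(arb.report_fulfill("hash1", "preimage1", peer_reachable=False))   # pending
print(arb.settle_after_timeout("hash1", other_peer_accepts_loss=True))  # loss-accepted
```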
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28jq-finder
Made with jq-web, a tool to explore JSON using `jq` queries that build intermediate results so you can inspect each step of the process.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28superform.xyz
This was an app that allowed people to create micro apps powered by forms. Actually just one form I believe. The idea was for the micro apps to be really micro.
For example, you want a list of people, but you can only have at most 10 people in the list. Your app could keep a state with list of people already added and reject any other submissions above the specified limit. This would be done with 3 lines of code and provide an automatic form for people to fill with expected data.
Another example: you wanted to create a list of people who would go to an event, and each would have to bring one item from a list. You created an initial state with the list of items that should be brought, then specified a form where people could write their names and select the item they would bring, then code that, for each submitted form, added the name of the person plus the item they would bring to the state while also removing the selected item from the available items. Also 3 or 4 lines of code.
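Something in the spirit of those few lines could have looked like this (my own reconstruction in Python; the real superform.xyz interface is not documented here, so every name below is invented):

```python
# A guess at the "event items" micro app: one mutable state plus one function
# that handles each form submission.
state = {"available": ["plates", "drinks", "cake"], "going": []}

def on_submit(state, form):
    if form["item"] not in state["available"]:
        return state, "item already taken"
    state["available"].remove(form["item"])
    state["going"].append({"name": form["name"], "item": form["item"]})
    return state, "see you there!"

state, msg = on_submit(state, {"name": "Ana", "item": "cake"})
print(state, msg)
```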
Something like this can't be done anywhere else. But also of course it would be arcane and frighten normal people and so on (although I do believe some "normal" people would be able to use such a thing if they needed it, just like they learn to write complex Excel formulas and still don't call themselves programmers).
See also
- Etleneum, as it is basically the same core idea of a mutable state that is affected by calls, but Etleneum introduces (and actually forces the usage of) money, both in the sense that it acts as an escrow for contract results and that it mandates the payment of a small amount with each call, so it ends up not serving the same purposes.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28My personal experience (as a complete ignorant) of the blocksize debate in 2017
In the beginning of 2017 I didn't know Bitcoin was having a "blocksize debate". I had stopped paying attention to Bitcoin in 2014 after reading Tim Swanson's book on shitcoinery and was surprised people still even cared about Bitcoin while Ethereum and other fancy things were around.
My introduction to the subject was this interview with Andrew Stone and Andrew Clifford from Bitcoin Unlimited (still don't know who these guys are). I've listened to it and kinda liked the conspiracy theory about "a group of developers trying, against miners and users, to control the whole ecosystem by not allowing blocks to grow" (actually, if you listen to this interview that announced the creation of Blockstream and the sidechains whitepaper it does sound like a government agent bribing all the Core developers into forming a consortium that will turn Bitcoin into an Ethereum-like shitcoin under their control -- but this is just a useless digression).
Some time later I listened to this interview with Jimmy Song and was introduced to the two hard forks and the conspiracies and the New York Agreement, and got excited because I didn't care about Bitcoin (I'm ashamed to remember this feeling) and wanted to see things changing, people fighting, Bitcoin burning, for no reason. Oddly, what I grasped from the interview was that Jimmy Song was defending the agreement and expecting everybody to fulfill it.
When the day actually came and "Bitcoin Cash" forked, I looked at it with pity because it clearly looked like a failure from the beginning, but I still cheered for it a bit, still not knowing anything about the debate, besides the fact that blocks were bigger on BCH, which looked like a very reductionist explanation to me.
"Of course it's not just making blocks bigger, that would be too simple, they probably have a very complex plan I'm not apt to understand", I thought.
To my surprise the entire argument was actually just that: bigger blocks, bigger blocks. I came to that conclusion by listening to tomwoods.com/1064, a debate in which reasonable arguments faced childish claims. That debate gave me perspective and was a clear, undisputed win by Jameson Lopp against Roger Ver.
Actually some time before that I had listened to another Tom Woods Show episode thinking it was going to be an episode about Bitcoin, but in fact it was just propaganda about a debate I had almost forgotten. And nothing about Bitcoin, everything about "Bitcoin Cash" and how there were two Bitcoins, one legitimate and the other illegitimate.
So, from the perspective of someone who came to the debate totally fresh and listened only to the big-blocker arguments for a long time, they still don't convince anyone with some common sense (as I would like to think of myself); they just sound like mad dogs and everything they say goes against them.
Fast forward to the present and with much more understanding of the issues in place I started digging some material from 2016-2017 about the debate to try to get more context, and found this ridiculous interview with Mike Hearn. It isn't a waste of time to listen to it if you're not familiar with the debate from that time.
As I should have probably expected from my experience with Epicenter.tv, both the interviewers agreed with Mike Hearn about his ridiculous claims about how (not his words) we have to subsidize the few thousand current Bitcoin users by preventing fees from increasing and there are no trade-offs to doing that -- and even with everybody agreeing they all manage to sound stupid. There's not a single phrase that is defensible in the entire interview, no criticisms make any sense, and it makes me feel bad for the guy as he feels so self-assured and obviously right.
After knowing about these and other adventures of stupid people with high influences in the Bitcoin world trying to impose their idiocy on others it feels even more odd and unexpected to find Bitcoin in the right track. Generally in politics the most dumb wins, but apparently not in Bitcoin.
Bitcoin is a miracle.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Timeu
The four elements, the sphere as the most perfect shape, the five senses, pain as disturbance and pleasure as a return, the demiurge who creates in the best way possible with the matter he has, the concept of hard and soft: all those things they teach in schools and in cartoons, or that enter our consciousness who knows how, as if they were a truth, but always a provisional, childish truth -- like the childish names for the fingers (mata-piolho, fura-bolo and so on) -- which even children know isn't really true.
It seems like all of these things are in this book. Maybe even the classification of the five fingers as mata-piolho and such, but maybe I slept through that part.
I wonder whether these things weren't traditionally taught in the Middle Ages as absolute truth (after all, there was Plato saying them, in his only work) and persisted until today in a tradition that keeps itself going by fits and starts, against everything and everyone, without anyone knowing how: a body of knowledge nobody believes in but still finds beautiful, harmonious, and which comes, stripped of its origins and primary sources and of all its context, to disturb children's understanding of the world.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Per-paragraph paywalls
Using the lnurl-allowance protocol, a website could, instead of putting a paywall over the entire site, charge a reader for only the paragraphs they read. Of course this requires the reader to trust the website, but this is normal. The website could just hide the rest of the article until an invoice for the paragraph just read was paid.
This idea came from Colin from the Unhashed Podcast.
Could also work with podcasts and videos.
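A minimal sketch of the server-side bookkeeping such a site would need (plain Python; the actual lnurl-allowance or invoice check is stubbed out, since the point here is only the per-paragraph gating):

```python
from typing import Optional

paragraphs = ["First paragraph...", "Second paragraph...", "Third paragraph..."]
PRICE_SAT = 21
paid = {}  # reader_id -> number of paragraphs already released

def invoice_is_settled(reader_id: str, index: int) -> bool:
    # Stub: a real site would check the lightning invoice (or the
    # lnurl-allowance budget) associated with this reader and paragraph.
    return True

def next_paragraph(reader_id: str) -> Optional[str]:
    index = paid.get(reader_id, 0)
    if index >= len(paragraphs):
        return None                  # article finished
    if not invoice_is_settled(reader_id, index):
        return None                  # hide the rest until this paragraph is paid
    paid[reader_id] = index + 1
    return paragraphs[index]

print(next_paragraph("reader-1"))    # settles PRICE_SAT, returns the first paragraph
```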
-
@ 6ad3e2a3:c90b7740
2023-12-13 12:40:31We often fantasized about time travel, mostly what stocks we would buy, what bets we would make. Buster Douglas and the 1999 St. Louis Rams to win the Super Bowl were two I liked to bring up. But that was before Henry got caught up in politics, and the conversation turned to altering events in world history.
Henry argued if you had one trip you’d be morally obligated to prevent some of history’s worst tragedies, and for him the go-to example was killing baby Hitler. If you were there and had the chance, you’d have to do it, he’d say, no matter how hard it would be to murder an innocent baby. If you hesitated to agree, he’d browbeat you, saying good people doing nothing is what allows evil to thrive. He saw it as an obvious choice: one as yet innocent baby in exchange for the lives and suffering of millions.
This made the exercise considerably less enjoyable, and we dropped it. That is, until The Simulator. The Simulator was a breakthrough technology, part virtual reality game, part research tool that enabled virtual experiments from modeling future events to modifying past ones and seeing present-day results. I’m oversimplifying, but it worked by scanning every recorded byte, including old maps, regional soil composition, weather patterns, temperature data, census records and every published book in human history. From stock market data to the Code of Hammurabi, to the fully mapped human genome, The Simulator drew inputs for its algorithm. Some believed the developers had access to classified material from the world’s intelligence agencies, including UFO encounters deemed too sensitive for public consumption.
Of course we only had the commercialized game version — the full one was prohibitively expensive and available only to those with official authorization. But the game version was robust enough, and already the software of choice for sports betting, weather forecasting and for some stock pickers, (though many suspected hedge funds and large institutions had access to the full version and avoided the capital markets entirely.)
I initially did investing experiments, buying Apple’s IPO, Amazon stock and eventually bitcoin and became the richest person in the world 10 times over. Although in one experiment, I owned so much bitcoin it became overly centralized and never took off. In that world, Facebook launched its Libra coin without much government resistance as few grasped the possibilities of fully digital currencies. The result was a Facebook-government partnership where you got docked Libra coin (the only currency in which you could pay federal taxes) for unfavored associations and viewpoints.
But I soon grew bored of the financial experiments and started doing weird things like going back to 2019, catching COVID on purpose and spreading it as widely as possible, before people thought it was a threat. In one simulation, there was no acknowledged pandemic, only a bad “flu” season.
I was about to log off and tackle a work project on which I had procrastinated for too long, when I remembered Henry’s insistence that I was obligated to kill baby Hitler. I never bought his arguments entirely — absolute certainty is always a red flag — but I didn’t have a good counter for them, either. I resolved to run the experiment and find out for myself.
It wouldn’t be easy as the commercial version of The Simulator had rules around acts of violence. I’d also have to dig up fairly specific knowledge in a presumably less developed part of the game (19th-century Hungary.) But The Simulator was adept at making do with the available history and filling in blanks with fictional characters. There would be a street address and a house where he lived. There should be an opportunity to see how it played out. Of course, one could simply delete Hitler and run simulations without him, and I tried that first, but the moral question was not whether the world would be better off without Hitler, but whether it would be right — or obligatory even, in Henry’s framing — to murder the baby in his crib.
I prefer not to go into the details. The broad outlines are I found a hack to disable the violence restriction, went to his childhood home and had to bludgeon a young woman (his nanny?) before doing it. For those who have never used The Simulator, “Full Immersion Mode” isn’t quite real life, but it’s substantially more visceral than shooting avatars in a video game. What I did was horrific, even though I knew it wasn’t real, and even though Henry believed the act would’ve been heroic if it were. I actually vomited afterwards, and as I type this 10 days later, I feel queasy recalling it.
Nonetheless, I ran a simulation forward. The Third Reich was run by committee. There was a front man, someone of whom I had never heard, who was more charismatic than Hitler, but decision-makers behind the scenes, including some of Hitler’s generals, were just as ruthless. There were concentration camps, though oddly in different locations, and the result was seemingly as bad.
But that was only one version of events. The Simulator (through randomization of certain parameters) could run infinitely many different futures from any given point in time. I ran a few more, and they were all dystopian in different ways. There was one version, however, that particularly struck me.
In it, Hitler rose to power as he did in the real world, and things unfolded more or less the way we’ve read about them in history. At first, I thought there must be an error — after all, every simulation began the hour after I smothered him in his crib. But as I checked the local newspapers from that era, indeed a baby had tragically died, and his brain damaged nanny was blamed (and subsequently hanged) for the crime, but it was a different baby, Max Muller, son of a local tavern owner, who committed suicide two years later. How could that be? Not only did I check all the details exhaustively, but they proved correct in all the other simulations. The randomizer must have swapped the location of baby Hitler with this other infant. In this version, I murdered (virtually, thank God) an innocent baby, destroyed his family and an innocent nanny without preventing anything.
. . .
When I met with Henry a week later, he wasn’t convinced. The Simulator isn’t reality, he argued, and the version with the wrong baby proved it. His hypothetical entailed killing the actual baby Hitler in the real world, not some case of mistaken identity. If you could be sure to kill the real Hitler and prevent the Holocaust from happening, he maintained, you’d still have to do it. The Simulator’s randomization algorithm made it impossible ever to know what would happen in its many possible futures, especially in the long run.
I now understood his argument. If we had certainty about how our actions would affect the world, the moral imperative would be clear. But certainty about the future was unattainable, for the path from unknown to known is the arrow of time itself. Henry’s hypothetical then was inherently contradictory, a square circle he imagined were an actual shape.
One could never be assured about the long term effects of one’s actions, and any attempt to do the math was quickly overwhelmed by infinite permutations. Doing something abhorrent as the means to a noble end was to fancy oneself a mathematical God, something no decent person would attempt. It was the ideology of monsters, forever imagining they could create a more perfect history, a more perfect future, a more perfect human race.
-
@ a023a5e8:ff29191d
2023-12-07 04:37:55which operating system do you recommend using on my PC?
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Conceit
IPFS is trying to do many things. The IPFS leaders are revolutionaries who think they're smarter than the rest of the entire industry.
The fact that they've first proposed a protocol for peer-to-peer distribution of immutable, content-addressed objects, then later tried to fix that same problem using their own half-baked solution (IPNS) is one example.
Other examples are their odd appeal to decentralization in a very non-specific way, their excessive flirtation with Ethereum and their never-to-be-finished can-never-work-as-advertised Filecoin project.
They could have focused on just making the infrastructure for distribution of objects through hashes (not saying this would actually be a good idea, but it had some potential) over a peer-to-peer network, but in trying to reinvent the entire internet they screwed everything up.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Parallel Chains
We want merged-mined blockchains. We want them because it is possible to do things in them that aren't doable in the normal Bitcoin blockchain because it is rightfully too expensive, but there are other things beside the world money that could benefit from a "distributed ledger" -- just like people believed in 2013 --, like issued assets and domain names (just the most obvious examples).
On the other hand we can't have -- like people believed in 2013 -- a copy of Bitcoin for every little idea with its own native token that is mined by proof-of-work and must get off the ground from being completely valueless into having some value by way of a miracle that operated only once with Bitcoin.
It's also not a good idea to have blockchains with custom merged-mining protocol (like Namecoin and Rootstock) that require Bitcoin miners to run their software and be an active participant and miner for that other network besides Bitcoin, because it's too cumbersome for everybody.
Luckily Ruben Somsen invented this protocol for blind merged-mining that solves the issue above. Although it doesn't solve the fact that each parallel chain still needs some form of "native" token to pay miners -- or it must use another method that doesn't use a native token, such as trusted payments outside the chain.
How does it work
With the `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT` soft-fork[^eltoo] it becomes possible to create presigned transactions that aren't related to any previous UTXO.
Then you create a long sequence of transactions (sufficient to last for many many years), each with an `nLockTime` of 1 and each spending the next (you create them from the last to the first). Since their `scriptSig` (the unlocking script) will use `SIGHASH_ANYPREVOUT`, you can obtain a transaction id/hash that doesn't include the previous TXO. You can, for example, in a sequence of transactions `A0-->B` (B spends output 0 from A), include the signature for "spending A0 on B" inside the `scriptPubKey` (the locking script) of "A0".
With the contraption described above it is possible to make that long string of transactions everybody will know (and know how to generate), but each transaction can only be spent by the next previously decided transaction, no matter what anyone does, and there always must be at least one block of difference between them.
Then you combine it with `RBF`, `SIGHASH_SINGLE` and `SIGHASH_ANYONECANPAY` so parallel chain miners can add inputs and outputs to be able to compete on fees by including their own outputs and getting change back, while at the same time writing a hash of the parallel block in the change output, and you get everything working perfectly: everybody trying to spend the same output from the long string, each with a different parallel block hash, only the highest bidder will get the transaction included on the Bitcoin chain and thus only one parallel block will be mined.
See also
[^eltoo]: The same thing used in Eltoo.
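Since the construction above is dense, here is a toy model of the one property it relies on: because the signature does not commit to the previous outpoint (mimicking `SIGHASH_ANYPREVOUT`), the whole string of transactions can be generated from the last one to the first, with each output embedding the authorization for the next predetermined spend. This is plain Python with hashing standing in for signing, not real Bitcoin code:

```python
import hashlib
import json

def h(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def sign_anyprevout(tx_template) -> str:
    # commits to nLockTime and outputs, but NOT to the previous outpoint
    return h({"nLockTime": tx_template["nLockTime"], "outputs": tx_template["outputs"]})

N = 5                       # in reality: enough transactions to last for many years
chain = []
next_sig = None
for i in range(N, 0, -1):   # built from the last transaction back to the first
    tx = {"nLockTime": 1, "outputs": [{"embedded_sig_for_next": next_sig}]}
    next_sig = sign_anyprevout(tx)
    chain.insert(0, tx)

# Everybody can regenerate this exact chain, and each transaction's output
# embeds the only signature that lets the next predetermined transaction spend it.
print(chain[0]["outputs"][0]["embedded_sig_for_next"] == sign_anyprevout(chain[1]))  # True
```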
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Comprimido desodorante
In some episode or other of Aleixo FM, Bruno Aleixo says that drunk people always have the best ideas, and then tells of an idea he had while drunk: a pill that works as deodorant. Instead of applying spray or roll-on deodorant, a person can just take the pill and that's it; it's much more practical, and in cold weather you can get dressed faster, without having to stand there applying anything with your torso completely bare. When Busto asks him whether something like that could be manufactured, he says he doesn't know, he is not a scientist, he only has the ideas.
This very silly passage from a comedy show hides a truth about the scientistic doctrine that permeates society: the doctrine according to which technological innovations, and innovations of every kind, come from science, and therefore the State must take money from working people and give it to the scientists. At that point nobody knows anymore what a scientist is; all the concreteness is gone, only the name remains: "scientist". So they go looking for this scientist, and he turns out to be some guy who graduated from a university and is doing a master's degree. There you go: just give money to this guy and everything will be fine.
Leaving aside the problem of the disconnect between reality and the thesis, there is also, of course, the problem of the thesis itself: it makes no sense for a scientist to go around looking for ways to realize an idea -- one that nobody even knows is possible or desirable -- that he or someone else had; quite the opposite (but I won't say here what the scientist was supposed to be doing instead, because that would be contradictory, and I don't think scientists should even exist).
What I really wanted to say was: the entire scientific apparatus of our society, all the departments, universities, budgets and grants and journals, it all boils down to a bunch of people trying to figure out how to make a deodorant pill.
-
@ 8fb140b4:f948000c
2023-11-21 21:37:48Embarking on the journey of operating your own Lightning node on the Bitcoin Layer 2 network is more than just a tech-savvy endeavor; it's a step into a realm of financial autonomy and cutting-edge innovation. By running a node, you become a vital part of a revolutionary movement that's reshaping how we think about money and digital transactions. This role not only offers a unique perspective on blockchain technology but also places you at the heart of a community dedicated to decentralization and network resilience. Beyond the technicalities, it's about embracing a new era of digital finance, where you contribute directly to the network's security, efficiency, and growth, all while gaining personal satisfaction and potentially lucrative rewards.
In essence, running your own Lightning node is a powerful way to engage with the forefront of blockchain technology, assert financial independence, and contribute to a more decentralized and efficient Bitcoin network. It's an adventure that offers both personal and communal benefits, from gaining in-depth tech knowledge to earning a place in the evolving landscape of cryptocurrency.
Running your own Lightning node for the Bitcoin Layer 2 network can be an empowering and beneficial endeavor. Here are 10 reasons why you might consider taking on this task:
- Direct Contribution to Decentralization: Operating a node is a direct action towards decentralizing the Bitcoin network, crucial for its security and resistance to control or censorship by any single entity.
- Financial Autonomy: Owning a node gives you complete control over your financial transactions on the network, free from reliance on third-party services, which can be subject to fees, restrictions, or outages.
- Advanced Network Participation: As a node operator, you're not just a passive participant but an active player in shaping the network, influencing its efficiency and scalability through direct involvement.
- Potential for Higher Revenue: With strategic management and optimal channel funding, your node can become a preferred route for transactions, potentially increasing the routing fees you can earn.
- Cutting-Edge Technological Engagement: Running a node puts you at the forefront of blockchain and bitcoin technology, offering insights into future developments and innovations.
- Strengthened Network Security: Each new node adds to the robustness of the Bitcoin network, making it more resilient against attacks and failures, thus contributing to the overall security of the ecosystem.
- Personalized Fee Structures: You have the flexibility to set your own fee policies, which can balance earning potential with the service you provide to the network.
- Empowerment Through Knowledge: The process of setting up and managing a node provides deep learning opportunities, empowering you with knowledge that can be applied in various areas of blockchain and fintech.
- Boosting Transaction Capacity: By running a node, you help to increase the overall capacity of the Lightning Network, enabling more transactions to be processed quickly and at lower costs.
- Community Leadership and Reputation: As an active node operator, you gain recognition within the Bitcoin community, which can lead to collaborative opportunities and a position of thought leadership in the space.
These reasons demonstrate the impactful and transformative nature of running a Lightning node, appealing to those who are deeply invested in the principles of bitcoin and wish to actively shape its future. Jump aboard, and embrace the journey toward full independence. 🐶🐾🫡🚀🚀🚀
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Zettelkasten
https://writingcooperative.com/zettelkasten-how-one-german-scholar-was-so-freakishly-productive-997e4e0ca125 (a somewhat silly article, but useful).
This incredible technique of saving notes without categories, without folders, without a predefined hierarchy, just making references from one note to another and supposedly letting an order (or a heterarchy, as they said) emerge from the chaos, seems to be what was missing for me to manage to write down my thoughts and ideas in a decent way. We'll see.
Ah, and I'm going to use that `neuron` thing, which also generates websites from the notes? I think it will be good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A Lightning penalty transaction
It was a cold day and I remembered that this `lightningd` node I was running on my local desktop to work on poncho actually had mainnet channels in it. Two channels, both private, bought on https://lnbig.com/ a while ago when I was trying to conduct an anonymous griefing attack on big nodes of the network just to prove it was possible (the attempts proved unsuccessful after some hours and I gave up).
It is always painful to close channels because paying fees hurts me psychologically, and then it hurts even more to be left with a new small UTXO that will have to be spent somewhere but that can barely pay for itself, but it also didn't make sense to just leave the channels there and risk forgetting them and losing them forever, so I had to do something.
One of the channels had 0 satoshis on my side, so that was easy. Mutually closed and I don't have to think anymore about it.
The other one had 10145 satoshis on my side -- out of a total of 100000 satoshis. Why can't I take my part out over Lightning and leave the full channel UTXO to LNBIG? I wish I could do that, I don't want a small UTXO. I was not sure about it, but if the penalty reserve was 1% maybe I could take out about 9000 satoshis and then close it with 1000 on my side? But then what would I do with this 1000 sat UTXO that would remain? Can't I donate it to miners or something?
I was in the middle of this thought stream when the idea came to me of causing a penalty transaction to give those abundant 1000 sat to Mr. LNBIG as a donation for his excellent services to the network and the cause of Bitcoin, and for having supported the development of https://sbw.app/ and the hosted channels protocol.
Unfortunately `lightningd` doesn't have a command `triggerpenaltytransaction` or `trytostealusingoldstate`, so what I did was:
First I stopped `lightningd`, then copied the database to elsewhere: `cp ~/.lightningd/bitcoin/lightningd.sqlite3 ~/.lightning/bitcoin/lightningd.sqlite3.bak`, then I restarted `lightningd` and fought against the way-too-aggressive MPP splitting algorithm the `pay` command uses to pay invoices, but finally managed to pull about 9000 satoshis to my Z Bot that lives on the terrible (but still infinitely better than Twitter DMs) "webk" flavor of the Telegram web application and which is linked to my against-bitcoin-ethos-country-censoring ZEBEDEE Wallet. The operation wasn't smooth but it didn't take more than 10 invoices and `pay` commands.
With the money out and safe elsewhere, I stopped the node again, moved the database back with a reckless `mv ~/.lightning/bitcoin/lightningd.sqlite3.bak ~/.lightningd/bitcoin/lightningd.sqlite3` and restarted it, but to prevent my `lightningd` from being super naïve and telling LNBIG that it had an old state (I don't know if this would happen), which would cause LNBIG to close the channel in a boring way, I used the `--offline` flag, which apparently causes the node to not do any external connections.
Finally I checked my balance using `lightning-cli listfunds` and there it was, again, the 10145 satoshis I had at the start! A fantastic money creation trick, comparable to the ones central banks execute daily.
I was ready to close the channel now, but the `lightning-cli close` command had an option for specifying how many seconds I would wait for a mutual close before proceeding to a unilateral close. There is no `forceclose` command like Éclair has, or anything like that. I was afraid that even if I gave LNBIG one second it would try to do boring things, so I paused to consider how I could just broadcast the commitment transaction manually, looked inside the SQLite database and the `channels` table with its millions of columns with cryptic names in the unbearable `.schema` output, imagined that `lightningd` maybe wouldn't know how to proceed to take the money from the `to-local` output if I managed to broadcast it manually (and in the unlikely event that LNBIG wouldn't broadcast the penalty transaction), so I decided to just accept the risk and call `lightning-cli close 706327x1588x0 1`.
But it went well. The `--offline` flag apparently really works, as it just considered LNBIG to be offline and 1 second later I got the desired result.
My happiness was complete when I saw the commitment transaction with my output for 10145 satoshis published on the central database of Bitcoin, blockstream.info.
Then I went to eat something and it seems LNBIG wasn't offline or sleeping, he was certainly looking at all the logs from his 274 nodes in a big room full of monitors, very alert and eating an apple while drinking coffee, ready to take action, for when I came back, minutes later, I could see it, again on the single source of truth for the Bitcoin blockchain, the Blockstream explorer. I refreshed the page and there it was, a small blue link right inside the little box that showed my `to-local` output, a notice saying it had been spent -- not by my `lightningd`, since that would have to wait 9000 blocks, but by the same transaction that spent the other output, from which I could be very sure it was it, the glorious, mighty, unforgiving penalty transaction, splitting the earth, showing itself in all its power, and taking my 10145 satoshis to their rightful owner.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Flowi.es
At the time I thought Workflowy had the ideal UI for everything. I wanted to implement my custom app maker on it, but ended up doing this: a platform for enhancing Workflowy with extra features:
- An email reminder based on dates input in items
- A website generator, similar to Websites For Trello, also based on Classless Templates
Also, I didn't remember this was also based on CouchDB and had some couchapp functionalities.
-
@ 8fb140b4:f948000c
Venturing into the dynamic world of bitcoin's layer 2 networks, particularly the lightning network, can seem like an exciting frontier for tech enthusiasts and cryptocurrency aficionados. However, the decision to run your own lightning node is not one to be taken lightly. While the allure of contributing to the bitcoin ecosystem and potentially earning transaction fees is strong, there are significant considerations that should temper the enthusiasm of would-be node operators. From the intricate technicalities to unexpected challenges, here are 10 compelling reasons why running your own lightning node might not be the electrifying experience you anticipated.
Running your own lightning node for the bitcoin layer 2 network can be a complex and demanding task. Here are 10 reasons why you might choose not to:
- Technical complexity: setting up and managing a lightning node requires a good understanding of blockchain technology and network management, which can be overwhelming for beginners.
- Security risks: running a node means you're responsible for securing it against potential cyber attacks, which requires constant vigilance and technical expertise.
- Resource intensive: a lightning node requires continuous internet connection and sufficient hardware resources, which can be costly in terms of electricity and equipment.
- Liquidity requirements: to facilitate transactions, you need to lock up a significant amount of bitcoin in your channels, which might not be ideal if you prefer liquidity.
- Maintenance efforts: regular maintenance and updates are necessary to keep the node running smoothly, which can be time-consuming.
- Limited privacy: operating a node might expose some of your transaction details or ip address, potentially compromising privacy.
- Slow ROI: the financial return on operating a lightning node can be slow and uncertain, especially if the network fees are low.
- Network complexity: understanding and managing channel capacities, routing, and fees can be complicated and require continuous learning and adaptation.
- Scalability issues: as the network grows, managing a node can become increasingly challenging, with more channels and transactions to handle.
- Downtime risks: if your node goes offline, you might miss out on transaction fees or, worse, risk losing funds in channels due to outdated channel states.
These reasons reflect the challenges and responsibilities that come with running your own lightning node and may discourage some individuals, especially those with limited technical background or resources. If you are still up for a challenge, thank you for supporting the network! 🐶🐾🫡🙏🏻
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On the state of programs and browsers
There are basically (not exhaustively) 2 kinds of programs one can run in a computer nowadays:
1.1. A program that is installed, permanent, has direct access to the Operating System, can draw whatever it wants, modify files, interact with other programs and so on;
1.2. A program that is transient, fetched from someone else's server at run time, interpreted, rendered and executed by another program that bridges the access of that transient program to the OS and other things.
Meanwhile, web browsers have basically (not exhaustively) two use cases:
2.1. Display text, pictures, videos hosted on someone else's computer;
2.2. Execute incredibly complex programs that are fetched at run time, executed and so on -- you get it, it's the same 1.2.
These two use cases for browsers are at big odds with one another. While stretching themselves to become more and more a platform for programs that can do basically anything (in the 1.1 sense), browsers are still restricted to being a 1.2 platform. At the same time, websites that were supposed to be on 2.1 sometimes get confused and start acting as if they were 2.2 -- and other confusing mixed-up stuff.
I could go hours in philosophical inquiries on the nature of browsers, how rewriting everything in JavaScript is not healthy or where everything went wrong, but I think other people have done this already.
One thing that bothers me a lot, though, is that computers can do a lot of things, and with the internet and in the current state of the technology it's fairly easy to implement tools that would help in many aspects of human existence and provide high-quality, useful programs, with the help of a server to coordinate access, store data, authenticate users and so on many things are possible. However, due to the nature of UI in the browser, it's very hard to get any useful tool to users.
Writing a UI, even the most basic UI imaginable (some text input boxes and some buttons, or a table), can take a long time, always more than the time necessary to code the actual core features of whatever program is being developed -- and that is assuming the person capable of writing interesting programs that do the functionality in the backend is also capable of dealing with JavaScript and the giant amount of frameworks, transpilers, styling stuff, CSS, the fact that all this is built on top of HTML and so on.
This is not good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Trello Attachment Editor
A static JS app that allowed you to authorize with your Trello account, fetch the board structure, find attachments, edit them in the browser then replace them in the cards.
Quite a nice thing. I believe it was done to help with Websites For Trello attached scripts and CSS files.
See also
-
@ 8fb140b4:f948000c
2023-11-18 23:28:31Chef's notes
Serving these two dishes together will create a delightful centerpiece for your Thanksgiving meal, offering a perfect blend of traditional flavors with a homemade touch.
Details
- ⏲️ Prep time: 30 min
- 🍳 Cook time: 1 - 2 hours
- 🍽️ Servings: 4-6
Ingredients
- 1 whole turkey (about 12-14 lbs), thawed and ready to cook
- 1 cup unsalted butter, softened
- 2 tablespoons fresh thyme, chopped
- 2 tablespoons fresh rosemary, chopped
- 2 tablespoons fresh sage, chopped
- Salt and freshly ground black pepper
- 1 onion, quartered
- 1 lemon, halved
- 2-3 cloves of garlic
- Apple and Sage Stuffing
- 1 loaf of crusty bread, cut into cubes
- 2 apples, cored and chopped
- 1 onion, diced
- 2 stalks celery, diced
- 3 cloves garlic, minced
- 1/4 cup fresh sage, chopped
- 1/2 cup unsalted butter
- 2 cups chicken broth
- Salt and pepper, to taste
Directions
- Preheat the Oven: Set your oven to 325°F (165°C).
- Prepare the Herb Butter: Mix the softened butter with the chopped thyme, rosemary, and sage. Season with salt and pepper.
- Prepare the Turkey: Remove any giblets from the turkey and pat it dry. Loosen the skin and spread a generous amount of herb butter under and over the skin.
- Add Aromatics: Inside the turkey cavity, place the quartered onion, lemon halves, and garlic cloves.
- Roast: Place the turkey in a roasting pan. Tent with aluminum foil and roast. A general guideline is about 15 minutes per pound, or until the internal temperature reaches 165°F (74°C) at the thickest part of the thigh.
- Rest and Serve: Let the turkey rest for at least 20 minutes before carving.
- Next: Apple and Sage Stuffing
- Dry the Bread: Spread the bread cubes on a baking sheet and let them dry overnight, or toast them in the oven.
- Cook the Vegetables: In a large skillet, melt the butter and cook the onion, celery, and garlic until soft.
- Combine Ingredients: Add the apples, sage, and bread cubes to the skillet. Stir in the chicken broth until the mixture is moist. Season with salt and pepper.
- Bake: Transfer the stuffing to a baking dish and bake at 350°F (175°C) for about 30-40 minutes, until golden brown on top.
-
@ 8fb140b4:f948000c
2023-11-13 07:55:20Hey there, train enthusiasts! Let's chat about one of the most impressive rail systems in the world - Japan's trains. When you think of Japanese trains, the first image that probably pops into your mind is the sleek, futuristic Shinkansen, also known as the bullet train. These high-speed trains are not just a symbol of modern technology; they're a testament to Japan's commitment to efficiency and punctuality. It's not an exaggeration to say that Japanese trains are famous for being precisely on time. If a train is five minutes late, it's often considered a significant delay!
But it's not just the Shinkansen that deserves praise. Japan's entire railway network, from city subways to rural train lines, is incredibly well-organized. The trains are clean, comfortable, and surprisingly quiet, making even a regular commute a pleasant experience. What's more impressive is the frequency of these trains. In major cities like Tokyo and Osaka, you rarely have to wait more than a few minutes for a train, reducing the stress of travel and making commuting more predictable.
And let's not forget the level of service! Train staff in Japan are known for their politeness and helpfulness. Whether you're a tourist struggling with your luggage or a local needing directions, there's always someone to help. Plus, the stations themselves are marvels - clean, well-signed, and packed with amenities like shops and restaurants. In short, Japan's trains aren't just a mode of transportation; they're an experience, reflecting the country's dedication to quality, punctuality, and customer service. So, next time you're in Japan, hop on a train and enjoy the ride - it's an adventure in itself! 🚆🌸🐶🐾🫂
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Soft-fork activation through `bitcoind` competition
Or: how to activate Drivechain.
Imagine a world in which there are 10 different `bitcoind` flavors, as described in `bitcoind` decentralization.
Now how do you enable a soft-fork?
Flavor 1 enables it. Seeing that nothing bad happened, flavor 2 enables it. Then flavor 3 enables it.
And so on.
When miners perceive that a big enough chunk of the network supports the proposal, a miner can try to mine a block that contains the new feature.
No need for a flag day or a centralized decision making process that depends on one or two courageous leaders to enable a timer.
This probably sounds silly, and maybe is.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A big Ethereum problem that is fixed by Drivechain
While reading the following paragraphs, assume Drivechain itself will be a "smart contract platform", like Ethereum. And that it won't be used to launch an Ethereum blockchain copy, but instead each different Ethereum contract could be turned into a different sidechain under BIP300 rules.
A big Ethereum problem
Anyone can publish any "contract" to Ethereum. Often people will come up with somewhat interesting ideas and publish them. Since they want money, they will add an unnecessary token and use that to bring revenue to themselves, gamify the usage of their contract somehow, and keep some control over the supposedly open protocol they've created by keeping a majority of the tokens. They will use the profits on marketing and branding, have a visual identity, a central website and a forum with support personnel and so on: their somewhat interesting idea has become a full-fledged company.
If they have success, then another company will appear in the space and copy the idea, launch it using exactly the same strategy with a tweak, then try to capture the customers of the first company and new people. And then another, and another, and another. Very often these contracts require some network effect to work, i.e., they require people to be using it so others will use it. The fact that the market is now split into multiple companies offering roughly the same product hurts that, such that none of these protocols ever get enough usage to become really useful in the way they were first conceived. At this point it doesn't matter though: they get some usage, and they use that in their marketing material. It becomes a race to pump the value of the tokens, and the current usage is just another point used for that purpose. The company will even start giving out money to attract new users and make other weird moves that have no relationship with the initial somewhat interesting idea.
Once in a lifetime it happens that the first implementer of these things is not a company seeking profits, but some altruistic developer or company that believes in Ethereum and wants to see it grow -- or more likely someone financed by the Ethereum Foundation, which allegedly doesn't like these token schemes and would prefer everybody to use the token they issued first, the ETH --, but that's a fruitless enterprise because someone else will copy that idea anyway and turn it into a company as described above.
How Drivechain fixes it
In the Drivechain world, if someone had an idea, they would -- as it happens all the time with Bitcoin things -- publish it in a public forum. Other members of the community would evaluate that idea, add or remove things, all interested parties would contribute to make it the best possible incarnation of that idea. Once the design was settled, someone would volunteer to start writing the code to turn that idea into a sidechain. Maybe some company would fund those efforts and then more people would join. It's not a perfect process and one that often involves altruism, but Bitcoin inspires people to do these things.
Slowly, the thing would get built, tested, activated as a sidechain on testnet, tested more, and at this point luckily the entire community of interested Bitcoin users and miners would have grown to like that idea and see its benefits. It could then be proposed to be activated according to BIP300 rules.
Once it was activated, the entire pool of interested users would join it. And it would be impossible for someone else to create a copy of that, because everybody would instantly notice it was a copy. There would be no token, no one profiting directly from the operations of that "smart contract". And everybody would be incentivized to join and tell others to join that same sidechain, since the network effect would already be the biggest there; they would know that more network effect would only be good for everybody involved, and there would be no competing marketing or free token giveaways from competing entities.
See also
-
@ 8fb140b4:f948000c
2023-11-10 12:00:40Intro
Nostrasia 2023, a vibrant unconference, was hosted in two bustling Asian cities: Hong Kong and Tokyo, Japan. Nostriches from around the world flocked to these destinations, eager to immerse themselves in local culture, savor the cuisine, and most importantly, enjoy each other's company in person. Tokyo's event, buzzing with energy, took place in Shibuya, a district renowned for its lively nightlife and abundance of bars, clubs, and restaurants. As is tradition with Nostr events, the atmosphere was charged with excitement, symbolized by the abundant purple and orange hues.
https://v.nostr.build/k7qV.mp4
Preparations
The journey to Nostrasia began right after Nostrica, with meticulous planning to ensure the perfect venue. It was a challenge to find a location that was both spacious and accessible, offering affordable accommodation options nearby. Our diligent volunteers in Japan scoured venues in Tokyo and Yokohama before selecting the ideal spot in Shibuya.
In the days leading up to the event, volunteers, both local and from afar, gathered at the venue. Their mission: to set up everything from audio-visual equipment to stages and decorations, ensuring a warm welcome for attendees on November 1st at 9:00 AM. Despite the time crunch, the spirit of cooperation and friendliness prevailed, making the preparation phase smooth and enjoyable.
Even amidst the busy setup, there was time for breaks and socialization, keeping everyone energized and focused on the mission at hand.
Day 1 (The Beginning)
No day at Nostrasia could start without a caffeine kick or a beverage of choice. The first day witnessed a steady influx of nostriches, filling the venue with excitement and anticipation.
The day offered a mix of activities: some attendees engaged in the presentations, others relaxed in the family-friendly area, and there were even recreational spaces for all ages to enjoy, including model train setups.
The setup was designed to welcome nostriches of all stripes, whether young or old, tech-savvy or not. It was a space where everyone could feel comfortable and included.
The day culminated in a grand welcome party, complete with drinks, MAGURO (Yellowfin tuna), and karaoke.
https://v.nostr.build/Mqqa.mp4
Day 2
Following a late night, the second day of the conference started a bit later, accommodating the nocturnal adventures of the attendees. The day was packed with fascinating talks, workshops, and plenty of opportunities for meet-ups and hugs.
https://v.nostr.build/q3RM.mp4
An interesting discovery for me was the concept of air replies, a novel and visually engaging way to interact on social media.
Day 3 (Final)
The final day, starting late due to the previous night's festivities, was brimming with a diverse range of presentations in both halls of the conference center. These ranged from insightful talks about Nostr from a non-technical perspective to more in-depth technical workshops and discussions. #footstr
https://v.nostr.build/R98n.mp4 https://v.nostr.build/o5Xl.mp4 https://v.nostr.build/4A6W.mp4
The event concluded with heartfelt closing remarks from the organizers, filled with gratitude and appreciation for everyone's participation and effort.
Outro
Nostrasia 2023 was a resounding success, marked by a warm, friendly atmosphere that pervaded the entire event. While some had to depart immediately after, many stayed on to explore the wonders of Japan, carrying with them memories of an unforgettable gathering. The end. 🐶🐾🫡🫂
Footnotes
- For the list of notes published about the event, use #nostrasia hashtag.
- For the official Nostrasia profile, visit @npub1nstrcu63lzpjkz94djajuz2evrgu2psd66cwgc0gz0c0qazezx0q9urg5l
- Detailed conference schedule
- Official Website
Other pictures and videos taken before, during, and after the event
-
@ 8fb140b4:f948000c
2023-11-02 01:13:01Testing a brand new YakiHonne native client for iOS. Smooth as butter (not penis butter 🤣🍆🧈) with great visual experience and intuitive navigation. Amazing work by the team behind it! * lists * work
Bold text work!
Images could have used nostr.build instead of raw S3 from us-east-1 region.
Very impressive! You can even save the draft and continue later, before posting the long-form note!
🐶🐾🤯🤯🤯🫂💜
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28localchat
A server that creates instant chat rooms with Server-Sent Events and normal HTTP `POST` requests (instead of WebSockets, which are overkill most of the time).
It defaults to a room named after your public IP, so if two machines in the same LAN connect they'll be in the same chat automatically -- but then you can also join someone else's LAN if you need.
This is supposed to be useful.
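For illustration, here is a rough sketch of what talking to such a server could look like from Python. The `/send` and event-stream paths, the port and the message format are assumptions made up for this example, not localchat's actual API:

```python
# A minimal sketch of an SSE-based chat client for a hypothetical localchat-like
# server: read the room's event stream in the background, POST messages from stdin.
import threading
import requests

BASE = "http://localhost:9000"   # assumed server address
ROOM = "my-room"                 # assumed room name (the server defaults to your public IP)

def listen():
    # Server-Sent Events are just a long-lived HTTP response of "data: ..." lines.
    with requests.get(f"{BASE}/{ROOM}/events", stream=True) as resp:
        for line in resp.iter_lines(decode_unicode=True):
            if line and line.startswith("data:"):
                print("<<", line[len("data:"):].strip())

threading.Thread(target=listen, daemon=True).start()

while True:
    text = input("> ")
    # a plain HTTP POST instead of a WebSocket message
    requests.post(f"{BASE}/{ROOM}/send", data=text.encode())
```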
See also
-
@ 2edbcea6:40558884
2023-10-30 14:06:48Happy Sunday #Nostr !
Here’s your #NostrTechWeekly newsletter brought to you by nostr:npub19mduaf5569jx9xz555jcx3v06mvktvtpu0zgk47n4lcpjsz43zzqhj6vzk written by nostr:npub1r3fwhjpx2njy87f9qxmapjn9neutwh7aeww95e03drkfg45cey4qgl7ex2
The #NostrTechWeekly is a weekly newsletter focused on the more technical happenings in the nostr-verse.
Let’s dive in!
Recent Upgrades to Nostr (AKA NIPs)
1) (Proposed) NIP 96: File Storage Integration
As a reminder, NIP 96 proposes a standard way for third-party service providers to offer file storage for Nostr users. Standardization allows files to stay off relays but still be relatively cross-compatible if users want files attached to notes to show up in any Nostr client. It was initially implemented a few weeks ago, with the Nostur and Coracle clients utilizing the open source Nostrcheck server by nostr:npub138s5hey76qrnm2pmv7p8nnffhfddsm8sqzm285dyc0wy4f8a6qkqtzx624
NIP 96 seems to be gaining momentum; now the Nostur client has added integrated nostr.build file storage in a NIP 96 compatible way, giving users more options! I hope it gets merged soon 💪
2) (Proposed) Update to NIP 72: Community Posts
This proposed update to Moderated Communities would allow clients to support using event kinds other than 1. Right now, posting on moderated communities (akin to subreddits) makes it so that all posts show up without context on most Nostr clients. Kind 1 events are like tweets, but posts in moderated communities are posted in a context that’s missing on clients like Damus or Amethyst.
This proposal would encourage moderated communities to publish using a different set of event kinds so the posts only show up in clients intended for moderated communities. It continues to gain momentum. nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 opened and closed a proposed change similar to this as this newsletter was being written, and instead threw some support onto this proposal.
This change would be reverse compatible with current Moderated Communities but still allow for cleaner feeds on clients based on their intended content focus.
Author: vivganes
Notable Projects
Zapple Pay’s lightning-based subscriptions ♻️
The team at Mutiny Wallet’s work on Zapple Pay has helped keep the flow of zaps unblocked since Apple’s capricious decisions to force Damus to disable Zaps. Now they’ve introduced “auto zapping” which is the ability to set up recurring payments (like subscriptions) in a self-sovereign way.
Zaps are great for showing support, and lightning-gated content is going to be helpful for content creators too, but unlocking subscriptions can truly help content creators make a living on Nostr, denominated in the hardest money on Earth. This could be game-changing, as we'll discuss later 😉
Authors: nostr:npub1u8lnhlw5usp3t9vmpz60ejpyt649z33hu82wc2hpv6m5xdqmuxhs46turz nostr:npub1t0nyg64g5vwprva52wlcmt7fkdr07v5dr7s35raq9g0xgc0k4xcsedjgqv & Paul Miller
Highlighter 2.0 📝
If you're unfamiliar, Highlighter is “A nostr client for your most valuable information. Your reading. Your notes. Your thoughts. A place to discover thoughtful, timeless content.” and nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft continues to improve it rapidly.
This latest release improves many aspects, mostly focused on making it THE best way to read, highlight, label, and save content that’s meaningful to you.
On top of that, nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft has been improving the ability to make money as a content creator via an integration that lets fans “subscribe” to support Nostr content creators like they would on Patreon, as well as split zaps for content published via Highlighter.
If you're not at least using Highlighter to discover content, you're missing out. In my experience, it is the highest concentration of content created by deep thinkers on the internet.
Latest conversations: Nostr adoption via content creators
Nostr’s main feature right now is that it is freedom tech. So far that hasn’t been enough of a draw for mass Nostr adoption.
We can keep building existing apps on Nostr but we’ll need something truly differentiated for people to rip and replace their content consumption habits with the ones run on Nostr.
We have some unique offerings (beyond censorship resistance) in emerging tech like Highlighter, DVMs, and Zaps. Based on what we’ve seen this week, I believe the next million monthly active users will be driven by content creators moving to Nostr.
Helping creators make money
Creators tend to flock to where they can make the most money. This is true of content creators and creators of software.
The fact that Apple apps make far more money than Android ones (despite Android dominating the smartphone market) explains why there are so many iPhone only apps.
Video creators and streamers move between Youtube and Twitch based on who is going to offer them the better income (viewership * take home rate).
If we can provide platforms where creators get to keep more of their money, it could be a game changer.
Locked into platforms
Content platforms provide two things: audience aggregation and payment aggregation.
Payments: The revenue per view of a Youtube video is calculable, and theoretically Youtube could pay creators their cut of the revenue per view. That isn't practical because of transaction fees: they'd be larger than the actual transaction.
That’s why platforms serve as payment aggregators between the advertisers, themselves and the creators; settling accounts on a regular basis in amounts that make sense given the payments infrastructure available to them (credit cards and ACH).
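As a rough, back-of-the-envelope illustration (the ad rate and card fees below are assumed ballpark figures, not actual platform numbers), per-view payouts are simply too small to settle individually over card rails:

```python
# Assumed ballpark numbers for illustration only.
revenue_per_1000_views = 2.00                       # ~$2 per thousand views
revenue_per_view = revenue_per_1000_views / 1000    # $0.002 per view
card_fee = 0.30 + 0.029 * revenue_per_view          # typical flat fee + percentage

print(f"revenue per view:  ${revenue_per_view:.4f}")
print(f"card fee per view: ${card_fee:.4f}")
print(f"fee multiple:      {card_fee / revenue_per_view:.0f}x")  # fee is ~150x the payment
```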
Audience aggregation: Audiences go where creators are, but creators go where their audience is; the relationship is complicated. Sometimes individual creators have enough power to make or break a platform, but for platforms as big as Youtube that becomes much harder.
Take the example of Spotify and Joe Rogan. That move was worth it in terms of cash for Rogan, and it put Spotify on the map as a podcasting platform. It didn’t really hurt Youtube a whole lot.
The main advantage of using a content platform like Youtube, TikTok, etc. is that the audience is there; you “just” need to capture them. The trade-off is that you have very little power to set your price.
Audience non-portability: If a creator decides to switch from Youtube to Twitch, their followers don’t automatically port over. Creators with strong followings have an ok time when switching but not everyone has that luxury.
This leads to a power imbalance between creators and platforms. Platforms can strike down any creator they want; even though, without creators, these platforms would be nothing. In the absence of a better solution we’re stuck in this disadvantageous equilibrium.
Freeing creators with Nostr and Bitcoin
If Nostr + Bitcoin can offer creators ready-to-go solutions that let them keep significantly more of their revenue, creators will have a real reason to switch. It just has to be a big enough difference that people make the switch even when Bitcoin and Nostr are unfamiliar to them.
The magic combo of capabilities on Nostr would be:
- Nostr-based versions of all the usual apps for content consumption, as long as they're high enough quality to not deter users.
- Content management tools that are familiar and quality enough for content creators. This would need to include robust file storage and streaming for content (text, video, audio, etc.).
- Payment infrastructure in the apps (via Lightning) that has low fees and supports the structures that are relevant for the content type (pay per article, pay per view, streaming sats as you listen, subscriptions, etc.).
- Lowest-common-denominator Nostr onboarding. Imagine an end-to-end encrypted Nostr key custodian, so users that are intimidated by keypairs could have a familiar login with email and password.
In this world, new Nostr users that were asked to join by their favorite creator may never interact with the most common clients on Nostr today. They may set up a Nostr user via some “Login with Nostr” solution and then they only interact with the Nostr versions of Twitch, Youtube, Substack, etc.
If the switch is painless for users and creators and the creators make more money, it’s a no-brainer for creators to try out. If it works for them, more will come.
Taylor Swift and Grimes are good people to aim for. Both have enough autonomy to experiment with any platform they want, and both have fought to maximize artists’ take home pay. Can you imagine if we could get every Swiftie on Nostr’s version of Spotify?
Why hasn’t this happened yet?
This sounds like “value for value” right? Many talented builders and creators have attempted to crack the “value for value” nut, but something hasn’t clicked yet for mass adoption. The missing link, in my opinion, is the lack of audience portability and how that affects payments.
Value for value (streaming sats for podcasts, or paying to unlock one article at a time) requires some storage of who bought access to the content. Otherwise, if you switch devices, the content platform won't know you already purchased access. Without Nostr, that means signing up for that particular content app, which is a high bar for starting to pay for content.
With Nostr, you can login using your Nostr keys and purchase access on any Nostr-based app, using any Lightning wallet, and that access can be attached to your Nostr pub key. It lowers the barrier for users to start paying, which means content creators capture more value.
For some kinds of content, I think all the legos are there for a creator-friendly platforms. With the recent Mutiny wallet announcement of lightning-based subscriptions, there’s no reason not to build a Nostr-based Patreon or Substack. I’ve also seen some work on ways to Zap to unlock content, making the dream of “pay per article” possible.
The last piece missing for a truly seamless on-ramp would be one of these end-to-end encrypted Nostr key custodians. That way clients could offer a “login with Nostr” button and lower the barrier for new users even further.
The race to zero take rate
Nostr naturally combats monopolies (at least for clients at this point). The reason that platforms like Youtube and Twitch have a 20-60% take rate is that they're monopolistic. They sit on their thrones because no one has yet solved the issue of audience aggregation and portability. Nostr breaks that model.
Once creators use Nostr-based platforms, they can switch at nearly no cost. The content is theirs, and the followers are universal, so they can move to a different platform that has a lower take rate without risking their income. They don’t even have to switch platforms at all if their content is stored and unlocked via events on the Nostr Relays themselves, since those are universal across all Nostr clients.
Ideally there would be third parties that host big files (PDFs, videos, audio, etc.) which creators pay directly. Then these content clients are simply user interfaces for users to upload and consume content. Hell, even logging in is solved by the Nostr protocol itself (or maybe by another third-party login provider). The scope of what these platforms need to build and maintain is small; running them will be pretty cheap compared to running Youtube or Patreon.
At first there may be only a few Nostr-based apps because there will be some economies of scale, but over time competition will kick off a race to the bottom. Clients will eventually only be able to demand a take rate that covers operating their business (build the app, maybe offering the file hosting/streaming infrastructure, etc).
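To put rough numbers on it (the percentages below are assumptions for illustration, not actual platform figures), the take rate translates directly into creator income:

```python
# Assumed figures for illustration only.
gross_revenue = 1_000.00   # what an audience pays a creator in a month
for platform, take_rate in [("monopolistic platform", 0.45), ("competitive Nostr client", 0.02)]:
    creator_keeps = gross_revenue * (1 - take_rate)
    print(f"{platform}: creator keeps ${creator_keeps:,.2f} of ${gross_revenue:,.2f}")
```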
What a future
In this future, platforms have less ability to coerce creators and take their income. Platforms will have a hell of a time censoring content creators. Users will get more choice over their experience and their algorithms when consuming content. And the adoption of Bitcoin as a medium of exchange would explode.
It feels like we’re on the cusp of something incredible in the Nostr-verse.
Until next time 🫡
If you want to see something highlighted, if we missed anything, or if you’re building something we didn’t post about, let us know. DMs welcome at nostr:npub19mduaf5569jx9xz555jcx3v06mvktvtpu0zgk47n4lcpjsz43zzqhj6vzk
Stay Classy, Nostr.
-
@ fa0165a0:03397073
2023-10-12 16:40:43Probability theory is the study of random phenomena. This is a pilot post for potentially more posts in this series. Feedback appreciated.
Introduction
Probability theory is the study of random phenomena. It is used in many fields, such as statistics, machine learning, and finance. It is also used in everyday life, for example when playing games of chance, or when estimating the risk of an event. The most classic example is the coin toss, closely followed by the dice roll.
When we toss a coin, the result is either heads or tails. In the case of an ideal coin, the “random trial” of tossing the coin has an equal probability for both outcomes. Similarly, for a roll of a fair die, we know that the probability for each outcome is 1/6. In the study of probability we dive deep into the mathematics of these random phenomena, how to model them, and how to calculate the probability of different events. To do this in precise terms, we define words and concepts as tools for discussing and communicating about the subject.
This is the first of what I expect to be a 15 part series of my lecture & study notes from my university course in probability theory MT3001 at Stockholm University. References to definitions and theorems will use their numeration in the course literature, even if I may rephrase them myself. The book I’ve had as a companion through this course is a Swedish book called Stokastik by Sven Erick Alm and Tom Britton; ISBN:978–91–47–05351–3. This first module concerns basic concepts and definitions, needed for the rest of the course.
The language of Probability theory
An experiment is a process that produces a randomized result. If our experiment is throwing a die, we then have the following: The result of throwing the die is called an outcome, the set of all possible outcomes is called the sample space and a subset of the sample space is called an event. We will use the following notation:
- outcome: the result of an experiment, denoted with a small letter, e.g. 𝑢₁, 𝑢₂, 𝑢₃, …
- event: a subset of the sample space, denoted with a capital letter, e.g. 𝐴, 𝐵, 𝐶, …
- sample space: the set of all possible outcomes of an experiment, denoted Ω.
Adding numbers to our dice example, we have the sample space Ω = {𝟏,𝟐,𝟑,𝟒,𝟓,𝟔} containing all the possible outcomes 𝑢₁=𝟏, 𝑢₂=𝟐, 𝑢₃=𝟑, 𝑢₄=𝟒, 𝑢₅=𝟓 and 𝑢₆=𝟔. And we could study some specific sub-events like the chance of getting an even number, 𝐴={𝟐,𝟒,𝟔}, or the chance of getting a prime number, 𝐵={𝟐,𝟑,𝟓}. As it happens, the probability of both 𝐴 and 𝐵 is 50%.
Sample space
The sample space is the set of all possible outcomes of an experiment. It is denoted Ω. And there are two types of sample spaces, discrete and continuous. A discrete sample space is a finite or countably infinite set, and all other kind of sample spaces are called continuous.
The coin toss and the dice roll are both examples of discrete sample spaces. Studying a problem, like the temperature outside, would in reality require a continuous sample space. But in practice, we can often approximate a continuous sample space with a discrete one. For example, we could divide the temperature into 10 degree intervals, and then we would have a discrete sample space.
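As a tiny illustration of that discretization, here is a sketch that bins measured temperatures into 10-degree intervals (the interval labels are just one possible convention):

```python
def temperature_bin(temp_celsius: float) -> str:
    """Map a continuous temperature reading to a 10-degree interval,
    turning a continuous sample space into a discrete one."""
    low = int(temp_celsius // 10) * 10
    return f"[{low}, {low + 10})"

print(temperature_bin(23.7))   # [20, 30)
print(temperature_bin(-3.2))   # [-10, 0)
```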
Remember that continuous sample spaces exist, and expect more information about them in later modules. For starters, we focus on discrete sample spaces.
Set Theory notation and operations
When talking about probabilities we will arm ourselves with the language of “set theory”; it is a crucial tool for the study of probability. Being comfortable with set theory beforehand is useful, but not necessary. I will try to explain the concepts as we go along.
Even though the events from the dice rolls are represented by numbers, it is important to note that they aren't numbers, but rather elements. This might become more clear if we alter our example to be a deck of cards. This deck of cards has four suits Ω = {♥, ♠, ♦, ♣} and in our experiments we draw a card from the deck and look at the suit. Here it is very obvious that we can't add or subtract the different events with each other. But we do have the operations of set theory at our disposal. For example, if 𝐴 is the event of drawing a red card and 𝐵 is the event of drawing spades ♠, we can use the following notation:
Set theory operations
- Union: 𝐴 ∪ 𝐵 = {♥, ♦, ♠}, the union of 𝐴 and 𝐵.
- The empty set: ∅ = {}, a set with no elements.
- Intersection: 𝐴 ∩ 𝐵 = ∅, the intersection of 𝐴 and 𝐵. This means that 𝐴 and 𝐵 have no elements in common, and we say that 𝐴 and 𝐵 are disjoint.
- Complement: 𝐴ᶜ = {♠, ♣}, the complement of 𝐴.
- Difference: 𝐴 ∖ 𝐵 = {♥, ♦}, the difference of 𝐴 and 𝐵. Equivalent to 𝐴 ∩ 𝐵ᶜ.

The symbol ∈ denotes that an element is in a set. For example, 𝑢₁ ∈ Ω means that the outcome 𝑢₁ is in the sample space Ω. For our example: ♥ ∈ 𝐴 means that the suit ♥ is in the event 𝐴.
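Since these operations map directly onto Python's built-in `set` type, here is a tiny sketch of the card-suit example above (the suits are plain strings here):

```python
# The card-suit example, using Python's built-in set type.
omega = {"♥", "♠", "♦", "♣"}   # sample space: the four suits
A = {"♥", "♦"}                  # event: drawing a red card
B = {"♠"}                       # event: drawing spades

print(A | B)          # union A ∪ B: {♥, ♦, ♠}
print(A & B)          # intersection A ∩ B: empty, so A and B are disjoint
print(omega - A)      # complement Aᶜ relative to Ω: {♠, ♣}
print(A - B)          # difference A ∖ B: {♥, ♦}
print("♥" in A)       # membership ♥ ∈ A: True
```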
Venn diagram
A very useful visualization of set theory is the Venn diagram. Here is an example of a Venn diagram in the picture below:
In the above illustration we have: Ω = {𝟏,𝟐,𝟑,𝟒} and the two events 𝐴={𝟐,𝟑} and 𝐵={𝟑,𝟒}. Notice how the two sets 𝐴 and 𝐵 share the element 𝟑, and that all sets are subsets of the sample space Ω. The notation for the shared element 𝟑 is 𝐴 ∩ 𝐵 = {𝟑}.
Useful phrasing
The different set notations may seem a bit abstract at first, at least before you are comfortable with them. Something that might be useful to do is to read them with the context of probabilities in mind. Doing this, we can read some of the different set notations as follows:
- 𝐴ᶜ: “when 𝐴 doesn't happen”.
- 𝐴 ∪ 𝐵: “when at least one of 𝐴 or 𝐵 happens”.
- 𝐴 ∩ 𝐵: “when both 𝐴 and 𝐵 happen”.
- 𝐴 ∩ 𝐵ᶜ: “when 𝐴 happens but 𝐵 doesn't happen”.
The Probability function
Functions map elements from one set to another. In probability theory, we are interested in mapping events to their corresponding probabilities. We do this using what we call a probability function. This function is usually denoted 𝑃 and have some requirements that we will go through in the definition below.
This function takes events as input and outputs the probability of that event. For the example of a die throw, if we have the event 𝐴={𝟐,𝟒,𝟔}, then 𝑃(𝐴) is the probability of getting an even number when throwing a fair six-sided die. In this case 𝑃(𝐴)=1/2=𝑃(“even number from a dice throw”); you'll notice that variations of descriptions of the same event can be used interchangeably.
The Russian mathematician Andrey Kolmogorov (1903–1987) is considered the father of modern probability theory. He formulated the following three axioms for probability theory:
Definition 2.2, Kolmogorov’s axioms
A real-valued function 𝑃 defined on a sample space Ω is called a probability function if it satisfies the following three axioms:
1. 𝑃(𝐴) ≥ 𝟎 for all events 𝐴.
2. 𝑃(Ω) = 𝟏.
3. If 𝐴₁, 𝐴₂, 𝐴₃, … are disjoint events, then 𝑃(𝐴₁ ∪ 𝐴₂ ∪ 𝐴₃ ∪ …) = 𝑃(𝐴₁) + 𝑃(𝐴₂) + 𝑃(𝐴₃) + …. This is called the countable additivity axiom.
From these axioms it's implied that 𝑃(𝐴) ∈ [𝟎,𝟏], which makes sense since things aren't less than impossible or more than certain. As a rule of thumb, when talking about probabilities, we move within the range of 0 and 1. This lets us formulate the following theorem:
Theorem 2.1, The Complement and Addition Theorem of probability
Let 𝐴 and 𝐵 be two events in a sample space Ω. Then the following statements are true:
1. 𝑃(𝐴ᶜ) = 𝟏 - 𝑃(𝐴)
2. 𝑃(∅) = 𝟎
3. 𝑃(𝐴 ∪ 𝐵) = 𝑃(𝐴) + 𝑃(𝐵) - 𝑃(𝐴 ∩ 𝐵)
Proof of Theorem 2.1
1. 𝑃(𝐴 ∪ 𝐴ᶜ) = 𝑃(Ω) = 𝟏 = 𝑃(𝐴) + 𝑃(𝐴ᶜ) ⇒ 𝑃(𝐴ᶜ) = 𝟏 - 𝑃(𝐴)
This simply proves that the probability of 𝐴 not happening is the same as the probability of 𝐴 happening subtracted from 1.
2. 𝑃(∅) = 𝑃(Ωᶜ) = 𝟏 - 𝑃(Ω) = 𝟏 - 𝟏 = 𝟎
Even though our formal proof required (1) to be proven, it's also very intuitive that the probability of the empty set is 0, since the empty set is the set of all elements that are not in the sample space, and the probability of an event outside the sample space is 0.
3. 𝑃(𝐴 ∪ 𝐵) = 𝑃(𝐴 ∪ (𝐵 ∩ 𝐴ᶜ)) = 𝑃(𝐴) + 𝑃(𝐵 ∩ 𝐴ᶜ) = 𝑃(𝐴) + 𝑃(𝐵) - 𝑃(𝐴 ∩ 𝐵)
This can be understood visually by revisiting our Venn diagram. We see that the union of 𝐴 and 𝐵 has an overlapping element 𝟑 shared between them. This means that simply adding the elements of 𝐴={𝟐,𝟑} together with 𝐵={𝟑,𝟒} would double count that shared element, like this: {𝟐,𝟑,𝟑,𝟒}. Since we have two "copies" of the mutual element, we make sure to remove one "copy" by subtracting 𝑃(𝐴 ∩ 𝐵), where 𝐴 ∩ 𝐵 = {𝟑}, and we get 𝐴 ∪ 𝐵 = {𝟐,𝟑,𝟒}. We may refer to this process as dealing with double counting, something that is very important to keep in mind when dealing with sets.
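As a quick numerical sanity check of statement 3, here is a small sketch using the die events from earlier (even numbers and primes) under the uniform probability on a fair die:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}           # even numbers
B = {2, 3, 5}           # primes

def P(event):
    """Uniform probability of an event on a finite sample space."""
    return Fraction(len(event & omega), len(omega))

lhs = P(A | B)
rhs = P(A) + P(B) - P(A & B)
print(lhs, rhs, lhs == rhs)   # 5/6 5/6 True
```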
Two interpretations of probability that are useful and often used are the frequentist and the subjectivist interpretations. The frequentist interpretation is that the probability of an event is the relative frequency of that event in the long run. The subjectivist interpretation is that the probability of an event is the degree of belief that the event will occur, this is very common in the field of statistics and gambling. For the purposes of study it’s also useful to sometimes consider probabilities as areas and or masses, this is called the measure theoretic interpretation. Don’t let that word scare you off, in our context it’s just a fancy way of drawing a parallel between areas and probabilities. Think area under curves, and you’ll be fine.
-
@ 7f5c2b4e:a818d75d
2023-09-27 08:25:11What is Obsidian?
Obsidian.md is a versatile and powerful note-taking and knowledge management application that's gained immense popularity among users seeking a robust digital tool for organizing their thoughts, ideas, and information.
Obsidian boasts an array of features and benefits that can't all be covered in a single article. Instead, this #guide focuses on a unique, yet potent use case that has recently emerged - the ability to publish #Nostr notes and long-form posts directly from the app.
This capability has been made feasible through the complementary nature of Obsidian and Nostr. Obsidian is an open-source software with a thriving community and extensive support for custom plugins. On the other hand, Nostr is an open protocol with a rapidly expanding suite of tools, simplifying the integration of Nostr across various corners of the Internet. The plugin I will cover in this guide is called Nostr Writer.
Obsidian link: obsidian://show-plugin?id=nostr-writer
GitHub: https://github.com/jamesmagoo/nostr-writer
Developer: nostr:npub10a8kw2hsevhfycl4yhtg7vzrcpwpu7s6med27juf4lzqpsvy270qrh8zkw
But before we dive in, let me share some thoughts on why should one use Obsidian to publish long-form posts (and potentially even short notes) on Nostr.
Why post with Obsidian?
This is a question that naturally comes to mind: "Why use Obsidian to publish on Nostr when the legendary Nostr developers have already set up all the necessary infrastructure for browser-based publishing?" Well, there are several reasons:
- Native Markdown Support: To begin, Obsidian employs plain text Markdown formatting for notes, just like all Nostr-based blogging platforms. This makes it an ideal choice for creating, formatting, and editing Nostr posts.
- Illustrative Preview: While other blogging platforms offer preview tools, Obsidian has perfected this feature. It provides a beautifully customizable preview window that can be positioned anywhere in your workspace, allowing you to visualize how formatting, media, and embeds will appear in the published post[^1].
- State-of-the-Art Flexibility: Since 2020, Obsidian has continuously improved the way writers interact with it. What sets it apart is not only the dedicated team but also its thriving community, all contributing to its refinement. Obsidian supports an extensive array of plugins, shortcuts, and hotkeys, offering unparalleled flexibility and customization. Comprehensive documentation and a ton of videos and even courses on YouTube provide a wealth of information to tailor Obsidian to your preferences.
- Boosted Productivity: The Nostr Writer plugin is a game-changer for power users of Obsidian. If you're already using Obsidian for note-taking, employing this tool to publish your notes on Nostr is a no-brainer. If you haven't explored it yet, I strongly recommend giving it a try. It has the potential to transform how you think, plan, and structure your ideas for the better. Trying it for broader objectives will help you appreciate how well it complements Nostr.
- Distraction-Free Composition: While you may disagree, browsers can be a significant source of distraction, with constant alerts, notifications, and blinking extensions. Composing within Obsidian offers a tranquil, clutter-free experience, fostering focus and productivity.
- Local Record Keeping: Thanks to Nostr Writer, Obsidian keeps a local record of events you published to Nostr in a JSON file on your computer. Your long-form posts are also securely stored in the `.md` format on your machine, just like all the Obsidian notes you create. On top of that, a separate tab holding all of your long-form posts posted via Obsidian is created.
nostr: note1z70v5fsty7v7kaaslsv3ckru50nxym32a62kgx0z7cjdure39hps363sh7
- Drafts You Can Count On: Drafts are often a weak point in long-form platforms. Even though Nostr developers have addressed some of these concerns, the "vanishing drafts problem" still lingers. Obsidian, designed with data safety in mind, stores all your notes locally on your device. Whether you open your laptop tomorrow or in a year, your files will be there, safe from external disruptions. For added redundancy, consider using Obsidian Sync, which encrypts and synchronizes your notes across your chosen devices.
While there are more benefits to utilizing Obsidian for both Nostr publishing and in your general workflow, these reasons should provide a solid understanding. Now, let's shed some light on the Nostr Writer plugin.
Nostr Writer
I stumbled upon Obsidian not too long ago, all thanks to nostr:npub1zvvv8fm7w2ngwdyszg3y6zgp6vwqlht8zrr8wcmjaxjecrvpjfwsd0zs7w. He's also the one who introduced me to the Nostr Writer plugin. Until recently, I primarily used Obsidian "as intended" - for documenting my thoughts and writing articles. What I found especially convenient was using it to compose long-form Nostr posts. And then, the revelation came when I discovered the Nostr Writer plugin - it transformed the experience. No more copy-pasting and meticulous adjustments were required; I can simply compose, add a cover image and description, and publish - it's as straightforward as that.
As I mentioned earlier, Obsidian boasts a vast library of community-driven plugins. To begin using Nostr Writer, simply install the plugin from the "Community plugins" section and navigate to the plugin settings to set up your publishing workflow.
You can install the plugin by clicking this link while having Obsidian open on your device, or by going to the "Community plugins" tab in the settings and typing "Nostr" in the search field.
Once the plugin is installed, you'll need to customize it to enable publishing your Obsidian notes to Nostr.
Primarily, you'll need to paste your private key (`nsec`) into the corresponding field. Additionally, I recommend configuring your relays to ensure the widest reach for your posts. If you're unfamiliar with Nostr relays or wish to enhance your understanding, you can explore my relay guide here.
Many Nostr users naturally have concerns about sharing their private keys with apps. In this case, worry not. Your private key is stored exclusively on your local device and never leaves it. More details can be found here. Even if you use Obsidian sync to keep your notes updated across multiple devices, all information is locally encrypted and safeguarded by the password of your choice. Neither the Obsidian developers nor the plugin developer have access to your files. For additional information, you can refer to the "Security and privacy" section of the Obsidian documentation.
As you can see in the screenshot above, Nostr Writer also provides the option to post short notes. By toggling the corresponding slider, a pencil icon will appear on the sidebar, allowing you to post short notes without leaving Obsidian:
While I wouldn't claim that the plugin surpasses any of the "Twitter-like" Nostr clients, it can prove handy if you're already working within Obsidian and wish to share a quote or any other snippet of information you've come across in your notes.
Publishing
Publishing posts with Nostr Writer is straightforward. If you're already familiar with Obsidian, composing and formatting will be a total breeze, and the actual posting process is no different from posting with Habla, or any other Nostr-native blogging platform.
The only thing that may differ from some Nostr platforms is that Nostr Writer does not provide a specific field for adding hashtags when publishing. Instead, you should incorporate them directly into your text.
Once you've finished crafting your blog post, simply click on the upload icon in the side menu to specify the title, add a summary, and attach a cover image.
When you're ready, click "Confirm and Publish."
Another point to note is the relays indicator in the bottom-left corner. Relay connection may get interrupted if left inactive for a while, but a simple click on the widget will reconnect you to Nostr in no time.
Practice makes perfect
As I mentioned earlier, I find this approach to publishing long-form posts on Nostr the most efficient and convenient. Moreover, there are numerous improvements in the pipeline for the plugin, which is nothing short of exciting.
With that said, it's worth visiting Habla after publishing your post to double-check that everything appears as intended. Initially, you might encounter some formatting peculiarities that you'll need to get accustomed to, but with practice, you'll effortlessly master them. Soon, you won't even have to worry about how the article looks in Nostr clients because you'll be able to visualize every single aspect of your post in your mind.
I hope you found this guide useful and consider utilizing Obsidian for both publishing Nostr posts and elevating your overall productivity. If that's the case, please show your support for nostr:npub10a8kw2hsevhfycl4yhtg7vzrcpwpu7s6med27juf4lzqpsvy270qrh8zkw' work.
Please feel free to share your thoughts and suggestions—I'm always eager to hear from you! Don't forget that my Habla blog page contains a ton of Nostr guides, so you can find answers to almost any Nostr-related questions. If there are specific topics you believe I should cover, do let me know.
See you on the other side of the Nostr rabbit hole.
Tony
P.S. This post was composed, formatted and published to Nostr from Obsidian. No Nostr-related blogging platform was used.
[^1]: Nostr-native syntax, including tagging and Nostr-events embeds, is an exception here. Not all platforms on the Internet currently support Nostr syntax standards like tagging users with their npub, as in nostr:npub10awzknjg5r5lajnr53438ndcyjylgqsrnrtq5grs495v42qc6awsj45ys7, so it may not be available for preview. However, tags and embeds will be displayed on Habla. You can learn more about Habla's features in my previous guide here.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28sitio
A static site generator that works with imperative code instead of declarative templates and directory structures. It assumes nothing and can be used to transform anything into HTML pages.
It uses React so it can be used to generate single-page apps too if you want -- and normal sites that work like single-page apps.
It also provides helpers for reading Markdown files, like all static site generators do.
A long time after creating this, and after breaking it while trying to add too many features at once, I realized Gatsby also had an imperative engine underlying the default declarative interface that could be used, and it was pretty similar to `sitio`. That both made me happy to have arrived at the same results as such an acclaimed tool and sad for the same reason, as Gatsby is the worst static site generator ever created considering user experience.
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A pertinent comment by Olavo de Carvalho on undue attributions of events to the "spontaneous order"
Here is one example among a thousand others, taken from my course handouts, of how one analyzes the relations between deliberate and accidental factors in historical action. Mr. Beltrão is INFINITELY BELOW the possibility of discussing these things, and precisely for that reason he attributes to me a simple-mindedness that is his own and not mine:
I have quoted this paragraph by Georg Jellinek a thousand times and I will quote it again: "The phenomena of social life are divided into two classes: those that are essentially determined by a directing will and those that exist or can exist without an organization arising from acts of will. The former are necessarily subject to a plan, to an order emanating from a conscious will, in opposition to the latter, whose ordering rests on very different forces."
This distinction is crucial for historians and strategic analysts not because it is clear in every case, but precisely because it is not. The most common error in this order of studies lies in attributing to a conscious intention what results from an uncontrolled and sometimes uncontrollable combination of forces, or, conversely, in failing to see, behind an apparently fortuitous constellation of circumstances, the intelligence that planned and subtly directed the course of events.
An example of the first error is the Protocols of the Elders of Zion, which sees behind practically everything bad that happens in the world the malign premeditation of a small number of people, a Jewish elite secretly gathered in some uncertain and unknown place.
What makes this fantasy especially convincing, some time after its publication, is that some of the events foreseen there take place right before our eyes. The hasty reader sees in this a confirmation, jumping imprudently from the observation of the fact to the imputation of authorship. Yes, some of the ideas announced in the Protocols were carried out, but not by a distinctly Jewish elite and much less for the benefit of the Jews, whose role in most cases consisted eminently in taking the blame. Many rich and powerful groups have ambitions of global domination and, once the book was published -- which in certain passages shows flashes of genuine Machiavellian strategic genius -- it was practically impossible that they would learn nothing from it and not try to put some of its schemes into practice, with the additional advantage that these already came with a prefabricated scapegoat. It is also impossible that in the middle or at the top of these groups there is not a single Jew by origin. A little bit of distorting selectivity is therefore enough to swap cause for effect and the innocent for the guilty.
But the most common error nowadays is not that one. It is the opposite: the obstinate refusal to see any premeditation, any authorship, even behind remarkably convergent events which, without it, would have to be explained by the magical force of coincidences, by the action of angels and demons, by the "invisible hand" of market forces, or by hypothetical "laws of History" or "sociological constants" never proven, which in the observer's imagination direct everything anonymously and without human intervention.
The causes that generate this error are, roughly speaking:
First: Reducing human actions to effects of impersonal and anonymous forces requires the use of abstract generic concepts that automatically give this kind of approach the appearance of something very scientific. Much more scientific, to the lay observer, than the patient and meticulous historical reconstruction of the chains of facts which, under a veil of confusion, sometimes trace back to a discreet and almost imperceptible initial authorship. Since the study of historical-political phenomena is increasingly an academic occupation whose success depends on funding, sponsorships, backing in the popular media and good relations with the establishment, it is almost inevitable that, faced with a question of this order, few will resist the temptation to kill the problem right away with two or three elegant generalizations and shine as sages of the moment, instead of taking on the work of historical tracing that can demand decades of research.
Second: Any group or entity that ventures into long-term historical-political action must possess not only the means to undertake it, but also, necessarily, the means to control its public repercussion, accentuating what suits it and covering up what could abort the intended results. This implies vast, deep and lasting interventions in the mental environment. [Etc. etc. etc.]
(on Facebook, July 17, 2013)
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The rot
It is reasonable to say that there are three types of reactions to the mention of the name Bitcoin in Brazil:
- The reaction of old people
Very wisely, old people who have heard of Bitcoin regard it either as something very distant, reserved to the knowledge of their nephews who understand computers, or as a scam to be feared and from which one must imperatively keep away; and in any case it shouldn't affect them anyway, so why waste their time. These people are wrong: the nephew who understands computers doesn't know anything about Bitcoin, Bitcoin is not a scam, and Bitcoin is not something totally irrelevant to them.
It is reasonable to be cautious in the face of the unknown, and in that old people do well, but I believe much of the fear these people have also comes from the ignorance that was created and spread during Bitcoin's first 10 years by illiterate and misinformed journalists covering the subject.
- The reaction of pragmatic people
"I already have a bank and I can already send money, so what do I need Bitcoin for? What, I still have to pay to transfer bitcoins? That's no advantage at all!"
While wanting to seem very pragmatic and rational, these people ignore several aspects of their own lives, starting with the fact that using ordinary banks is not free, and then that the existence of this financial system in which they believe themselves so included and comfortable is based on a big scheme called the Central Bank, which has as one of its foundations the possibility of unlimited inflation of the currency, which makes everyone poorer, including these very same people, so pragmatic and rational.
More important is to note that these so very rational people were also deceived by the spread of ignorance about Bitcoin as being a money transfer system. Bitcoin is not and cannot be a money transfer system, because it can only transfer itself; it cannot transfer "money" in the common sense of that word (I have in mind the common money in Brazil, the reais). The fact that today there are people who manage to "transfer money" using Bitcoin is something totally unexpected: the existence of people who exchange bitcoins for reais (and other monies from other places) and vice versa. It didn't have to be this way; it was not determined anywhere, 10 years ago, that there would be demand for a digital good with no immediate utility whatsoever. It happened by a miracle.
However, the miracle will only be complete when these bitcoins themselves become the common money. Only then will it be possible to use the Bitcoin system to actually transfer money. Before that, calling Bitcoin a payment system or anything of the sort is to pervert its meaning, to confuse an accident with the essence of the thing.
- The reaction of the illiterate youth
The illiterate youth are the people who use the expression "cryptos" and spend the whole day on sites that publish totally irrelevant news about "cryptocurrencies". I don't quite know how they live because I can't stand their presence, but they are people who are very excited about the whole "crypto wave" and find everything amazing, so amazing that they end up getting interested and then buying every worthless token that gets invented. They use the word "decentralized", a very ugly anglicism that should mean there is no controlling center of coin x or y and that its protocol would keep working even if several operators went offline, but since they apply it to tokens that are literally issued by a controlling center with a human figure at the center who makes all the decisions about everything -- like Ethereum and consequently all the thousands of ERC20 tokens created inside the Ethereum system -- the word no longer makes any sense.
In their excitement and complete ignorance of how a malicious actor could destroy each one of their so decentralized cryptocurrencies, or of how, even without anyone intending it, a fundamental flaw in the protocol and in the incentive system could bring everything down, without imagining that the entire appreciation of token XYZ may have been deliberately manufactured by its own issuers or simply be a bubble, these young people end up equating token XYZ, or ETH, BCH or whatever, with Bitcoin, ignoring all the qualitative differences and only lightly mentioning the quantitative ones.
Mixed with their excitement, and as a bonus, comes the prospect of getting rich. If one of them, by some stroke of luck, surfed a bubble like that of 2017 and managed to multiply some money by 10 buying and selling EOS, he immediately starts using the fact that he got rich as an argument to convince others that "cryptocurrencies are the future". Do not underestimate human stupidity.
There are young people in the old people group, old people in the young people group, people who are in neither group and people who are in more than one group; that doesn't matter.
-
@ 8fb140b4:f948000c
2023-08-22 12:14:34As the title states, scratch behind my ear and you get it. 🐶🐾🫡
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A crappy course on torrents
In 8 points[^twitterlink]:
- You start seeding a file -- that means you split the file in a certain way, hash the pieces and wait.
- If anyone connects to you (either by TCP or UDP -- and now there's the webRTC transport) and asks for a piece, you'll send it.
- Before downloading anything, leechers must understand how many pieces exist and what they are -- and other things. For that there is the .torrent file: it contains the final hash of the file, metadata about all files, the list of pieces and the hash of each.
- To know where you are so people can connect to you[^nathole], there exists an HTTP (or UDP) server called "tracker". A list of trackers is also contained in the .torrent file.
- When you add a torrent to your client, it gets a list of peers from the trackers. Then you try to connect to them (and you keep getting peers from the trackers while simultaneously sending data to the tracker like "I'm downloading, I have x bytes already" or "I'm seeding").
- Magnet links contain a tracker URL and a hash of the metadata contained in the .torrent file -- with that you can safely download the same data that should be inside a .torrent file -- but now you ask it from a peer before requesting any actual file piece.
- DHTs are an afterthought and I don't know how important they are for the torrent ecosystem (trackers work just fine). They intend to replace the centralized trackers with message passing between DHT peers (DHT peers are different and independent from file-download peers).
- All these things (.torrent files, tracker messages, messages passed between peers) are done in a peculiar encoding format called "bencode" that is just a slightly less verbose, less readable JSON.
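To make point 8 a bit more concrete, here is a minimal sketch of the bencode encoding in Python. The tracker URL and piece length in the demo are made-up values for illustration, and a real .torrent additionally carries the `info` dictionary with the piece hashes:

```python
# A rough sketch of bencode: integers, byte strings, lists and dictionaries.
# Not a full .torrent parser, just the encoding format itself.

def bencode(value) -> bytes:
    if isinstance(value, int):
        return b"i%de" % value                      # i<number>e
    if isinstance(value, str):
        value = value.encode()
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)       # <length>:<bytes>
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        # dictionary keys are byte strings and must be sorted
        norm = {(k.encode() if isinstance(k, str) else k): v for k, v in value.items()}
        return b"d" + b"".join(bencode(k) + bencode(norm[k]) for k in sorted(norm)) + b"e"
    raise TypeError(f"cannot bencode {type(value)!r}")

def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at index i; returns (value, next_index)."""
    c = data[i:i + 1]
    if c == b"i":                                   # integer
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                                   # list
        i, out = i + 1, []
        while data[i:i + 1] != b"e":
            v, i = bdecode(data, i)
            out.append(v)
        return out, i + 1
    if c == b"d":                                   # dictionary
        i, out = i + 1, {}
        while data[i:i + 1] != b"e":
            k, i = bdecode(data, i)
            v, i = bdecode(data, i)
            out[k] = v
        return out, i + 1
    colon = data.index(b":", i)                     # byte string
    length = int(data[i:colon])
    return data[colon + 1:colon + 1 + length], colon + 1 + length

# Hypothetical values, just to show the shape of the encoding.
encoded = bencode({"announce": "http://tracker.example/announce", "piece length": 262144})
print(encoded)              # b'd8:announce31:http://tracker.example/announce12:piece lengthi262144ee'
print(bdecode(encoded)[0])  # keys and strings come back as bytes
```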
[^twitterlink]: Posted first as this Twitter thread.
[^nathole]: Also your torrent client must be accessible from the external internet; NAT hole-punching is almost a myth.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28neuron.vim
I started using this neuron thing to create and update this same zettelkasten, but the existing vim plugin had too many problems, so I forked it and ended up changing almost everything.
Since the upstream repository was somewhat abandoned, most users and people who were trying to contribute upstream migrated to my fork too.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Module Linker
A browser extension that reads source code on GitHub and tries to find links to imported dependencies so you can click on them and navigate through either GitHub or package repositories or base language documentation. Works for many languages at different levels of completeness.
-
@ 78733875:4eb851f2
2023-08-17 14:23:51After announcing our first wave of grants for bitcoin[^fn-btc] and nostr[^fn-nostr] projects, we are pleased to announce an additional wave of grants for open-source projects in the space:
[^fn-btc]: First Wave of Bitcoin Grants
[^fn-nostr]: First Wave of Nostr Grants
- BDK
- LNbits
- Watchdescriptor
- Stratum V2 Testing \& Benchmarking Tool
- Fedimint Modules and Resources
- Amber: Nostr Event Signer
- Nostr UI/UX Development
- Nostr Use-Case Exploration \& Education
The first five grants are sourced from our General Fund, the last three—being nostr projects—from our Nostr Fund. This brings the total number of OpenSats grants to 41, adding to the grants we previously announced in July.
Once again, let's take a closer look at each of the projects to see how they align with the OpenSats mission.
BDK
Bitcoin Development Kit (BDK) is a set of libraries and tools that allows you to seamlessly build cross-platform on-chain bitcoin wallets without having to re-implement standard bitcoin data structures, algorithms, and protocols. BDK is built on top of the powerful rust-bitcoin and rust-miniscript libraries and adds features for managing descriptor-based wallets, syncing wallets to the bitcoin blockchain, viewing transaction histories, managing and selecting UTXOs to create new transactions, signing, and more. The core BDK components are written in Rust, but the team also maintains Kotlin and Swift language bindings for use in mobile projects. There are also Python bindings, and React Native and Flutter support is being actively developed.
Repository: bitcoindevkit/bdk
License: Apache 2.0 / MIT
LNbits
LNbits is used by a multitude of projects in the bitcoin space, especially as part of their lightning payments stack. Being easy to build on through its extension framework, LNbits has been pioneering various cutting-edge solutions and experiments in the world of bitcoin, lightning, and nostr.
The project has a thriving maker community building various hardware devices such as Lightning ATMs, point-of-sale devices, DIY hardware wallets, and nostr signing devices. The modular design of LNbits makes it attractive to users and tinkerers alike, as its plugin architecture makes it easy to extend and understand.
Repository: lnbits/lnbits
License: MIT
Watchdescriptor
`watchdescriptor` is a CLN plugin written in Rust that connects a business's treasury wallet to its CLN node. It utilizes `cln-plugin` and the BDK library to track coin movements in registered wallets and report this information to the `bookkeeper` plugin.
The plugin enables businesses to design a complete treasury using Miniscript and import the resulting descriptor into CLN. Since `bookkeeper` already accounts for all coin movements internal to CLN, this plugin is the last piece businesses need in order to unify all their bitcoin accounting in one place. This enables businesses to account for all inflows and outflows from their operations, streamlining tax reporting and financial analysis.
The `watchdescriptor` project is part of a broader vision to transform the lightning node (particularly CLN) into a financial hub for businesses, enabling them to conduct operations without reliance on any third parties.
Repository: chrisguida/watchdescriptor
License: MIT
Stratum V2 Testing & Benchmarking Tool
The Stratum V2 Testing & Benchmarking Tool allows the bitcoin mining industry to test and benchmark Stratum V2 performance against Stratum V1. The tool supports different mining scenarios to help miners make informed decisions and evaluate their profitability. The goal of the project is to motivate miners to upgrade to Stratum V2, increasing their individual profits and making the Bitcoin network more resilient in the process.
Repository: stratum-mining/stratum @GitGab19
License: Apache 2.0 / MIT
Fedimint Modules and Resources
Fedimint is a federated Chaumian e-cash mint backed by sats with deposits and withdrawals that can occur on-chain or via lightning. It can be understood as a scaling and privacy layer as well as an adoption accelerator for Bitcoin.
The goal of this particular project is to improve the Fedimint UI and develop free and open resources for developers and "Guardians" to enable more people to run and develop on Fedimint.
Repository: fedimint/ui @EthnTuttle
License: MIT
Amber: Nostr Event Signer
Amber is a nostr event signer for Android. It allows users to keep their `nsec` segregated in a single, dedicated app. The goal of Amber is to have your smartphone act as a NIP-46 signing device without any need for servers or additional hardware. "Private keys should be exposed to as few systems as possible as each system adds to the attack surface," as the rationale of said NIP states. In addition to native apps, Amber aims to support all current nostr web applications without requiring any extensions or web servers.
Repository: greenart7c3/Amber
License: MIT
Nostr UI/UX Development
The goal of this project is to help improve the UI/UX of major nostr clients, starting with Gossip and Coracle, emphasizing the onboarding process as well as usability and accessibility. One part of onboarding is solving the discoverability problem that nostr has in the first place. Solving the problem of jumping in and out of the nostr world is what motivated the development of `njump`, which was redesigned as part of these efforts and is now live at nostr.com.
In addition to client-specific improvements, generic design modules and learnings will be incorporated into the Nostr Design project for others to use.
Activity: github.com/dtonon @dtonon
License: MIT
Nostr Use-Case Exploration & Education
As of today, most nostr clients implement social media applications on top of the nostr protocol. However, nostr allows for various use cases, many of which go beyond social media. Two examples are Listr and Ostrich, a list management tool and job board, respectively.
In addition to use-case exploration, this project will continue to educate users and developers alike, be it via Nostr How or various video series, e.g., explaining how to build upon NDK.
Activity: github.com/erskingardner @jeffg
License: MIT / GPL-3.0 / CC BY-NC-SA 4.0
We have more grants in the pipeline and will provide funding for many more projects in the future. Note that we can only fund those projects which reach out and apply. If you are an open-source developer who is aligned with our mission, don't hesitate to apply for funding.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28"Você só aprendeu mesmo uma coisa quando consegue explicar para os outros"
Mentira. Tá certo que existe um ponto em que você acha que sabe algo mas não consegue explicar, mas não necessariamente isso significa não saber. Conseguir explicar não depende de saber, mas de verbalizar. Podemos saber muitas coisas sem as conseguir verbalizar. Aliás, para a maior parte das experiências humanas verbalizar é que é a parte difícil. Por último, é importante dizer que a verbalização é uma abstração e portanto quando alguém tenta explicar algo e se força a fazer uma abstração está arriscando substituir a experiência concreta ou mesmo o conhecimento difuso de algo por aquela abstração e com isso ficar mais burro -- me parece que esse é risco é maior quanto mais prematura for a tentativa de explicação e quando mais sucesso a abstração improvisada fizer.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Using Spacechains and Fedimint to solve scaling
What if instead of trying to create complicated "layer 2" setups involving nouveau cryptographic techniques we just did the following:
- we take that Fedimint source code and remove the "mint" stuff, and just use their federation stuff to secure coins with multisig;
- then we make a spacechain;
- and we make the federations issue multisig-btc tokens on it;
- and then we put some uniswap-like thing in there to allow these tokens to be exchanged freely.
Why?
The recent spike in fees caused by Ordinals and BRC-20 shitcoinery has shown that Lightning isn't a silver bullet. Channels are too fragile, it costs a lot to open a channel under a high fee environment, to run a routing node and so on.
People who want to keep using Lightning are instead flocking to the big Lightning custodial providers: WalletofSatoshi, ZEBEDEE, OpenNode and so on. We could leverage the trust people have in these companies (and in the individuals operating shadow Lightning providers) and turn each of them into a btc-token issuer. Each issues its own token, transactions flow freely. Each person can hold only assets from the issuers they trust most.
-
@ 8fb140b4:f948000c
2023-07-30 00:35:01Test Bounty Note
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Multi-service Graph Reputation protocol
The problem
- Users inside centralized services need to know reputations of other users they're interacting with;
- Building reputation with ratings imposes a big burden on the user and still accomplishes nothing: ratings can be faked, no one cares about them, and so on.
The ideal solution
Subjective reputation: reputation based on how you rated that person previously, and how other people you trust rated that person, and how other people trusted by people you trust rated that person and so on, in a web-of-trust that actually can give you some insight on the trustworthiness of someone you never met or interacted with.
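A minimal sketch of how such a subjective score could be computed, assuming a made-up in-memory ratings graph and an arbitrary per-hop damping factor (both are illustrative choices of mine, not part of any spec):

```python
# Hypothetical ratings graph: ratings[rater][rated] = score in [-1.0, 1.0].
ratings = {
    "you":   {"alice": 1.0, "bob": 0.8},
    "alice": {"carol": 0.9},
    "bob":   {"carol": -0.5, "dave": 0.7},
}

def subjective_reputation(source, target, damping=0.5, max_hops=3):
    """Walk the web of trust outward from `source`, weighting each rating of
    `target` by the trust placed in the rater, damped a bit at every hop."""
    score, weight_sum = 0.0, 0.0
    frontier = [(source, 1.0)]          # (rater, trust we place in them)
    seen = {source}
    for _ in range(max_hops):
        next_frontier = []
        for rater, trust in frontier:
            for rated, value in ratings.get(rater, {}).items():
                if rated == target:
                    score += trust * value
                    weight_sum += trust
                elif rated not in seen and value > 0:   # only extend trust through positive ratings
                    seen.add(rated)
                    next_frontier.append((rated, trust * value * damping))
        frontier = next_frontier
    return score / weight_sum if weight_sum else None

print(subjective_reputation("you", "carol"))   # blends alice's and bob's views of carol
```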
The problem with the ideal solution
- Most of the time the service that wants to implement this is not as big as Facebook, so it won't have enough people in it for such graphs of reputation to be constructed.
- It is not trivial to build.
My proposed solution:
I've drafted a protocol for an open system based on services publishing their internal reputation records and indexers using these to build graphs, and then serving the graphs back to the services so they can show them to users when it is needed (as HTTP APIs that can be called directly from the user client app or browser).
Crucially, these indexers will gather data from multiple services and cross-link users from these services so the graph is better.
https://github.com/fiatjaf/multi-service-reputation-rfc
The first and only actionable and useful piece of feedback I got, from @bootstrapbandit, was that services shouldn't share email addresses in plain text (email addresses and other external relationships users of a service may have are necessary to establish links between users across services), but I think it is ok if services publish hashes of these email addresses instead. At some point I will update the spec draft, and that may have happened before the time you're reading this.
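A sketch of the kind of hashing I mean, assuming plain SHA-256 over a lowercased address (the exact normalization rule and algorithm are illustrative, not something the draft specifies):

```python
import hashlib

def external_identity_hash(email: str) -> str:
    # Normalize first so "Bob@Example.com" and "bob@example.com" link up,
    # then publish only the hash so the address itself is not exposed.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

# Two services publishing the same hash lets an indexer link the two accounts
# without either service revealing the address in plain text.
print(external_identity_hash("Bob@Example.com"))
```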
Another issue is that services may lie about their reputation records, and that will hurt other services and the users of those other services that rely on that data. Maybe indexers will have to do some investigative work here to assert service honesty. Or maybe this entire protocol is just a failure and we will actually need a system in which users themselves publish their own records.
See also
-
@ d3d74124:a4eb7b1d
2023-07-26 02:43:40This plan was GPT generated originally but then tweaked by myself as the idea fleshed itself out. All feedback welcome and encouraged.
Shenandoah Bitcoin
1. Executive Summary
Shenandoah Bitcoin is a for-profit community organization based in Frederick County, VA, uniquely blending the world of agriculture and STEM. Our mission is to foster community spirit, stimulate interest in agricultural technology, and promote understanding of Bitcoin, while providing enriching educational opportunities and ensuring sustainable business operations.
2. Company Description
Shenandoah Bitcoin is committed to delivering value to our local community. Our unique approach intertwines traditional agricultural practices, modern STEM concepts, and the world of digital currencies, specifically Bitcoin. Our activities cater to all age groups, focusing on fostering community engagement, hands-on learning experiences, and contributing to the overall welfare of our community.
What’s in a name?
Shenandoah Bitcoin. Shenandoah - an old and historied land. Bitcoin - a cutting edge technological advancement. Both encompass multiple industries, from energy and manufacturing, to farming and data centers. Both built using Proof of Work.
3. Services
We offer a range of services, including:
- Family-friendly events: Agriculture, STEM, and Bitcoin-themed festivals, fairs, workshops, and community gatherings.
- Educational programs: Classes, seminars, and workshops on agricultural technology, STEM principles, and understanding and using Bitcoin.
- Facility Rentals: Spaces available for private events, business meetings, and community gatherings.
4. Membership Benefits
We offer tiered membership packages with benefits such as:
a. Silver Membership: Includes access to regular events, discounts on educational programs, and priority booking for facility rentals.
b. Gold Membership: All Silver benefits, free access to select educational programs, and further discounted facility rentals.
c. Platinum Membership: All Gold benefits, free access to all educational programs, highest priority and maximum discounts on facility rentals, and exclusive invitations to special events.
Member’s opting to pay in Bitcoin receive 10% off all pricing.
5. Market Analysis
Our primary market is the local community in Frederick County and Winchester, VA, which consists of various demographic groups. Our secondary market includes neighboring communities, tourists, businesses, and educational institutions interested in the intersection of agriculture, STEM, and Bitcoin. Understanding facility use and events to be a drawing factor for all demographics, we outline a demographic-specific analysis below.
STEM professionals in the area may work remotely or commute toward DC and not interact much with their agricultural neighbors, but a desire for good quality food exists for many. In addition to events, what will draw the STEM demographic in are connections to CSAs, ranchers, and homesteaders for access to fresh locally grown food. Offering a children's play room adjacent to some office space is a compelling benefit for the laptop class that is often in STEM professions.
Non-industrial food producers and homesteaders may not have the focus or resources for marketing and sales. By offering a physical touch point for them and direct connections to consumers, food producers benefit from membership. Having more options for drop-off/pick-up of various produced goods makes it attractive for both consumers and producers, as coordination can be asynchronous.
Bitcoiners have a wide range of sub-demographics, including farmers and software engineers. Some travel hours over car and plane to attend bitcoin themed events. The topics of STEM and agriculture are of shared interest to non-trivially sized communities of bitcoiners. Having a physical touch point for bitcoiners will draw in some members just for that. Building fellowship is desired and sought in bitcoin.
5.1 Market Trends
The blending of agriculture, STEM fields, and Bitcoin is a unique concept with increasing interest in sustainable farming and ranching, food sovereignty, and health. Shenandoah Bitcoin is poised to tap into this growing interest and offer unique value to our community.
5.2 Market Needs
Our market requires initiatives that foster community engagement, promote understanding of agri-tech and Bitcoin, and provide a versatile space for events and learning.
6. Marketing and Sales Strategy
We will employ a blend of digital marketing, traditional advertising, and strategic partnerships. Our main marketing channels will be word of mouth, social media, local press, and our website. Partnerships with local small businesses, homesteaders, schools, agricultural organizations, and bitcoin companies will form a key part of our outreach strategy.
7. Organizational Structure
Shenandoah Bitcoin will be led by a CEO, supported by a management team responsible for daily operations, event planning, marketing, and community outreach. Event management and logistics will be handled by part-time staff and volunteers.
8. Financial Projections
Our revenue will be generated from membership fees, charges for events and educational programs, and facility rentals.
9. Funding Request
[If seeking additional funding, describe your needs and how the funds will be used]
10. Exit Strategy
Should it become necessary to dissolve the business, assets such as property, equipment, and any remaining cash reserves after meeting liabilities will be sold. Investors would receive their share of the remaining assets according to their proportion of ownership.
11. Conclusion
Shenandoah Bitcoin is a unique community organization bringing together agriculture, STEM, and Bitcoin in Frederick County, VA. Our distinctive approach is designed to deliver both profits and social impact, resonating strongly with our target market and positioning us for sustainable growth.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Just malinvestment
Traditionally the Austrian Theory of Business Cycles has been explained and reworked in many ways, but the most widely accepted version (or the one closest to the Mises or Hayek views) is that banks (or the central bank) cause the general interest rate to decline by creating new money, and that prompts entrepreneurs to invest in projects of longer duration. This can be confusing because sometimes entrepreneurs embark on very short-term projects during one of these bubbles and still contribute to the overall cycle.
The way to think about the "longer term" problem is to think of the entire economy going long-term, not individual entrepreneurs. So if one entrepreneur makes an investment in a thing that looks simple he may actually, knowingly or not, be inserting himself into a bigger machine that is actually involved in producing longer-term things. Incidentally this thinking also solves the biggest criticism of the Austrian Business Cycle Theory: that of the rational expectations people who say: "oh, but can't the entrepreneurs know that the interest rate is artificially low and decide not to make long-term investments?" ("and if they don't know, shouldn't they lose money and be replaced like in a normal economy flow blablabla?"). Well, the answer is that they are not really relying on the interest rate, they are only looking for profit opportunities, and this is the key to another confusion that has always followed my thinking on this topic.
If a guy opens a bar in an area of a town where many new buildings are being built during a "housing bubble" he may not know, but he is inserting himself right into the eye of that business cycle. He expects all these building projects to continue, and all the people involved in that to be getting paid more and be able to spend more at his bar and so on. That is a bet that may or may not end up paying.
Now what does that bar investment have to do with the interest rate? Nothing. It is just a guy who saw a business opportunity in a place where hungry people with money had no bar to buy things in, so he opened a bar. Additionally the guy has made some calculations about all the ending, starting and future building projects in the area, and then about the people that would live or work in that area afterwards (after all, the buildings were being built with the expectation of being used) and so on; there are no interest rate calculations involved. And yet that may be a malinvestment, because some building projects will end up being canceled and the expected usage of the finished ones will turn out to be smaller than predicted.
This bubble may have been caused by a decline in interest rates that prompted some people to start buying houses that they wouldn't otherwise, but this is just a small detail. The bubble can only be kept going by a constant influx of new money into the economy, but the focus on the interest rate is wrong. If new money is printed and used by the government to buy ships then there will be a boom and a bubble in the ship market, and that involves all the parts of production process of ships and also bars that will be opened near areas of the town where ships are built and new people are being hired with higher salaries to do things that will eventually contribute to the production of ships that will then be sold to the government.
It's not interest rates or the length of the production process that matters, it's just printed money and malinvestment.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28WelcomeBot
The first bot ever created for Trello.
It automatically invited to a public board anyone who commented on a card it had been added to.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28hledger-web
A Haskell app that uses Miso and hledger's Haskell libraries plus ghcjs to be compiled to a web page, and then adds optional remoteStorage so you can store your ledger data somewhere else.
This was my introduction to Haskell and also built at a time I thought remoteStorage was a good idea that solved many problems, and that it could use some help in the form of just yet another somewhat-useless-but-cool project using it that could be added to their wiki.
See also
-
@ 52b4a076:e7fad8bd
2023-05-01 19:37:01What is NIP-05 really?
If you look at the spec, it's a way to map Nostr public keys to DNS-based internet identifiers, such as `name@example.com`.
If you look at Nostr Plebs:
It's a human readable identifier for your public key. It makes finding your profile on Nostr easier. It makes identifying your account easier.
If you look at basically any client, you see a checkmark, which you assume means verification.
If you ask someone, they probably will call it verification.
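For reference, the lookup the spec actually defines is nothing more than fetching a well-known JSON file over HTTPS and reading a key out of it; a minimal sketch (the identifier here is made up, error handling omitted):

```python
import json
import urllib.request

def resolve_nip05(identifier):
    """Resolve 'name@example.com' to a hex public key per NIP-05."""
    name, domain = identifier.split("@")
    url = f"https://{domain}/.well-known/nostr.json?name={name}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    # The file maps local-part names to hex pubkeys; a client checks that the
    # returned key matches the profile's key. Nothing else is attested.
    return data.get("names", {}).get(name)

print(resolve_nip05("bob@example.com"))
```

All this proves is that whoever controls the domain is willing to map that name to that key, which is exactly the "proof of association" framing used later in this post.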
How did we get here?
Initially, there was only one client, which was (kind of) the reference implementation: Branle.
When it added support for NIP-05 identifiers, it used to replace the display name with the NIP-05 identifier, and it had to distinguish a NIP-05 from someone setting their display name to a NIP-05. So they added a checkmark...
Then there was astral.ninja and Damus: The former was a fork of Branle, and therefore inherited the checkmark. Damus didn't implement NIP-05 until a while later, and they added a checkmark because Astral and other clients were doing it.
And then came new clients, all copying what the previous ones did... (Snort originally did not have a checkmark, but that changed later.)
The first NIP-05 provider
Long story short, people were wondering what NIP-05 is and wanted it, and that's how Nostr Plebs came to be.
They initially called their service verification. Somewhere between January and February, they removed all mentions of verification except one (because people were searching for it), and publicly said that NIP-05 is not verification. But that didn't work.
Then, there were the new NIP-05 providers, some understood perfectly what a NIP-05 identifier is and applied the correct nomenclature. Others misnamed it as verification, adding confusion to users. This made the problem worse on top of the popular clients showing checkmarks.
(from this point in the article we'll refer to it as a Nostr address)
And so, the scams begin
Spammers and scammers started to abuse Nostr addresses to scam people:
- Some providers have been used by fake crypto airdrop bots.
- A few Nostr address providers have terminated a multitude of impersonating and scam identifiers over the past weeks.
This goes to show that Nostr addresses don't verify anything, they are just providers of human readable handles.
Nostr addresses can be proof of association
Nostr addresses can be a proof of association. The easiest analogy to understand is email:
jack@cash.app -> You could assume this is the Jack that works at Cash App.
jack@nostr-address-provider.example.com -> This could be any Jack.
What now?
We urge that clients stop showing a checkmark for all Nostr addresses, as they are not useful for verification.
We also urge that clients hide checkmarks for all domain names, without exception, in the same way we do not show checkmarks for emails.
Lastly, NIP-05 is a nostr address and that is why we urge all clients to use the proper nomenclature.
Signed:
- Semisol, Nostr Plebs (semisol@nostrplebs.com)
- Quentin, nostrcheck.me (quentin@nostrcheck.me)
- Derek Ross, Nostr Plebs (derekross@nostrplebs.com)
- Bitcoin Nostrich, Bitcoin Nostr (BitcoinNostrich@BitcoinNostr.com)
- Remina, zaps.lol (remina@zaps.lol)
- Harry Hodler, nostr-check.com (harryhodler@nostr-check.com)
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Reasons why Lightning is not that great
Some Bitcoiners, me included, were fooled by hyperbolic discourse that presented Lightning as some magical scaling solution with no flaws. This is an attempt to list some of the actual flaws uncovered after 5 years of experience. The point of this article is not to say Lightning is a completely worthless piece of crap, but only to highlight the fact that Bitcoin needs to put more focus on developing and thinking about other scaling solutions (such as Drivechain, less crappy and more decentralized trusted-channel networks, and statechains).
Unbearable experience
Maintaining a node is cumbersome, you have to deal with closed channels, allocating funds, paying fees unpredictably, choosing new channels to open, storing channel state backups -- or you'll have to delegate all these decisions to some weird AI or third-party services, it's not feasible for normal people.
Channels fail for no good reason all the time
Every time nodes disagree on anything they close channels. There have been dozens, maybe hundreds, of bugs that led to channels being closed in the past, and implementors have been fixing these bugs, but since these node implementations continue to be worked on and new features continue to be added, we can be quite sure that new bugs continue to be introduced.
Trimmed (fake) HTLCs are not sound protocol design
What would you tell me if I presented a protocol that allowed for transfers of users' funds across a network of channels and that these channels would pledge to send the money to miners while the payment was in flight, and that these payments could never be recovered if a node in the middle of the hop had a bug or decided to stop responding? Or that the receiver could receive your payment, but still claim he didn't, and you couldn't prove that at all?
These are the properties of "trimmed HTLCs", HTLCs that are uneconomical to have their own UTXO in the channel presigned transaction bundles, therefore are just assumed to be there while they are not (and their amounts are instead added to the fees of the presigned transaction).
Trimmed HTLCs, like any other HTLC, have timelocks, preimages and hashes associated with them -- which are properties relevant to the redemption of actual HTLCs onchain --, but unlike actual HTLCs these things have no actual onchain meaning since there is no onchain UTXO associated with them. This is a game of make-believe that only "works" because (1) payment proofs aren't worth anything anyway, so it makes no sense to steal these; (2) channels are too expensive to setup; (3) all Lightning Network users are honest; (4) there are so many bugs and confusion in a Lightning Network node's life that events related to trimmed HTLCs do not get noticed by users.
Also, so far these trimmed HTLCs have only been used for very small payments (although very small payments probably account for 99% of the total payments), so it is supposedly "fine" to have them. But, as fees rise, more and more HTLCs tend to become fake, which may make people question the sanity of the design.
Tadge Dryja, one of the creators of the Lightning Network proposal, has been critical of the fact that these things were allowed to creep into the BOLT protocol.
Routing
Routing is already very bad today, even though most nodes have a basically 100% view of the public network. The reasons are that some nodes are offline, others are on Tor and unreachable or too slow, and channels have their balance shifted in the wrong direction, so payments fail a lot -- which leads to the (bad) solution invented by professional node runners and large businesses of probing the network constantly in order to discard bad paths. This creates unnecessary load and increases the risk of channels being dropped for no good reason.
As the network grows -- if it indeed grows and doesn't centralize into a few hubs -- routing tends to become harder and harder.
While each implementation team makes its own decisions with regard to the best way to route payments, and these decisions may change at any time, it's worth noting, for example, that CLN will use MPP to split up any payment into any number of chunks of 10k satoshis, supposedly to improve routing success rates. This often backfires and causes payments to fail when they should have succeeded, and it also contributes to there being proportionally more fake HTLCs than there should be, as long as the threshold for fake HTLCs is above 10k.
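To make the interaction between MPP splitting and trimming concrete, here is a toy calculation; the 10k-satoshi chunk size comes from the paragraph above, while the trim thresholds are made-up stand-ins for whatever a channel's dust and fee parameters would actually produce:

```python
# Toy illustration: split a payment into ~10k-sat parts and count how many of
# them fall below a hypothetical trim threshold, i.e. how many would exist
# only as "fake" HTLCs folded into the commitment transaction's fee.
CHUNK_SAT = 10_000

def split_payment(amount_sat, chunk=CHUNK_SAT):
    parts = [chunk] * (amount_sat // chunk)
    if amount_sat % chunk:
        parts.append(amount_sat % chunk)
    return parts

def count_trimmed(parts, trim_threshold_sat):
    return sum(1 for p in parts if p < trim_threshold_sat)

parts = split_payment(95_000)       # -> nine 10k parts plus one 5k part
# With a low-fee threshold of 3k sats nothing is trimmed; if rising fees push
# the threshold to 20k sats, every single part becomes a fake HTLC.
print(count_trimmed(parts, 3_000), count_trimmed(parts, 20_000))   # 0 10
```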
Payment proofs are somewhat useless
Even though payment proofs were seen by many (including me) as one of the great things about Lightning, the sad fact is that they do not work as proofs if people are not aware of the fact that they are proofs. Wallets do all they can to hide these details from users because it is considered "bad UX" and low-level implementors do not care very much to talk about them at all. There have been attempts from Lightning Labs to get rid of the payment proofs entirely (which at the time to me sounded like a terrible idea, but now I realize they were not wrong).
Here's an anecdote: I've personally witnessed multiple episodes in which Phoenix wallet released the preimage without having actually received the payment (they did receive a minor part of the payment, but the payment was split in many parts). That caused my service, @lntxbot, to mark the outgoing payment as complete, only then to have to endure complaints from the users because the receiver side, Phoenix, had not received the full amount. In these cases, if the protocol and the idea of preimages as payment proofs were to be respected, should I have been the one in charge of manually fixing user balances?
Another important detail: when an HTLC is sent and then something goes wrong with the payment the channel has to be closed in order to redeem that payment. When the redeemer is on the receiver side, the very act of redeeming should cause the preimage to be revealed and a proof of payment to be made available for the sender, who can then send that back to the previous hop and the payment is proven without any doubt. But when this happens for fake HTLCs (which is the vast majority of payments, as noted above) there is no place in the world for a preimage and therefore there are no proofs available. A channel is just closed, the payer loses money but can't prove a payment. It also can't send that proof back to the previous hop so he is forced to say the payment failed -- even if it wasn't him the one who declared that hop a failure and closed the channel, which should be a prerequisite. I wonder if this isn't the source of multiple bugs in implementations that cause channels to be closed unnecessarily. The point is: preimages and payment proofs are mostly a fiction.
Another important fact is that the proofs do not really prove anything if the keypair that signs the invoice can't be provably attached to a real world entity.
LSP-centric design
The first Lightning wallets to show up in the market, LND as a desktop daemon (then later with some GUIs on top of it like Zap and Joule) and Anton's BLW and Eclair wallets for mobile devices, then later LND-based mobile wallets like Blixt and RawTX, were all standalone wallets that were self-sufficient and meant to be run directly by consumers. Eventually, though, came Breez and Phoenix and introduced the "LSP" model, in which a server would be trusted in various forms -- not directly with users' funds, but with their privacy, fees and other details -- but most importantly that LSP would be the primary source of channels for all users of that given wallet software. This was all fine, but as time passed new features were designed and implemented that assumed users would be running software connected to LSPs. The very idea of a user having a standalone mobile wallet was put out of the question. The entire argument for implementation of the bolt12 standard, for example, hinged on the assumption that mobile wallets would have LSPs capable of connecting to Google messaging services and being able to "wake up" mobile wallets in order for them to receive payments. Other ideas, like a complicated standard for allowing mobile wallets to receive payments without having to be online all the time, just assume LSPs always exist; the same goes for changes to the expected BOLT spec behavior with regard to, for example, probing of mobile wallets.
Ark is another example of a kind of LSP that got so enshrined that it became a new protocol that depends on it entirely.
Protocol complexity
Even though the general idea of how Lightning is supposed to work can be understood by many people (as long as these people know how Bitcoin works) the Lightning protocol is not really easy: it will take a long time of big dedication for anyone to understand the details about the BOLTs -- this is a bad thing if we want a world of users that have at least an idea of what they are doing. Moreover, with each new cool idea someone has that gets adopted by the protocol leaders, it increases in complexity and some of the implementors are kicked out of the circle, therefore making it easier for the remaining ones to proceed with more and more complexity. It's the same process by which Chrome won the browser wars, kicked out all competitors and proceeded to make a supposedly open protocol, but one that no one can implement as it gets new and more complex features every day, all envisioned by the Chrome team.
Liquidity issues?
I don't believe these are a real problem if all the other things worked, but still the old criticism that Lightning requires parking liquidity and that this has a cost is not a complete non-issue, especially given the LSP-centric model.
-
@ 32092ec1:8e9fd13a
2023-04-25 18:02:43Bitcoin maximalism, to many, has evolved from a belief that bitcoin is the only crypto asset worth owning to a belief that bitcoin is the only asset worth owning. In addition, many observe that those who declare themselves to be bitcoin maximalists have also declared their allegiance to many other lifestyle choices. This was well articulated by Jameson Lopp in his blog post “A History of Bitcoin Maximalism.”
While some (myself included for a while) embrace the title of bitcoin maximalist, perhaps out of spite given that it was originally intended to be a derogatory term, I eventually have decided to reject it. While I do not own any other crypto assets, and I do believe that bitcoin offers the best risk adjusted investment across many asset classes, it is not, nor could it ever realistically be, my only investment. Due to the ambiguity of the definition of a “bitcoin maximalist” to some, even the fact that I have equity in my primary residence may call into question my status as a bitcoin maximalist. Although most people likely agree that this point of view is nonsensical, I feel that it is also completely pointless to take the time to defend the purity of my “bitcoin maximalism;” and so, I reject it. In place of the bitcoin maximalist label, I hereby declare myself to be a truth maximalist.
The truth is that 99% of non-bitcoin crypto assets are completely worthless; specifically, what I mean is that they literally add zero net positivity to humanity. The remaining 1% of non-bitcoin crypto assets may indeed serve some limited improvements to existing systems or other alternative benefits; however, these crypto assets are also, in my opinion, already highly over valued given the magnitude of the problems they are intending to solve. Most of the incremental improvements offered by cryptos are related to marginally improving systems based on fiat money and the banking industry both of which bitcoin intends to largely obsolete and are still net negative to humanity and designed to syphon wealth from the powerless to the powerful.
I do believe that open source, decentralized, cryptographic solutions will play additional roles in society and offer an opportunity to disrupt many existing centralized solutions. However, most of these solutions will not be investable assets, just as the internet itself is not an investable asset.
I believe it also to be true that investment properties intended for yield generation come with a host of additional costs and risks that are rarely properly accounted for when assessing the value of these assets. After accounting for the probability weighted costs of evictions, seizures, taxes, maintenance, inflation, and loan interest, I find it difficult to justify these investments over bitcoin held in self-custody.
Many other investable assets that simply sit on a bank’s balance sheet (e.g., stocks, bonds, etc.) can easily be seized if any number of three letter agencies decided that you should not have these assets. This can be done with no due process and does not require any in depth investigation to determine the validity of their claims. I believe that the probability of these types of seizures is much higher for known bitcoiners who are also critical of the government and the media, especially if current financial systems begin to collapse and/or if bitcoin’s value dramatically increases.
Beyond seizure of your individual assets, as the world begins to divest from the highly inflated store of value assets listed on the NYSE and NASDAQ due to seizure risks at the sovereign level, the risk of extreme inflation adjusted devaluation of your portfolio should also be considered. Compounding this risk is the opportunity cost associated with the fact that some of the wealth that may be withdrawn from the US stock market is also likely going to flow into bitcoin. These risks are rarely accounted for or quantified when determining the value of a stock portfolio in contrast to simply holding bitcoin in self-custody.
My opinions on these matters are common among bitcoiners who are largely labeled bitcoin maximalists. However, the title of truth maximalist also fits nicely when trying to counter some of the softer points made in Lopp’s blog post mentioned above. My thesis is that most people who have adopted the bitcoin maximalist “lifestyle” choices have done so as a result of their quest for truth, not as a result of their desire to virtue signal their status as bitcoin maximalists.
The truth is that a carnivore diet is much healthier than governments, media and academia would like you to believe (disclosure: I have never tried a carnivore diet)
The truth is that weightlifting is more beneficial than cardio to overall health (disclosure: I am not into weightlifting)
The truth is that traditional gender roles are beneficial to mental health (disclosure: although my wife is a full-time, stay-at-home mom, my marriage is far from perfect).
The truth is that extreme consequences from greenhouse gasses in the atmosphere are extrapolatory and predictions have historically failed miserably (disclosure: I do not think there is zero risk of extreme disasters from excessive levels of carbon in the atmosphere).
The truth is that seed oils are terrible for you (disclosure: I have ingested seed oils since discovering this).
The truth is that COVID vaccines pointlessly add risk to healthy people and children (disclosure: I received two doses of the COVID vaccine prior to realizing this truth and my wife still disagrees with me on this).
The truth is that natural health solutions are often more beneficial than big pharma would like you to believe (disclosure: I have not used any exotic natural remedies).
The truth is that modern art and architecture is garbage (disclosure: this is only a personal opinion).
The Truth is that Jesus Christ died on a cross and ascended into Heaven in order to save the world from sin (disclosure: although I have always been a Christian, over the last year or so I have dedicated a significant amount of time researching this and so far, everything is supporting this to be literally true and much more easily believable than you might think).
Maybe you disagree with some of these things that I have determined to be true. But let me ask, have you done your own research? What is your basis for a counter argument to any of these truths? Did you verify or are you trusting someone else's opinion? If you are trusting someone else's research, did you investigate their credibility or determine if they have significant biases? Certainly, my opinions on the topics listed above are not always based on exhaustive independent research, but I did approach each with skepticism and did not always believe these things with conviction prior to looking into them. I think any bitcoiner would agree that if you are trusting fiat-funded academia, government organizations or mainstream media, you may need to do more research.
The truth is that I will continue to seek the truth and I will encourage others to do the same. The truth is that I welcome new information and listening to opinions that counter my beliefs. When choosing to have strong conviction in minority opinions about the truth, it is extremely valuable to be able to articulate and refute the points that are contrary to your beliefs. It is also important to be able to change your mind when presented with sufficient evidence against your opinion. This is what it means to be a truth maximalist. It just so happens that the truth is that bitcoin is the only crypto asset worth investing in. Don’t believe me? Prove it.
786973
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28What is Bitcoin?
Every sorry guide to Bitcoin starts with this worn-out question, and usually goes straight into answering that it is a "virtual currency"[^moeda_virtual], a stupid concept that clarifies nothing.
Forget that talk. Bitcoin is not a currency. Bitcoin is a protocol[^protocolo].
Why then do people say it is a currency? Because these very hasty people like to say that anything that is easily divisible and transferable, and whose various units are identical to one another, is a currency. So, in that sense, Bitcoin is a currency, but ignore this currency talk.
The Bitcoin protocol says there are "credits" (or "points", or "units") that can be transferred between participants, and several computers, each operating independently of the others, as long as they follow the protocol (that is: as long as they are all running the same program, or compatible programs), will always be in agreement about who spent each credit and how it was spent.
That is basically the idea: a bunch of "virtual points" that are transferred from some people to others, without any organizing entity, "the owner of Bitcoin", "the supreme chief of Bitcoin", that controls anything, coordinates anything, or has power over these transfers.
How it works
Imagine several computers running the same program (or compatible programs). Now imagine that these programs talk to each other over the internet: they send messages to one another and wait for replies. Every now and then a reply doesn't come, or comes in a format the program doesn't understand; that means the other computer went offline, or is running an incompatible version of the program, and then all the others will ignore it. But in general the reply comes just fine and everyone manages to talk to everyone.
Now that you've pictured that, it becomes easy to imagine, for example, that each of these computers keeps a list of all existing bitcoins and who holds each one. They grab the list from the other computers on the network and then keep updating it as new transactions are made. Whenever someone wants to make a transaction, they have to do it through one of these computers: the person goes to the computer running the program and says "I'm so-and-so, I have x bitcoins, and I want to send them to such-and-such place", the program then sends that message to the other computers, which update their lists. The end.
That would be a naive version of the protocol, one that would work if all participants were very honest and nobody ever tried to spend bitcoins they don't have.
For something like this to work in the real world, Bitcoin's great invention had to come in, Satoshi Nakamoto's brilliant insight: the famous chain of blocks, known out there as the blockchain.
It works like this: instead of each computer keeping a list of where each bitcoin is, each computer keeps the aforementioned chain of blocks. A "block" is just a cute name for a bundle of data. Each block is made up of a reference to the previous block and a list of transactions. Since blocks contain a reference to the previous one, there is a sequence, a single file line, and the computer can rest easy knowing the order of the transactions (the transactions that happened in the third block came after the ones that happened in the second block, for example) and knowing that the same bitcoins were not spent twice in a row by the same person, which would be invalid. When a new block shows up, all the computers just check that there is no invalid transaction in it and, if there is, they reject that block entirely and wait for the next one to discard that invalid transaction and come in clean.
Who makes the blocks
In theory, any of the computers can make the next block. The idea is that each person who wants to make a transaction uses a computer on the network to send their transaction proposal ("I want to transfer bitcoins to such-and-such place") to all the others, and that, when someone goes to make a block, they take all the transaction proposals that are valid and put them into the block, which will then be accepted by all the other computers and included in the global chain of blocks. This global chain has to be exactly the same on every computer.
In practice, there is a rule that prevents just anyone from making blocks: the hash of the block data + a magic number must be smaller than a very small value `x`. The magic number is just some number that the computer trying to make the block can adjust, by trial and error, so that the hash comes out the way it wants. `x` can be larger or smaller according to how frequently the latest blocks have been produced. The smaller `x` is, the statistically harder it is to find a magic number that, together with the block data, produces a hash smaller than `x`.
In other words: to make a block, many different magic numbers have to be tried until one is found that satisfies the conditions.
What is a hash? A hash is a mathematical function that is easy to compute in one direction and hard to compute in the other. Multiplication, for example, is easy to do and easy to undo, since its inverse operation, division, is easy too (so much so that anyone with pen and paper can manage it, with that whole business of bringing the numbers down and subtracting and so on). An exponentiation, on the other hand -- a number raised to the 1000th power, for example -- is easy to do, but can only be undone by trial and error (and trial and error is how the computer or the calculator does it).
In the case of Bitcoin, the computer trying to produce a block has to find a number such that `(this magic number + predetermined factors of the block) raised to the 50th power` results in a value smaller than the `difficulty factor`, another factor predetermined by the general state of the chain of blocks.
Suppose a computer finds the number `1798465042647412146620280340569649349251249`, for example, and it is smaller than the `difficulty factor`. It then tells the others: "here is my block, the hash of my block is `1798465042647412146620280340569649349251249`, the `predetermined factors of the block` are `4` (everyone can check these factors), and my magic number is `3`. `(4 + 3) raised to the 50th power` is `1798465042647412146620280340569649349251249`, as everyone can verify, so my block is valid". Everyone then accepts that block as valid and starts trying to find the magic number for the next block (and this time the block factors are different, since a new block has been added to the chain and changed everything).
The rules for defining `x` make it so that, on average, each new block is ready in 10 minutes. So, if there is only one computer trying to produce blocks, the protocol will say that `x` should be relatively high, so that this computer will manage, on average, to find a magic number within 10 minutes. If, however, thousands of super-powerful computers are trying to produce blocks, `x` will be adjusted to a much lower value, so that the effort of all those computers making thousands of trials and errors per second will only manage to find a magic number every 10 minutes.
Today there are computers built especially for searching for magic numbers that can compute hashes much faster than your home computer, which makes it unviable for any non-specialized person to try to produce blocks; see this chart of the evolution of how many hashes are attempted every second.
For some reason it became the convention to call the computers that devote themselves to making new blocks "miners".
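Here is a toy Python sketch of this trial-and-error search, with a made-up block payload and an artificially easy target (the real protocol hashes an 80-byte block header with double SHA-256 and encodes the difficulty target differently):

```python
import hashlib

def mine(block_data: bytes, target: int):
    """Try magic numbers (nonces) until the block hash, read as an integer,
    falls below the target. A lower target means statistically more attempts."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

# Easy, made-up target: roughly 1 in 65536 hashes will qualify.
nonce, block_hash = mine(b"previous-hash|transactions", 2**240)
print(nonce, block_hash)  # anyone can re-run the hash once to verify the claim
```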
If two computers on the network make blocks at the same time, which one counts?
If you already know who makes the blocks, it's easy to see that this is a bit unlikely. But it can still happen. Even if the blocks aren't finished at exactly the same instant, problems can arise because the other computers on the network will receive the two new blocks in different orders, and then it won't be possible to determine which one counts and which one doesn't just by order of arrival.
The computers then sit in a state of indeterminacy about the two possible chains of blocks, A and B, say, both identical up to block number 723 but different with respect to block 724, for which there are two alternatives. The protocol determines that the chain with the most work done is the one that counts, but for a while we can have a state in which some computers on the network only know about block A's existence while others only know about block B's, which is a big mess that can only be resolved by the arrival of the next block, 725.
Since each block refers to a previous block, one of those two 724 blocks has to be chosen by the miners to be the "parent" of block 725 when the magic number is found and the block is made. Even if each miner picks a different parent, most likely only one block 725 will come out of this process, and when it spreads it will determine, through its ancestry, which block 724 ended up counting. If two or more 725 blocks are produced at the same time, the system stays in this state of indecision until 726, and so on.
For this reason you shouldn't trust that a transaction is truly settled just because it was included in a block. You have no way of knowing whether there is some alternative block that will be preferred over yours until at least a few more blocks have been added.
Transactions
Many people believe that addresses exist, that these addresses have an owner, and that this owner owns the bitcoins. This mistaken belief is the result of an analogy with traditional banks and bank accounts (accounts are addresses that have an owner and hold money).
In reality, as soon as transactions are included in a block they do not "sit at an address"; they float around in a big limbo of transactions. From this limbo they can be taken by anyone who fulfills the conditions previously specified by the creator of the transaction.
A more useful analogy than the bank account analogy is the cash analogy: imagine you have a 20-money note and you want to use it to pay someone 10 moneys. You need to break that 20 note into two 10s, and then one stays with you and the other goes to the other person; or, if you have two 5 notes, you can put the two together and hand them over. All these notes you are spending have a prior history: they came from somewhere, at some moment, into your control.
Bitcoin transactions are like that too: you need to specifically reference a previous transaction.
For example,
- Carlos pays 10 bitcoins to Dandara; Dandara now has one transaction worth 10
- Elisa pays 17 bitcoins to Dandara; Dandara has one transaction worth 10 and one worth 17
- Dandara pays 23 bitcoins to Felipe; she joins her two transactions and makes two new ones, one worth 23, which goes under Felipe's control, and another worth 4, which comes back under her own control; Dandara now has one transaction worth 4, Felipe has one transaction worth 23
- Felipe pays 14 bitcoins to Geraldo; he splits his transaction into two, one worth 14 and another worth 9, and so on
One difference, though, is that in Bitcoin nobody knows who owns the note; you only know that you can spend it, if you actually can (if a previous transaction specifies a condition you can fulfill, you must fulfill that condition at the moment you reference that previous transaction). That is why a Bitcoin wallet can say you "have" some number x of bitcoins: the wallet knows which private keys you control and which transactions, out of all the unspent transactions in the entire blockchain, can be spent using those keys.
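A toy sketch of that "notes" model, with a made-up set of unspent outputs; a payment consumes whole previous outputs and creates new ones, including change back to the payer:

```python
# Each unspent output is (owner, amount). A payment consumes whole outputs
# and creates new ones, returning change to the sender -- just like the notes
# in the example above.
utxos = [("Dandara", 10), ("Dandara", 17)]

def pay(utxos, sender, recipient, amount):
    picked, total = [], 0
    for utxo in utxos:
        if utxo[0] == sender and total < amount:
            picked.append(utxo)
            total += utxo[1]
    if total < amount:
        raise ValueError("not enough funds")
    remaining = [u for u in utxos if u not in picked]
    new_outputs = [(recipient, amount)]
    if total > amount:
        new_outputs.append((sender, total - amount))  # change output
    return remaining + new_outputs

print(pay(utxos, "Dandara", "Felipe", 23))
# [('Felipe', 23), ('Dandara', 4)]
```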
A common form of transaction is one that specifies the condition "anyone who has the private key capable of signing for the public key whose hash is stated here can spend this transaction". Other common conditions are those that specify `n` keys, of which `m` must sign the transaction for it to be spent (for example, among Fulano, Beltrano and Ciclano, any two of them have to agree, but one alone is not enough) -- the famous multisig.
Payment channel
A payment channel is a sequence of payment promises made between two Bitcoin users that do not need to be published to the blockchain, and are therefore instant and free.
Before you ask what happens if someone breaks the promise, I should say that "promise" is a bad term, because real promises can be broken, but these promises are self-enforcing: they are signed transactions that can be redeemed at any moment by the recipient simply by publishing them to the blockchain.
The idea is that most of the time you won't need to do that, and you can keep making new transactions that invalidate the old ones until you decide to publish the latest valid transaction. This way your money is safe inside a payment channel.
The big problem is that if the other party decides to cheat and publishes an old transaction, you need to show up within a reasonable window of time (this depends on what the two users agreed on, but I think the default is 24 hours) and publish the latest transaction. There are incentives to keep people from trying to cheat (for example, whoever tries to cheat and gets caught loses all the money that was in that channel) and other mechanisms, such as the watchtowers that keep an eye on other people's payment channels to make sure nobody is cheating.
Example:
- Ângela and Bóris decide to create a payment channel, since they expect to make many small payments between them, in both directions, over several months
- Ângela creates a transaction to an address shared between her and Bóris, worth 1000 satoshis, and from that address she and Bóris create a transaction returning the 1000 to Ângela
- When she decides to pay Bóris 200 satoshis, they create a new transaction that transfers 800 to Ângela and 200 to Bóris
- Now Bóris wants to pay 17 satoshis to Ângela, so they create a new transaction that transfers 817 to Ângela and 183 to Bóris
- And so on: they keep creating new transactions that invalidate the previous ones and keep shifting the "balance" of the payment channel. Whenever either of the two wants to withdraw the money in their balance, they just publish the latest transaction and that's it.
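A toy sketch of that state-update logic, tracking only balances and a sequence number (the real protocol uses presigned transactions and revocation secrets, all omitted here):

```python
# Each channel state is a pair of balances plus a sequence number; a newer
# state invalidates the older ones, and either side can "close" by publishing
# the latest state they have. Purely illustrative, not the real protocol.
class Channel:
    def __init__(self, angela: int, boris: int):
        self.state = {"n": 0, "Angela": angela, "Boris": boris}

    def pay(self, sender: str, receiver: str, amount: int):
        if self.state[sender] < amount:
            raise ValueError("insufficient channel balance")
        new = dict(self.state)
        new["n"] += 1
        new[sender] -= amount
        new[receiver] += amount
        self.state = new          # both parties sign this; older states are now invalid

    def close(self):
        return self.state         # the latest signed state is what goes on-chain

channel = Channel(angela=1000, boris=0)
channel.pay("Angela", "Boris", 200)
channel.pay("Boris", "Angela", 17)
print(channel.close())            # {'n': 2, 'Angela': 817, 'Boris': 183}
```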
The Lightning Network is a big network of payment channels that lets people pay other people they are not directly connected to by a channel, through a route that traverses several other people's channels and adjusts their balances.
Are there other cryptocurrencies besides Bitcoin?
To begin with, never use that word again. "Cryptocurrency" is even worse than "virtual currency"[^moeda_virtual].
Now, answering the question: yes, in a sense they exist; they are called "altcoins" or "shitcoins", because they are, in fact, complete garbage.
On the other hand, one can say they are not comparable to Bitcoin, because there can only be one money in a free market of monies, and that spot already belongs to Bitcoin, and also because Bitcoin is free, with no owners and no great powers controlling it, which cannot be said of any altcoin.
After Bitcoin was invented and its brilliant insight was absorbed by the interested community, thousands of people copied the protocol, with small modifications, to create their own coins.
That is how Litecoin, Ethereum and many others came about. Deep down they are just copies of Bitcoin that try to improve it in some way or add other functions.
See also:
- Little by little, and then all at once, Parker Lewis
- There is no solution
- The rot
- Bitcoin as a human social system
- Lightning Network
[^protocolo]: In this context, a protocol is a set of rules (invented arbitrarily or arising from usage and custom over time) that allow two different computers to understand each other and know what kind of messages and behavior to expect from one another.
[^moeda_virtual]: Virtual? "Virtual" was supposed to mean something that is not yet "actual", that is, something that hasn't materialized in reality. But since our fellow Portuguese speakers decided it should also mean anything that happens on a computer, "virtual currency" ended up meaning a currency that exists on a computer. Bitcoin is clearly a currency that exists on a computer, but the concept is confusing even so. Isn't a traditional bank transfer also "virtual money"? It happens on a computer, and you still haven't gotten the paper notes in your hand, so it's virtual. Calling only Bitcoin a virtual currency may create the impression that Bitcoin is a little toy, like, for example, the virtual currencies that exist inside the universe of simulation games such as, I don't know, World of Warcraft.
-
@ 82341f88:fbfbe6a2
2023-04-11 19:36:53There’s a lot of conversation around the #TwitterFiles. Here’s my take, and thoughts on how to fix the issues identified.
I’ll start with the principles I’ve come to believe…based on everything I’ve learned and experienced through my past actions as a Twitter co-founder and lead:
- Social media must be resilient to corporate and government control.
- Only the original author may remove content they produce.
- Moderation is best implemented by algorithmic choice.
The Twitter when I led it and the Twitter of today do not meet any of these principles. This is my fault alone, as I completely gave up pushing for them when an activist entered our stock in 2020. I no longer had hope of achieving any of it as a public company with no defense mechanisms (lack of dual-class shares being a key one). I planned my exit at that moment knowing I was no longer right for the company.
The biggest mistake I made was continuing to invest in building tools for us to manage the public conversation, versus building tools for the people using Twitter to easily manage it for themselves. This burdened the company with too much power, and opened us to significant outside pressure (such as advertising budgets). I generally think companies have become far too powerful, and that became completely clear to me with our suspension of Trump’s account. As I’ve said before, we did the right thing for the public company business at the time, but the wrong thing for the internet and society. Much more about this here: https://twitter.com/jack/status/1349510769268850690
I continue to believe there was no ill intent or hidden agendas, and everyone acted according to the best information we had at the time. Of course mistakes were made. But if we had focused more on tools for the people using the service rather than tools for us, and moved much faster towards absolute transparency, we probably wouldn’t be in this situation of needing a fresh reset (which I am supportive of). Again, I own all of this and our actions, and all I can do is work to make it right.
Back to the principles. Of course governments want to shape and control the public conversation, and will use every method at their disposal to do so, including the media. And the power a corporation wields to do the same is only growing. It’s critical that the people have tools to resist this, and that those tools are ultimately owned by the people. Allowing a government or a few corporations to own the public conversation is a path towards centralized control.
I’m a strong believer that any content produced by someone for the internet should be permanent until the original author chooses to delete it. It should be always available and addressable. Content takedowns and suspensions should not be possible. Doing so complicates important context, learning, and enforcement of illegal activity. There are significant issues with this stance of course, but starting with this principle will allow for far better solutions than we have today. The internet is trending towards a world where storage is “free” and infinite, which places all the actual value on how to discover and see content.
Which brings me to the last principle: moderation. I don’t believe a centralized system can do content moderation globally. It can only be done through ranking and relevance algorithms, the more localized the better. But instead of a company or government building and controlling these solely, people should be able to build and choose from algorithms that best match their criteria, or not have to use any at all. A “follow” action should always deliver every bit of content from the corresponding account, and the algorithms should be able to comb through everything else through a relevance lens that an individual determines. There’s a default “G-rated” algorithm, and then there’s everything else one can imagine.
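As a rough illustration of what that client-side choice could look like, here is a toy sketch in Python; every name in it is invented for the example and none of this is any real client's or protocol's API. Posts from followed accounts are never filtered, and only the rest of the feed passes through whichever ranking lens the user picked:

```python
# Toy sketch of "moderation by algorithmic choice": the user, not the
# platform, picks which ranking function to apply to non-followed content.

def rank_chronological(posts):
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def rank_g_rated(posts):
    blocked = {"spam", "nsfw"}
    return [p for p in rank_chronological(posts) if not blocked & set(p["tags"])]

ALGORITHMS = {
    "chronological": rank_chronological,  # no filtering at all
    "g-rated": rank_g_rated,              # the default family-friendly lens
}

def build_feed(all_posts, follows, chosen_algorithm):
    # a "follow" always delivers every post from that account, untouched
    followed = [p for p in all_posts if p["author"] in follows]
    # everything else goes through whatever lens the user chose
    the_rest = [p for p in all_posts if p["author"] not in follows]
    return followed + ALGORITHMS[chosen_algorithm](the_rest)

feed = build_feed(
    [{"author": "alice", "created_at": 2, "tags": []},
     {"author": "bob", "created_at": 1, "tags": ["spam"]}],
    follows={"alice"},
    chosen_algorithm="g-rated",
)
print(feed)  # alice's post is always there; bob's is dropped by the chosen lens
```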
The only way I know of to truly live up to these 3 principles is a free and open protocol for social media, that is not owned by a single company or group of companies, and is resilient to corporate and government influence. The problem today is that we have companies who own both the protocol and discovery of content. Which ultimately puts one person in charge of what’s available and seen, or not. This is by definition a single point of failure, no matter how great the person, and over time will fracture the public conversation, and may lead to more control by governments and corporations around the world.
I believe many companies can build a phenomenal business off an open protocol. For proof, look at both the web and email. The biggest problem with these models however is that the discovery mechanisms are far too proprietary and fixed instead of open or extendable. Companies can build many profitable services that complement rather than lock down how we access this massive collection of conversation. There is no need to own or host it themselves.
Many of you won’t trust this solution just because it’s me stating it. I get it, but that’s exactly the point. Trusting any one individual with this comes with compromises, not to mention being way too heavy a burden for the individual. It has to be something akin to what bitcoin has shown to be possible. If you want proof of this, get out of the US and European bubble of the bitcoin price fluctuations and learn how real people are using it for censorship resistance in Africa and Central/South America.
I do still wish for Twitter, and every company, to become uncomfortably transparent in all their actions, and I wish I forced more of that years ago. I do believe absolute transparency builds trust. As for the files, I wish they were released Wikileaks-style, with many more eyes and interpretations to consider. And along with that, commitments of transparency for present and future actions. I’m hopeful all of this will happen. There’s nothing to hide…only a lot to learn from. The current attacks on my former colleagues could be dangerous and don’t solve anything. If you want to blame, direct it at me and my actions, or lack thereof.
As far as the free and open social media protocol goes, there are many competing projects: @bluesky is one with the AT Protocol, nostr another, Mastodon yet another, Matrix yet another…and there will be many more. One will have a chance at becoming a standard like HTTP or SMTP. This isn’t about a “decentralized Twitter.” This is a focused and urgent push for a foundational core technology standard to make social media a native part of the internet. I believe this is critical both to Twitter’s future, and the public conversation’s ability to truly serve the people, which helps hold governments and corporations accountable. And hopefully makes it all a lot more fun and informative again.
💸🛠️🌐 To accelerate open internet and protocol work, I’m going to open a new category of #startsmall grants: “open internet development.” It will start with a focus of giving cash and equity grants to engineering teams working on social media and private communication protocols, bitcoin, and a web-only mobile OS. I’ll make some grants next week, starting with $1mm/yr to Signal. Please let me know other great candidates for this money.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Carl R. Rogers on science
I believe that the primary objective of science is to provide the investigator himself with a more secure and more satisfying hypothesis, conviction and faith. To the extent that the scientist tries to prove something to someone else -- a mistake I have made more than once -- I believe he is using science to remedy a personal insecurity, diverting it from its true creative role in the service of the individual.
On Becoming a Person, random page
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28contratos.alhur.es
A website that allowed people to fill out a form and get a standard Contrato de Locação (a Brazilian residential rental agreement).
Better than all the other "templates" that float around the internet, which are badly formatted `.doc` files. It was fully programmable, so other templates could be added later, but I never did it. This website made maybe one dollar in Google Ads (and Google has probably stolen it, as they did with so many other dollars, through their bizarre requirements).
-
@ be318ab6:dd25a806
2023-04-09 02:55:48Howdy y'all,
This is the Ramble #001
I was gifted extra spare time (layoffs), and I'm excited to put it into learning about some of the new exciting things swirling around us at an ever-accelerating speed & also about the good old ever-useful manual skills, which are disappearing almost at the same rate.
The focus will be on anti-fragility tools and skills. I'll be touching on the topics of Nostr, AI, Bitcoin, Privacy and self-sufficiency skills like woodworking or food production. I'll also include things I build just for the sake of it (I get more value and motivation to do more of it if I can show off something tangible, I guess). It's not gonna be the most frontier-breaking r0ckstar knowledge, but it will humbly keep moving forward one step at a time along with my selfish pursuit of improving my knowledge and skills along these vectors. I also won't avoid some rants about the insane corruption in our Money and all the downstream problems it brings.
Why?
1. Why have a blog?
    - I want to practice my writing. Not to challenge the AI, but just to better remember the learnings, and to clarify & structure my thoughts.
    - The public record forces accountability on me. I want to commit to a habit of writing a few posts per week.
2. Why is it on Nostr?
    - When dozens of the smartest and most honest individuals I respect flock to this like a bunch of crazy birds, it's something I don't want to miss out on, and I want to get intimately familiar with the tech.
    - It's an uncensorable protocol that allows you to own your data and can re-architect many broken things on the current web.
    - It's easy for me to push my tip jar in y'all's faces and see if anyone gets any real value out of this.
    - And, I just wanted to play with it. Some clients might still be buggy, but the speed of development and improvement is light-years ahead of anything I've seen before.
I'll probably write some intro to NOSTR next time around; but in the meantime Gigi will help you out at nostr-resources.com
3. Why include the offline skills and tools?
    - Humans are not meant to stare into screens all day.
    - I see it as one of the most competitive edges for the next 10 years.
And that's about it. I'm really excited about the days and weeks ahead, and will have some fun. If you have read all the way down here, thank you, and PV to you!
PS.
Resources/Inspiration for this issue:
- Marty's Bent - duh!
- BloggingBitcoin - How To Start A Blog Today
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Game characters and symbols
The feeling of "being" a character in a game or in make-believe play is perhaps the closest I have ever come to understanding a religious symbol.
The consecrated host is, according to the religion, the body of Christ, but our modern mind can only conceive of it as a representation of the body of Christ. In the same way, other cultures and other religions have similar symbols, including some in which the participant in the ritual himself plays the part of a god or something of the kind.
"Plays the part" is, again, the modern mind's interpretation. The person there is the thing, yet at the same time that he is it he also knows he is not, that he is still himself.
In video games and children's games in which one takes on a character, the player is the character. Among the players, nobody says someone is "acting"; he simply is, and that's that. There is no other name or verb for it. At most "embodying", but that is already journalistic vocabulary made to help people outside the game understand it.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The logical structure of the textbook
All textbooks and courses present their contents according to a prior logical organization, an outline of all the content they deem relevant, everything neatly arranged into topics and subtopics following the logical order that comes closest to the natural order of things. Picture the table of contents of a manual or textbook.
My experience is that this method works very well for nobody to understand anything. The perfect logical organization of a field of knowledge is the end result of a study, not its starting point. The people who write these manuals and teach these courses, even when they know what they are talking about (an apparently rare event), do so from their own point of view, reached after a lifetime of dedication to the subject (or else by copying other manuals and textbooks, which I would guess is the more common method).
For the novice, the best way to understand something is through immersion in micro-topics, without much notion of where that topic sits in the overall hierarchy of the science.
- Revista Educativa, an example of how to teach children nothing.
- Zettelkasten, order emerging from chaos, instead of topics fitting into a preexisting order.
-
@ aa55a479:f7598935
2023-02-20 13:44:48Nostrica is the shit.
-
@ 634bd19e:2247da9b
2023-02-15 08:00:45Can I even write with this? The event handling for Japanese input is acting weird lol
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Idea for a centralized legal system, but with a bit of logic
A lawsuit is, essentially -- I imagine, in my naive layman's view -- an appeal made to a judge so that he recognizes certain facts as proof of a certain phenomenon typified by a certain law.
So I imagine the following:
A petition is no longer a huge document written in disgusting language, with references to laws and to factual evidence scattered according to the lawyer's essay-writing (in)ability, but just a logical schema -- maybe even a drawn diagram (or who knows, perhaps a series of instructions a computer can understand?) -- showing the connection between the law, the facts and the requests, for example:
- law such-and-such says that nobody may sell cigarettes
- John Doe sold cigarettes
- proof that John Doe sold cigarettes is the photo taken on such-and-such street on such-and-such day showing John Doe selling cigarettes
- the same law demands that John Doe pay a fine
This example is still too wordy, but it is just a simple one. More complicated things would need other forms of expression if we want to avoid the long legal dissertations currently in vogue.
The idea is that the schema above stands on its own. A proto-judge can rule it valid or invalid purely by its internal logic.
The other part of the judgment would be the link between this schema and external reality: the evidence would come attached to the petition. In this case, a photo of John Doe would be attached to point 3. The text of the law referred to also needs to be attached to point 1, but that can be done automatically from the law's number.
Once we have a valid logical schema, another proto-judge, or several others, can judge each piece of evidence individually: check whether the text of the law matches the interpretation made in point 1, and whether the photo attached to point 3 really is a photo of the defendant selling cigarettes and not of a bear eating oranges.
Each of these judgments can be made without the proto-judge knowing anything about the rest of the case: the first proto-judge doesn't need to see the photo or the law, the second doesn't need to see the logical schema or the photo, the third doesn't need to see the law or the logical schema, and even so, at the end, we would have a ruling on whether or not the petition is well-founded, as impersonal and probably as fair as possible.
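To give this a concrete shape, here is a rough sketch; every name in it (`Claim`, `Petition`, `check_logic`, `check_evidence`) is invented for the example, and it is only the outline of such a system, not a real one. One function rules only on the internal logic of the schema, while another rules on a single piece of evidence with no knowledge of the rest of the case:

```python
# Rough sketch of a petition as a logical schema judged in independent pieces.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Claim:
    id: int
    text: str                        # e.g. "law 1234 says nobody may sell cigarettes"
    evidence: Optional[str] = None   # reference to an attached document or photo

@dataclass
class Petition:
    claims: list[Claim]
    request: str                     # what is being asked of the court
    depends_on: list[int] = field(default_factory=list)  # claim ids the request rests on

def check_logic(petition):
    # proto-judge #1: looks only at the internal structure,
    # never at the photos or the law texts themselves
    known = {c.id for c in petition.claims}
    return all(dep in known for dep in petition.depends_on)

def check_evidence(claim, attachment_matches_claim):
    # proto-judge #2..n: one claim, one attachment, nothing else of the case
    return claim.evidence is not None and attachment_matches_claim

petition = Petition(
    claims=[
        Claim(1, "law 1234 says nobody may sell cigarettes", evidence="law-1234.txt"),
        Claim(2, "John Doe sold cigarettes", evidence="photo-street-x.jpg"),
    ],
    request="John Doe must pay the fine set by law 1234",
    depends_on=[1, 2],
)

print(check_logic(petition))                                  # judged by one person
print([check_evidence(c, True) for c in petition.claims])     # judged by others
```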
The defense would consist of pointing out errors in the logical schema or flaws in the link between reality and the schema. For example:
- a photo like that is not proof that John Doe sold anything, he could have just been passing by
- he was in fact just passing by, as proven by this document showing his attendance at a class of the UFMG law program at that very same time
Forgive me if I'm talking nonsense, but it's 5 a.m. and I'm still half asleep. There are obviously several problematic points here, and I want to understand them, but the general shape seems quite reasonable to me.
What I described above is a proposal, let's say, for a legal system that differs in nothing from our current legal system, except in its form (not in the scholastic sense). It is also an attempt to understand its essence.
The advantages of this format over the current one are many:
- less paper, less stuff to read, no endless repetition of legal citations and no extremely long dissertations written by illiterate lawyers who destroy the language and everyone's intelligence
- a drastic reduction in the time each judge spends on each case
- a reduction in each judge's power (if every act of human judgment needed in a case can be performed by any judge, without knowledge of the other aspects of that same case, everything becomes much faster, and each of these judgments can be made by several different judges, chosen at random)
- a reduction in each judge's pompousness: with less power and simpler duties, a judge no longer needs to be a special person earning millions; he can be an ordinary person, a proto-judge, earning less (which would even make it possible to have more of them and increase the reliability of each judgment)
- judges can work from home and at any time
- a digital case system finally starts to make sense (because it is ridiculous that the current digital system is just a way of passing Word documents back and forth)
- the end of conciliation hearings, which are a monstrosity created solely by the need to reduce the number of pending cases and which end up stripping justice of its meaning (the parties are gently pressured to ignore whether their positions are valid or not and to settle, under penalty of the judge getting angry at them afterwards)
Thousands of precautions would have to be taken if such a system were ever to be implemented (hahaha), perhaps keeping a traditional form of judgment, in person and with a judge or jury aware of the whole situation, but only for cases that reach a certain point, and so on.
See also
- P2P reputation thing, for the foundation of an anarchic legal system.