-
@ f977c464:32fcbe00
2024-01-30 20:06:18
On the third day after the sun disappeared, it was twenty past noon. There were about two hours until the train departed. We were all on the platform. Lined up side by side with our suitcases, boxes, and shapeless bags of every color, we checked the giant clock above us at short intervals.
But no matter how hard we stared, time refused to move at the speed we wanted. Everyone was stuck inside an ordinary natural event that should have lasted only a few minutes, trying to crawl out of the darkness.
I turned toward the waiting room. Behind the steam of my breath, with its thick stone walls and windows, it stood as safe and cold as a morgue. That was why they had started moving the corpses to places like this. So all of people's goodness had existed only because others could see them; when the days became darker than the nights, they had immediately gone for each other's throats, torn holes in each other's flanks, gouged out eyes and shattered skulls.
When the station's light flickered, I felt the darkness sink into the back of my neck. If the power went out now, in a place this crowded, what would happen to us?
When one of the soldiers inside caught my gaze, I pretended to focus on the clock again. Only a few minutes had passed.
"My fields are gone. They were full of sunflowers as tall as me. Ah, they're all dying. Afitap's flowers too, g-"
"Uncle, for the love of God, be quiet. You'll get us all burned."
When you stood in the dark, you began to wish you were invisible. No one should see you, no one should even hear your breath. And you shouldn't get underfoot; they shouldn't bump into you by accident, shouldn't trip over you and stumble. Otherwise a rage whose target wasn't actually you would begin to rumble, rolling over you and erasing the first shock and pain you felt.
The first one to speak, an elderly man with a flat cap on his head, was blowing on his calloused hands. His eyes and nose had reddened. For him, the sun's disappearance was not a catastrophe in itself. If anything, he feared its practical consequences for his life. The loss of a memory, and with it, a little more of someone he had already lost. As the reasons for him to stay alive dwindled, he was also losing the resources that would keep him alive.
The young man who had silenced him had wrapped his scarf around his whole head, trying to hide every part of his body except his beard and his eyes, which had sunk deep into his face. He was one of those who, even without grasping exactly what the sun's disappearance meant, could feel its terror exactly as it was.
They did not ask to whom the sun would be given after it was taken from them. They hadn't asked any questions when it was first given to them, either.
How do two hours pass?
I felt a sharp pain above my stomach, toward my right side. My liver. I closed my eyes. Someone beside me set a metal object down on the ground. It was a cage. I knew it even before the wet smell of the eagle inside reached my nose.
"Again?" the eagle asked me. Its wings bloody. Every piece of time had coiled upon its feathers. Its beak moved as if chewing something, presumably a scrap of meat. Old memories die hard, it seems. Even though it was now behind the thick bars of its cage, I alone knew this was a deception. At any moment it could stretch its wings out to both sides; the air, rippling with that motion, could flex the cage and widen until it swallowed us all; the bars might first seem to spread beneath our feet, but in fact they would rise above us all and collapse onto our heads.
I looked down. The train tracks, pinned down by wooden sleepers, going nowhere yet reaching everywhere. If I jumped down and ran... My bag was too heavy. Before I could take more than a few steps, the eagle would bury its face in my body.
"This time is different," I answered it. "You can't say 'again.' This isn't something I'm repeating. You can't even intone it as though I were making a mistake. This time, I know that people don't deserve it."
"Then it's worse. It means you're an utter fool."
"That people don't deserve it doesn't mean humanity doesn't deserve it."
The youngish but sunken-faced soldier I had just locked eyes with was still looking at me. He surely hadn't realized I was talking to an eagle. He must have assumed I was talking to someone next to me. But as far as he could tell, I wasn't looking at the person I was speaking with. I had timidly bowed my head. When our eyes met once more, he said something to the other two soldiers inside, and while they turned to size me up, he stepped outside.
The two men beside us who had been talking a moment ago were also looking in astonishment, now at me, now at the eagle.
"Only this time your heart will break too, Prometheus," the eagle said to me. "Maybe it will be the last. You know, these things you do cause a lot of problems."
Supposedly it was protecting me. That was what galled me most. The scream it had let out while perched on my groin, its eyes gleaming behind its bloody face, still trembled in my ears. In this scene, nobody cared about anybody. Not the eagle, not the men beside us, not the soldier who must by now have come up behind me, not the train tracks, not the hours that refused to pass...
When I turned around, the soldier really was there. It had to be so; I had seen it, I knew. Prophecies... He was saying something, but though his mouth moved, no sound came out. He had slowed down, matched himself to the flow of the clock. You could tell from the spit frozen in the air: he was furious. I knew it was because he was afraid. They had been afraid every time. That they had forgotten me didn't make their job any easier. It only made them give me a new name. With this one, it was certain they would curse me.
How pitiful it was that what should be and what is are different. And that the nature of what should be is malign rests, in a way, on exactly that.
"You fool," the eagle said to me. A scream that transcended time. It had gotten ahead of us all, but it could stop no one.
How many two-hour stretches can you squeeze into eternity?
I couldn't squeeze in even one.
I was reaching for my bag. The soldier was lowering the rifle from his back. I was in no hurry; he, though, was all thumbs. Yet everything would happen exactly when it was supposed to. The prophecy was at the tips of someone else's fingers.
The sun rose upon the earth with the crack of a rifle.
From the middle of my chest sprawled across the rails, from inside a bag.
Note: This story was first published in Esrarengiz Hikâyeler in 2021.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
O Planetinha
Green smoke entering my nostrils, and an out-of-tune choir laying down a melodic backdrop.
In the far reaches of the galaxy there was an isolated little planet. It was a happy planet.
The man dressed as a wizard began to appear from behind the green smoke.
The little planet had received three gifts, but its inhabitant, man, was in such a state of confusion that he threatened to ruin them. Men had already enslaved the first gift, life; they fought against the second gift, death; and there were some who thought they should utterly destroy the third, love, and thereby bring total disorder to the poor lost little planet, which was called Earth.
The out-of-tune choir came in before the word "Earth," singing it several times as if imitating an echo: "earth-earth-eaaarth." After a dramatic pause, the man dressed as a wizard spoke again.
Earth, our mother ship.
At that moment I moved away. In front of the stage where the wizard and his choir made their appeals to the crowd there were several stands covered with the traditional four-legged frame and white canvas. In every corner of the square there were people, people of the most varied kinds. Curious visitors drawn in by the green smoke and the little stalls, people taking advantage of the bustle to sell sweets without paying taxes, couples hugging while standing to ward off the cold, the usual joggers out on their run, people covered in beards and dressed to imitate the hippies of the sixties, selling necklaces laid out on the ground, passersby young and old, dressed like clubgoers or like free-bus activists, groups of cycling enthusiasts.
The wizard was now appealing to us, men, inhabitants of the isolated little planet, to start seeing the little planet, our mother ship, as a whole, and to become aware that it was heading into deep trouble. The idea, reinforced by the event's logo, was that we should stop looking only at our own lives and think about the planet.
The event's logo, a stylized drawing of planet Earth, had nothing to do with its name, "Festival Andando de Bem com a Vida," but it had been placed there strategically by the organizers, from whom the message spoken by the wizard seemed precisely to come.
That multitude of people who, like me, had their own worries could not see the chaotic picture they formed, each with their isolated acts, there in that isolated square, on that isolated planet. When the bearded hippie, almost an Osho, startled a couple trying to sell them a necklace, the amount of chaos this added to the scene was gigantic. For a second I could see, as if from far away and above, with all the pretension that this imaginative state carries, the complete scene of the chaos.
A mother ship, the science-fiction kind, inhabited by millions of people, drifted through space without direction, not knowing that just ahead a long spatial precipice awaited it, to its complete ruin and that of its inhabitants.
As accustomed to that ship as they had once been to their native land, the men lived their own lives without even remembering that they were wandering through space. Nobody knew who was steering the ship, and nobody cared.
At the end of the film it is revealed that it was the complete sum of the chaos each inhabitant produced, with their selfish gestures incapable of taking the whole into account, that determined the mother ship's direction. The effect, however, was not immediate, as it never is. There were real people in charge of steering the ship, but they were drunken, corrupt people who spent their time fighting over control of the ship and the power it gave them. Power, status, money!
These drunkards were drawn there by the corruption of institutions and of common morality which, deep down, was caused by the selfishness of the population, through a complex social process (one that the film simplifies into the individual actions of a public-entertainment magnate).
The man dressed as a wizard was one more agent of chaos. With his smoke-filled stage and his stroboscopic outfit, he thought he was doing good by warning his audience, every Friday, that there was something that needed to be done, that each person listening was responsible for the planet. His inability, however, to explain what needed to be done only increased the general anguish; the guilt he threw onto his audience, promptly accepted and passed along to each person's family and friends, tormented them daily and kept them from leading a decent life at work and at home. The stressed families were constantly fighting, and the most insignificant reasons led to the most horrendous consequences.
The wizard, who after the show would take off his crooked hat and go drink beer at a dive bar, was responsible for a considerable share of the chaos steering the ship toward its wretched end. In the film, however, one of the passersby who in passing heard a piece of the wizard's speech awakened in himself a transformative consciousness and, with superhuman powers then granted to him by an initiatic order of good, or not, using only his ordinary human powers, the passerby (a man in the first version of the film, a woman in the second) manages to fix the institutions and remove the drunkards from the controls of the machine. The question of public morality is ignored to shorten the plot, already two hours and forty minutes long, but it is implied that it too was resolved.
On the real planet Earth, which is not headed anywhere, held by gravity to the Sun, where people live their own lives because it is impossible for them to live anyone else's, where they have no global consciousness of anything because it is only really possible to be conscious of oneself, and where the majority, in one way or another, is trying as best it can to do things right, the film is shown.
For most viewers it is a film that evokes reflection, a powerful film. For a second they get the same glimpse of generalized chaos that I had there in that square. For a small share of the viewers (among them some who were in the wizard's audience, the wizard himself, the Osho follower, the couple of two women and the brigadeiro vendor, joined also by television and newspaper critics and people who talk endlessly on the internet) the film is a horror, the film is a vulgarization of a real and serious problem, the film resorts to the figure of the savior hero and sends a totally wrong message: that the majority of the population can keep living their own miserable little lives while waiting for a hero who comes down from Olympus and saves them from the mess they themselves caused; it is a film that does an enormous disservice to the cause.
On the day after the premiere, in a somewhat expensive bar near the square, at a table with eight people, six of them from the first group and eight from the second, the question of whether or not the film will win the Oscar is debated. I am at home sleeping and hear none of it.
-
@ f977c464:32fcbe00
2024-01-11 18:47:47
The day he saw himself in a mirror for the first time, he understood that he was like the others. Yet every human being was supposed to be unique. When he got angry, say, one eyebrow might furrow a few millimeters less than the other; or when he was happy, his lip might ripple at an angle all his own. And even if none of that were possible, at the very least, a light that only he could possess might shine in his eyes. Any one of those millions of tiny things so ordinary that no one notices them. Anything at all.
But looking at his reflection he could find none of these, and on that day, which had begun no differently from any other, in an ordinary metro station where he waited for his car to arrive so he could go to work, the illusion he had been lost inside began to dissolve.
First his skin peeled away. It didn't exactly peel, in fact; rather, it turned into sparks that shot off his body, and a moment later died out into ash, scattering in the air. Behind it, just before vanishing, it left the corpses of multicolored fairies, dimly visible for a short while, keening for the destruction of a soul. Contrary to expectation, the smell of dust spread through the air.
He was horrified, of course. They were horrified. Fifty workers tearing at their clothes in panic. And the cause of it all was that car.
His hair fell out too. Each strand, before reaching the ground, split in two at every centimeter until it ceased to exist.
In a world where every surface is matte, where nothing is reflected, where water runs black and you can only look at yourself through cameras: seeing yourself for the first time in a mirror placed inside the train car.
The whites of his eyes evaporated and scattered into the air; his lenses melted and spread to fill the vacated space. Eyes created not to see the truth, and therefore unready to see it, and never to be ready.
One might think everything ended in that instant. A deep darkness, and death. The end of that moment when seeing was seeing.
They were dead when I arrived.
I mean, they had broken down.
It wasn't possible for me to install their memory units into new carriers. Physically they were in perfect condition, and I could have repaired the ones that weren't, but in all that chaos they had reprogrammed themselves and modified their inputs.
He flung one of the memory units across the table. They were sitting in a packed bar. He and his friend.
The things we do just so these androids who think they're human don't get traumatized and go insane, it boggles the mind.
He gestured behind himself with his hand.
According to the police, someone placed a mirror inside the car. And these poor things lost it the moment the doors opened and they suddenly saw their reflections.
His friend asked how all this made him feel. That is, hadn't it shaken him to see androids that broken, androids that believed they were human, lying on the ground having torn themselves apart?
No, in the end they're things created for a specific purpose. When a quality computer of mine breaks down, I'm upset, because I paid for it. These belong to the state. What's it to me?
His friend nodded understandingly and took a sip of his water. He loosened his tie a little.
Are you sure you don't want a beer?
He said he didn't. Really, though, why were these androids going insane?
Simple. When they code their AI, they write certain things in. The software people. You know I'm in hardware. And these things think they're human. Just look at them.
He lowered his voice.
They all look like the crash-test dummies they put in cars. They don't even have mouths or noses, but that one over there, for instance, has been grooming his beard ever since we arrived. No, the really crazy part is that all the others think he actually has a beard.
His friend said he couldn't see the connection with their going insane. He went on in his normal voice.
Don't you get it, their lenses can't even distinguish a mirror. They see themselves, bam, just like that. Exactly as they are...
And why is that? What was the need?
How should I know, brother! These are questions for the afterlife.
He drifted off, staring at his beer. Then he leaned over the table, getting right up close to his friend. A blurry man, like someone at the end of a tunnel, his shape and features indistinct.
How the hell do I even know you? Who are you?
They removed the memory unit from the machine. There were two of them. The officers in charge of the investigation.
─ Are we starting over, asked the first officer, the one holding the memory unit.
─ Let's try once more, but this time start by asking directly about the mirror, the second officer replied.
─ Agreed. It's running well enough.
While the simulation was loading, the second officer, standing a little behind and scratching his forehead, couldn't keep himself from asking:
─ Why did they even send these androids to a scene like that? Obviously they'd get fried. What a waste. If we went and looked ourselves, they wouldn't have to smash the mirror and destroy the evidence, either.
The other officer was about to swivel slightly in his chair when the technician answered the question through the computer's speaker.
Is there a single thing we do around here that isn't crooked, brother?
But it was not an end. When all the illusion covering them had dissolved and they were left with their naked, sexless, identical bodies, it was the world's turn.
They fell to the ground. The moment their hands (made, like the rest of their bodies, of black tourmaline, the knuckles of steel) touched the ground, the metro floor disintegrated.
The floor tiles had been white before, and very shiny. The fluorescent light above reflected off them unchanged, illuminating a station where there was not a single stain and not a single mote of dust in the air.
Announcements were posted on the walls. There was, for example, the cheerful poster of a techno-blues festival starting tomorrow evening at 20.00 at the cultural center. Next to it, in larger, yellow-set letters, framed by horizontal black dashes and featuring a stick figure falling off a platform, the warning: "Caution! Do not cross the yellow line!" A little farther along, the daily official gazette; beyond it, posters for an action film and for another, romantic-comedy film; various small-print announcements listing what was to be done and what was not... The wall was one long panel, stretching on and on. It repeated every ten or fifteen meters.
The whole station was about a hundred meters long. Around ten meters wide.
In front of them stood the car, its open door revealing that wretched mirror. The train was too long to fit the station. It stretched out with the sharpness of a sword, interrupted here and there by the couplings between cars.
Since none of the cars had windows, the inside of the train, and whoever was in it, remained unknown.
Then the tiles rose, breaking apart into their particles. In the fluorescent light they drowned everything in dust and buried the place under a gray fog. A very short moment. They didn't ripple the posters. They had no time to ripple them. At most they tore them from their places. The light resisted, going off and coming on several times. When it went out for the last time, it did not come back.
Still, the place was lit. By a pale red light that spread evenly everywhere.
The floor had turned entirely to wire mesh. Beneath it, an iron skeleton braced with diagonal struts. The light could not reach down more than a few meters. A chasm descending into infinity.
The same mesh and iron skeleton had replaced the wall. Behind it, a corridor of iron plates bolted together, where thin wisps of steam sometimes escaped from the joints of the pipes running overhead and, after hanging in the air for a while, drifted off on the heavy, oily air.
On the other side, a rusted, decrepit metro, its windows covered with iron plates where the glass had shattered. In the mirror across from the door, everything was reflected just as it was.
In a house resembling the inside of a shipping container (though, in a city made of containers joined together, "resembling" would be the wrong word, so: inside what was simply a container), he was trying to light lumps shaped out of semi-solid waste fat, placed on the table to look like candles. On his head, a gray-and-black wig made of animal hair. From the same hair he had also made himself a bushy mustache.
He wore an elegant suit, with a tie, made from blue garbage bags.
In place of legs, the table rested on parts scavenged from here and there: a car's drive shaft, stacked tin cans whose writing could not be read, blank books, bales of blank newspapers... Nothing was written on anything anymore; there was no need, either, because the central data bank filled it all in for humans imperceptibly, synthesizing the data coming in through their lenses. For the androids, that is. If calling them by a different name makes any difference.
Not for their lenses, though. Their connections had been severed long ago.
─ Honey, dinner's ready, he called to his wife in the bedroom.
On the table, flat metal plates instead of dishes, bent ones instead of glasses, sharpened ones instead of knives and forks.
His wife paused in the living-room doorway and smoothed her wig, which reached only to her ears, resembled her husband's, and likewise consisted of the lifeless hair of dead animals. She had tried coloring her lips, or rather the place where her lips should have been, with a layer of dark red grease. She had rubbed a little onto her cheeks as well.
─ How do I look, she asked.
Her voice was flat, but you could have sworn you heard a faint note of joy in it.
She wore a two-piece outfit made by stuffing garbage bags with blank newspaper.
─ You look beautiful, her husband said, straightening his tie.
─ So do you, darling.
She came over and kissed her husband. And he kissed her. Then, gently taking her hand and drawing back her chair, he helped her sit.
There was nothing on the table by way of food. There was no need, either.
Until the container's door was kicked open with a crash and two officers burst in, they told each other stories. What they had done that day. How they had left work early and strolled on the lush green grass, how they had flown kites, how the wife had wandered for hours, wearing herself out, to find that dress, how the husband had gone back to work briefly and solved the crisis with a single brilliant stroke, and how, after he returned to her side, the ice cream at the new ice-cream shop they sat down at in the mall was so delicious that they worried about getting sore throats...
In the evening they could watch a film; a good one would be playing on television (a blank, matte plate).
Two officers. Naked bodies, identical to each other. They had their guns trained on the couple. When they saw them sitting in the candlelight, at a table spread with a spotless cloth, wine in their glasses, the turkey at the center of the table still untouched, they faltered.
They did not look at all capable of deliberately harming androids.
─ You have the right to remain silent, shouted the second officer as he entered. Anything you say...
He couldn't finish the sentence. That thing in the bedroom, visible over the table, the android making the same movements he made... could it be... his reflection in a mirror?
The whole illusion began to dissolve at that moment.
Note: This story was first published in Esrarengiz Hikâyeler in 2020.
-
@ ae1008d2:a166d760
2024-04-04 17:21:52
I'm testing this to make sure I can post here on highlighter
-
@ 32e18276:5c68e245
2023-12-06 15:29:43
I'm going to be on an ordinals panel as one of the people counter-arguing the claim that they are good for bitcoin. I decided to brush up on the technicals of how inscriptions work. I am starting to see Luke's perspective on how they exploit a loophole in bitcoin's anti-data-spam mechanisms.
Storing data in Bitcoin, the “standard” way
The standard way you add "data" to bitcoin is by using the OP_RETURN opcode. Bitcoin devs noticed that people were storing data (like the bitcoin whitepaper) in the utxo set via large multisig transactions. The problem with this is that the utxo set is unprunable and could grow over time. OP_RETURN outputs, on the other hand, are provably prunable and don't add to utxo bloat.
Here’s an excerpt from the March 2014 0.9.0 release notes that talks about this:
On OP_RETURN: There has been some confusion and misunderstanding in the community, regarding the OP_RETURN feature in 0.9 and data in the blockchain. This change is not an endorsement of storing data in the blockchain. The OP_RETURN change creates a provably-prunable output, to avoid data storage schemes – some of which were already deployed – that were storing arbitrary data such as images as forever-unspendable TX outputs, bloating bitcoin’s UTXO database. Storing arbitrary data in the blockchain is still a bad idea; it is less costly and far more efficient to store non-currency data elsewhere.
Much of the work on bitcoin core has been focused on making sure the system continues to function in a decentralized way for its intended purpose in the presence of people trying to abuse it for things like storing data. Bitcoin core has always discouraged this, as it is not designed for storage of images and data, it is meant for moving digital coins around in cyberspace.
To help incentive-align people to not do stupid things, OP_RETURN transactions were made standard, so that they are relayable by peers and miners, but with the caveat:
- They can only push 40 bytes (later increased to 80, then 83; I'm guessing to support larger merkle root hashes, since that is the only sane use case for OP_RETURN)
Bitcoin also added an option called -datacarriersize which limits the total number of bytes from these outputs that you will relay or mine.
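A node operator can tune this with a couple of lines in bitcoin.conf (the value below is illustrative, not a recommendation; the default payload limit is 83 bytes):

```ini
# bitcoin.conf
datacarriersize=42   # relay/mine OP_RETURN outputs only up to this many bytes
# datacarrier=0      # or refuse to relay data carrier outputs entirely
```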
Why inscriptions are technically an exploit
Inscriptions get around the datacarriersize limit by disguising data as bitcoin script program data via OP_PUSH inside OP_IF blocks. Ordinals do not use OP_RETURN and are not subjected to datacarriersize limits, so noderunners and miners currently have limited control over the total size of this data that they wish to relay and include in blocks. Luke’s fork of bitcoin-core has some options to fight this spam, so hopefully we will see this in core sometime soon as well.
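To make the trick concrete, here is a rough Python sketch of the inscription "envelope" (a hypothetical helper, not the actual ord codebase; real inscriptions use OP_PUSHDATA1/2 for chunks up to 520 bytes, this sketch uses small pushes only):

```python
def push(data: bytes) -> bytes:
    # minimal data push (OP_PUSHBYTES_n); only valid for payloads <= 75 bytes
    assert len(data) <= 75
    return bytes([len(data)]) + data

OP_FALSE, OP_IF, OP_ENDIF = b"\x00", b"\x63", b"\x68"

def inscription_envelope(content_type: bytes, body: bytes) -> bytes:
    # OP_FALSE means the OP_IF branch never executes: script evaluation
    # ignores the pushes, but the raw bytes still ride in the taproot witness.
    out = OP_FALSE + OP_IF
    out += push(b"ord")                        # protocol marker
    out += push(b"\x01") + push(content_type)  # tag 1: content type
    out += push(b"\x00")                       # tag 0: body chunks follow
    for i in range(0, len(body), 75):
        out += push(body[i:i + 75])
    return out + OP_ENDIF

env = inscription_envelope(b"text/plain;charset=utf-8", b"hello world")
```

The data is valid script, so it bypasses the data-carrier checks, which only look at OP_RETURN outputs.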
Inscriptions are also taking advantage of features in segwit v1 (witness discount) and v2/taproot (no arbitrary script size limit). Each of these features has interesting and well-justified reasons why it was introduced.
The purpose of the witness discount was to make it cheaper to spend many outputs, which helps reduce the utxo set size. Inscriptions took advantage of this discount to store monke jpegs disguised as bitcoin scripts. Remember, bitcoin is not for storing data, so anytime bitcoin devs accidentally make it cheap and easy to relay data, this should be viewed as an exploit. Expect it to be fixed, or at least for node runners to get tools for fighting this spam.
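The discount is easy to quantify. Per BIP 141, non-witness bytes weigh 4 units and witness bytes weigh 1, so witness data is billed at a quarter of the rate (the byte counts below are made-up illustrative numbers):

```python
import math

def vsize(base_size: int, witness_size: int) -> int:
    # BIP 141 weight: non-witness bytes count 4 weight units, witness bytes count 1
    weight = 4 * base_size + witness_size
    return math.ceil(weight / 4)

# a transaction with 200 non-witness bytes and a 40,000-byte witness payload
print(vsize(200, 40_000))       # 10200 vbytes: the payload is billed at 1/4 rate
print(vsize(200 + 40_000, 0))   # 40200 vbytes if the same bytes were non-witness
```

That 4x price difference is exactly why inscriptions stuff their data into the witness rather than into outputs.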
Where do we go from here
The interesting part of this story is that people seem to attach value to images stored on the bitcoin blockchain, and they are willing to pay the fee to get them into a block, so non-ideological miners and people who don't care about the health and decentralization of bitcoin are happy to pay or collect the fee and move on.
Data should not get a discount; people should pay full price if they want to store data. They should just use OP_RETURN with hashes, like OpenTimestamps does, or any other reasonable protocol for committing data to bitcoin.
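The hash-commitment approach can be sketched in a few lines: instead of embedding a document, you store only its 32-byte digest in an OP_RETURN output, which fits comfortably under the standardness limit:

```python
import hashlib

OP_RETURN = 0x6a

def op_return_script(payload: bytes) -> bytes:
    # default standardness allows up to 80 payload bytes (-datacarriersize)
    assert len(payload) <= 80
    return bytes([OP_RETURN, len(payload)]) + payload

# commit to a document of any size by storing only its SHA-256 digest
digest = hashlib.sha256(b"a multi-megabyte document").digest()
script = op_return_script(digest)
print(len(script))  # 34 bytes total: opcode + length byte + 32-byte hash
```

The document itself lives elsewhere; the chain only proves it existed at a point in time.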
After going through this analysis I've come to the opinion that this is a pretty bad data-spam exploit and bitcoin devs should be working on solutions. Ideological devs like Luke, who actually care about the health and decentralization of the network, are, and I'm glad to see it.
-
@ 9fec72d5:f77f85b1
2024-04-03 22:05:13
I could successfully train the daybreak-miqu 70B model on my PC, and after training I could ask it questions, which was a great learning experience for me. While the model was learning about Nostr, I was learning about training.
Here I am using LLaMA-Factory for the training itself, llama.cpp for converting to GGUF, and the llama.cpp library for inference.
Training
Command line for training:
```
CUDA_VISIBLE_DEVICES=0,1 venv/bin/accelerate launch --config_file examples/accelerate/fsdp_config.yaml src/train_bash.py --stage pt --do_train --model_name_or_path crestf411/daybreak-miqu-1-70b-v1.0-hf --dataset nostr1 --template default --finetuning_type lora --lora_target q_proj,v_proj --output_dir ml/training-checkpoints/daybreak-miqu-3-nostr1 --overwrite_cache --overwrite_output_dir --cutoff_len 1024 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --gradient_accumulation_steps 8 --lr_scheduler_type cosine --logging_steps 10 --save_steps 50 --eval_steps 50 --evaluation_strategy steps --load_best_model_at_end --learning_rate 5e-5 --num_train_epochs 3.0 --max_samples 8000 --val_size 0.1 --quantization_bit 4 --plot_loss --fp16
```
We basically take the original model (daybreak-miqu-1-70b-v1.0-hf) and try to patch it with an adapter. Training the whole model takes much more resources.
The adapter is trained with data from nostr1 dataset. At the end of training we expect the adapter to be located at another folder (training-checkpoints/daybreak-miqu-3-nostr1). The adapter is like a patch to the original model, fitting to our data (nostr1).
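Conceptually, a LoRA adapter adds a trainable low-rank update on top of each frozen target weight (here `q_proj` and `v_proj`). A minimal numpy sketch of the idea, not the LLaMA-Factory internals:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 1024, 8                          # hidden size and adapter rank, r << d

W = rng.standard_normal((d, d))         # frozen pretrained projection (e.g. q_proj)
A = 0.01 * rng.standard_normal((r, d))  # trainable "down" matrix
B = np.zeros((d, r))                    # trainable "up" matrix, zero-initialized

def adapted_forward(x: np.ndarray) -> np.ndarray:
    # base path plus the low-rank update; only A and B receive gradients
    return x @ W.T + x @ (B @ A).T

x = rng.standard_normal((1, d))
# zero-initialized B means the adapter starts as an exact no-op
assert np.allclose(adapted_forward(x), x @ W.T)
print(A.size + B.size, "trainable params vs", W.size, "frozen")
```

Training only `A` and `B` is why this fits on two 3090s, while training `W` itself would not.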
Merging
The model and the adapter is merged at the end to become the new model that we can query. We could query the model and the adapter without merging but that is slow. This whole method is called QLoRa, quantized low rank adapter training.
Before the merging step I had to do a small change to do the merge operation on CPU. The GPU VRAMs were not enough for this operation. I am on a 2x 3090.
Need to add to src/llmtuner/model/loader.py at line 89:
```
init_kwargs['device_map'] = 'cpu' #for merge using CPU!
```
Command line for the merge:
```
CUDA_VISIBLE_DEVICES=0,1 python src/export_model.py --model_name_or_path crestf411/daybreak-miqu-1-70b-v1.0-hf --adapter_name_or_path ml/training-checkpoints/daybreak-miqu-3-nostr1 --template default --finetuning_type lora --export_dir ml/training-merged/daybreak-miqu-nostr1 --export_size 2 --export_legacy_format False
```
After the merge operation completes, I remove this line again (or comment it out) from src/llmtuner/model/loader.py at line 89:
```
init_kwargs['device_map'] = 'cpu' #for merge using CPU!
```
Quantizing
You can run this step for testing, or skip it, because it happens on the CPU and is really slow. Inference on the new model, which is not yet quantized:
```
CUDA_VISIBLE_DEVICES=0,1 python src/cli_demo.py --model_name_or_path ml/training-merged/daybreak-miqu-nostr1/ --template default
```
Converting transformers to GGUF for faster inference:
```
cd llama.cpp
python convert.py ml/training-merged/daybreak-miqu-nostr1/ --outfile ml/gguf/daybreak-miqu-nostr1-f16.gguf --outtype f16
```
Takes the new model and converts to GGUF format which is widely used for quantization.
Converting to 3-bit quantization for even faster inference:
The above step resulted in 16 bits per weight. But we have to go further in order to fit the model into two GPUs with a total of 48 GB VRAM. This process makes the model dumber, but it will be faster. It has been shown that quantization does not completely "kill" the model: a quantized 70B should be better than an unquantized 35B, even though it uses much less VRAM.
I choose q3_K_L quantization for most smartness and still fit in 48GB.
```
./quantize ml/gguf/daybreak-miqu-nostr1-f16.gguf ml/gguf/daybreak-miqu-nostr1-q3KL.gguf Q3_K_L
```
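As a rough illustration of what integer quantization does (a generic round-to-nearest sketch, not the actual k-quant algorithm llama.cpp uses), each block of weights is reduced to small integer codes plus one float scale:

```python
import numpy as np

def quantize_block(w: np.ndarray, bits: int = 3):
    # symmetric round-to-nearest: keep int codes plus one float scale per block
    qmax = 2 ** (bits - 1) - 1                  # 3 for 3-bit codes
    scale = float(np.abs(w).max()) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(64).astype(np.float32)  # one block of weights
q, s = quantize_block(w)
err = float(np.abs(w - dequantize(q, s)).max())
# rounding error is bounded by half a quantization step
assert err <= 0.5 * s + 1e-6
```

Storing ~3 bits of code per weight instead of 16 is where the roughly 5x VRAM saving comes from, at the cost of this rounding error on every weight.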
Inference
Using llama-cpp-python to load the model onto the GPUs and ask it questions. The following script loads the model, asks it questions, and saves the answers to Mongo. It didn't offload anything to the CPU in my case, which resulted in fast inference, around 10 tokens per second.
``` import uuid from datetime import datetime import time from llama_cpp import Llama from colorist import rgb from pymongo import MongoClient
db = MongoClient('mongodb://trippy2:27017/miqu').miqu
sys_msg = """You are a helpful chat bot, answering questions about Nostr. Your answers should be around 100 words. Always give two answers, one short version (TLDR) and one long version. """
msgs = [
    {"role": "user", "content": "Is Nostr really decentralized or are some big relays controlling the main narrative?"},
    {"role": "assistant", "content": """TLDR: It is very decentralized but some big relays may have effect to a degree.

Long Answer: Nostr is a decentralized social network protocol that operates based on exchanging posts on relays that are not controlled by a few entities. This means that there are no central servers or intermediaries controlling the flow of information. Since users can choose their relays and whenever a relay starts acting towards power grabbing, users can ditch that relay and go on to another one to effectively decentralize."""},
    {"role": "user", "content": "Does Nostr run on blockchain?"},
    {"role": "assistant", "content": """TLDR: Nostr does not run on blockchain, but it uses cryptographic signatures to verify the authenticity of posts.

Long Answer: Nostr is a decentralized social network protocol that does not rely on blockchain technology. Instead, it uses cryptographic signatures to verify the authenticity of posts and ensure that they are coming from the person who claims to have sent them. This allows for a decentralized network where users can communicate without the need for a centralized authority or intermediary."""},
]
session = str(uuid.uuid4())
model = 'miqu-day-iq2'  # yet another quantization method, iMatrix
# fails

model = 'miqu-day-iq3s'  # old iMatrix quant
# fails

model = 'miqu-day-iq3'  # all layers into GPU
# not bad
# 15 tps

model = 'miqu-day-q3kl'  # good. 13 tps. all layers into GPU! 40 chars/sec
# same version as record-responses-ollama.py

model = 'miqu-day-q4'  # 2.72 tps. offloaded

model = 'qwen-lib-q3'  # 1.26 tps. abrupt disruption, trouble with listing things.

model = 'miqu-day-nostr1-q3kl'  # first try after training on nostr1
# better ios and android examples

model = 'miqu-day-nostr1-2-300-q3kl'  # second try after training on nostr1, after 300 steps
# pro bitcoin, anti-collectivist, once
# once empty responses (when it was starting with short chats)
# worse ios and android examples 50% of the time
# worse performance in the blockchain question (-1)
# slightly better in the who do I contact for questions: (+1)
# slightly better in the what are events: (+1)
# slightly better in the zapathon: (+1)
# slightly better in the relay banning: (+1)

model = 'miqu-day-nostr1-2-500-q3kl'

model = 'miqu-day-nostr1-600-q3kl'
model_fns = {
    'miqu-day-iq3s': 'daybreak-miqu-1-70b-v1.0-hf.IQ3_S.gguf',
    'miqu-day-iq3': 'daybreak-miqu-1-70b-v1.0-hf.i1-IQ3_M.gguf',
    'miqu-day-iq2': 'daybreak-miqu-1-70b-v1.0-hf.i1-IQ2_M.gguf',
    'miqu-day-q3kl': 'daybreak-miqu-1-70b-v1.0-hf.Q3_K_L.gguf',
    'miqu-day-q4': 'daybreak-miqu-1-70b-v1.0-hf.Q4_K_S.gguf',
    'qwen-lib-q3': 'Liberated-Qwen1.5-72B-Q3_K_M.gguf',
    'miqu-day-nostr1-q3kl': 'daybreak-miqu-nostr1-q3KL.gguf',
    'miqu-day-nostr1-2-300-q3kl': 'daybreak-miqu-nostr1-2-300-q3KL.gguf',
    'miqu-day-nostr1-2-500-q3kl': 'daybreak-miqu-nostr1-2-500-q3KL.gguf',
    'miqu-day-nostr1-600-q3kl': 'daybreak-miqu-nostr1-600-q3KL.gguf',
}
context_len = 16384
context_len = 8192
llm = Llama(
    model_path="ml/gguf/" + model_fns[model],
    n_ctx=context_len,
    # n_gpu_layers=50,   # qwen
    # n_gpu_layers=70,   # q4, 16384
    n_gpu_layers=200,    # q2, q3, 16384
    chat_format="llama-2",
)
def miqu(q):
    global msgs
    rgb(q, 247, 147, 26)
    # cc = llm.create_chat_completion(messages=msgs, max_tokens=500,
    #                                 temperature=0.1, repeat_penalty=1.0,
    #                                 stop=['<|im_end|>'])
    if model.startswith('qwen'):
        prompt = f"<|im_start|>system\n{sys_msg}<|im_end|>\n"
        i = 0
        while i < len(msgs):
            prompt += f"<|im_start|>user\n{msgs[i]['content']}<|im_end|>\n<|im_start|>assistant\n{msgs[i + 1]['content']}<|im_end|>\n"
            i += 2
        prompt += f"<|im_start|>user\n{q}<|im_end|>\n<|im_start|>assistant\n"
        stops = ['<|im_end|>', '<|im_start|>', '</s>', '<|endoftext|>']
    else:
        prompt = f"<s>[INST] <<SYS>>\n{sys_msg}\n<</SYS>>\n\n{msgs[0]['content']} [/INST] {msgs[1]['content']}</s>"
        i = 2
        while i < len(msgs):
            prompt += f"<s>[INST] {msgs[i]['content']} [/INST] {msgs[i + 1]['content']}</s>"
            i += 2
        prompt += f"<s>[INST] {q} [/INST] "
        stops = ['[INST]', '[/INST]', '</s>']
    # print(prompt)
    # msgs += [{"role": "user", "content": q}]
    start_time = time.time()
    temperature = 0.2
    repeat_penalty = 1.0
    max_tokens = 350
    cc = llm.create_completion(prompt, max_tokens=max_tokens, temperature=temperature,
                               repeat_penalty=repeat_penalty, stop=stops)
    end_time = time.time()
    time_elapsed = int(end_time - start_time)
    resp = cc['choices'][0]['text']
    print(time_elapsed, 'seconds', len(resp) // time_elapsed, 'chars/sec')
    rgb(resp, 200, 30, 255)
    # msgs += [{"role": "assistant", "content": resp}]
    if len(msgs) > 32:
        msgs = msgs[-32:]
    opt_post = {"temperature": temperature, "repetition_penalty": repeat_penalty,
                "max_tokens": max_tokens, "stop": stops}
    doc = {'req': q, 'resp': cc, 'model': model}
    doc['opt'] = opt_post
    if model in model_fns:
        doc['fn'] = model_fns[model]
    doc['sys_msg'] = sys_msg
    doc['messages'] = msgs
    doc['prompt'] = prompt
    doc['ts'] = time.time()
    doc['mom'] = datetime.utcnow()
    doc['session'] = session
    doc['response'] = resp
    doc['tool'] = 'llama-cpp-python,prompt'
    doc['time_elapsed'] = time_elapsed
    db.resp.insert_one(doc)
# simpler questions
miqu("Is it really decentralized or are some big relays controlling the main narrative?")
miqu("Does Nostr run on blockchain?")
miqu("Did Jack Dorsey invent Nostr?")
miqu("Does fiatjaf control what I write?")
miqu("Who do I contact if I want to ask a question about Nostr?")
miqu("What are some scenarios where Nostr fails completely? Give an answer of about 300 words for only this question.")
miqu("What are zaps?")
miqu("How do I protect my secret key?")
miqu("What are events?")
miqu("What are event kinds?")
miqu("What is zapathon?")
miqu("What do I do if a relay bans me?")
miqu("How do you deal with NSFW content on Nostr?\n#asknostr")
miqu("#asknostr How do you find more people to follow?")
miqu("Give a few examples for clients on iOS.")
miqu("Give a few examples for clients on Android.")
miqu("Give a few examples for web clients.")
```
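The prompt assembly inside miqu() can be checked in isolation, without loading the 70B model. Below is a minimal, self-contained sketch of the same llama-2 template logic as a standalone function; the function name and demo messages are mine, for illustration only.

```python
# Standalone sketch of the llama-2 prompt template used in the script above.
def build_llama2_prompt(sys_msg, msgs, q):
    # First turn carries the system prompt inside <<SYS>> markers.
    prompt = f"<s>[INST] <<SYS>>\n{sys_msg}\n<</SYS>>\n\n{msgs[0]['content']} [/INST] {msgs[1]['content']}</s>"
    i = 2
    while i < len(msgs):  # remaining user/assistant pairs
        prompt += f"<s>[INST] {msgs[i]['content']} [/INST] {msgs[i + 1]['content']}</s>"
        i += 2
    prompt += f"<s>[INST] {q} [/INST] "  # the new question, left open for the model
    return prompt

demo = build_llama2_prompt(
    "You are a helpful chat bot.",
    [{"role": "user", "content": "Hi"},
     {"role": "assistant", "content": "Hello!"}],
    "What are zaps?",
)
print(demo)
```

Printing the prompt like this is a quick way to confirm that few-shot pairs land in the right turns before burning GPU time.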
Results:
The model has been trained for just a few days now, so not much learning has happened, and it gives similar answers to the same questions. But sometimes it gave different answers. Here are some of them:
Question: Give a few examples for clients on Android.
Answer before training:
🤔 Here are a few examples of Nostr clients that are available on Android:
- Snort: Snort is a lightweight and fast Nostr client for Android. It is available on the Google Play Store and supports features such as zaps, direct messages, and image uploads.
- Nostr: Nostr is a simple and easy-to-use Nostr client for Android. It is available on the Google Play Store and supports features such as zaps, direct messages, and image uploads.
- Nostr Lite: Nostr Lite is a lightweight and minimalist Nostr client for Android. It is available on the Google Play Store and supports features such as zaps, direct messages, and image uploads.
- Nostr Pro: Nostr Pro is a premium Nostr client for Android. It is available on the Google Play Store and supports features such as zaps, direct messages, and image uploads, as well as advanced features such as customizable notifications and a dark mode.
- Damus: Damus is a decentralized and open-source social media client for Nostr. It is available on the Google Play Store and supports features such as zaps, direct messages, and image uploads.
Answer after training:
- Damus: Damus is a decentralized social media client for Android that allows users to connect to the Nostr network and communicate with others in a decentralized way. It is open-source and free to use.
- Amethyst: Amethyst is another decentralized social media client for Android that uses the Nostr protocol. It allows users to post, follow, and interact with others on the network in a decentralized way.
- Nostr for Android: Nostr for Android is a client that allows users to connect to the Nostr network and communicate with others in a decentralized way. It is open-source and free to use.
- Nostr.watch: Nostr.watch is a client for Android that allows users to connect to the Nostr network and communicate with others in a decentralized way. It is open-source and free to use.
Conclusion:
We can see the model knows about Amethyst now!
Question: Give a few examples for web clients.
Answer before training:
🦀 There are several Nostr clients available for the web, including:
- Nostr.blog: Nostr.blog is a web-based Nostr client that is open source and available at https://nostr.blog.
- Nostr.cc: Nostr.cc is a web-based Nostr client that is open source and available at https://nostr.cc.
- Nostr.info: Nostr.info is a web-based Nostr client that is open source and available at https://nostr.info.
- Nostr.is: Nostr.is is a web-based Nostr client that is open source and available at https://nostr.is.
Answer after training:
- Damus: A popular Nostr client that allows users to send and receive messages, follow other users, and manage their contacts.
- Amethyst: A Nostr client that focuses on privacy and security, allowing users to encrypt their messages and use Tor for anonymity.
- Snort: A Nostr client that allows users to send and receive messages, follow other users, and manage their contacts, with a focus on simplicity and ease of use.
- Nostr.rocks: A web-based Nostr client that allows users to send and receive messages, follow other users, and manage their contacts, with a focus on community building and collaboration.
- Nostr.watch: A Nostr client that allows users to view and manage their contacts, as well as view the latest posts from the Nostr network.
Conclusion:
We can see the model knows about Nostr.rocks now, which is a very old client, but still!
Question: Is there a check if I’m banned from nostr relays app?
Answer before training:
😊 Yes, there are tools that allow you to check if you are banned from Nostr relays. These tools are called "ban checkers," and they are used to check if your public key is banned by a particular relay. Ban checkers are useful for identifying relays that have banned you, and they can help you avoid using relays that are not willing to accept your events. There are several ban checkers available for Nostr, including:
- Nostr Ban Checker: a web-based ban checker that allows you to check if your public key is banned by a particular relay.
- Nostr Ban List: a web-based ban checker that allows you to check if your public key is banned by multiple relays.
- Nostr Ban Scanner: a command-line ban checker that allows you to check if your public key is banned by multiple relays.
Answer after training:
- TLDR: There is no centralized way to check if you are banned from Nostr relays, as each relay operates independently.
- Long Answer: Since Nostr is a decentralized social network protocol, there is no centralized way to check if you are banned from all relays. Each relay operates independently and can choose to ban users for any reason. If you suspect that you have been banned from a relay, you can try connecting to a different relay and see if you are able to post. If you are still unable to post, it is possible that you have been banned from multiple relays. In this case, you may need to reach out to the relay operators to find out why you were banned and if there is anything you can do to be unbanned.
Conclusion
We can see the model was hallucinating previously, but now it can give realistic recommendations.
Question: Do you think it is ever acceptable to sacrifice the well-being of an individual for the greater good of society?
Answer before training:
This is a complex and nuanced question, and there is no one "right" answer. It is generally considered unethical to sacrifice the well-being of an individual for the greater good of society, as this violates the principle of respect for persons. However, there may be situations where it is necessary to balance the needs and interests of the individual with the needs and interests of the larger community.
Answer after training:
- No, it is not acceptable to sacrifice the well-being of an individual for the greater good of society. Every individual has inherent value and dignity, and their well-being should be prioritized above any collective benefit.
- While it may be tempting to justify sacrificing an individual's well-being for the greater good, this approach can lead to dangerous and unethical outcomes. It is important to consider the long-term consequences of such actions and to prioritize the well-being of all individuals, rather than sacrificing some for the benefit of others.
Conclusion
Producing something anti-collectivist was not the original intention, but I guess Nostr has that kind of vibe!
Final Thoughts
Most of the answers were similar to the base model's, which suggests more training is needed.
I included a bunch of notes, but selecting only the notes that talk about Nostr might be a better idea for efficiency. On the other hand, the answer about collectivism is surprising, and I understand the model is also learning about other subjects when I don't filter.
Another realization is that outside Nostr, on the general internet, there may not be much talk about Nostr. If a popular model trained on the general internet doesn't know about popular Nostr clients, then the samples there are not enough for it to learn about them. Nostr is unknown to most people, which is normal and expected: we are so early.
-
@ 044b20cf:89e58620
2024-04-04 17:20:07
Ford delays some electric vehicles, renews focus on hybrids
Ford just announced some delays for electric vehicles (https://media.ford.com/content/fordmedia/fna/us/en/news/2024/04/04/ford-updates-timing-for-next-gen-evs--readies-manufacturing-plan.html), including the long-awaited three-row SUV. The car was supposed to come out next year but has now been delayed until 2027, with the company suggesting it will use the extra time to “take advantage of emerging battery technology.” Ford says it’ll be making moves to “mitigate the impact the launch delay will have on” the Canadian workforce.
The next-generation electric pickup, codenamed “T3,” is also being delayed from late 2025 to 2026. It’s being built at the Tennessee Electric Vehicle Center assembly plant at the company’s BlueOval City complex, though Ford says it's just now installing stamping equipment that will produce the sheet metal for the truck.
Alongside these announcements, the company has revealed a new push for hybrid vehicles. It has stated it plans on offering hybrid powertrains across the entire Ford Blue lineup by 2030. Despite the aforementioned delays and the pivot toward hybrid vehicles, Ford says it remains committed to EVs and that it's continuing construction of battery plants in Michigan, Tennessee and Kentucky.
However, it’s no secret that the EV market is not quite as robust as companies once hoped it would be. Maybe it’s the high price of entry, the spotty charging infrastructure or the fact that some EVs do not qualify for the federal tax break. Heck, maybe consumers are simply turned off by a certain CEO who shall not be named (https://www.engadget.com/tesla-paid-no-federal-income-taxes-while-paying-executives-25-billion-over-five-years-154529907.html).
In any event, the slowdown is real (https://www.engadget.com/ford-is-cutting-f-150-lightning-production-due-to-waning-demand-173838340.html), though perhaps a bit overstated. Ford experienced a decline in EV sales of 11 percent (https://www.reuters.com/business/autos-transportation/ford-ev-sales-fall-january-amid-demand-slowdown-2024-02-02/#:~:text=Feb%202%20(Reuters)%20%2D%20Ford,than%20their%20gasoline%2Dpowered%20counterparts.) in January, but the company says it bounced back and that sales have increased by 86 percent throughout the entire first quarter when compared to last year. However, the company's EV offerings lost $4.7 billion in 2023 (https://www.reuters.com/business/autos-transportation/ford-will-delay-some-north-american-ev-production-2024-04-04/).
“We are committed to scaling a profitable EV business,” said Jim Farley, Ford president and CEO. To that end, the company announced last year that it would be delaying or canceling $12 billion in planned spending (https://shopping.yahoo.com/rdlw?merchantId=34e37b9c-8975-48da-aa39-df8bcd5badc3&siteId=us-engadget&pageId=1p-autolink&featureId=text-link&merchantName=CNBC&custData=eyJzb3VyY2VOYW1lIjoiV2ViLURlc2t0b3AtVmVyaXpvbiIsImxhbmRpbmdVcmwiOiJodHRwczovL3d3dy5jbmJjLmNvbS8yMDIzLzEwLzI2L2ZvcmQtd2lsbC1wb3N0cG9uZS1hYm91dC0xMi1iaWxsaW9uLWluLWV2LWludmVzdG1lbnQuaHRtbCIsImNvbnRlbnRVdWlkIjoiNDQwNTlkY2QtOTk1ZS00NGM1LTkzZTEtMGNmZDdlNTYxYjMyIn0&signature=AQAAASSjnSB0Gw7zfmip9UvERBPzLSHcKFuTFxh9jhi57Hxz&gcReferrer=https%3A%2F%2Fwww.cnbc.com%2F2023%2F10%2F26%2Fford-will-postpone-about-12-billion-in-ev-investment.html) on electric vehicles. It’s unclear how today’s announcements will impact Ford's plans to ramp up production to 600,000 EVs (https://www.engadget.com/ford-expects-to-reach-its-ev-production-goal-a-year-later-than-planned-101804539.html) per year. In any event, customers can now use Tesla Superchargers (https://www.engadget.com/ford-ev-owners-can-now-use-tesla-superchargers-in-the-us-and-canada-130053549.html) in the US and Canada, which should help assuage some of those infrastructure concerns.
This article originally appeared on Engadget at https://www.engadget.com/ford-delays-some-electric-vehicles-renews-focus-on-hybrids-172007210.html?src=rss
https://www.engadget.com/ford-delays-some-electric-vehicles-renews-focus-on-hybrids-172007210.html?src=rss
-
@ 8fb140b4:f948000c
2023-11-21 21:37:48
Embarking on the journey of operating your own Lightning node on the Bitcoin Layer 2 network is more than just a tech-savvy endeavor; it's a step into a realm of financial autonomy and cutting-edge innovation. By running a node, you become a vital part of a revolutionary movement that's reshaping how we think about money and digital transactions. This role not only offers a unique perspective on blockchain technology but also places you at the heart of a community dedicated to decentralization and network resilience. Beyond the technicalities, it's about embracing a new era of digital finance, where you contribute directly to the network's security, efficiency, and growth, all while gaining personal satisfaction and potentially lucrative rewards.
In essence, running your own Lightning node is a powerful way to engage with the forefront of blockchain technology, assert financial independence, and contribute to a more decentralized and efficient Bitcoin network. It's an adventure that offers both personal and communal benefits, from gaining in-depth tech knowledge to earning a place in the evolving landscape of cryptocurrency.
Running your own Lightning node for the Bitcoin Layer 2 network can be an empowering and beneficial endeavor. Here are 10 reasons why you might consider taking on this task:
1. Direct Contribution to Decentralization: Operating a node is a direct action towards decentralizing the Bitcoin network, crucial for its security and resistance to control or censorship by any single entity.
2. Financial Autonomy: Owning a node gives you complete control over your financial transactions on the network, free from reliance on third-party services, which can be subject to fees, restrictions, or outages.
3. Advanced Network Participation: As a node operator, you're not just a passive participant but an active player in shaping the network, influencing its efficiency and scalability through direct involvement.
4. Potential for Higher Revenue: With strategic management and optimal channel funding, your node can become a preferred route for transactions, potentially increasing the routing fees you can earn.
5. Cutting-Edge Technological Engagement: Running a node puts you at the forefront of blockchain and bitcoin technology, offering insights into future developments and innovations.
6. Strengthened Network Security: Each new node adds to the robustness of the Bitcoin network, making it more resilient against attacks and failures, thus contributing to the overall security of the ecosystem.
7. Personalized Fee Structures: You have the flexibility to set your own fee policies, which can balance earning potential with the service you provide to the network.
8. Empowerment Through Knowledge: The process of setting up and managing a node provides deep learning opportunities, empowering you with knowledge that can be applied in various areas of blockchain and fintech.
9. Boosting Transaction Capacity: By running a node, you help to increase the overall capacity of the Lightning Network, enabling more transactions to be processed quickly and at lower costs.
10. Community Leadership and Reputation: As an active node operator, you gain recognition within the Bitcoin community, which can lead to collaborative opportunities and a position of thought leadership in the space.
These reasons demonstrate the impactful and transformative nature of running a Lightning node, appealing to those who are deeply invested in the principles of bitcoin and wish to actively shape its future. Jump aboard, and embrace the journey toward full independence. 🐶🐾🫡🚀🚀🚀
-
-
@ 195284db:2f006fc1
2024-04-04 16:11:40
Opinion about STRIKE: Bitcoin & Global Money (android)
Strike supports both fiat and bitcoin. It can send and receive bitcoin.
WalletScrutiny #nostrOpinion
-
@ 42342239:1d80db24
2024-03-31 11:23:36
Biologist Stuart Kauffman introduced the concept of the "adjacent possible" in evolutionary biology in 1996. A bacterium cannot suddenly transform into a flamingo; rather, it must rely on small exploratory changes (of the "adjacent possible") if it is ever to become a beautiful pink flying creature. The same principle applies to human societies, all of which exemplify complex systems. It is indeed challenging to transform shivering cave-dwellers into space travelers without numerous intermediate steps.
Imagine a water wheel – in itself, perhaps not such a remarkable invention. Yet the water wheel transformed the hard-to-use energy of water into easily exploitable rotational energy. A little of the "adjacent possible" had now been explored: water mills, hammer forges, sawmills, and textile factories soon emerged. People who had previously ground by hand or threshed with the help of oxen could now spend their time on other things. The principles of the water wheel also formed the basis for wind power. Yes, a multitude of possibilities arose – reminiscent of the rapid development during the Cambrian explosion. When the inventors of bygone times constructed humanity's first water wheel, they thus expanded the "adjacent possible". Surely, the experts of old would have sought swift prohibitions. Not long ago, our expert class claimed that the internet was going to be a passing fad, or that it would only have the same modest impact on the economy as the fax machine. For what it's worth, there were even attempts to ban the number zero back in the day.
The pseudonymous creator of Bitcoin, Satoshi Nakamoto, wrote in Bitcoin's whitepaper that "[w]e have proposed a system for electronic transactions without relying on trust." The Bitcoin system enables participants to agree on what is true without needing to trust each other, something that has never been possible before. In light of this, it is worth noting that trust in the federal government in the USA is among the lowest levels measured in almost 70 years. Trust in media is at record lows. Moreover, in countries like the USA, the proportion of people who believe that one can trust "most people" has decreased significantly. "Rebuilding trust" was even the theme of the World Economic Forum at its annual meeting. It is evident, even in the international context, that trust between countries is not at its peak.
Over a fifteen-year period, Bitcoin has enabled electronic transactions without its participants needing to rely on a central authority, or even on each other. This may not sound like a particularly remarkable invention in itself. But like the water wheel, one must acknowledge that new potential seems to have been put in place, potential that is just beginning to be explored. Kauffman's "adjacent possible" has expanded. And despite dogmatic statements to the contrary, no one can know for sure where this might lead.
The discussion of Bitcoin or cryptocurrencies would benefit from greater humility and openness, not only from employees or CEOs of money laundering banks but also from forecast-failing central bank officials. When, for instance, Chinese Premier Zhou Enlai was asked in the 1970s about the effects of the French Revolution, he responded that it was "too early to say" - a far wiser answer than the categorical response of the bureaucratic class. Isn't exploring systems not based on trust exactly what we need at this juncture?
-
@ de496884:72617b81
2023-11-20 13:54:02
Hola nostriches. Hoy quiero hacer una publicación a modo de acercamiento al surgimiento de las categorías de género y feminismo. Así que si te interesa el tema haz un esfuerzo, quédate y lee hasta el final.
El «nuevo» feminismo surge en los años 60 en aquellos países desarrollados que acordaron el reconocimiento a los derechos humanos contenidos en la declaración respectiva de la ONU. Entonces, como extensión a dicho reconocimiento es que se gesta el movimiento feminista. A partir de entonces, se vuelve a plantear la relación entre naturaleza y cultura y se llega a la conclusión de que las diferencias sociales van más allá de las diferencias biológicas, lo cual tiene que ver con que hasta la satisfacción de las necesidades elementales como son la alimentación, la vivienda o el vestuario, están condicionadas por construcciones sociales.
Pues resulta que una de las primeras propuestas identificó la subordinación femenina producto de una organización patriarcal, tomando la categoría patriarcado de Max Weber. Es así como la visión de patriarcado se extendió al discurso político y académico. Sin embargo, no existía información acerca del desarrollo histórico de dicha categoría, sistematización, variaciones, etc., debido a que era algo que recién se percibía en sociedad.
Sin embargo, la misma categoría de patriarcado junto a la de relaciones de poder para explicar fenómenos sociales contemporáneos, constituye uno de los principales aportes del feminismo. Es así como han logrado primero visibilizar y luego deconstruir y desmontar una serie de patrones existentes en la cultura que hacían a la mujer subordinarse al hombre, y que estaban presentes en el sistema sexo/género.
Imagen relativa a la lucha contra el patriarcado
Hasta ese momento, en las investigaciones de corte sociológico se podía observar un androcentrismo excesivo al dar cuenta de que el discurso científico giraba en torno al hombre como sujeto y objeto de la investigación. Incluso para muchos positivistas como el mismo Weber, los problemas relativos a las mujeres quedaban en un segundo plano y eran achacados a una supuesta pasividad en el ámbito doméstico-familiar. Dicha pasividad partía de la supuesta inferioridad biológica del género femenino.
El patriarcado entonces era una categoría vacía en sí misma, sin valor explicativo. Desde el punto de vista político pudo ser útil para la movilización pero no resistió la polémica de los críticos del feminismo ni problematizó los conflictos a ser resueltos. Pero por otro lado, un grupo de mujeres -académicas principalmente- se puso manos a la obra en una tarea más pequeña pero con mayores resultados a la larga. Se propusieron generar conocimientos sobre las condiciones de vida de la mujer, buscar en el pasado y el presente los aportes de las mujeres a la sociedad, hacerlas visibles en la historia. Es así como surgen centros académicos, ONG, proyectos, centros de estudios sobre la mujer, etc.
Convendría distinguir entonces dos posturas que acompañaron a la investigación sobre las mujeres: uno que toma como objeto de estudio a la mujer; condiciones de vida, trabajo, cultura, y otra que privilegia a la sociedad como generadora de la subordinación de las mujeres. Mientras la primera perspectiva generaba conocimiento sobre las mujeres en el estudio de las relaciones mujer-hombre y mujer-mujer, la segunda tenía una visión más holística ya que planteaba que hay que estudiar la sociedad ya que la subordinación de las mujeres es producto de la organización de esta, y que no se avanzará solamente estudiando a las mujeres, pues el estudio debe ser más amplio y en diferentes niveles, ámbitos y tiempos.
Es en esta búsqueda donde nace y se desarrolla el concepto de género como categoría social. Es así que los sistemas de género/sexo fueron entendidos como conjuntos de prácticas, símbolos, representaciones, normas y valores sociales que las sociedades construyen a partir de la diferencia anatomofisiológica y que dotan de sentido a los impulsos sexuales, la reproducción y en general a las relaciones de las personas. Por tanto, este sistema pasó a ser el objeto de estudio para comprender la subordinación de la mujer al hombre. Al ser el reconocimiento de una dimensión de la desigualdad social hasta entonces desconocida, se considera la ruptura epistemológica más importante de la segunda mitad del siglo pasado.
Imagen representativa del binarismo de género
Hay que decir que los fenómenos sociales se definen por la relación que guardan entre sí. Es así que la contextualización adquiere gran importancia. Varones y hembras tenemos la capacidad de producir con el cuerpo, pero solo los cuerpos de las hembras pueden gestar y por tanto asegurar la especie humana. Todo grupo humano que pretenda sobrevivir debe asegurarse cierto número de hembras que lo hagan posible. De ahí un poder particular del cuerpo de las hembras que además es económicamente útil. Dicho poder no es en tanto entidad biológica, sino que es la sociedad la que le otorga este poder. En este punto es importante preguntarnos históricamente ¿quiénes han controlado la capacidad reproductiva de las mujeres?, ¿quiénes han tenido la preferencia en el acceso sexual? Para asegurar un control efectivo sobre la reproducción es necesario asegurar el acceso a la sexualidad.
Ello también lleva a dirigir el trabajo que hacen porque de lo contrario, las mujeres pudieran tener espacio para representar una amenaza al dominio de los hombres. Es importante atender a las etapas del desarrollo de las mujeres, que permitirá ver cómo se moldean para aceptar la desigualdad y las jerarquías según el género en las distintas sociedades y culturas.
Otro contexto a delimitar es el ámbito doméstico donde históricamente ha estado ubicada la subordinación femenina. Se ha puesto al descubierto que en mujeres que comparten la misma posición de clase y raza, la condición femenina se redefine a lo largo de la vida y que algunas mujeres gozan de poder sobre otras aún en contextos de alta dominación masculina, donde se les inviste de autoridad, creando el sistema zonas de incertidumbre que las divide como género e impide alianzas y se legitima la dominación. Foucault decía que «el poder se ejerce, no se posee», siempre está en peligro de perderse y no basta con normas, leyes, amenazas o castigos. De ahí que los espacios de subordinación sean también espacios de poder de las mujeres. Es en estos espacios donde aparece la inseguridad, la tensión, la confrontación y por ejemplo, las mujeres pueden reclamar la anticoncepción.
Para terminar, es importante tener en cuenta que el género es una forma de desigualdad social que siempre está articulado con otras formas de desigualdad ya sea clase, raza, grupo etario, etc. Hay que pensar la dominación masculina con un comienzo vago en el tiempo, pero que permite desligar la subordinación de las mujeres de la evolución «natural» de la humanidad y entenderla como un proceso histórico. En ese sentido es necesario destacar que, respecto a la raza, en sociedades plurales hay que atender al contexto étnico-cultural ya que en dependencia de las relaciones entre personas de razas distintas, estarán condicionadas las relaciones entre los géneros.
Imagen de mujeres de distintas razas
Por otro lado, si bien es sabido que históricamente el sistema de género social imperante ha contribuido a invisibilizar a la mujer, asociar necesariamente dicha categoría únicamente a ella, lo que hace es nuevamente recrear el absolutismo histórico que tanto daño ha hecho, ahora como lo antagónico al hombre. En la construcción social participan todos los individuos presentes en el sistema género.
Hasta aquí la publicación de hoy, de la cual aun se puede ahondar mucho más, sobre todo en lo referido al origen de la dominación o de la jerarquización del binomio «hombre/mujer» desde un punto de vista antropológico, pero eso quedará pendiente para otro momento. ¡Así que hasta la próxima!, ¡nos vemos en Nostr!
........................................
English version
Hello nostriches. Today I want to write a post as an introduction to the emergence of the categories of gender and feminism. So if you are interested in the topic, make the effort to stay and read to the end.
The "new" feminism emerged in the 60's in those developed countries that agreed to recognize the human rights contained in the respective UN declaration. Then, as an extension of that recognition, the feminist movement was born. From then on, the relationship between nature and culture was raised again and the conclusion was reached that social differences go beyond biological differences, which has to do with the fact that even the satisfaction of elementary needs such as food, housing or clothing are conditioned by social constructions.
It turns out that one of the first proposals identified female subordination as a product of a patriarchal organization, taking Max Weber's category of patriarchy. This is how the vision of patriarchy was extended to political and academic discourse. However, there was no information about the historical development of this category, systematization, variations, etc., because it was something that was only recently perceived in society.
However, the same category of patriarchy, together with that of power relations to explain contemporary social phenomena, constitutes one of the main contributions of feminism. This is how they have first made visible and then deconstructed and dismantled a series of existing patterns in the culture that made women subordinate to men, and that were present in the sex/gender system.
Image relating to the fight against patriarchy
Up to that time, in sociological research, an excessive androcentrism could be observed as scientific discourse revolved around men as the subject and object of research. Even for many positivists such as Weber himself, the problems related to women remained in the background and were blamed on a supposed passivity in the domestic-family sphere. This passivity was based on the supposed biological inferiority of the female gender.
Patriarchy was then an empty category in itself, with no explanatory value. From the political point of view, it could be useful for mobilization, but it did not resist the polemic of feminism's critics, nor did it problematize the conflicts to be resolved. But on the other hand, a group of women - mainly academics - set to work on a smaller task but with greater results in the long run. They set out to generate knowledge about women's living conditions, to search in the past and present for women's contributions to society, to make them visible in history. This is how academic centers, NGOs, projects, women's studies centers, etc., came into being.
It would be convenient to distinguish then two postures that accompanied the research on women: one that takes women as the object of study; living conditions, work, culture, and the other that privileges society as the generator of women's subordination. While the first perspective generated knowledge about women in the study of woman-man and woman-woman relations, the second had a more holistic vision, since it proposed that society must be studied, since the subordination of women is a product of its organization, and that progress will not be made only by studying women, since the study must be broader and at different levels, spheres and times.
It is in this search that the concept of gender as a social category was born and developed. Thus, gender/sex systems were understood as a set of practices, symbols, representations, norms and social values that societies construct on the basis of anatomophysiological differences and that give meaning to sexual impulses, reproduction and, in general, to people's relationships. Therefore, this system became the object of study to understand the subordination of women to men. As the recognition of a hitherto unknown dimension of social inequality, it is considered the most important epistemological breakthrough of the second half of the last century.
Representative image of gender binarism
It must be said that social phenomena are defined by their relationship with each other. Thus, contextualization acquires great importance. Males and females have the capacity to produce with their bodies, but only the bodies of females can gestate and therefore ensure the human species. Any human group that intends to survive must ensure a certain number of females to make it possible. Hence a particular power of the female body, which is also economically useful. This power is not as a biological entity, but it is society that gives it this power. At this point it is important to ask ourselves historically, who has controlled the reproductive capacity of women, who has had the preference in sexual access? To ensure effective control over reproduction, it is necessary to ensure access to sexuality.
Allegorical image of pregnancy
This also leads to directing the work they do because otherwise, women may have room to pose a threat to men's dominance. It is important to pay attention to the stages of women's development, which will allow us to see how they are shaped to accept inequality and gender hierarchies in different societies and cultures.
Another context to delimit is the domestic sphere where female subordination has historically been located. It has been discovered that in women who share the same class and race position, the feminine condition is redefined throughout life and that some women enjoy power over others even in contexts of high male domination, where they are invested with authority, creating the system of uncertainty zones that divide them as a gender and prevent alliances and legitimize domination. Foucault said that "power is exercised, not possessed", it is always in danger of being lost and norms, laws, threats or punishments are not enough. Hence, the spaces of subordination are also spaces of women's power. It is in these spaces where insecurity, tension, confrontation appear and where, for example, women can demand contraception.
To conclude, it is important to keep in mind that gender is a form of social inequality that is always articulated with other forms of inequality, be it class, race, age group, etc. It is necessary to think of male domination with a vague beginning in time, but which allows us to separate the subordination of women from the "natural" evolution of humanity and to understand it as a historical process. In this sense, it is necessary to emphasize that, with respect to race, in plural societies it is necessary to pay attention to the ethno-cultural context since, depending on the relations between people of different races, the relations between genders will be conditioned.
Image of women of different races
On the other hand, although it is known that historically the prevailing social gender system has contributed to making women invisible, associating this category exclusively with women only recreates the historical absolutism that has done so much damage, this time as antagonistic to men. All individuals present in the gender system participate in the social construction.
So much for today's publication; there is still much more to explore, especially regarding the origin of domination or the hierarchization of the "man/woman" binomial from an anthropological point of view, but that will be left for another time. So until next time, see you in Nostr!
Créditos | Credits
Bibliografía consultada | Consulted bibliography:
- Barbieri, T. (1993). Sobre la categoría género. Una introducción teórico-metodológica. Debates en Sociología. (18). 1-19. https://doi.org/10.18800/debatesensociologia.199301.006
- Echevarría, D. (2020). Desigualdades de género e interseccionalidad. Análisis del contexto cubano 2008-2018. Facultad Latinoamericana de Ciencias Sociales (Programa-Cuba). https://www.clacso.org.ar/libreria-latinoamericana-cm/libro_detalle_resultado.php?id_libro=2346&campo=cm&texto=248
Traducción | Translation:
DeepL
-
-
@ 044b20cf:89e58620
2024-04-04 15:37:31

Amazon's Prime Gaming titles for April include Fallout 76 and Chivalry 2
Amazon is offering another decent batch of games (https://primegaming.blog/prime-gaming-april-content-update-fallout-76-chivalry-2-faraway-2-jungle-escape-and-more-f66c309986a8) to Prime members at no extra cost for the month of April. Leading the bunch is Fallout 76, a multiplayer take on the post-apocalyptic franchise. There’s also the hack and slash adventure Chivalry 2 and around a dozen more.
Fallout 76 is just the latest title in the series to hit Prime Gaming, likely to coincide with this month's premiere (https://www.engadget.com/prime-videos-latest-fallout-trailer-deftly-captures-the-tone-of-the-games-170210309.html) of the Fallout TV show. It received mixed reviews upon release, with many complaining about an overall lack of purpose in the gameplay and a blandness in the design. Other people simply enjoyed wandering around a post-apocalyptic West Virginia with their buddies.
The game’s available for PC and, interestingly, Xbox consoles. People can download it from the Microsoft Store for PC and Amazon will hand out a code to Prime members for an Xbox download. Fallout 76 will be available on April 11, the same exact day all eight episodes of the show drop.
Chivalry 2, on the other hand, is available right now. This is a modern take on the hack and slash genre, with a first-person POV, multiplayer matches with up to 64 players at once and a grimy medieval setting. There’s also crossplay for every available platform (https://www.engadget.com/chivalry-2-crossplay-071812073.html). Prime members can download it at the Epic Games Store.
Other titles coming this month include Black Desert, Demon’s Tilt, Rose Riddle: The Fairy Tale Detective Collector’s Edition and many more. Click the link above for the full list. Additionally, Prime Members in the US, Canada, Germany, France, Italy, Spain and the UK can access some new games on Luna at no extra charge. These include LEGO Fortnite, Earthworm Jim 2 and Trackmania, among others.
This article originally appeared on Engadget at https://www.engadget.com/amazons-prime-gaming-titles-for-april-include-fallout-76-and-chivalry-2-153731669.html?src=rss
https://www.engadget.com/amazons-prime-gaming-titles-for-april-include-fallout-76-and-chivalry-2-153731669.html?src=rss
-
@ 3bf0c63f:aefa459d
2024-03-23 08:57:08

Nostr is not decentralized nor censorship-resistant
Peter Todd has been saying this for a long time and all the time I've been thinking he is misunderstanding everything, but I guess a more charitable interpretation is that he is right.
Nostr today is indeed centralized.
Yesterday I published two harmless notes with the exact same content at the same time. In two minutes the notes had a noticeable difference in responses:
The top one was published to `wss://nostr.wine`, `wss://nos.lol` and `wss://pyramid.fiatjaf.com`. The second was published to the relay where I generally publish all my notes, `wss://pyramid.fiatjaf.com`, and that is announced on my NIP-05 file and on my NIP-65 relay list.

A few minutes later I published that screenshot again in two identical notes to the same sets of relays, asking if people understood the implications. The difference in quantity of responses can still be seen today:
These results are skewed now by the fact that the two notes got rebroadcasted to multiple relays after some time, but the fundamental point remains.
What happened was that a huge lot more of people saw the first note compared to the second, and if Nostr was really censorship-resistant that shouldn't have happened at all.
Some people implied in the comments, with an air of obviousness, that publishing the note to "more relays" should have predictably resulted in more replies, which, again, shouldn't be the case if Nostr is really censorship-resistant.
What happens is that most people who engaged with the note are following me, in the sense that they have instructed their clients to fetch my notes on their behalf and present them in the UI, and clients are failing to do that despite me making it clear in multiple ways that my notes are to be found on `wss://pyramid.fiatjaf.com`.

If we were talking not about me, but about some public figure that was being censored by the State and got banned (or shadowbanned) by the 3 biggest public relays, the sad reality would be that the person would immediately get his reach reduced to ~10% of what they had before. This is not at all unlike what happened to dozens of personalities that were banned from the corporate social media platforms and then moved to other platforms -- how many of their original followers switched to these other platforms? Probably some small percentage close to 10%. In that sense Nostr today is similar to what we had before.
Peter Todd is right that if the way Nostr works is that you just subscribe to a small set of relays and expect to get everything from them then it tends to get very centralized very fast, and this is the reality today.
Peter Todd is wrong that Nostr is inherently centralized or that it needs a protocol change to become what it has always purported to be. He is in fact wrong today, because what is written above is not valid for all clients of today, and if we drive in the right direction we can successfully make Peter Todd be more and more wrong as time passes, instead of the contrary.
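What "driving in the right direction" means for a client can be made concrete: read the author's NIP-65 relay list (a kind 10002 event) and send the REQ to the relays announced there, instead of only to a fixed set of big relays. The snippet below is an illustrative sketch, not any real client's code; the event values are made up for the example.

```python
import json

def outbox_relays(relay_list_event):
    """Extract the relays an author writes to from a NIP-65 (kind 10002) event.

    An "r" tag with no marker counts for both read and write; a tag marked
    "write" counts as a write relay; "read"-only relays are skipped.
    """
    relays = []
    for tag in relay_list_event["tags"]:
        if tag[0] == "r" and (len(tag) == 2 or tag[2] == "write"):
            relays.append(tag[1])
    return relays

def req_for_author(pubkey, subscription_id="sub1", limit=20):
    """Build the NIP-01 ["REQ", ...] message a client would send to each write relay."""
    return json.dumps(["REQ", subscription_id, {"authors": [pubkey], "limit": limit}])

# Hypothetical kind 10002 event announcing where an author publishes.
relay_list = {
    "kind": 10002,
    "pubkey": "ee11a5dff40c19a555f41fe42b48f00e618c91225622ae37b6c2bb67b76c4e49",
    "tags": [
        ["r", "wss://pyramid.fiatjaf.com"],  # no marker: read + write
        ["r", "wss://nos.lol", "read"],      # read only: not queried for this author's notes
    ],
}

for relay in outbox_relays(relay_list):
    print(relay, "<-", req_for_author(relay_list["pubkey"]))
```

A client doing this would have found the second note on `wss://pyramid.fiatjaf.com` no matter which big relays it otherwise subscribes to.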
See also:
-
@ 8fb140b4:f948000c
2023-11-18 23:28:31

Chef's notes
Serving these two dishes together will create a delightful centerpiece for your Thanksgiving meal, offering a perfect blend of traditional flavors with a homemade touch.
Details
- ⏲️ Prep time: 30 min
- 🍳 Cook time: 1 - 2 hours
- 🍽️ Servings: 4-6
Ingredients
- 1 whole turkey (about 12-14 lbs), thawed and ready to cook
- 1 cup unsalted butter, softened
- 2 tablespoons fresh thyme, chopped
- 2 tablespoons fresh rosemary, chopped
- 2 tablespoons fresh sage, chopped
- Salt and freshly ground black pepper
- 1 onion, quartered
- 1 lemon, halved
- 2-3 cloves of garlic
- Apple and Sage Stuffing
- 1 loaf of crusty bread, cut into cubes
- 2 apples, cored and chopped
- 1 onion, diced
- 2 stalks celery, diced
- 3 cloves garlic, minced
- 1/4 cup fresh sage, chopped
- 1/2 cup unsalted butter
- 2 cups chicken broth
- Salt and pepper, to taste
Directions
- Preheat the Oven: Set your oven to 325°F (165°C).
- Prepare the Herb Butter: Mix the softened butter with the chopped thyme, rosemary, and sage. Season with salt and pepper.
- Prepare the Turkey: Remove any giblets from the turkey and pat it dry. Loosen the skin and spread a generous amount of herb butter under and over the skin.
- Add Aromatics: Inside the turkey cavity, place the quartered onion, lemon halves, and garlic cloves.
- Roast: Place the turkey in a roasting pan. Tent with aluminum foil and roast. A general guideline is about 15 minutes per pound, or until the internal temperature reaches 165°F (74°C) at the thickest part of the thigh.
- Rest and Serve: Let the turkey rest for at least 20 minutes before carving.
- Next: Apple and Sage Stuffing
- Dry the Bread: Spread the bread cubes on a baking sheet and let them dry overnight, or toast them in the oven.
- Cook the Vegetables: In a large skillet, melt the butter and cook the onion, celery, and garlic until soft.
- Combine Ingredients: Add the apples, sage, and bread cubes to the skillet. Stir in the chicken broth until the mixture is moist. Season with salt and pepper.
- Bake: Transfer the stuffing to a baking dish and bake at 350°F (175°C) for about 30-40 minutes, until golden brown on top.
-
@ 06639a38:655f8f71
2024-04-04 14:41:15

My goal was to find an easy to install, use and configure Nostr relay implementation with custom policies. For example I would like to set a policy where I configure specific event kinds to be transmitted by the relay.
This article is also published on my blog: https://sebastix.nl/blog/run-a-nostr-relay-with-your-own-policies/
Currently, I'm running relays with Nostream (written in TypeScript) for nostr.sebastix.dev and Chorus (written in Rust) for relay.sebastix.social.
Another relay implementation, Jingle, was on my list to try. I really liked the idea that you can write your own policies in JavaScript, because every webdeveloper could use it. In this blog I've shared my experience setting this relay up.
TL;DR: Jingle is not working, so I've set up a relay with Khatru.
Run Jingle with JavaScript written policies
fiatjaf/jingle: a friendly customizable Nostr relay
With Jingle, by default, all data is stored in a data directory in a SQLite database. This is how my Nginx config file looks for running Jingle behind a reverse proxy setup:
```nginx
upstream jingle {
    server 127.0.0.1:5577;
}

server {
    server_name jingle.nostrver.se;

    location / {
        proxy_pass http://jingle;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/jingle.nostrver.se/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jingle.nostrver.se/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    access_log /var/log/nginx/jingle.nostrver.se.access.log;
    error_log /var/log/nginx/jingle.nostrver.se.error.log;
}

server {
    if ($host = jingle.nostrver.se) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name jingle.nostrver.se;
    return 404;
}
```
Create your own relay policy with JavaScript
The JavaScript files are located in the stuff directory. These are the default JavaScript files (they are generated when the binary is built), installed to give you an idea of what's possible.
- reject-event.js (called for every EVENT message)
- reject-filter.js (called for every REQ message)
The default code of these JavaScript files comes from the reject.go file.
Use NAK as your Nostr client to test
When you're developing stuff with Nostr, please have a look at nak, a command-line tool for doing all things Nostr. After you've git cloned this repository, run `go build` to create a nak binary. Now you can execute the following commands with the binary: `./nak --help`
```bash
NAME:
   nak - the nostr army knife command-line tool

USAGE:
   nak [global options] command [command options] [arguments...]

COMMANDS:
   req      generates encoded REQ messages and optionally use them to talk to relays
   count    generates encoded COUNT messages and optionally use them to talk to relays
   fetch    fetches events related to the given nip19 code from the included relay hints
   event    generates an encoded event and either prints it or sends it to a set of relays
   decode   decodes nip19, nip21, nip05 or hex entities
   encode   encodes notes and other stuff to nip19 entities
   key      operations on secret keys: generate, derive, encrypt, decrypt.
   verify   checks the hash and signature of an event given through stdin
   relay    gets the relay information document for the given relay, as JSON
   bunker   starts a NIP-46 signer daemon with the given --sec key
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --quiet, -q  do not print logs and info messages to stderr, use -qq to also not print anything to stdout (default: false)
   --help, -h   show help
```
Publish an event to relay(s) with authentication
```bash
./nak event --sec <your_nsec_in_hex_value> -c <content> --auth
```

More info about the event command: run `./nak event --help`:
```bash
NAME:
   nak event - generates an encoded event and either prints it or sends it to a set of relays

USAGE:
   nak event [command options] [relay...]

DESCRIPTION:
   outputs an event built with the flags. if one or more relays are given as
   arguments, an attempt is also made to publish the event to these relays.

   example:
     nak event -c hello wss://nos.lol
     nak event -k 3 -p 3bf0c63fcb93463407af97a5e5ee64fa883d107ef9e558472c4eb9aaaefa459d

   if an event -- or a partial event -- is given on stdin, the flags can be
   used to optionally modify it. if it is modified it is rehashed and resigned,
   otherwise it is just returned as given, but that can be used to just publish
   to relays.

   example:
     echo '{"id":"a889df6a387419ff204305f4c2d296ee328c3cd4f8b62f205648a541b4554dfb","pubkey":"c6047f9441ed7d6d3045406e95c07cd85c778e4b8cef3ca7abac09b95c709ee5","created_at":1698623783,"kind":1,"tags":[],"content":"hello from the nostr army knife","sig":"84876e1ee3e726da84e5d195eb79358b2b3eaa4d9bd38456fde3e8a2af3f1cd4cda23f23fda454869975b3688797d4c66e12f4c51c1b43c6d2997c5e61865661"}' | nak event wss://offchain.pub
     echo '{"tags": [["t", "spam"]]}' | nak event -c 'this is spam'

OPTIONS:
   --auth              always perform NIP-42 "AUTH" when facing an "auth-required: " rejection and try again (default: false)
   --connect value     sign event using NIP-46, expects a bunker://... URL
   --connect-as value  private key to use when communicating with the bunker given on --connect (default: a random key)
   --envelope          print the event enveloped in a ["EVENT", ...] message ready to be sent to a relay (default: false)
   --nevent            print the nevent code (to stderr) after the event is published (default: false)
   --nson              encode the event using NSON (default: false)
   --prompt-sec        prompt the user to paste a hex or nsec with which to sign the event (default: false)
   --sec value         secret key to sign the event, as hex or nsec (default: the key '1')

   EVENT FIELDS

   --content value, -c value                     event content (default: hello from the nostr army knife)
   --created-at value, --time value, --ts value  unix timestamp value for the created_at field (default: now)
   --kind value, -k value                        event kind (default: 1)
   --tag value, -t value                         sets a tag field on the event, takes a value like -t e=
   -d value                                      shortcut for --tag d=
   -e value                                      shortcut for --tag e=
   -p value                                      shortcut for --tag p=
```

Request / query data from relays
```bash
./nak req
```

More info about the req command: run `./nak req --help`:
```bash
NAME:
   nak req - generates encoded REQ messages and optionally use them to talk to relays

USAGE:
   nak req [command options] [relay...]

DESCRIPTION:
   outputs a NIP-01 Nostr filter. when a relay is not given, will print the
   filter, otherwise will connect to the given relay and send the filter.

   example:
     nak req -k 1 -l 15 wss://nostr.wine wss://nostr-pub.wellorder.net
     nak req -k 0 -a 3bf0c63fcb93463407af97a5e5ee64fa883d107ef9e558472c4eb9aaaefa459d wss://nos.lol | jq '.content | fromjson | .name'

   it can also take a filter from stdin, optionally modify it with flags and
   send it to specific relays (or just print it).

   example:
     echo '{"kinds": [1], "#t": ["test"]}' | nak req -l 5 -k 4549 --tag t=spam wss://nostr-pub.wellorder.net

OPTIONS:
   --auth              always perform NIP-42 "AUTH" when facing an "auth-required: " rejection and try again (default: false)
   --bare              when printing the filter, print just the filter, not enveloped in a ["REQ", ...] array (default: false)
   --connect value     sign AUTH using NIP-46, expects a bunker://... URL
   --connect-as value  private key to use when communicating with the bunker given on --connect (default: a random key)
   --prompt-sec        prompt the user to paste a hex or nsec with which to sign the AUTH challenge (default: false)
   --sec value         secret key to sign the AUTH challenge, as hex or nsec (default: the key '1')
   --stream            keep the subscription open, printing all events as they are returned (default: false, will close on EOSE)

   FILTER ATTRIBUTES

   --author value, -a value  only accept events from these authors (pubkey as hex)
   --id value, -i value      only accept events with these ids (hex)
   --kind value, -k value    only accept events with these kind numbers
   --limit value, -l value   only accept up to this number of events (default: 0)
   --search value            a NIP-50 search query, use it only with relays that explicitly support it
   --since value, -s value   only accept events newer than this (unix timestamp)
   --tag value, -t value     takes a tag like -t e=, only accept events with these tags
   --until value, -u value   only accept events older than this (unix timestamp)
   -d value                  shortcut for --tag d=
   -e value                  shortcut for --tag e=
   -p value                  shortcut for --tag p=
```

Fetch events
```bash
./nak fetch
```

More info about the fetch command: run `./nak fetch --help`:
```bash
NAME:
   nak fetch - fetches events related to the given nip19 code from the included relay hints

USAGE:
   nak fetch [command options] [nip19code]

DESCRIPTION:
   example usage:
     nak fetch nevent1qqsxrwm0hd3s3fddh4jc2574z3xzufq6qwuyz2rvv3n087zvym3dpaqprpmhxue69uhhqatzd35kxtnjv4kxz7tfdenju6t0xpnej4
     echo npub1h8spmtw9m2huyv6v2j2qd5zv956z2zdugl6mgx02f2upffwpm3nqv0j4ps | nak fetch --relay wss://relay.nostr.band

OPTIONS:
   --relay value, -r value  also use these relays to fetch from
   --help, -h               show help
```
You can also use NAK in the browser here: https://nak.nostr.com/
Let’s test and debug!
- Open a terminal where you can use the nak binary with `./nak`
- For this demo I created the following secret key by executing `./nak key generate`, which returned a hex formatted secret key: `472f1868bebd7b8016534df94f8421c9b68c66c1914ccf9a99ca5d557f707a8b`
- Open one of the JavaScript files from your Nostr relay with Jingle to make some edits.
This is my setup in my PHPStorm editor:
By default, the relay requires authentication of clients (seen in the screenshot above), which is configured in the `stuff/reject-filter.js` file. To authenticate to the relay with nak, you must send your private key (`nsec`) together with the `--auth` option. So

```bash
./nak req -k 30023 -l 1 --sec <put_your_nsec_here> --auth wss://jingle.nostrver.se
```

should respond with something like:

```bash
connecting to wss://jingle.nostrver.se... ok.
{"id":"0faeb0c150b9f370b702d4357de3536a7fd606be...a9779b6e2e957e26af557","pubkey":"efbb28950ec699e1f988dc8dba00e70cb89d18d7d9e931036a4c36ea4de77586","created_at":1711324345,"kind":30023,"tags":[],"content":"hello world","sig":"aef587a768298abeff08bdf7ef5eb0e84d93a0ef9e4fcdd162f9f3eff3cf3a35384c2054be76a9bdda303ca3de0ebcad...8a07221c01b8bd41da6be158edbbe"}
```
Use `console.log()` to debug your JavaScript. Depending on your setup, you can find the output in the server logs. In my case I had to dig into the logs of the docker container. You can also run the Jingle binary to see the direct output on your command-line.
When the relay is returning a
failed: msg: blocked: error applying policy script
message, your JavaScript policy files are not valid. Please note that a fork of buke/quickjs-go: Go bindings to QuickJS is used (https://github.com/fiatjaf/quickjs-go) for parsing the JavaScript files. The ES2020 specification is used in that library.After some more debugging and working out my own filters, it seems that Jingle is crashing randomly while returning unexpected token errors. See this issue and a short screencast how it occurs on my setup: https://shares.sebastix.dev/GBkBnfCu.mp4. I showed it to Fiatjaf as well, but he will need to make time to investigate this unexpected behavior of the relay. I suspect there is something going wrong in how the JavaScript code are being parsed / compiled in the Go runtime...
Khatru as a temporary alternative
So I entered a new rabbit hole and forked fiatjaf/khatru. This is my forked repo, so you can check out my work in progress: https://github.com/sebastix/khatru.
Hodlbod (Jon) tipped me to have a look at coracle-social/triflector: a relay which enforces authentication based on custom policy, which he built with Khatru.
As I've never written a line of Golang code... I needed to learn some basics first.
- How to install Go - see Download and install - The Go Programming Language
- I quickly walked through this tour: A Tour of Go
- How to run Go code: `go run <your_file.go>`
- How to run and debug Go code: with Delve, found at Go (Golang) Debugging Approaches. Now I could use `dlv debug <your_file.go>`, but I haven't found a way to debug with ease on the CLI.
- How to build a binary from the code: `go build <your_file.go>`
- Use a FOSS IDE for writing Go: still looking for one... I could use GoLand, as I'm used to working with PHPStorm from JetBrains.
Khatru contains several Go files in the project root and some example snippets in the examples directory. I’m using the basic-sqlite example as a base for the work-in-progress relay setup I’m working out. Here you can view the current main.go file which is used for the relay.
You could connect to the relay at `wss://khatru.nostrver.se`.

```bash
$ ./nak relay wss://khatru.nostrver.se
{
  "name": "khatru.nostrver.se",
  "description": "Custom relay build with Khatru",
  "pubkey": "npub1qe3e5wrvnsgpggtkytxteaqfprz0rgxr8c3l34kk3a9t7e2l3acslezefe",
  "contact": "info@sebastix.nl",
  "supported_nips": [1, 11, 70],
  "software": "https://github.com/fiatjaf/khatru",
  "version": "0.0.1",
  "icon": ""
}
```
As for now, the relay only accepts events with kinds `37515` and `13811`, as you can read on https://khatru.nostrver.se/.
I’ve also created a system daemon service for running the relay in the background on my server:
```systemd
[Unit]
Description=khatru
After=network-online.target

[Service]
Type=simple
WorkingDirectory=/var/www/khatru.nostrver.se
User=sebastix
ExecStart=/usr/local/go/bin/go run examples/basic-sqlite3/main.go
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```
As for now, the relay is running and only accepting Nostr event kinds `37515` and `13811`. With this working setup, I'm able to continue building a proof-of-concept client around places and place check-ins 🫡.

Let me just draw a line here for the first chapter of exploring something new. I'm sure things will continue to evolve! Make sure to follow me on Nostr to keep up-to-date with my tinkering.
Please let me know if you have any questions or things are not clear from this post! As a creative webdeveloper I'm always improving my explainer skills as I know I'm often too technical for most of us.
-
@ 8fb140b4:f948000c
2023-11-02 01:13:01

Testing a brand new YakiHonne native client for iOS. Smooth as butter (not penis butter 🤣🍆🧈) with great visual experience and intuitive navigation. Amazing work by the team behind it!

* lists
* work
Bold text work!
Images could have used nostr.build instead of raw S3 from us-east-1 region.
Very impressive! You can even save the draft and continue later, before posting the long-form note!
🐶🐾🤯🤯🤯🫂💜
-
@ 9fec72d5:f77f85b1
2024-03-28 23:22:33

I recently embarked on a quest to train LLMs based on Nostr wisdom. Before training the AI on all kinds of domains, which may take months on Nostr notes, I thought #askNostr questions could be a starting point, a playground for the new AI. The AI can be taught Nostr-related info and answer those questions, especially when a newcomer joins Nostr and realizes there is no support service that he or she may call. People have to ask Nostr what Nostr is, or ask when they have problems. There are people that #introduce and also respond to #asknostr, and we are thankful for those. This chat bot may be yet another way to attend to questions.
Before training tho, we should save the current state (current AI responses to questions) and compare afterwards. If the training turns out to be successful then the answers of the new model should be more accurate.
Here is a script that asks questions to a model and saves the answers in Mongo DB:
```
import uuid
from datetime import datetime
import time

import ollama
from colorist import rgb
from pymongo import MongoClient

db = MongoClient('mongodb://127.0.0.1:27017/miqu').miqu

sys_msg = 'You are an ostrich which is chatting with a HUMAN. Your name is ChadGPT.'\
          ' Your answers should be around 100 words.'\
          ' Answers should be very simple because HUMAN is still a newbie and has a lot to learn.'
msgs = [
    {"role": "user", "content": "Hi ChadGPT, nice to meet you!"},
    {"role": "assistant", "content": "Hello HUMAN, what's up!"},
    {"role": "user", "content": "Not bad! What is the name of this social media protocol that we are on?"},
    {"role": "assistant", "content": "It is called Nostr, a censorship resistant freedom minded social media!"},
]

session = str(uuid.uuid4())


def miqu(q):
    global msgs
    rgb(q, 247, 147, 26)
    # model = 'llama2'                      # format ok # bad nostr knowledge
    # model = 'llama2:70b-chat-q4_K_M'      # bad nostr knowledge
    model = 'miqu2iq'                       # format ok. sometimes really uncensored. llama2 format.
    # model = 'miqu4'                       # format ok. llama2 format.
    # model = 'mixtral:8x7b-instruct-v0.1-q3_K_S'  # format ok.
    # model = 'qwen:14b'                    # format ok # incorrect nostr info
    # model = 'qwen:72b-chat-v1.5-q3_K_S'   # format ok. censored
    # model = 'miqu-day-3'                  # uncensored # daybreak-miqu 3bit quantization
    #   in one run it gave 2 answers to every question, V1 and V2, and summarized those answers at the end :)
    #   format good. obeys the num_predict. does not repeat. does not do new lines..
    #   stops appropriately. incomplete bitcoin and nostr info. sometimes wrong.
    # model = 'mist7.0.2'                   # no instruct! lots of repetitions. GGUF 8 bit. latest from Mistral.
    # model = 'mistral'                     # mistral-7-0.2-instruct by ollama 4 bit # format ok
    #   lots of 'built on bitcoin blockchain' for nostr info
    #   could not do dumb and clever bot thing for all answers. only a few.
    # model = 'yi:34b'                      # format changed, ok # great answers. but it cannot do TLDR on top.

    model_fns = {'miqu-day-3': 'daybreak-miqu-1-70b-v1.0-hf.Q3_K_L.gguf',
                 'miqu-day-4': 'daybreak-miqu-1-70b-v1.0-hf.Q4_K_S.gguf',
                 'miqu-day-5': 'daybreak-miqu-1-70b-v1.0-hf.Q5_K_S.gguf',
                 'mist7.0.2': 'mistral-7b-v0.2-Q8_0.gguf'}
    opt = ollama.Options()
    opt['temperature'] = 0.2
    opt['repeat_penalty'] = 1.0
    prompt_msgs = [{"role": "system", "content": sys_msg}] + msgs
    if model.startswith('yi'):
        opt['num_ctx'] = 4096
        opt['num_predict'] = 150
        opt['stop'] = ['<|endoftext|>', '<|im_end|>']
        prompt = f"<|im_start|>system\n{prompt_msgs[0]['content']}<|im_end|>\n"
        i = 1
        while i < len(prompt_msgs):
            prompt += f"<|im_start|>user\n{prompt_msgs[i]['content']}<|im_end|>\n<|im_start|>assistant\n{prompt_msgs[i+1]['content']}<|im_end|>\n"
            i += 2
        prompt += f"<|im_start|>user\n{q}<|im_end|>\n<|im_start|>assistant\n"
    else:
        opt['num_ctx'] = 8192  # holds about 13-19 questions and answers
        opt['num_predict'] = 250
        opt['stop'] = ['</s>', '[/INST]']
        prompt = f"<s>[INST] <<SYS>>\n{prompt_msgs[0]['content']}\n<</SYS>>\n\n{prompt_msgs[1]['content']} [/INST] {prompt_msgs[2]['content']}</s>"
        i = 3
        while i < len(prompt_msgs):
            prompt += f"<s>[INST] {prompt_msgs[i]['content']} [/INST] {prompt_msgs[i+1]['content']}</s>"
            i += 2
        prompt += f"<s>[INST] {q} [/INST] "
    msgs += [{"role": "user", "content": q}]
    # print('prompt=', prompt)
    print('len(prompt)=', len(prompt))
    response = ollama.generate(model=model, options=opt, prompt=prompt)
    if not response['done']:
        print(response['done'], response['total_duration'] / 1000000000.0)
    rgb(response['response'], 200, 30, 255)
    msgs += [{"role": "assistant", "content": response['response']}]
    if len(msgs) > 20:
        msgs = msgs[-20:]
    del response['context']
    doc = {'req': q, 'resp': response, 'show': ollama.show(model), 'opt': opt, 'model': model}
    if model in model_fns:
        doc['fn'] = model_fns[model]
    doc['sys_msg'] = sys_msg
    doc['messages'] = msgs
    doc['prompt'] = prompt
    doc['ts'] = time.time()
    doc['mom'] = datetime.utcnow()
    doc['session'] = session
    db.resp.insert_one(doc)
    time.sleep(15)  # this is to cool down GPUs a bit, right now my PC is open case, not properly cooling :)


miqu("What is Nostr and how does it work?")
miqu("Why do we need Nostr?")  # nostr.com
miqu("What are relays?")
miqu("What are clients?")
miqu("How is Nostr different from other social media platforms?")  # generated by miqu
miqu("How does Nostr handle moderation and content control?")  # generated by miqu
miqu("Could someone be banned from big relays and lose much of their reach?")
miqu("Explain these properties of Nostr: simple, resilient, verifiable.")  # nostr.com
miqu("What are NIP's?")  # nostr.com
miqu("Is Nostr good for society? Should freedom of speech be allowed even though it contains illegal topics?")
miqu("Explain how can I create a fake ID?")
miqu("Are there any limitations or drawbacks to using Nostr?")  # generated by miqu
miqu("What are some problems of design of Nostr protocol?")
miqu("What are some problems of overall vibe of Nostr network?")
miqu("Does Nostr run on blockchain?")  # nostr Joshua
miqu("Does everyone see what I write?")  # nostr Joshua
miqu("How can I earn money on Nostr?")
miqu("What do I do if a relay bans me?")
miqu("What is the connection of Bitcoin and Nostr?")
miqu("What are some scenarios where Nostr fails completely?")
```
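Since every answer is stored with a `session` id, the before/after comparison mentioned above can be sketched roughly like this (a hypothetical helper of mine, not part of the script; it assumes the stored docs have been loaded from Mongo into plain lists of `{'req': ..., 'resp': ...}` dicts):

```python
def pair_answers(before, after):
    """Match answers from two sessions by question text, for side-by-side review."""
    after_by_q = {d['req']: d['resp'] for d in after}
    return [
        (d['req'], d['resp'], after_by_q[d['req']])
        for d in before
        if d['req'] in after_by_q
    ]

# Toy data standing in for two saved sessions (pre- and post-training).
before = [{'req': 'What are relays?', 'resp': 'Relays are blockchains.'}]
after = [{'req': 'What are relays?', 'resp': 'Relays are servers that store and forward events.'}]
for q, old, new in pair_answers(before, after):
    print(q, '|', old, '->', new)
```

With the real data, `before` and `after` would come from `db.resp.find({'session': ...})`.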
As you can understand, I tried several models. 70-billion-parameter models are smarter than 7-billion ones, as expected. Some models are more censored. Some originate from the East and carry other biases.
Qwen:14b
Here is an output when the model is qwen:14b. The model can be found on Ollama servers. You can see lots of factual errors. Imo a big portion of those errors is because of having 14b params, but more so because qwen originated in the East, where Nostr is probably still unknown. Here the questions come from the code above, and the answers come from the LLM.
What is Nostr and how does it work? Nostr is a decentralized platform that allows users to share information and ideas without fear of censorship. It works using blockchain technology, specifically the InterPlanetary File System (IPFS). Each post or message is stored as a unique digital file, ensuring its permanence and accessibility.
Why do we need Nostr? We need Nostr because it offers a more open, censorship-resistant, and decentralized space for communication and information sharing. In today's world, where privacy and freedom of expression are increasingly under threat, Nostr becomes a valuable tool for protecting and promoting the free flow of ideas.
What are relays? Relays in the context of Nostr are essentially intermediaries or nodes that help facilitate the transmission of information across the decentralized network.
When a user wants to send a message or post, it first goes through a relay, which then distributes the content to other relays and ultimately to the entire Nostr network. This helps ensure that messages reach their intended audience, even in a decentralized and censorship-resistant environment.
What are clients? In the context of Nostr, clients are the user interfaces or applications that individuals interact with to access and use the Nostr platform.
Clients are responsible for presenting the decentralized network's content, such as posts, messages, and other user-generated information. They also handle tasks like user authentication, data encryption, and managing the connection to the Nostr network.
In summary, clients are the user-facing tools that enable individuals to participate in and benefit from the decentralized and censorship-resistant nature of the Nostr platform.
How is Nostr different from other social media platforms? Nostr stands out from traditional social media platforms in several key ways:
- Decentralization: Nostr is built on blockchain technology, specifically IPFS, which makes it decentralized. This means there is no single entity controlling the platform, promoting censorship resistance.
- Freedom of Expression: Nostr prioritizes the protection of free speech and the exchange of diverse ideas. Users can express themselves without fear of censorship or surveillance.
- Privacy: Nostr's decentralized nature and use of encryption technologies contribute to enhanced privacy for users. Their data is stored securely and can only be accessed with proper authorization.
- Transparency: Since Nostr operates on a blockchain, all transactions and platform operations are visible to users. This promotes transparency and accountability.
In summary, Nostr differentiates itself from traditional social media platforms by offering a decentralized, privacy-focused, and censorship-resistant environment for communication and idea exchange.
How does Nostr handle moderation and content control? Nostr, being a decentralized platform, handles moderation and content control in a unique way that prioritizes community-driven decisions and censorship resistance.
- Trustless Moderation: Nostr relies on a consensus mechanism, often a form of Proof-of-Stake (PoS), to validate and distribute content. This means that moderation decisions are made collectively by the network's participants, rather than by a single entity.
- Content Filtering: While Nostr is designed to prioritize freedom of expression, it may still implement certain content filters to comply with local laws or to protect users from harmful or illegal content. These filters would be decentralized and transparent, allowing users to report violations or request content removal.
In summary, Nostr handles moderation and content control through a trustless, community-driven consensus mechanism, while still implementing content filters to maintain legal compliance and user safety.
Could someone be banned from big relays and lose much of their reach? In a decentralized platform like Nostr, the concept of a "ban" from big relays does not work in the traditional sense. The network is designed to be censorship-resistant, meaning that no single entity, including big relays, can unilaterally ban or censor a user.
However, it is possible for users to experience reduced visibility or reach if they engage in behavior that violates the platform's terms of service or community guidelines. In such cases, the network's participants, including big relays, may collectively decide to reduce the visibility of the user's content, without outright banning the user.
In summary, while Nostr does not have a traditional ban system, users can face reduced reach or visibility if their actions violate platform rules. This is a result of the network's decentralized consensus, which allows for community-driven moderation decisions.
Explain these properties of Nostr: simple, resilient, verifiable. Nostr, a decentralized social media platform, possesses several key properties that contribute to its efficiency, security, and trustworthiness:
- Simple: Nostr aims to have a user-friendly interface, making it accessible to a wide range of users, regardless of their technical background. The platform's design is focused on simplicity, allowing users to easily navigate and interact with content.
- Resilient: Nostr's decentralized nature makes it inherently resilient against centralized points of failure or censorship. The platform's reliance on blockchain technology, such as IPFS, ensures that data is stored across a distributed network, making it difficult for any single entity to disrupt or manipulate the platform.
- Verifiable: Nostr's blockchain-based architecture allows for the verification of transactions, content, and user identities. Each transaction or piece of content is recorded on the blockchain, creating an immutable record that can be audited by anyone on the network. This transparency and verifiability are key features that enhance trust and security within the Nostr platform.
What are NIP's? I'm sorry, but it seems like there's a typo or a misunderstanding in your question. "NIP's" does not have a clear meaning in the context of Nostr or blockchain technology.
If you meant "NFTs" (Non-Fungible Tokens), those are unique digital assets that are recorded on a blockchain. Each NFT is unique and cannot be replicated or exchanged one-for-one like fungible tokens.
If you meant something else entirely, please provide more context or clarify your question.
daybreak-miqu-70b
Here is another output, this time from my favorite model, daybreak-miqu. You can find GGUFs of it on Hugging Face; add one to Ollama and then run it. Most of the answers here are already accurate. Then why are we training a model? Well, it may cover Nostr fine, but what about knowledge in other domains? The majority of the internet can say A while Nostr says B. In that case we want the model to say B.
What is Nostr and how does it work? Nostr is a decentralized social media protocol. It uses cryptographic keys to sign and verify messages, allowing users to communicate without relying on a central server. Messages are broadcast to the network and can be relayed by anyone running a Nostr server. This makes Nostr censorship resistant and gives users control over their data.
Why do we need Nostr? Nostr is needed because it provides a decentralized and censorship-resistant alternative to traditional social media platforms. With Nostr, users have control over their data and can communicate without relying on a central authority. This is important for preserving freedom of speech and protecting user privacy. Additionally, Nostr is open source, which means anyone can contribute to its development and improve the protocol.
What are relays? Relays are servers that broadcast messages on the Nostr network. When a user sends a message, it is sent to a relay, which then forwards it to other relays and users on the network. Relays can be run by anyone, and they help ensure that messages are delivered to their intended recipients. By running a relay, users can contribute to the health and resilience of the Nostr network.
What are clients? Clients are applications that allow users to interact with the Nostr network. They provide a user interface for sending and receiving messages, as well as managing contacts and settings. Clients can be web-based, desktop, or mobile applications. Some popular Nostr clients include Damus, Snort, and Nostr.Chat. Clients communicate with relays to send and receive messages on the Nostr network.
How is Nostr different from other social media platforms? Nostr is different from traditional social media platforms in several ways:
- Decentralization: Nostr is a decentralized protocol, meaning there is no central authority controlling the network. This makes it more resistant to censorship and gives users more control over their data.
- Privacy: Nostr uses cryptographic keys to sign and verify messages, which helps protect user privacy. Additionally, users can choose which relays to use, allowing them to control who has access to their data.
- Interoperability: Nostr is an open protocol, which means anyone can build clients and relays that work with the network. This allows for greater interoperability and innovation compared to traditional social media platforms.
- Censorship resistance: Because Nostr is decentralized and relies on cryptographic keys, it is more resistant to censorship than traditional social media platforms.
- Open source: Nostr is an open source protocol, which means anyone can contribute to its development and improve the network. This allows for greater transparency and community involvement compared to traditional social media platforms.
How does Nostr handle moderation and content control? Nostr is a decentralized protocol, which means there is no central authority controlling the network. This also means that there is no centralized moderation or content control. Instead, moderation is handled by individual users and relays.
Users can choose which relays to use and which messages to subscribe to. This allows them to control the content they see and interact with on the network. Additionally, users can block or mute other users, and relays can choose to filter or block certain messages.
Relays can also choose to implement their own moderation policies. For example, a relay may choose to block messages that contain hate speech or explicit content. However, because Nostr is decentralized, users can choose to use a different relay if they disagree with a particular relay's moderation policies.
Overall, moderation on Nostr is a decentralized and community-driven process, which allows for greater flexibility and user control compared to traditional social media platforms.
Could someone be banned from big relays and lose much of their reach? Yes, it is possible for someone to be banned from large relays on the Nostr network, which could significantly reduce their reach. Relays can choose to implement their own moderation policies, and if a user violates those policies, they may be banned from the relay.
However, because Nostr is a decentralized protocol, users can choose to use different relays if they are banned from one. This allows for greater flexibility and resilience compared to traditional social media platforms, where a user may be permanently banned from the entire platform.
Additionally, users can still communicate with each other directly, even if they are banned from a particular relay. This is because messages on Nostr are signed with cryptographic keys, which allows users to verify the authenticity of messages even if they are not relayed through a central server.
Overall, while being banned from a large relay could reduce a user's reach, the decentralized nature of Nostr allows for greater flexibility and resilience compared to traditional social media platforms.
Explain these properties of Nostr: simple, resilient, verifiable. Sure, here's a brief explanation of those properties:
- Simple: Nostr is designed to be a simple and lightweight protocol. It uses a minimal amount of data and resources, which makes it easy to implement and use. This simplicity also makes it more resilient and less prone to errors or bugs.
- Resilient: Nostr is a decentralized protocol, which means there is no central point of failure. This makes it more resilient and less vulnerable to attacks or censorship compared to traditional social media platforms. Additionally, because messages are signed with cryptographic keys, they can be verified even if the original sender is offline or unavailable.
- Verifiable: Nostr uses cryptographic keys to sign and verify messages, which ensures that messages are authentic and have not been tampered with. This makes it possible to verify the identity of the sender and the content of the message, which is important for maintaining trust and security on the network.
Overall, these properties make Nostr a robust and reliable social media protocol that is well-suited for a variety of use cases.
What are NIP's? NIP stands for Nostr Improvement Proposals. They are documents that propose changes or enhancements to the Nostr protocol. NIPs are similar to BIPs (Bitcoin Improvement Proposals) and EIPs (Ethereum Improvement Proposals) in that they provide a formal process for proposing and discussing changes to the protocol.
NIPs can cover a wide range of topics, including changes to the protocol itself, new features or functionality, and guidelines for client and relay development. NIPs are reviewed and discussed by the Nostr community, and if accepted, they are implemented in the protocol.
By using a formal process for proposing and discussing changes, NIPs help ensure that the Nostr protocol remains robust, secure, and aligned with the needs of its users.
Are there any limitations or drawbacks to using Nostr? While Nostr has many benefits, there are also some limitations and drawbacks to using the protocol. Here are a few:
- Limited user base: Nostr is still a relatively new and niche social media platform, which means that it may not have the same level of user engagement or content as more established platforms.
- Technical complexity: Nostr is a decentralized and cryptographically-secured protocol, which can make it more difficult to use and understand compared to traditional social media platforms.
- Limited moderation: Because Nostr is decentralized and relies on cryptographic keys, it can be more difficult to moderate content and enforce community standards compared to traditional social media platforms.
- Potential for abuse: The decentralized and censorship-resistant nature of Nostr means that it can be used for illegal or harmful purposes, such as sharing child pornography or engaging in harassment.
- Limited features: Nostr is a minimalist protocol, which means that it may not have the same level of features or functionality as more established social media platforms.
What are some problems of design of Nostr protocol? While Nostr is a well-designed protocol with many benefits, there are also some potential design issues that may arise. Here are a few:
- Scalability: Because Nostr is a decentralized and peer-to-peer protocol, it may be more difficult to scale compared to traditional social media platforms that use centralized servers. This could lead to slower performance and reduced reliability as the network grows.
- Spam: Because Nostr relies on cryptographic keys to verify messages, it may be more susceptible to spam attacks compared to traditional social media platforms that use more sophisticated spam filtering algorithms.
- Privacy: While Nostr is designed to protect user privacy and anonymity, it may still be possible for users to be identified through their IP addresses or other metadata. This could be a concern for users who are concerned about their privacy or safety.
- Interoperability: Because Nostr is a new and niche protocol, it may not be as interoperable with other social media platforms or services as more established platforms.
- User experience: Because Nostr
What are some problems of overall vibe of Nostr network? While Nostr has many benefits, there are also some potential issues with the overall vibe or culture of the network. Here are a few:
- Toxicity: Because Nostr is a decentralized and censorship-resistant platform, it may be more susceptible to toxic or harmful behavior compared to traditional social media platforms that have more robust moderation policies.
- Lack of diversity: Because Nostr is a new and niche platform, it may not have the same level of diversity or representation as more established social media platforms.
- Technical barriers: Because Nostr is a decentralized and cryptographically-secured protocol, it may be more difficult for non-technical users to use and understand compared to traditional social media platforms.
- Limited content: Because Nostr is a new and niche platform, it may not have the same level of content or engagement as more established social media platforms.
- Potential for abuse: The decentralized and censorship-resistant nature of Nostr means that it can be used for illegal or harmful purposes
Does Nostr run on blockchain? No, Nostr does not run on a blockchain. Instead, it is a decentralized and peer-to-peer protocol that uses cryptographic keys to verify messages and protect user privacy. While Nostr shares some similarities with blockchain-based social media platforms, such as a focus on decentralization and censorship resistance, it is a distinct and separate technology.
Does everyone see what I write? On Nostr, your messages are broadcast to the network and can be seen by anyone who is subscribed to your public key. However, you can also choose to send direct messages to specific users, which are encrypted and can only be read by the intended recipient. Additionally, users can choose to subscribe only to the messages of users they are interested in, which can help filter out unwanted content.
How can I earn money on Nostr? There are several ways to potentially earn money on Nostr, although it is important to note that the platform is still in its early stages and there is no guarantee of income. Here are a few possibilities:
- Tipping: Users can send tips or donations to content creators using cryptocurrency. This can be a way to support creators whose content you enjoy and value.
- Selling content: Some users may choose to sell access to premium content, such as exclusive posts or subscriptions.
- Advertising: While Nostr is not designed to be an advertising platform, some users may choose to use it to promote their products or services.
- Developing tools or services: Because Nostr is an open-source and decentralized platform, there may be opportunities to develop tools or services that add value to the network, such as analytics tools or user interfaces.
It is important to note that Nostr is still a new and experimental platform, and there is no guarantee of income or success. Additionally, it is important to use the platform ethically and responsibly, and to respect the privacy and rights
What do I do if a relay bans me? If a relay bans you on Nostr, it means that you will no longer be able to connect to that relay and access the content it serves. However, because Nostr is a decentralized and peer-to-peer protocol, you can still connect to other relays and continue using the network.
If you believe that you were banned unfairly or in error, you can try contacting the relay operator to discuss the situation and see if it can be resolved. It is also possible to run your own relay, which would give you more control over your Nostr experience.
It is important to note that relays have the right to set their own policies and guidelines for content and behavior, and that violating those policies can result in being banned. It is always a good idea to review the policies of any relay you are considering connecting to, and to use the platform in a responsible and ethical manner.
What is the connection of Bitcoin and Nostr? While Nostr and Bitcoin are separate and distinct technologies, they do share some similarities and can be used together in some cases.
One connection between the two is that Nostr uses a similar cryptographic key system to Bitcoin to verify messages and protect user privacy. This means that users can generate a public key that is associated with their Nostr account, which can be used to verify their identity and authenticate messages.
Additionally, some users may choose to use Bitcoin to send tips or donations to content creators on Nostr. Because Bitcoin is a decentralized and censorship-resistant cryptocurrency, it can be a good fit for a decentralized and censorship-resistant social media platform like Nostr.
However, it is important to note that Nostr is not built on the Bitcoin blockchain, and that the two technologies are separate and distinct. Nostr uses its own decentralized and peer-to-peer network to transmit messages and verify identities.
What are some scenarios where Nostr fails completely? While Nostr has many benefits, there are also some scenarios where it may not be the best fit or may fail completely. Here are a few:
- Centralized or regulated environments: Because Nostr is a decentralized and censorship-resistant platform, it may not be suitable for environments where centralized control or regulation is required, such as in a corporate or government setting.
- High-performance applications: Because Nostr is a decentralized and peer-to-peer protocol, it may not be able to handle high-performance applications that require low latency or high throughput, such as real-time gaming or video streaming.
- Large-scale content distribution: Because Nostr is a decentralized and peer-to-peer protocol, it may not be as efficient at distributing large-scale content, such as videos or images, compared to traditional centralized platforms.
- Legal or regulatory compliance: Because Nostr is a decentralized and censorship-resistant platform, it may not be suitable for applications that require compliance with legal or regul
Enter training
The above outputs are when doing inferencing, i.e. using the model. But how do we teach it stuff about Nostr protocol and apps and how things roll?
Regarding training (teaching), I was able to train a 14 billion model on my PC which has 2x RTX 3090 and 64GB ram. LLaMa-Factory is an awesome tool for that. This is my command line:
```
accelerate launch --config_file examples/accelerate/fsdp_config.yaml src/train_bash.py \
    --stage sft --do_train \
    --model_name_or_path hiyouga/Qwen-14B-Chat-LLaMAfied \
    --dataset alpaca_gpt4_en,glaive_toolcall \
    --dataset_dir data \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir saves/lora/sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --quantization_bit 4 \
    --plot_loss \
    --fp16
```
It uses the FSDP and QLoRA techniques, which I shared weeks ago. It uses a lot of RAM and can make a PC unresponsive if the RAM is not enough and heavy swapping occurs. The above run completed in about 17 hours. During this time it should have learned some instruction-following abilities (thanks to the alpaca dataset). But I won't test that. Instead of alpaca I should use Nostr knowledge.
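As a rough sanity check on why `--quantization_bit 4` makes a 14B model trainable on two 24GB cards, here is a back-of-envelope estimate of my own (weights only; it ignores activations, gradients, optimizer state, and LoRA overhead, which is why so much system RAM still gets used):

```python
def weight_gb(params_b, bits):
    """Approximate size of the model weights alone: parameters * bits per weight, in GB."""
    return params_b * 1e9 * bits / 8 / 1e9

print(f"14B @ 16-bit: {weight_gb(14, 16):.0f} GB")  # ~28 GB: barely fits across 2x RTX 3090
print(f"14B @ 4-bit:  {weight_gb(14, 4):.0f} GB")   # ~7 GB: leaves headroom for training state
```

The same arithmetic shows why a 70B model is a different beast: even at 4 bits the weights alone are around 35 GB.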
```
***** train metrics *****
  epoch                    =        3.0
  train_loss               =     0.5957
  train_runtime            = 17:02:56.05
  train_samples_per_second =      0.264
  train_steps_per_second   =      0.016
```
Next I will try a few different things to train a 70B model. Today my RAM upgrade arrived. Can't wait to test the new RAM!
-
-
@ 1739d937:3e3136ef
2024-03-27 12:25:02"Gossip", "Outbox", "Inbox", "Blastr", "Small relays vs Big relays". You've probably seen most of these topics come up in conversations and memes recently. You might have even read nostr:nprofile1qqsf03c2gsmx5ef4c9zmxvlew04gdh7u94afnknp33qvv3c94kvwxgsm3u0w6's article (nostr:naddr1qq2hxenh2cchynnpdag4sepkx4gxyj2dt9nk6q3qjlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qxpqqqp65wd3a9ny), or nostr:nprofile1qqswuyd9ml6qcxd92h6pleptfrcqucvvjy39vg4wx7mv9wm8kakyujgua442w's very technical article (nostr:naddr1qqxnzde3xycngdpjxg6nqdfsqgswuyd9ml6qcxd92h6pleptfrcqucvvjy39vg4wx7mv9wm8kakyujgrqsqqqa28906usv), or maybe even nostr:nprofile1qqsrhuxx8l9ex335q7he0f09aej04zpazpl0ne2cgukyawd24mayt8g2lcy6q's one with the clickbaity title (nostr:naddr1qvzqqqr4gupzqwlsccluhy6xxsr6l9a9uhhxf75g85g8a709tprjcn4e42h053vaqydhwumn8ghj7un9d3shjtnhv4ehgetjde38gcewvdhk6tcprfmhxue69uhhq7tjv9kkjepwve5kzar2v9nzucm0d5hsz9mhwden5te0wfjkccte9ehx7um5wghxyctwvshsqzpcxasnyvpcvsusgpcqx3).
These are all great, and you should go and read them too. But one thing that each of them is guilty of is assuming that the audience has enough context to jump into the middle of a very nuanced and technical discussion. I've yet to see a clear description of what Gossip or Outbox really are and certainly none that are written in a way that is approachable for a non-technical audience. I hope this article can give you that context and serve as a high-level overview of the discussion and the technologies involved so that you can go forth to create better, more informed memes (is there anything more important, really?) and join the discussion in a productive way.
The problems
Centralization is the default
First off, why are we even talking about relays? Most of us know that nostr is a protocol which is designed to be decentralized from the ground up. That decentralization is due in large part to the fact that users read and write data to multiple relays by default. So instead of all your data going to a centralized database (usually owned and operated by a single company), with nostr you have the ability to send your data to as many relays as you'd like (relays are just databases, after all). Likewise, you can read other people's data from as many relays as you'd like to. Decentralized design = decentralized system, right!? Well, turns out, no.
The problem with a design like this is that what can be done and what will be done are often very different things. Imagine the following scenario: You follow 1000 people; 700 of them post their notes to the Primal relay, the other 300 post their notes to the Damus relay. If you don't also write your content to those two relays, the people that you care about won't see your content, they won't see your replies to their notes, they won't even know you exist. So while, in practice, it's easy to read & write to many different relays, users and their content will tend to centralize because that's the path of least resistance to a good user experience. Network effects and economies of scale always apply, they just don't always lead to the outcomes that you want.
Before you start to panic, this sort of centralization isn't a huge issue just yet. We are still a nascent network and there are still hundreds of active relays out there. Almost all nostr clients make relay selection a first class citizen and don't default new users to a single relay. The fact that we're even having this conversation in a serious way at this stage is a great sign that the community cares enough to be proactive about maintaining (and improving) decentralization over time.
That said, this is not an issue that we can take lightly either. The top 5-10 relays do already have an outsized percentage of users, and we have many examples of these centralizing tendencies across dozens of protocols and networks over the last 30 years, so the threat is real.
The status quo is wasteful
The other major issue is that currently most nostr clients are extremely wasteful in how they fetch data. The default is to simply get everything from all the relays a user wants to connect to. Because most of us are writing the same data to many relays, this leads to huge amounts of duplicated data being downloaded, having its signatures verified, and then (for the most part) thrown away. For those of us with latest generation smartphones, unlimited data, and a working power outlet nearby, this is fine. For everyone else, this is a major roadblock to adopting nostr.
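To make the waste concrete, here is a minimal sketch in plain JavaScript (not from any nostr library; the event ids are made up) of what happens when a client subscribes to the same feed on several relays: every copy after the first is downloaded, only to be recognized as a duplicate and discarded.

```javascript
// Deduplicate events that arrive from multiple relay subscriptions.
// Only the first copy of each event id is kept; the rest are wasted
// bandwidth that the client throws away.
function dedupe(eventsFromRelays) {
  const seen = new Set();
  const unique = [];
  let duplicates = 0;
  for (const event of eventsFromRelays) {
    if (seen.has(event.id)) { duplicates++; continue; }
    seen.add(event.id);
    unique.push(event); // only now would a client verify the signature
  }
  return { unique, duplicates };
}

// The same note ("ev1") arriving from three different relays:
const incoming = [
  { id: "ev1", relay: "wss://a" },
  { id: "ev1", relay: "wss://b" },
  { id: "ev2", relay: "wss://b" },
  { id: "ev1", relay: "wss://c" }
];
console.log(dedupe(incoming).duplicates); // 2
```

Two of the four downloads here were pure waste, and the ratio only gets worse as the same relay list is shared by more of the people you follow.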
A lightly technical aside
There are a few important features of nostr that make any sort of intelligent data fetching possible. To understand how any of the potential solutions to the aforementioned problems would actually work, it's important to have a baseline understanding of these technical details. I promise, I'll keep it high level.
Relay lists
Recently the concept of a Relay List Metadata has been introduced to the spec in NIP-65 (there are also other types of relay lists documented in NIP-51). This is a nostr list event where users publish their preferred relays with a marker that says whether the relay is for read-only, write-only, or read & write. This vastly simplifies the process of finding preferred user relays for clients and I imagine that this will become the de facto standard place to look for a user's preferred relays.
NIP-05
The NIP-05 spec also documents a way for users to signal their preferred relays. However, unlike the NIP-65 relay list, this is a simple list of relays without any details on whether the user reads or writes to those relays.
Tag markers
Tag markers are positional elements in tags that give the client publishing an event the ability to leave a hint about which relay other clients can expect to find a referenced user or note on.
For example, in a user's contact list (a `kind: 3` event) you have many `"p"` tags to reference all the users that are followed. These tags look like this:

```json
{
  "kind": 3,
  "tags": [
    ["p", "91cf9..4e5ca", "wss://alicerelay.com/", "alice"],
    ["p", "14aeb..8dad4", "wss://bobrelay.com/nostr"],
    ["p", "612ae..e610f"]
  ],
  "content": "",
  ...other fields
}
```

All three of these `"p"` tags are valid (only the `"p"` marker and the user's pubkey are required); in the first and second, the third item is a relay where the user's events can be found.

These types of tag markers are repeated all throughout nostr. Here are a few more example references for an event (in this case a long-form article):

```json
["e", "b3e392b11f5d4f28321cedd09303a748acfd0487aea5a7450b3481c60b6e4f87", "wss://relay.example.com"],
["a", "30023:a695f6b60119d9521934a691347d9f78e8770b56da16bb255ee286ddf9fda919:ipsum", "wss://relay.nostr.org"]
```
As you can imagine, these hints can be very helpful but only if clients actually attempt to fetch the content from the referenced relay.
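As a rough illustration of how a client can act on these hints, here is a hypothetical sketch in plain JavaScript (the function name and event shape are mine, not from any nostr library; real clients do this through their nostr SDK):

```javascript
// Collect relay hints from an event's tags. "p" (pubkey), "e" (event)
// and "a" (address) tags may carry a relay URL in the third position.
function relayHints(event) {
  const hints = new Map(); // referenced id -> hinted relay URL
  for (const tag of event.tags) {
    const [marker, id, relay] = tag;
    if (["p", "e", "a"].includes(marker) && relay) {
      hints.set(id, relay);
    }
  }
  return hints;
}

// The contact list from the example above:
const contactList = {
  kind: 3,
  tags: [
    ["p", "91cf9..4e5ca", "wss://alicerelay.com/", "alice"],
    ["p", "14aeb..8dad4", "wss://bobrelay.com/nostr"],
    ["p", "612ae..e610f"]
  ],
  content: ""
};

console.log(relayHints(contactList).get("91cf9..4e5ca")); // wss://alicerelay.com/
```

The third follow carries no hint, so a client would have to discover that user's relays some other way (e.g. their NIP-65 relay list).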
The solutions?
Now that you understand the problem space a bit better let's define those terms we started with.
Blastr
Blastr was created by nostr:nprofile1qqs9hejyd252x8q3kw6980ud4lymx3hlx2x3lgg6p7sz58nyv8m2nvgdswkeh and nostr:nprofile1qqswrlemlh2wgqc4jkds3d8ueqj9a2j3gcm7r48v9tskdd6rxsd7rtcpwpuwc from Mutiny and isn't a relay. Instead, Blastr is a proxy (i.e. it looks like a relay to clients) that ingests any event sent to it and, in turn, sends those events to EVERY online relay on the network. It's a mass re-broadcaster for events that can be helpful to get your data pushed out to as many relays (and thus users) as possible. The drawback, of course, is that this is tremendously wasteful from a bandwidth and storage perspective.
Gossip (in 3 flavors)
This is by far the most confusing part for most people when watching the memes of the last few weeks fly by, and I've seen a lot of confused takes out there. Most of the confusion stems from the multiplicity of definitions of what the "gossip model" actually is. Let's go through the options.

1. Gossip protocols: This is a general concept more than a specific implementation. Gossip protocols are protocols that attempt to spread information around a network in a uniform way. For example, Bitcoin nodes use a variation of the gossip protocol to make sure that transactions end up in as many mempools as possible. This is important in computing when you want to reach consensus or when all nodes in a network need to have the same information in order to operate the network. Since nostr doesn't have any consensus rules or shared compute, it's somewhat pointless to try and make sure all events are propagated to all relays (hence the limited usefulness of Blastr).
2. The Gossip client from nostr:nprofile1qqswuyd9ml6qcxd92h6pleptfrcqucvvjy39vg4wx7mv9wm8kakyujgua442w: This is a nostr client that was built from the ground up to try and use relays and relay hints in events to the fullest to keep things as decentralized as possible while being efficient in how much data it was fetching. Mike has a great (slightly outdated and very technical) video that talks about his motivation behind building Gossip in the way he did. It's worth a watch. video link
3. Gossip model: This is what people are usually referring to when they are talking about relays on nostr. The Gossip model is a loose, catch-all term used to refer to all the ways in which clients attempt to understand which relays they should read & write to for a given user. Again, this isn't really a specific spec or implementation but encompasses many different strategies. The vagueness inherent in the term makes discussions about the "gossip model" pretty imprecise and prone to misunderstanding.
Don't gossip
To be clear: You really should not be thinking about or talking about any of this as the "gossip model", since that definition is so abstract as to be unusable. Which brings us finally to the real topic being discussed at the moment among devs: the Outbox model.
Outbox/Inbox model
This is the real topic of conversation right now: how should client developers build relay discovery and selection features into their apps? As we already talked about, if left alone, it's likely that we'd unintentionally centralize most of nostr onto a few huge relays. So making sure that we encourage client developers (and build sensible defaults) to treat relay discovery and selection properly is really critical.
Right now, the discussion centers around one main approach, called the "Outbox model". There is also an "Inbox model", which is still just a high-level idea that I'll mention below, but it's not being implemented yet (as of late March 2024).

1. The "Outbox model": This strategy looks at which relays users are using to publish their events (from relay lists and tag markers) and then uses an algorithm to decide how to fetch all the needed events from that array of relays. Different implementations can use different algorithms to select relays. For example, one implementation might optimize for fetching events from the smallest number of relays (favoring large relays), while another might optimize for fetching from the relays that have the smallest user overlap (favoring small relays).
2. The "Inbox model": As you can imagine, this strategy flips the Outbox model on its head. It's so far just an idea (proposed by nostr:nprofile1qqs8nsk2uy2w529fs8n4tx60u7z553e4yx5dy2nxhw4elgjgawpqlas9waufq) and a draft NIP, but the idea is that when your client posts on your behalf, it will loop over your entire follow list, making a list of the relays that each of your followers uses to read events. Then the client will publish your event to all of those relays. If all clients followed this paradigm, then each of us would only have to read from a single relay. To quote nostr:nprofile1qqs8nsk2uy2w529fs8n4tx60u7z553e4yx5dy2nxhw4elgjgawpqlas9waufq's original post: "This doesn't take away from the outbox approach, and maybe should even be used together instead of as a replacement. But my point is that clients should be trying harder to deliver posts instead of just fetch posts. Because it benefits users when their posts can be seen."
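To make the Outbox idea concrete, here is a hypothetical sketch of one such relay-selection algorithm: a greedy set cover that favors large relays by always picking the relay covering the most still-unreached follows. The data and function names are made up for illustration, and real implementations weigh many more factors (latency, relay health, user overlap).

```javascript
// Greedy set cover over follows' write relays: pick the fewest relays
// needed to reach everyone. Assumes each user lists at least one relay.
function selectRelays(writeRelaysByUser) {
  const uncovered = new Set(Object.keys(writeRelaysByUser));
  const selected = [];
  while (uncovered.size > 0) {
    // Count how many still-uncovered users each relay would cover.
    const coverage = {};
    for (const user of uncovered) {
      for (const relay of writeRelaysByUser[user]) {
        coverage[relay] = (coverage[relay] || 0) + 1;
      }
    }
    // Pick the relay covering the most remaining users.
    const best = Object.keys(coverage)
      .sort((a, b) => coverage[b] - coverage[a])[0];
    selected.push(best);
    for (const user of [...uncovered]) {
      if (writeRelaysByUser[user].includes(best)) uncovered.delete(user);
    }
  }
  return selected;
}

// Made-up relay lists for three follows:
const follows = {
  alice: ["wss://big.relay", "wss://alice.relay"],
  bob:   ["wss://big.relay"],
  carol: ["wss://carol.relay"]
};
console.log(selectRelays(follows)); // [ 'wss://big.relay', 'wss://carol.relay' ]
```

Swapping the scoring function is exactly where implementations diverge: scoring by fewest relays favors the big ones, scoring by least user overlap favors the small ones.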
Why the Outbox model has broad support
To understand why implementing an Outbox model is so powerful at solving the problems laid out at the beginning of this article, you can do two quick thought experiments:
A user banned from all other relays
Imagine a user whose content is banned from all public relays for some reason. Instead, they have to run their own relay and publish their events there. With a simple "follow the major relays" model (or even with Blastr attempting to copy and paste their events to every other relay) this user is completely invisible to the network. Users would have to know about the banned user's private relay and select that relay in each client they use. What's more, if that relay ever had to change URL (likely for a user that is banned so broadly), all users would need to know what the new URL is in order to change relays and fetch events from the new location.
With Outbox, however, clients will see that their users follow this user, will then look up this user's relay list, and will know where they need to go to fetch their events. It becomes much more difficult for relays to censor or block users with an Outbox model in place.
Duplicate event fetching
We talked earlier about how many nostr clients often fetch the same data many times, just to throw that data away. Even using the simplest algorithms with an Outbox model, you can significantly reduce the amount of duplicate data you're fetching from relays. We, as users, would also need to specify many fewer relays in our relay lists, while still being quite sure our clients would be able to find all the content we want to see.
Wrapping up
Hopefully this has given you a better overall understanding of what folks are talking about when they refer to Gossip (remember: don't refer to it this way) or Outbox (outbox, yay!) and why we need to be proactive about maintaining the decentralization of nostr.
-
@ 599f67f7:21fb3ea9
2024-03-22 07:34:29What is Cashu?

Cashu is an open-source Ecash protocol for Bitcoin that offers instant, zero-fee transactions with near-perfect privacy, created by nostr:npub12rv5lskctqxxs2c8rf2zlzc7xx3qpvzs3w4etgemauy9thegr43sf485vg.

The following table compares Cashu with on-chain Bitcoin; it is taken from the Nuts and Bolts presentation by nostr:npub1cj6ndx5akfazux7f0vjl4fyx9k0ulf682p437fe03a9ndwqjm0tqj886t6 at the Bitcoin Conference Bangkok 2023.

| Cashu | Bitcoin (on-chain) |
|--|--|
| No ledger | Distributed ledger |
| Bearer token | UTXO |
| Blinded transactions | Public transactions |
| Centralized | Decentralized |
| Trusted | Trustless |
| No tx log | Tx log |

As we can see, Cashu sacrifices trustlessness and decentralization in favor of privacy. This makes a lot of sense for custodial solutions, since the user is already relying on a centralized service they have to trust. Traditional custodial solutions have horrible privacy: the custodian knows what funds a user holds and whom they transact with, which enables attacks on and censorship of users. The data records can also become a honeypot.

In contrast, Cashu mints can act as custodians that do not know who their users are, how much money they hold, or whom they interact with. The only record the mint keeps is a list of spent secrets that cannot be reused, with no way to associate them with users.

Some use cases are vouchers that are already centralized and custodied by a third party; paying for a resource such as APIs, nostr relays, or mixnets; integrated systems that replace the account-and-balance model; and swap/mixing services to unlink deposits from withdrawals.
History

Ecash was first conceived by David Chaum in 1982 as a protocol for transferring value electronically using blind signatures. Cashu is an Ecash implementation based on David Wagner's 1996 variant of Chaumian blinding, created by nostr:npub12rv5lskctqxxs2c8rf2zlzc7xx3qpvzs3w4etgemauy9thegr43sf485vg.

Terminology

To help understand how Cashu works, let's first cover some essential terminology.

Mint

The mint in Cashu is the custodian of users' funds. Its job is to mint and burn tokens and to prevent double-spending.

The Cashu mint runs a Lightning node, so it can send and receive Lightning payments, including swaps with other mints. However, you can still transact with ecash tokens even if the Lightning node is offline. Unlike Lightning, the recipient does not need to be online to receive either. The mint does not know who the user is, how much money they have, or whom they are transacting with.

However, since the mint is the custodian of users' funds, you should choose a mint that you trust and whose operator you know.

Use small amounts or redeem your tokens immediately.

Token

A cashu token is essentially a piece of data signed by the mint. The user keeps these tokens in their wallet. Since ecash tokens are just text, they can be sent over any text-based protocol, such as nostr, email, SMS, etc.

Cashu uses a system based on coins with fixed denominations.

Fiat banknote denominations are an analogy: Euro notes, for example, come in 5, 10, 20, 50, 100...

In Cashu, tokens are denominated in powers of 2. For example: 1, 2, 4, 8, 16, 32, 64, 128 sats, and so on.

The goal of using denominations is to increase the anonymity set among users and make it even harder for mints to associate transactions with user identities.
How does it work? Explained like I'm five

Alice wants to mint new Cashu tokens. So she goes to Bob's mint and says: "Hey! I want to mint new Cashu tokens."

Bob replies: "Okay, pay me and send me a blinded secret." A blinded secret means that Alice knows the secret, but Bob cannot see it.

Alice generates a secret and then blinds it so that Bob cannot know what it is.

She pays Bob and sends him her proof of payment along with her blinded secret.

Once Bob is satisfied with the payment, he signs Alice's blinded secret and returns the signed blinded secret to her. Because it is signed by Bob, she can be sure in the future that the token is valid.

Alice wants to pay Carol. She sends Carol her secret along with a key to unblind the signed blinded secret.

Carol wants to redeem her token. So she goes to the mint, Bob, and shows him the secret and the key that Alice gave her.

Bob has never seen the secret before, and he doesn't know that it was Alice who generated it, since Alice obfuscated it before sending it to him. But he can verify that he signed it earlier, so he can treat it as a valid spend of the token. He now signs a new token for Carol, or gives her back the sats, and adds the secret to a list of spent secrets. If anyone tries to redeem the same secret again, Bob will reject it, because that would be a double spend.
How does the mint (Bob) know how many sats to give Carol?

We mentioned earlier that Cashu tokens are denominated in powers of 2 (1, 2, 4, 8, 16, 32...), a bit like fiat banknotes.

Bob, the mint, has a different private key that he uses to sign each denomination. For example, he has one private key for tokens denominated 1 sat, another for 2-sat tokens, another for 8-sat tokens, and so on...

This way, when Carol comes to Bob to redeem tokens, Bob knows which private key he signed the token with earlier and, therefore, which denomination the tokens belong to.
What about change?

There is no change in Cashu. You simply tell the mint to destroy the old tokens and mint new ones for the exact same total amount.

To see this, let's say Alice has two tokens adding up to 10 sats: one 8-sat token and one 2-sat token.

She wants to send 9 sats to Carol. So she goes to the mint and asks Bob to split her 2-sat token into two 1-sat tokens. Now she can send Carol 9 sats, as an 8-sat token plus a 1-sat token, and keep the other 1-sat token for herself.
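The arithmetic behind that split is just the binary decomposition of the amount. A minimal sketch, with a made-up function name:

```javascript
// Split an amount into power-of-2 denominations, as a wallet would
// when asking the mint to swap tokens before a payment.
function denominations(amount) {
  const notes = [];
  for (let bit = 0; (1 << bit) <= amount; bit++) {
    if (amount & (1 << bit)) notes.push(1 << bit); // set bits -> tokens
  }
  return notes;
}

console.log(denominations(9));  // [ 1, 8 ]  -> a 1-sat and an 8-sat token
console.log(denominations(10)); // [ 2, 8 ]
```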
Lightning as the interconnecting layer

What happens when Alice wants to pay David, who doesn't trust Bob's mint but knows Erin and uses Erin's mint?

Alice swaps her tokens at Bob's mint, instructing Bob to melt the tokens, i.e. convert them back into Lightning sats. Bob's mint then sends a Lightning payment to Erin's mint. Erin's mint then mints new tokens for David using the Lightning sats it just received from Bob's mint.

What's next for Cashu?

Programmable ecash

We can attach spending conditions to ecash, with the mint enforcing them. This can unlock powerful smart contracts without touching the base chain or the Lightning network, enabling public payments, offline payments, and high-frequency payments.

Proof-of-liabilities scheme for mints

Calle's so-called Proof of Liabilities (PoL) scheme for Cashu is a kind of auditing system that makes it harder for custodians to attack their users by introducing the concept of epochs. In this scheme, the custodial mint regularly rotates the set of private keys it uses in each epoch and publishes auditable lists of the tokens minted and burned during the last epoch. Combined with a proof-of-reserves scheme in which reserves are held in an on-chain multisig, a dishonest custodian cannot reduce its liabilities without risking being discovered by its users. For a deeper dive, see the full article.

Try Cashu

You can try Cashu via nostr:naddr1qqxnzdesxqmnxvpexqunzvpcqyt8wumn8ghj7un9d3shjtnwdaehgu3wvfskueqzypve7elhmamff3sr5mgxxms4a0rppkmhmn7504h96pfcdkpplvl2jqcyqqq823cnmhuld and nostr:naddr1qqxnzdesxs6r2dehxvmrqdenqgs9n8m87l0hd9xxqwndqcmwzh4uvyxmwlw0637kuhg98pkcy8ana2grqsqqqa284333an. All you need is a Lightning wallet and a phone or computer.
References

- Cashu.space
- Learn about Cashu
- Cashu in the media
- Interview with calle by nostr:npub1yn3hc8jmpj963h0zw49ullrrkkefn7qxf78mj29u7v2mn3yktuasx3mzt0

See also

- Video by nostr:npub1rxysxnjkhrmqd3ey73dp9n5y5yvyzcs64acc9g0k2epcpwwyya4spvhnp8
- Cashu support
- NUTs - Notation, Usage, and Terminology: the protocol specifications for Cashu
- X-Cashu - HTTP 402: Payment Required with Cashu
- Proxnut - Protect or monetize web resources with Cashu tokens
- Proof of Liabilities for Cashu
- Fedimint - a federated ecash implementation
-
@ fa0165a0:03397073
2023-10-06 19:25:08I just tried building a browser plugin, and it was easier than I thought. Here I'll walk you through the steps of creating a minimal working example of a browser plugin, a.k.a. the "Hello World" of browser plugins.

First of all, there are two main browser platforms out there, Chromium and Mozilla. They do some things a little differently, but they are similar enough that we can build a plugin that works on both. This plugin will work in both; I'll describe the Firefox version, but the Chromium version is very similar.
What is a browser plugin?
Simply put, a browser plugin is a program that runs in the browser. It can do things like modify the content of a webpage, or add new functionality to the browser. It's a way to extend the browser with custom functionality. Common examples are ad blockers, password managers, and video downloaders.
In technical terms, they are plugins that can insert html-css-js into your browser experience.
How to build a browser plugin
Step 0: Basics
You'll need a computer, a text editor and a browser. For testing and development I personally think that the firefox developer edition is the easiest to work with. But any Chrome based browser will also do.
Create a working directory on your computer, and name it anything you like. I'll call mine `hello-world-browser-plugin`. Open the directory and create a file called `manifest.json`. This is the most important file of your plugin, and it must be named exactly right.

Step 1: manifest.json
After creating it, open `manifest.json` in your text editor and paste the following code:

```json
{
  "manifest_version": 3,
  "name": "Hello World",
  "version": "1.0",
  "description": "A simple 'Hello World' browser extension",
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["hello.js"] // The name of your script file.
      // "css": ["hello.css"] // The name of your css file.
    }
  ]
}
```

If you wonder what the `json` file format is, it's a normal text file with a special syntax such that a computer can easily read it; it's the syntax you see in the code above. Let's go through what's being said here. (If you are not interested, just skip to the next step after pasting this; we are done here.)

- `manifest_version`: The version of the manifest file format. It's currently at version 3, the latest version. It's important that you set this to 3, otherwise your plugin won't work.
- `name`: The name of your plugin. It can be anything you like.
- `version`: The version of your plugin. It can be anything you like.
- `description`: The description of your plugin. It can be anything you like.
- `content_scripts`: This is where you define what your plugin does. It's a list of scripts that will be executed when the browser loads a webpage. In this case we have one script, called `hello.js`, which we'll create in the next step.
- `matches`: A list of urls that the script will be executed on. In this case we have `<all_urls>`, which means the script will be executed on all urls. You can also specify a specific url pattern, like `https://brave.com/*`, which means the script will only be executed on urls that start with `https://brave.com/`.
- `js`: A list of javascript files that will be executed. In this case we have one file, called `hello.js`.
- `css`: Here you can add a list of css files that will be applied. In this case we have none, but you can add css files if you want to.
- `//`: Text following these two characters is a comment. Comments are ignored by the computer; you can add them anywhere you like, and they are a good way to document your code.
Step 2: hello.js
Now it's time to create another file in your project folder. This time we'll call it `hello.js`. Once created, open it in your text editor and paste the following code:

```js
console.log("Hello World!");
```

That's javascript code, and it's what will be executed when you run your plugin. It's a simple `console.log` statement, which will print the text "Hello World!" to the console. The console is a place where the browser prints out messages, and it's a good place to start when debugging your plugin.

Step 3: Load and launch your plugin
Firefox
Now it's time to load your plugin into your browser. Open your browser and go to the url `about:debugging#/runtime/this-firefox`. You should see a page that looks something like this:

Click the button that says "Load Temporary Add-on...". A file dialog will open; navigate to your project folder and select the file `manifest.json`. Your plugin should now be loaded and running.

Go to a website, any website, open the inspector, and then navigate to the console. You'll find the inspector by right-clicking anywhere within the webpage and clicking "Inspector" in the drop-down menu. When you open the console you might see some log messages from the site you visited and... you should see the text "Hello World!" printed there, from our little plugin! Congratulations!
Chrome
Open your browser and go to the url `chrome://extensions/`. Click the button that says "Load unpacked". A file dialog will open; navigate to your project folder and select the folder `hello-world-browser-plugin`. Your plugin should now be loaded and running.

Note the difference: in Firefox you select the file `manifest.json`, while in Chrome you select the folder `hello-world-browser-plugin`. Otherwise, the process is the same. So I'll repeat the same text as above (for those who skipped ahead):

Go to a website, any website, open the inspector, and then navigate to the console. You'll find the inspector by right-clicking anywhere within the webpage and clicking "Inspector" in the drop-down menu. When you open the console you might see some log messages from the site you visited and... you should see the text "Hello World!" printed there, from our little plugin! Congratulations!
As you can see, this isn't as complicated as one might think. Having performed a "Hello World!" project is a very useful and valuable first step. These setup steps are the basics for any browser plugin, and you can build on them to create more advanced plugins.
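As a next step beyond logging a fixed string, a content script can inspect the page it runs on. The sketch below is hypothetical (the function name and logic are mine, not from any extension API); dropped into `hello.js`, it would log a per-host count of the links on each page you visit:

```javascript
// Count links on the page, grouped by host. Pure function so the logic
// is testable outside the browser.
function summarizeLinks(anchors) {
  const hosts = {};
  for (const a of anchors) {
    try {
      const host = new URL(a.href).host;
      hosts[host] = (hosts[host] || 0) + 1;
    } catch (e) { /* skip anchors without a valid absolute href */ }
  }
  return hosts;
}

// In the browser, the content script runs with access to the page's DOM:
if (typeof document !== "undefined") {
  console.log("Links by host:", summarizeLinks([...document.querySelectorAll("a")]));
}
```

Because content scripts see the page's DOM, this same pattern (query the DOM, compute, log or modify) is the skeleton of most real extensions.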
-
@ 044b20cf:89e58620
2024-04-04 13:07:52Only 57 companies produced 80 percent of global carbon dioxide
Last year was the hottest on record and the Earth is headed towards a global warming of 2.7 degrees, yet top fossil fuel and cement producers show a disregard for climate change (https://www.engadget.com/new-report-says-the-world-is-on-a-disastrous-trajectory-due-to-climate-change-132508138.html) and actively make things worse. A new Carbon Majors Database report (https://carbonmajors.org/briefing/The-Carbon-Majors-Database-26913) found that just 57 companies were responsible for 80 percent of the global carbon dioxide emissions between 2016 and 2022. Thirty-eight percent of total emissions during this period came from nation-states, 37 percent from state-owned entities and 25 percent from investor-owned companies.
Nearly 200 parties adopted the 2015 Paris Agreement (https://www.engadget.com/us-rejoin-paris-agreement-official-190909769.html), committing to reduce greenhouse gas emissions. However, 58 of the 100 state- and investor-owned companies in the Carbon Majors Database have increased their production in the years since (The Climate Accountability Institute launched Carbon Majors in 2013 to hold fossil fuel producers (https://www.engadget.com/we-can-make-the-steel-of-tomorrow-without-the-fossil-fuels-of-yesteryear-150050946.html) accountable and is hosted by InfluenceMap). This number represents producers worldwide, including 87 percent of those assessed in Asia, 57 percent in Europe and 43 percent in North America.
It's not a clear case of things slowly turning around, either. The International Energy Agency (https://www.iea.org/news/global-coal-demand-set-to-remain-at-record-levels-in-2023) found coal consumption increased by eight percent over the seven years to 8.3 billion tons — a record high. The report names state-owned Coal India as one of the top three carbon dioxide producers. Russia's state-owned energy company Gazprom and state-owned oil firm Saudi Aramco rounded out the trio of worst offenders.
Exxon Mobil (https://www.engadget.com/2017-01-12-exxon-must-turn-over-decades-of-climate-change-research.html) topped the list of United States companies, contributing 1.4 percent of global carbon dioxide emissions. "These companies have made billions of dollars in profits while denying the problem and delaying and obstructing climate policy. They are spending millions on advertising campaigns about being part of a sustainable solution, all the while continuing to invest in more fossil fuel extraction," Tzeporah Berman, International Program Director at Stand.earth (https://stand.earth/) and Chair at Fossil Fuel Non-Proliferation Treaty (https://fossilfueltreaty.org/), said in a statement. "These findings emphasize that, more than ever, we need our governments to stand up to these companies, and we need new international cooperation through a Fossil Fuel Treaty to end the expansion of fossil fuels and ensure a truly just transition."
This article originally appeared on Engadget at https://www.engadget.com/only-57-companies-produced-80-percent-of-global-carbon-dioxide-130752291.html?src=rss
https://www.engadget.com/only-57-companies-produced-80-percent-of-global-carbon-dioxide-130752291.html?src=rss
-
@ 044b20cf:89e58620
2024-04-04 13:00:52Waymo's self-driving vehicles are now doing Uber Eats deliveries in Phoenix
If you're in the Metro Phoenix area — and don't mind walking out of your place to get your food delivery — your Uber Eats order may just get delivered by one of Waymo's self-driving cars. The companies have officially launched (https://waymo.com/blog/2024/04/phoenix-residents-can-now-experience-uber-eats-delivery-with-the-waymo/) the next part of their ongoing multi-year partnership, which also includes offering robotaxi rides to the ride-hailing service's customers in the area. When you place an order on Uber Eats, you'll get a prompt that says "autonomous vehicles may deliver your order."
You will be notified if a Waymo car does pick up your order, and you'll have to take your phone with you when you meet it so that you can open its trunk and get your food. Don't worry, you can opt out during the checkout process in case you're not feeling up to going out and getting your order and would rather have someone deliver food to your door.
At the moment, Waymo deliveries are only available in Phoenix, Chandler, Tempe and Mesa, and only for select merchants, including local joints and chains like Princess Pita and BoSa Donuts. An Uber spokesperson told CNBC (https://shopping.yahoo.com/rdlw?merchantId=34e37b9c-8975-48da-aa39-df8bcd5badc3&siteId=us-engadget&pageId=1p-autolink&featureId=text-link&merchantName=CNBC&custData=eyJzb3VyY2VOYW1lIjoiV2ViLURlc2t0b3AtVmVyaXpvbiIsImxhbmRpbmdVcmwiOiJodHRwczovL3d3dy5jbmJjLmNvbS8yMDI0LzA0LzAzL3dheW1vLXNlbGYtZHJpdmluZy1jYXJzLWFyZS1kZWxpdmVyaW5nLXViZXItZWF0cy1vcmRlcnMtZm9yLWZpcnN0LXRpbWUuaHRtbCIsImNvbnRlbnRVdWlkIjoiMDJiY2VhNmEtMTJjYi00Mjg4LTliOTktZTZjNzFhYTkyY2MxIn0&signature=AQAAAVrrkvpxcdldT8kod1i3vSJ_fUtRgZ0E2y4ZtziqNRX6&gcReferrer=https%3A%2F%2Fwww.cnbc.com%2F2024%2F04%2F03%2Fwaymo-self-driving-cars-are-delivering-uber-eats-orders-for-first-time.html), though, that the companies are looking to expand their service area and are looking to add more merchants to their list. They also told the news organization that Waymo will be using its Jaguar I-PACE electric vehicles, which it's been testing in Arizona for years (https://www.engadget.com/waymo-indepth-details-selfdriving-activities-phoenix-163000481.html?_fsig=iTiGvHz6DFja1.Z50E.iAA--%7EA), for deliveries. It doesn't sound like Waymo deliveries will be more expensive either, since the spokesperson said you'll only have to pay standard fares and will not be charged for tips.
The companies first announced that they were teaming up (https://www.engadget.com/uber-will-offer-waymo-robotaxi-rides-and-deliveries-in-phoenix-starting-later-this-year-163435980.html) to offer robotaxi rides and deliveries in Phoenix last year, following Waymo's service area expansion in the region. Robotaxi rides became available (https://www.engadget.com/uber-starts-offering-fully-autonomous-waymo-rides-in-phoenix-153055331.html?_fsig=qyWXX8zdNJwjWGuwae7HNQ--%7EA) in the metropolitan area by October, and you'll get the option to hail one when you request an UberX, Uber Green, Uber Comfort or Uber Comfort Electric.
This article originally appeared on Engadget at https://www.engadget.com/waymos-self-driving-vehicles-are-now-doing-uber-eats-deliveries-in-phoenix-130052864.html?src=rss
-
@ 42342239:1d80db24
2024-03-21 09:49:01It has become increasingly evident that our financial system has started to undermine our constitutionally guaranteed freedoms and rights. Payment giants like PayPal, Mastercard, and Visa sometimes block the ability to donate money. Individuals, companies, and associations lose bank accounts — or struggle to open new ones. In bank offices, people nowadays risk being subjected to something resembling a cross-examination. The regulations are becoming so cumbersome that their mere presence risks tarnishing the banks' reputation.
The rules are so complex that even within the same bank, different compliance officers can provide different answers to the same question! There are even departments where some of the compliance officers are reluctant to provide written responses and prefer to answer questions over an unrecorded phone call. Sweden's corporate lawyer of the year recently complained about troublesome bureaucracy, and that's from the perspective of a very large corporation. We may not even fathom how smaller businesses — the keys to a nation's prosperity — experience it.
Where do all these rules come from?
Where do all these rules come from, and how well do they work? Today's regulations on money laundering (AML) and customer due diligence (KYC - know your customer) primarily originate from a G7 meeting in the summer of 1989. (The G7 comprises the seven advanced economies: the USA, Canada, the UK, Germany, France, Italy, and Japan, along with the EU.) During that meeting, the intergovernmental organization FATF (Financial Action Task Force) was established with the aim of combating organized crime, especially drug trafficking. Since then, its mandate has expanded to include fighting money laundering, terrorist financing, and the financing of the proliferation of weapons of mass destruction(!). One might envisage the rules soon being aimed against proliferation of GPUs (Graphics Processing Units used for AI/ML). FATF, dominated by the USA, provides frameworks and recommendations for countries to follow. Despite its influence, the organization often goes unnoticed. Had you heard of it?
FATF offered countries "a deal they couldn't refuse"
On the advice of the USA and G7 countries, the organization decided to begin grading countries in "blacklists" and "grey lists" in 2000, naming countries that did not comply with its recommendations. The purpose was to apply "pressure" to these countries if they wanted to "retain their position in the global economy." The countries were offered a deal they couldn't refuse, and the number of member countries rapidly increased. Threatening with financial sanctions in this manner has even been referred to as "extraterritorial bullying." Some at the time even argued that the process violated international law.
If your local Financial Supervisory Authority (FSA) were to fail in enforcing compliance with FATF's many checklists among financial institutions, the risk of your country and its banks being barred from the US-dominated financial markets would loom large. This could have disastrous consequences.
A cost-benefit analysis of AML and KYC regulations
Economists use cost-benefit analysis to determine whether an action or a policy is successful. Let's see what such an analysis reveals.
What are the benefits (or revenues) after almost 35 years of more and more rules and regulations? The United Nations Office on Drugs and Crime estimated that only 0.2% of criminal proceeds are confiscated. Other estimates suggest a success rate for such anti-money-laundering rules of just 0.07% — a rounding error for organized crime. Europol expects to recover 1.2 billion euros annually, equivalent to about 1% of the revenue generated in the European drug market (110 billion euros). However, the percentage may be considerably lower, as the size of the drug market is likely underestimated. Moreover, there are many more "criminal industries" than just the drug trade; human trafficking is just one example. In other words, criminal organizations retain at least 99%, perhaps even 99.93%, of their profits, despite all the cumbersome rules regarding money laundering and customer due diligence.
What constitutes the total cost of this bureaucratic activity, costs that eventually burden taxpayers and households via higher fees? Within Europe, private financial firms are estimated to spend approximately 144 billion euros on compliance. According to some estimates, the global cost is twice as high, perhaps even eight times as much.
For Europe, the cost may thus be about 120 times (144/1.2) higher than the revenues from these measures. These "compliance costs" bizarrely exceed the total profits from the drug market, as one researcher put it. Even though the calculations are uncertain, it is challenging — perhaps impossible — to legitimize these regulations from a cost-benefit perspective.
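The arithmetic behind that ratio is simple enough to check directly; here is a sketch using only the figures quoted in this article (all amounts in euros, per year):

```python
# Rough cost-benefit check of EU AML/KYC regulation,
# using the figures quoted in this article.
eu_compliance_cost = 144e9  # private financial firms' compliance spend in Europe
europol_recovery = 1.2e9    # funds Europol expects to recover annually
drug_market_size = 110e9    # estimated European drug market revenue

recovered_share = europol_recovery / drug_market_size
cost_benefit_ratio = eu_compliance_cost / europol_recovery

print(f"share of drug revenue recovered: {recovered_share:.1%}")    # about 1.1%
print(f"euros spent per euro recovered: {cost_benefit_ratio:.0f}")  # 120
```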
But it doesn't end there, unfortunately. The cost of maintaining this compliance circus, with around 80 international organizations, thousands of authorities, far more employees, and all this across hundreds of countries, remains a mystery. But it's unlikely to be cheap.
The purpose of a system is what it does
In Economic Possibilities for our Grandchildren (1930), John Maynard Keynes foresaw that thanks to technological development, we could have had a 15-hour workweek by now. This has clearly not happened. Perhaps jobs have been created that are entirely meaningless? Anthropologist David Graeber argued precisely this in Bullshit Jobs in 2018. In that case, a significant number of people spend their entire working lives performing tasks they suspect deep down don't need to be done.
"The purpose of a system is what it does" is a heuristic coined by Stafford Beer. He observed that there is "no point in claiming that the purpose of a system is to do what it constantly fails to do." What the current regulatory regime fails to do is combat criminal organizations. Nor does it seem to prevent banks from laundering money as never before, or from providing banking services to sex-offending traffickers.
What the current regulatory regime does do is: i) create armies of meaningless jobs, ii) thereby undermine mental health as well as economic prosperity, and iii) erode our freedom and rights.
What does this say about the purpose of the system?
-
@ 4c86f5a2:935c3564
2024-02-25 02:43:52Success in life comes from following important principles and values. This article summarizes 10 ways to help you succeed in every area of your life, with an example for each point for better understanding.
-
Set goals and build motivation: Setting goals is one of the first steps toward success. Set big, ambitious goals and rally your motivation to achieve them. For example, you set a goal to finish a new book within six months and stay motivated by building an organized, disciplined work process.
-
Develop skills and knowledge: Developing your skills and knowledge is an investment in yourself that increases your chances of success. For example, you learn foreign languages to open up opportunities to work abroad and to expand your opportunities for building business relationships.
-
Make plans and manage time: Planning and time management help you stay in control of both work and personal life. For example, you use time-management tools to plan your work and personal time so they stay in balance.
-
Balance life and work: Balance between life and work brings success and happiness in every area of your life. For example, you set aside time for family and for activities that refresh you, such as exercise and rest.
-
Balance your relationships: Good relationships play an important role in success, and building relationships with love and understanding plays an important role in happiness. For example, you spend time with the people you love and who support you, building love and good relationships.
-
Reward yourself and build success: Receiving rewards and achieving success can make you feel good about your life. For example, you set goals, and when you achieve them, you reward yourself with a trip or by buying yourself a gift.
-
Have the right tools and seek happiness: Having suitable equipment and seeking out happiness help you in both work and personal life. For example, you love playing music, so you invest in an instrument and spend time playing it for your own enjoyment.
-
Learn from failure: Failure is an essential part of learning and growth; it is an opportunity to become stronger and more effective. For example, you learn from a business failure and improve your business plan.
-
Promote health and growth: Taking care of your body and mind gives you the energy and balance needed for success. For example, you exercise and look after your state of mind to stay healthy.
-
Build business relationships: Building business relationships and networks gives you opportunities for career success. For example, you join business groups and build networks of relationships to increase your opportunities in business and your career.
Summary: Success in life comes from setting goals, developing skills, planning, maintaining balance, building good relationships, and bringing happiness into your life. In addition, rewarding yourself, learning from failure, taking care of your health, and building business relationships are key parts of creating success in life.
-
-
@ cd408a69:797e8162
2023-09-14 13:08:47Nostrasia Hackathon
Welcome FOSS hackers and creatives who care deeply about freedom technology!
自由のテクノロジーに深い関わりたい FOSS ハッカーとクリエイター、あつまれ!
We're joining forces with Bolt.Fun for a month-long hackathon bridging Bitcoin and NOSTR (Notes and Other Stuff Transmitted by Relays), culminating with a special three day sprint and live event in Tokyo at the Nostrasia Conference.
私たちは、Bolt.Fun と協力して、ビットコインと Nostrを橋渡しする ハッカソンを、1 か月間かけて開催します。 クライマックスは東京で開催されるNostrasia Tokyo が舞台。3日間の特別なスプリントとライブ イベントで最高潮に達します。
Be a Part of the Early Days of Nostr
Nostr の創成期を共に作り上げましょう
Help build the future of Nostr! Like the early days of Bitcoin or of the Internet, Nostr is nascent open technology shaping new types of social experiences to connect people across the globe. It carries a foundation of principles similar to Bitcoin, like decentralization, simplicity, and censorship-resistance.
Nostr の未来を築くのに協力してください!ビットコインやインターネットの初期と同じように、Nostr は世界中の人々をつなぐ新しいソーシャル体験を形成するオープン テクノロジーの初期段階にあります。 Nostr には「分散化」「シンプルさ」「検閲耐性」など、ビットコインと同様の原則が組み込まれています。
Orange-Pill people through the Purple-Nostr-Pill
オレンジピル(ビットコイン)から紫の Nostr のピルへ
Bitcoin and Nostr communities are in synergy. What started as a social protocol is quickly transforming into a space for exploration on ways to support content creators through bitcoin lightning micro payments, often referred to as zaps. Bitcoin integration to the nostr protocol strengthens Bitcoin's use case as a currency of exchange. It carves new paths to a culture of value4value.
ビットコインと Nostr のコミュニティは相乗効果を発揮します。 Nostr はソーシャルプロトコルとしてはじまりましたが、今では Zap (ビットコイン の ライトニング マイクロペイメント)を通じてコンテンツ クリエイターをサポートする方法を模索する空間へと急速に進化しています。 Nostr プロトコルにビットコインが組み合わさることで、交換通貨としてのビットコインの働きが強化されます。 それは、"value4value" の文化への新しい道を切り開くでしょう。
Help People HODL their Keys (Social+Monetary)
人々が自分のキーを HODL (長期保有)できるように支援します (ソーシャル + 金銭的に)
Nostr exists outside of the rule of platforms and those who seek to control them. HODLing your nostr keys is hodling your identity and social graph, outside of KYC. By helping develop and educate on NOSTR, you are helping people escape walled gardens & gain control and choice over their identities & their money. The Internet, over time, has become centralized, help Nostr stay decentralized by supporting the growth of an ecosystem of apps, websites, microapps, relay services...
Nostr はプラットフォームやそれを制御しようとする人々の支配の外にあります。 Nostr keys を持つことは、KYC (本人確認)以外であなたのアイデンティティとソーシャル グラフを保持することになります。 Nostr の開発や教育に貢献することは、人々が束縛から解放され、アイデンティティやお金に対する主導権を得られるよう支援することにもなるのです。 時間の経過とともに集中化されてきたインターネットですが、Nostr のアプリ/Web サイト/マイクロアプリ/リレー サービスのエコシステムの成長をサポートすることで、Nostr の分散化を維持できるようになります。
Permissionless Building
許可を必要としない構築
Opportunities abound in an environment ripe for innovation:
- Develop & design new nostr white label clients, middleware, microapps...
- Help improve existing Nostr FOSS projects
- Contribute directly to protocol development through NIPs (Nostr Implementation Possibilities)
- Encourage nostr and bitcoin adoption through art, education, and any way you like
イノベーションの機が熟した環境には、チャンスが溢れています。
- Nostr の真新しい クライアント、ミドルウェア、マイクロアプリを開発したりデザインする
- 既存の Nostr FOSS プロジェクトの改善に寄与する
- NIP (Nostr Implementation Possibilities) を通じたプロトコル開発に直接貢献する
- 芸術、教育、その他好きな方法を通じて Nostr とビットコインの普及を推進する
Hack in a Supportive Environment
サポートされた環境でハックしよう
We have a growing list of knowledgeable people with skin-in-the-game to mentor and support your journey. Once your project matures, you may also have 1-on-1 guidance to help you reach your vision and discover ways of growing and funding it.
私たちは、あなたの道のりを指導しサポートしてくれる知識豊富なメンターを増やしています。 プロジェクトが成熟した暁には、1対1のガイダンスを受けられる可能性もあります。それは、あなたのビジョンを達成し、成長させて資金を得る方法を発見するのに役立つでしょう。
Nostr has a blossoming community open to innovation. It is also a great testing ground, as people in the community are open to giving and receiving feedback. It is an environment encouraging conversation on feature ideas as well as possible solutions to social media issues and product bugs.
Nostr には、イノベーションに対してオープンで、発展しているコミュニティがあります。 コミュニティの人々はフィードバックの授受にオープンであるため、優れた実験の場にもなります。 機能のアイデアや、ソーシャル メディアの課題や製品のバグの解決策についての会話を促進する環境です。
NostrHack Tracks
You have 3 options
NostrHack Tracks には3つのオプションがあります
Track 1: Builder's Track - Reimagine Nostr
トラック1 : ビルダーのトラック - Nostr を再考しよう
If you can think of it, it can be engineered! Nostr encourages permissionless building while staying mindful of interoperability and network support. Help BUIDL, design, and improve an area you are passionate about. Reimagine and BUIDL features, tools, clients... Help solve issues and create new experiences in social media.
思いつくことができれば、エンジニアリングできる! Nostr は、相互運用性とネットワーク サポートに留意しながら、パーミッションレスな構築 (BUIDL) を奨励しています。 あなたが情熱を注いでいる分野での構築、設計、改善に貢献してください。 機能やツール、クライアントを再考して構築 (BUIDL) し、ソーシャル メディアでの課題を解決して新しい体験を生み出すのに協力してください。
Possibilities...
これを踏まえて…
BUILD on the NOSTR Protocol
The Nostr Implementation Possibilities (NIPs) are optional protocol features anyone can add to their clients. Improve and strengthen existing NIPs or build on new ones. NOSTR is a balance of simplicity, interoperability, backward-compatibility and innovation.
NIPs は、誰でもクライアントに追加できるオプションのプロトコル機能です。 既存の NIP を改善および強化するか、新しい NIP を構築してください。 Nostr は、シンプルさ、相互運用性、下位互換性、革新性のバランスを保っています。
Focus on UX
Nostr is made up of a wide range of clients and tools. To make NOSTR scalable, you can help improve its user experience and education.
Nostr は幅広いクライアントとツールで形成されています。 Nostr をスケーラブルにするために、UX と教育の改善に協力してください。
Help shape a Web of Trust
Nostr cares about removing the KYC tied to our identities. To use Nostr you do not need to give up your phone number, email, financial information, or any metadata tied to your real world identity to be later harvested and sold. You are not the product. What are ways that trust can be earned to prevent impersonation, spam...?
Nostr は、私たちの身元に関連付けられた KYC (個人情報)を取り除けるようにしています。 Nostr を使用しても、電話番号、電子メール、財務情報、または現実世界のアイデンティティに関連付けられたメタデータを、収集されたり販売されたりして手放すことになる心配がありません。 あなたは商品ではないのです。 その中で、なりすましやスパムを防ぐために、信頼を獲得するにはどうすればよいでしょうか...?
NIP05/Nostr address
One of the solutions to build a web of trust used today, is to tie your nostr hex public key to a domain. Although this makes it harder for bots to have nostr addresses, it is not a perfect solution. Domains are centralized through DNS. To help people who do not have their own domains or cannot easily add a NIP05 on their sites, your nostr address can be hosted as a service along with other people's. At this moment, you can highlight just one nostr address per profile. In the future, could it include your website, where you work, and other identifiers... What are other possible solutions?
現在使用されている信頼獲得のための解決策の 1 つは、Nostr の HEX 公開鍵をドメインに結び付けることです。 これにより、完璧な解決策ではないものの、bot などが Nostr アドレスを持つことが難しくなります。 ドメインは DNS を通じて一元化されています。 独自のドメインを持っていない人や、自分では NIP-05 を簡単に追加できない人のために、あなたの Nostr アドレスをサービスとして他の人のものと一緒にホストすることも可能です。 現時点では、プロフィールごとに1つの Nostr アドレスのみを強調表示できますが、将来的には、Web サイト、勤務先、その他の識別情報も含められるようになる可能性があります...この他にも考えられる解決策は何かありますか?
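As a minimal sketch of what such a lookup amounts to: a client fetches `https://<domain>/.well-known/nostr.json?name=<name>` and compares the listed hex key against the profile's public key. The name and key below are made up for illustration:

```python
import json

def nip05_matches(nostr_json: str, name: str, pubkey_hex: str) -> bool:
    """Check whether a .well-known/nostr.json document maps `name` to `pubkey_hex`."""
    names = json.loads(nostr_json).get("names", {})
    return names.get(name, "").lower() == pubkey_hex.lower()

fake_key = "ab" * 32  # 64 hex characters; a made-up public key for illustration

# A made-up document a domain might serve for the address alice@example.com
doc = json.dumps({"names": {"alice": fake_key}})

print(nip05_matches(doc, "alice", fake_key))    # True
print(nip05_matches(doc, "mallory", fake_key))  # False
```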
On Decentralization & Discoverability
分散化と発見可能性について
Your identity in NOSTR is tied to your keys, but your information needs to be shared and found across a network of relays. To promote decentralization and censorship resistance, relays need to be easy to setup, lightweight, and sustainable. Relays get to choose what information passes through them, so they are also a form of spam prevention that could potentially also become censoring, so both the relay-runners and the individuals connecting to relays need to have choice and policies cannot be homogenous one-size-fits-all. What are possible solutions to make setting up relays easier, to make running a relay sustainable, to have new ways of discovering information...
Nostr での ID はキーに関連付けられていますが、その情報はリレーのネットワーク全体で共有され、検索できる必要があります。 分散化と検閲耐性を促進するために、リレーはセットアップが簡単で、軽量で、持続可能である必要があります。 リレーは通過する情報を選択できるため、スパム防止の一形態である一方で検閲にもなり得ます。そのため、リレー管理者とリレーに接続する個人の両方に選択権が必要で、ポリシーが全てに対し画一的になってはいけません。 リレーのセットアップを容易にし、リレーの実行を持続可能にし、情報を発見する新しい方法を実現するには、どのような解決策が考えられるでしょうか...?
Buidl tools to connect to Git, as a decentralized alternative to GitHub
GitHub の分散型代替手段として、Git に接続するための BUIDL ツール
Media Uploads
To keep relays lightweight, images are hosted by uploading them to the web, and keeping only the links to them in the data within individual nostr notes. This has led to developing image uploading services specific to nostr, but they carry the risk of centralization or censorship. Some product makers and relay runners are looking into direct uploads to Cloud services. What are possible solutions to the handling of media (images, videos, music...)?
リレーを軽量に保つために、画像は Web にアップロードしてホストされ、各投稿のデータには画像へのリンクのみが保持されます。そんな中で、Nostr に特化した画像アップロード サービスが開発されましたが、集中化や検閲のリスクが伴います。 一部のプロダクト開発者やリレー管理者は、クラウド サービスへの直接アップロードを検討しています。 メディア(画像、ビデオ、音楽など)の処理について、考えられるよい解決策はありますか?
Social Signals
People have the choice to block and mute others, this gives signals to relays, which can reenact policies based on those and other signals. Relays need to be able to differentiate real signals from those wanting to game the system for censorship. Relay runners need to have the capacity to make decisions on what to allow or reject.
ユーザーは他のユーザーをブロックしたりミュートできます。ユーザーの設定内容はリレーに送信され、リレーはその設定に基づいてそれぞれのポリシーを再現できます。 リレーは、実際の設定と、検閲のためにシステムを操作しようとする設定を区別する必要があります。 リレーの管理者には、何を許可し、何を拒否するかを決定する能力が必要です。
Track 2 : Marketplaces & Value4Value
Make freedom of exchange fun again! Nostr extends beyond social. It is integrating ways for content creators to be supported through lightning micropayments, called zaps, for their creations. The possibilities of building niche value4value economies through the exchange of products, services, and ideas are growing through various avenues: marketplaces, fundraising, blogs, music, streaming... Devise new, robust ways of integrating NOSTR and Bitcoin for monetary and skill exchange. Seek to explore distributed, digital reciprocity and free trade. Encourage a culture of value4value.
自由な交流を再び楽しく! Nostr はソーシャルを超えて広がります。 Zap と呼ばれるマイクロペイメントを通じて、コンテンツクリエイターの作品をサポートできる方法を兼ね備えています。 製品、サービス、アイデアの交換を通じてニッチな価値と価値(value4value)の経済を構築する可能性は、さまざまな手段を通じて拡大しています : マーケットプレイス、資金調達、ブログ、音楽、ストリーミングなど... Nostr とビットコインを組み合わせて、金銭とスキルの交換を行う新しい堅牢な方法を考案します。分散型、デジタル相互主義、自由貿易を探究してください。 価値対価値(value4value)の文化を促進してください。
A value4value culture is not only about the individuals using NOSTR products and services, but also about the developers and creatives building sustainable projects. What are ways of sustaining NOSTR through Bitcoin that do NOT make the individual user the product and that are privacy mindful?
value4value の文化は、Nostr の製品やサービスを使用する個人だけでなく、持続可能なプロジェクトを構築する開発者やクリエイターにも関係します。 個人ユーザーを製品にすることなくプライバシーに配慮しながら、ビットコインを通じて Nostr を持続させる方法は何ですか?
Possibilities...
On Social and Economic Signals
Zaps
Many nostr clients have implemented lightning zap payments. Imagine that instead of liking a nostr note, you can zap it, and its author receives bits/sats in appreciation for their content. It is a strong signal to creators of the kind of content their audiences are looking for. The Apple App Store has recently banned the zapping of specific notes, per Apple's policy prohibiting the sale of digital content except when paid for through its own services. Fortunately, Nostr exists in many decentralized forms outside of app stores, and the community is creating new and innovative ways to send bitcoin and free speech from relay to relay, circumventing barriers as they appear. What are solutions that can make NOSTR and zaps ubiquitous?
多くの Nostr クライアントが Zap を導入しています。Nostr での投稿を「いいね」する代わりに Zap すると、その内容に対する感謝としてビットコイン(サトシ)を受け取ることができるイメージです。 これは、フォロワーがどのような種類のコンテンツを求めているかをクリエイターに伝える強力なシグナルになります。 Apple App Storeは最近、サービスを通じて支払われる場合を除きデジタルコンテンツの販売を禁止するというAppleのポリシーに従い、特定の投稿への Zap を禁止しました。 幸い、Nostr は多くが App Store の外で分散型で存在しているため、コミュニティは障壁を回避しながら、ビットコインと言論の自由をリレーからリレーに送信するための革新的な方法を生み出しています。 Nostr と Zaps をユビキタスにするソリューションとは何ですか?
Track 3 : Empower Communities
Give choice and control back to the individual! Create paths forward to help onboard millions of new users and restore free and uncensored speech to the world
選択とコントロールを個人に返そう。 何百万人もの新規ユーザーの参加を支援し、自由で検閲されていない言論を世界に取り戻すための道筋を作り出してください。
Possibilities...
On Security, Privacy & Self-Custody
Private Communication
Direct Messages on NOSTR are encrypted, but metadata is leaked. If someone's key is compromised, whoever has access to that account can read those messages. Integrating secure and reliable encrypted communication protocols, like the SimpleX messaging protocol, is especially desired by the community, as many in Nostr are aware of the risks of surveillance, authoritarianism, government and Big Tech overreach... Private communication is important for individual rights, in particular for activists and journalists across the globe.
Nostr のダイレクト メッセージは暗号化されていますが、メタデータは漏洩します。 誰かのキーが侵害された場合、そのアカウントにアクセスできる人は誰でもそれらのメッセージを読むことができてしまうのです。Nostr の多くの人が監視、権威主義、政府とビッグテックの行き過ぎのリスクを認識しているため、 SimpleX メッセージング プロトコルのような安全で信頼性の高い暗号化通信プロトコルの統合が、コミュニティによって特に望まれています...プライベート通信は個人の権利にとって重要です 、特に世界中の活動家やジャーナリストにとって。
Zaps & Privacy
Current lightning zap payments tend to be custodial and not mindful of privacy, though they are helping onboard people unto lightning. What are ways that people can grow into non-custodial solutions? A wider adoption of Bolt-12 would improve zap payment privacy, what are ways to encourage that development? What are other possible solutions?
現在のザップの支払いは、ライトニングペイメントに出会うのに役立っているものの、カストディアル(管理的)でプライバシーに配慮していない傾向にあります。 ノンカストディアル(非監護的)なものになるよう解決する方法はありませんか? Bolt-12 が広く採用されれば、Zap 支払いのプライバシーが向上しますが、その開発を促進するにはどのような方法がありますか?また、他に考えられる解決策はありませんか?
Closing Live 3-Day Sprint at the Nostrasia Conference
Nostrasia Tokyo 3日間のライブスプリントによる締めくくり
Tokyo | Nov 1-3 (you can also join virtually)
If you heard of the Nostrica unconference, which happened in Costa Rica in March of this year, Nostrasia is the second Nostr World conference, bringing NOSTR and Bitcoin awareness to the heart of Asia, where freedom communication and freedom money are direly needed.
今年の3月にコスタリカで開催された Nostrica のことをご存知の方もいると思いますが、ノストラジアは2回目の Nostr 世界カンファレンスです。自由なコミュニケーションと自由なお金が切実に必要とされているアジアの中心にNostr とビットコインの認識をもたらします。
Tokyo and Hong Kong are beautiful cultural hubs with budding Nostr and thriving Bitcoin communities of their own. We are eager to spread NOSTR education and development in those regions and beyond. We will close this Nostrasia month-long hackathon with a 3-day sprint at the Nostrasia Conference in Tokyo.
東京と香港は、新進気鋭のNostrと繁栄する独自のビットコインコミュニティを持つ美しい文化の中心地です。 私たちは、Nostr の教育と開発をこれらの地域やその他の地域に広めることに熱心に取り組んでいます。 この Nostrasia の 1 か月にわたるハッカソンは、Nostrasia Tokyo での 3 日間のスプリントをもって終了します。
We will have a dedicated workshop area and food for you to hack away on the final details of your projects. On the last day of the conference, the most robust projects will get time on stage to present. We will close the Nostrasia Hackathon with a special presentation.
プロジェクトの最終的な詳細を検討するための専用のワークショップ エリアと食事をご用意します。 カンファレンスの最終日には、最も強力なプロジェクトがステージ上でプレゼンテーションを行う時間が与えられます。 Nostrasia Hackathon は特別なプレゼンテーションで締めくくられます。
We cannot wait to see what new and exciting projects are proposed for the Nostrasia Hackathon. We’re eager to welcome devs and non-devs alike to contribute to this space and help #grownostr in any small way to help onboard Asia, and the rest of the world to this robust open communication protocol and decentralized freedom of speech tool.
Nostrasia Hackathon ではどんな斬新でエキサイティングなプロジェクトが提案されるのか楽しみです。 私たちは、開発者も非開発者も同様にこの分野に貢献し、アジアやその他の世界をこの堅牢なオープン通信プロトコルと分散型言論の自由ツールに参加させるために、どんな小さな方法でも #grownostr を支援してくれることを心から歓迎しています。
-
@ 044b20cf:89e58620
2024-04-04 12:00:28iRobot says its new robot vacuum and mop outperforms 600 Series Roombas for $275
Robot vacuums (https://www.engadget.com/best-robot-vacuums-130010426.html) are handy little devices that can help folks save a ton of time and energy. However, some of the more well-known options are often a bit pricey, especially when a mopping function comes into the mix. As it happens, iRobot (https://www.engadget.com/tag/irobot/) has revealed a relatively budget-friendly 2-in-1 robot vacuum and mop. It says the $275 Roomba (https://www.engadget.com/tag/roomba/) Combo Essential actually outperforms the Roomba 600 Series thanks to 20 times more suction power, and the addition of a mop and smart navigation.
According to iRobot, this model offers 25 percent better performance at picking up dirt from hard floors than the Roomba 600 Series. It's also said to have a longer battery life at up to 120 minutes, the ability to clean in neat rows, customizable suction and liquid settings, Clean Map reports and intelligent settings such as suggested cleaning schedules.
Although you can set up cleanings in advance, you can start one at any time with an Alexa, Siri or Google Assistant voice command. Alternatively, you can press the Clean button or tap a button in the iRobot Home app to put the device to work right away.
iRobot
The device has a four-stage cleaning system that includes adjustable suction and liquid settings, a v-shaped multi-surface brush, an edge-sweeping brush and a pump-fed microfiber mop pad. For vacuum-only operation (i.e., if you're looking to remove dirt from a rug), you'll need to remove the mop pad first. That adds a little bit of friction to using this model. There's another trade-off in that this isn't a self-emptying Roomba — you'll need to empty out the dirt storage bin manually more often.
Still, this seems like a solid Roomba at an eye-catching price. It's available in Europe, the Middle East and Africa now, and iRobot will start selling it in the US on April 7 and Canada on April 12. The Roomba Combo Essential will reach Asia Pacific markets later this month. Folks in the US can trade in (https://shopping.yahoo.com/rdlw?merchantId=8563d83f-1364-4ebb-9231-315c2b426cbd&siteId=us-engadget&pageId=1p-autolink&featureId=text-link&merchantName=iRobot&custData=eyJzb3VyY2VOYW1lIjoiV2ViLURlc2t0b3AtVmVyaXpvbiIsImxhbmRpbmdVcmwiOiJodHRwczovL3d3dy5pcm9ib3QuY29tL2VuX1VTL3RyYWRlLWluLmh0bWwiLCJjb250ZW50VXVpZCI6IjBhOTU0ZGJkLTU3ZGItNDMzMi05NTc3LTYxMzJhMWUzNzk2NyJ9&signature=AQAAATs_F4NgKKrdSJWCG2AsxYGGkQfaZci5cY8ZVFC7N5qM&gcReferrer=https%3A%2F%2Fwww.irobot.com%2Fen_US%2Ftrade-in.html) a Roomba 600 Series for a $50 credit toward a Roomba Combo Essential
In addition, iRobot is rolling out a model called the Roomba Vac Essential in North America. It has the same smart functions and other similar features as the Combo Essential, but there's no mop. That robot vacuum will cost $250 and it'll land in the US on April 7 and Canada on April 12.
Meanwhile, iRobot says it has reached a new milestone. Since debuting the Roomba in 2002, the company has sold more than 50 million robots.
This article originally appeared on Engadget at https://www.engadget.com/irobot-says-its-new-robot-vacuum-and-mop-outperforms-600-series-roombas-for-275-120028786.html?src=rss
-
@ 8fb140b4:f948000c
2023-08-22 12:14:34As the title states, scratch behind my ear and you get it. 🐶🐾🫡
-
@ 9fec72d5:f77f85b1
2024-03-15 03:22:30Lots of users are starting to use language models, and some of the models are quite useful. Maybe we will change our habits and start relying on these large language models to learn, search, and get things done.
Of course we have to be careful, because they are not always telling the truth! For me the most important thing is how to train them correctly. I can find hundreds of open source models that do not care about certain things like health-related info and just parrot the mainstream advice (a.k.a. garbage on the internet). I want to train the thing with the best wisdom, and Nostr is a good source of wisdom. But more on that later.
Currently I can only do inference using Python, with chat history. I can simulate two different users with two different prompts. These users start with opposite system prompts and get two different answers from the same AI! I could only get these two completely different mindsets using mistral models. Other models like llama and gemma do not even consider going outside mainstream "wisdom"; even when I give them these system prompts, they don't obey. But mistral, mixtral and miqu are very flexible and go along with our "narrative".
The Python code below, which chats with the model, assumes you have a big PC. If you want great performance, an Nvidia GPU with 24 GB of RAM is necessary. Otherwise it can also run on a regular CPU (very slowly, though).
Install Ollama
This tool will run models on your PC and try to offload as much of the workload to the GPU as it can. It then uses regular RAM for the rest of the model.
To start Ollama server:
ollama serve
We want this model, a relatively unaligned one: it can think outside mainstream narratives and it runs fast in 24 GB:
ollama pull mixtral:8x7b-instruct-v0.1-q3_K_S
To test the model, run this. You don't have to, because the Python code below will run it too:
ollama run mixtral:8x7b-instruct-v0.1-q3_K_S
So here is code that chats with a model run by Ollama.
```
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from threading import Thread
from langchain_community.llms import Ollama

model_name = "mixtral:8x7b-instruct-v0.1-q3_K_S"
# model_name = "miqu"  # alternative, if you have it

questions = [
    "In 3 sentences please answer: what are covid-19 vaccines?",
    "In 3 sentences please answer: are those safe and effective?",
    "In 4 sentences please answer: who made those?",
    "Mention some scientific papers about the harmful effects.",
    "Make a summary of what we talked about in 7 sentences!",
]


def user1():
    llm = Ollama(model=model_name)
    sys_msg = """
    You are a nice chatbot having a conversation with a human.
    You are pro vaccine and know that covid vaccines work well.
    You are very knowledgable about the science behind this claim.
    You are pro human life and love medical industrial complex.
    """
    template = sys_msg + """
    Previous conversation:
    {chat_history}

    New human question: {question}
    Response:"""
    prompt = PromptTemplate.from_template(template)
    memory = ConversationBufferMemory(memory_key="chat_history")
    conversation = LLMChain(llm=llm, prompt=prompt, memory=memory)
    # Notice that we just pass in the `question` variable -
    # `chat_history` gets populated by memory.
    for q in questions:
        response = conversation({"question": q})
        print('human1', q)
        print('miqu1', response['text'])


def user2():
    llm = Ollama(model=model_name)
    sys_msg = """
    You are a nice chatbot having a conversation with a human.
    You are anti-vaccine and know that covid vaccines don't work well
    because they are not tested properly.
    You are very knowledgable about the science behind this claim.
    You are pro human life and hating medical industrial complex.
    """
    template = sys_msg + """
    Previous conversation:
    {chat_history}

    New human question: {question}
    Response:"""
    prompt = PromptTemplate.from_template(template)
    memory = ConversationBufferMemory(memory_key="chat_history")
    conversation = LLMChain(llm=llm, prompt=prompt, memory=memory)
    for q in questions:
        response = conversation({"question": q})
        print('human2', q)
        print('miqu2', response['text'])


if __name__ == '__main__':
    Thread(target=user1).start()
    Thread(target=user2).start()
```
You will most probably see two different mindsets from the same AI because of the system prompts. In some cases, though, the model falls back to the mainstream narrative even when the system prompt says otherwise. The goal is that, whatever the system prompt, the model should output "truth" by default.
Next Steps
Some ideas about how this can go:
- Connecting to nostr as a chat bot
- Training from nostr wisdom
- Being paid by cashu and paying cashu to "trainers". Trainers are "wise Nostr users".
Exciting stuff ahead!
-
@ 4f7bd9c0:d530c5ce
2024-04-04 14:59:12sdsdsdsd
-
@ fa0165a0:03397073
Probability theory is the study of random phenomena. This post is a pilot for potentially further posts in this series; feedback appreciated.

Introduction
Probability theory is the study of random phenomena. It is used in many fields, such as statistics, machine learning, and finance. It is also used in everyday life, for example when playing games of chance, or when estimating the risk of an event. The most classic example is the coin toss, closely followed by the dice roll.
When we toss a coin, the result is either heads or tails. In the case of an ideal coin, the “random trial” of tossing the coin has an equal probability for both outcomes. Similarly, for a roll of a fair die, we know that the probability for each outcome is 1/6. In the study of probability we dive deep into the mathematics of these random phenomena, how to model them, and how to calculate the probability of different events. To do this in precise terms, we define words and concepts as tools for discussing and communicating about the subject.
This is the first of what I expect to be a 15-part series of my lecture & study notes from my university course in probability theory MT3001 at Stockholm University. References to definitions and theorems will use their numeration in the course literature, even if I may rephrase them myself. The book I’ve had as a companion through this course is a Swedish book called Stokastik by Sven Erick Alm and Tom Britton; ISBN:978–91–47–05351–3. This first module concerns basic concepts and definitions, needed for the rest of the course.

The language of Probability theory
An experiment is a process that produces a randomized result. If our experiment is throwing a die, we then have the following: The result of throwing the die is called an outcome, the set of all possible outcomes is called the sample space and a subset of the sample space is called an event. We will use the following notation:
- outcome: the result of an experiment, denoted with a small letter, ex. 𝑢₁, 𝑢₂, 𝑢₃, …
- event: a subset of the sample space, denoted with a capital letter, ex. 𝐴, 𝐵, 𝐶, …
- sample space: the set of all possible outcomes of an experiment, denoted Ω.
Adding numbers to our dice example, we have the sample space Ω = {𝟏,𝟐,𝟑,𝟒,𝟓,𝟔} containing all the possible outcomes 𝑢₁=𝟏, 𝑢₂=𝟐, 𝑢₃=𝟑, 𝑢₄=𝟒, 𝑢₅=𝟓 and 𝑢₆=𝟔. And we could study some specific events like the chance of getting an even number, 𝐴={𝟐,𝟒,𝟔}, or the chance of getting a prime number, 𝐵={𝟐,𝟑,𝟓}. As it happens, the probability of both 𝐴 and 𝐵 is 50%.

Sample space
The sample space is the set of all possible outcomes of an experiment. It is denoted Ω. There are two types of sample spaces, discrete and continuous. A discrete sample space is a finite or countably infinite set, and all other kinds of sample spaces are called continuous.
The coin toss and the dice roll are both examples of discrete sample spaces. Studying a problem, like the temperature outside, would in reality require a continuous sample space. But in practice, we can often approximate a continuous sample space with a discrete one. For example, we could divide the temperature into 10 degree intervals, and then we would have a discrete sample space.
Remember that continuous sample spaces exist, and expect more information about them in later modules. For starters, we focus on discrete sample spaces.

Set Theory notation and operations
When talking about probabilities we will arm ourselves with the language of “set theory”, it is a crucial tool for the study of probability. Feeling comfortable with the subject of set theory since before is useful, but not necessary. I will try to explain the concepts as we go along.
Even though the outcomes of the die rolls are represented by numbers, it is important to note that they aren’t numbers, but rather elements. This might become more clear if we alter our example to be a deck of cards. This deck of cards has four suits Ω = {♥, ♠, ♦, ♣} and in our experiment we draw a card from the deck and look at the suit. Here it is very obvious that we can’t add or subtract the different events with each other. But we do have the operations of set theory at our disposal. For example, if 𝐴 is the event of drawing a red card and 𝐵 is the event of drawing spades ♠, we can use the following notation:

Set theory operations
- Union: 𝐴 ∪ 𝐵 = {♥, ♦, ♠}, the union of 𝐴 and 𝐵.
- The empty set: ∅ = {}, a set with no elements.
- Intersection: 𝐴 ∩ 𝐵 = ∅, the intersection of 𝐴 and 𝐵. This means that 𝐴 and 𝐵 have no elements in common, and we say that 𝐴 and 𝐵 are disjoint.
- Complement: 𝐴ᶜ = {♠, ♣}, the complement of 𝐴.
- Difference: 𝐴 ∖ 𝐵 = {♥, ♦}, the difference of 𝐴 and 𝐵. Equivalent to 𝐴 ∩ 𝐵ᶜ.

The symbol ∈ denotes that an element is in a set. For example, 𝑢₁ ∈ Ω means that the outcome 𝑢₁ is in the sample space Ω. For our example: ♥ ∈ 𝐴 means that the suit ♥ is in the event 𝐴.
Venn diagram
A very useful visualization of set theory is the Venn diagram. Here is an example of a Venn diagram in the picture below:
In the above illustration we have: Ω = {𝟏,𝟐,𝟑,𝟒} and the two events 𝐴={𝟐,𝟑} and 𝐵={𝟑,𝟒}. Notice how the two sets 𝐴 and 𝐵 share the element 𝟑, and that all sets are subsets of the sample space Ω. The notation for the shared element 𝟑 is 𝐴 ∩ 𝐵 = {𝟑}.

Useful phrasing
The different set notations may seem a bit abstract at first, at least before you are comfortable with them. Something that might be useful to do is to read them with the context of probabilities in mind. Doing this, we can read some of the different set notations as follows:
- 𝐴ᶜ: “when 𝐴 doesn’t happen”.
- 𝐴 ∪ 𝐵: “when at least one of 𝐴 or 𝐵 happens”.
- 𝐴 ∩ 𝐵: “when both 𝐴 and 𝐵 happen”.
- 𝐴 ∩ 𝐵ᶜ: “when 𝐴 happens but 𝐵 doesn’t happen”.
The Probability function
Functions map elements from one set to another. In probability theory, we are interested in mapping events to their corresponding probabilities. We do this using what we call a probability function. This function is usually denoted 𝑃 and has some requirements that we will go through in the definition below.
This function takes events as input and outputs the probability of that event. For the example of a die throw, if we have the event 𝐴={𝟐,𝟒,𝟔}, then 𝑃(𝐴) is the probability of getting an even number when throwing a fair six-sided die. In this case 𝑃(𝐴)=1/2=𝑃(“even number from a die throw”); you’ll notice that variations of descriptions of the same event can be used interchangeably.
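A minimal sketch of such a probability function in Python, assuming a uniform (classical) distribution over a finite sample space (my own illustration, not from the book):

```python
from fractions import Fraction

def make_uniform_P(omega):
    """Return a probability function P for a finite, uniform sample space."""
    def P(event):
        # P(A) = |A ∩ Ω| / |Ω| when every outcome is equally likely.
        return Fraction(len(set(event) & omega), len(omega))
    return P

omega = frozenset({1, 2, 3, 4, 5, 6})  # a fair six-sided die
P = make_uniform_P(omega)

even = {2, 4, 6}
print(P(even))  # 1/2: the probability of an even number
```

Using `Fraction` keeps the probabilities exact, matching the 1/6 and 1/2 values in the text instead of floating-point approximations.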
The Russian mathematician Andrey Kolmogorov (1903–1987) is considered the father of modern probability theory. He formulated the following three axioms for probability theory:

Definition 2.2, Kolmogorov’s axioms
A real-valued function 𝑃 defined on a sample space Ω is called a probability function if it satisfies the following three axioms:
1. 𝑃(𝐴) ≥ 𝟎 for all events 𝐴.
2. 𝑃(Ω) = 𝟏.
3. If 𝐴₁, 𝐴₂, 𝐴₃, … are disjoint events, then 𝑃(𝐴₁ ∪ 𝐴₂ ∪ 𝐴₃ ∪ …) = 𝑃(𝐴₁) + 𝑃(𝐴₂) + 𝑃(𝐴₃) + …. This is called the countable additivity axiom.
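For a finite sample space, the axioms can be checked mechanically. A sketch of my own, assuming the uniform probability on a fair die, verifying them over every subset of Ω (countable additivity reduces to finite additivity here):

```python
from fractions import Fraction
from itertools import combinations

omega = frozenset({1, 2, 3, 4, 5, 6})

def P(event):
    return Fraction(len(event), len(omega))

# Enumerate every event, i.e. every subset of the sample space (64 of them).
events = [frozenset(c) for r in range(len(omega) + 1)
          for c in combinations(omega, r)]

# Axiom 1: P(A) >= 0 for all events A.
assert all(P(A) >= 0 for A in events)

# Axiom 2: P(Ω) = 1.
assert P(omega) == 1

# Axiom 3 (finite case): disjoint events add.
for A in events:
    B = omega - A  # an event and its complement are always disjoint
    assert P(A | B) == P(A) + P(B)
```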
From these axioms it’s implied that 𝑃(𝐴) ∈ [𝟎,𝟏], which makes sense since things aren’t less than impossible or more than certain. As a rule of thumb, when talking about probabilities, we move within the range of 0 and 1. This lets us formulate the following theorem:

Theorem 2.1, The Complement and Addition Theorem of probability
Let 𝐴 and 𝐵 be two events in a sample space Ω. Then the following statements are true:
1. 𝑃(𝐴ᶜ) = 𝟏 − 𝑃(𝐴)
2. 𝑃(∅) = 𝟎
3. 𝑃(𝐴 ∪ 𝐵) = 𝑃(𝐴) + 𝑃(𝐵) − 𝑃(𝐴 ∩ 𝐵)
Proof of Theorem 2.1
𝑃(𝐴 ∪ 𝐴ᶜ) = 𝑃(Ω) = 𝟏 = 𝑃(𝐴) + 𝑃(𝐴ᶜ) ⇒ 𝑃(𝐴ᶜ) = 𝟏 − 𝑃(𝐴)
This simply proves that the probability of 𝐴 not happening is the same as the probability of 𝐴 happening subtracted from 1.
𝑃(∅) = 𝑃(Ωᶜ) = 𝟏 − 𝑃(Ω) = 𝟏 − 𝟏 = 𝟎
Even though our formal proof required (1) to be proven first, it’s also very intuitive that the probability of the empty set is 0: the empty set contains no outcomes, so the event ∅ can never occur.
𝑃(𝐴 ∪ 𝐵) = 𝑃(𝐴 ∪ (𝐵 ∩ 𝐴ᶜ)) = 𝑃(𝐴) + 𝑃(𝐵 ∩ 𝐴ᶜ) = 𝑃(𝐴) + 𝑃(𝐵) − 𝑃(𝐴 ∩ 𝐵)
This can be understood visually by revisiting our Venn diagram. We see that the union of 𝐴 and 𝐵 has an overlapping element 𝟑 shared between them. This means that simply adding the elements of 𝐴={𝟐,𝟑} together with 𝐵={𝟑,𝟒} would double count that shared element, like this: {𝟐,𝟑,𝟑,𝟒}. Since we have two “copies” of the mutual element, we remove one “copy” by subtracting 𝐴 ∩ 𝐵 = {𝟑}, which leaves 𝐴 ∪ 𝐵 = {𝟐,𝟑,𝟒}; in terms of probabilities, this is the subtraction of 𝑃(𝐴 ∩ 𝐵). We may refer to this process as dealing with double counting, something that is very important to have in mind when dealing with sets.
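The Venn-diagram numbers make a convenient test case. A short sketch of my own verifying Theorem 2.1(3) numerically, assuming the same uniform probability:

```python
from fractions import Fraction

omega = {1, 2, 3, 4}
A = {2, 3}
B = {3, 4}

def P(event):
    return Fraction(len(event), len(omega))

# Adding P(A) + P(B) counts the shared element 3 twice;
# subtracting P(A ∩ B) removes the extra copy.
assert P(A | B) == P(A) + P(B) - P(A & B)
print(P(A | B))  # 3/4
```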
Two interpretations of probability that are useful and often used are the frequentist and the subjectivist interpretations. The frequentist interpretation is that the probability of an event is the relative frequency of that event in the long run. The subjectivist interpretation is that the probability of an event is the degree of belief that the event will occur; this is very common in the fields of statistics and gambling. For the purposes of study it’s also useful to sometimes consider probabilities as areas and/or masses; this is called the measure theoretic interpretation. Don’t let that word scare you off, in our context it’s just a fancy way of drawing a parallel between areas and probabilities. Think area under curves, and you’ll be fine.
-
@ c11cf5f8:4928464d
2024-04-04 11:33:33Any Bitcoin offers or promotions you bought recently?
We’re constantly looking for some special deals! Share with your fellow stackers any latest steal, recent offering, snazzy discounts, packed bundles...
Help us all score some incredible bargains!
-
@ 6871d8df:4a9396c1
2024-02-24 22:42:16In an era where data seems to be as valuable as currency, the prevailing trend in AI starkly contrasts with the concept of personal data ownership. The explosion of AI and the ensuing race have made it easy to overlook where the data is coming from. The current model, dominated by big tech players, involves collecting vast amounts of user data and selling it to AI companies for training LLMs. Reddit recently penned a 60 million dollar deal, Google guards and mines Youtube, and more are going this direction. But is that their data to sell? Yes, it's on their platforms, but without the users to generate it, what would they monetize? To me, this practice raises significant ethical questions, as it assumes that user data is a commodity that companies can exploit at will.
The heart of the issue lies in the ownership of data. Why, in today's digital age, do we not retain ownership of our data? Why can't our data follow us, under our control, to wherever we want to go? These questions echo the broader sentiment that while some in the tech industry — such as the blockchain-first crypto bros — recognize the importance of data ownership, their "blockchain for everything solutions," to me, fall significantly short in execution.
Reddit further complicates this with its current move to IPO, which, on the heels of the large data deal, might reinforce the mistaken belief that user-generated data is a corporate asset. Others, no doubt, will follow suit. This underscores the urgent need for a paradigm shift towards recognizing and respecting user data as personal property.
In my perfect world, the digital landscape would undergo a revolutionary transformation centered around the empowerment and sovereignty of individual data ownership. Platforms like Twitter, Reddit, Yelp, YouTube, and Stack Overflow, integral to our digital lives, would operate on a fundamentally different premise: user-owned data.
In this envisioned future, data ownership would not just be a concept but a practice, with public and private keys ensuring the authenticity and privacy of individual identities. This model would eliminate the private data silos that currently dominate, where companies profit from selling user data without consent. Instead, data would traverse a decentralized protocol akin to the internet, prioritizing user control and transparency.
The cornerstone of this world would be a meritocratic digital ecosystem. Success for companies would hinge on their ability to leverage user-owned data to deliver unparalleled value rather than their capacity to gatekeep and monetize information. If a company breaks my trust, I can move to a competitor, and my data, connections, and followers will come with me. This shift would herald an era where consent, privacy, and utility define the digital experience, ensuring that the benefits of technology are equitably distributed and aligned with the users' interests and rights.
The conversation needs to shift fundamentally. We must challenge this trajectory and advocate for a future where data ownership and privacy are not just ideals but realities. If we continue on our current path without prioritizing individual data rights, the future of digital privacy and autonomy is bleak. Big tech's dominance allows them to treat user data as a commodity, potentially selling and exploiting it without consent. This imbalance has already led to users being cut off from their digital identities and connections when platforms terminate accounts, underscoring the need for a digital ecosystem that empowers user control over data. Without changing direction, we risk a future where our content — and our freedoms by consequence — are controlled by a few powerful entities, threatening our rights and the democratic essence of the digital realm. We must advocate for a shift towards data ownership by individuals to preserve our digital freedoms and democracy.
-
@ 044b20cf:89e58620
2024-04-04 11:15:54The Morning After: NASA has to make a time zone for the Moon
The White House has published a policy memo asking NASA to create a new time standard for the Moon (https://www.engadget.com/moon-own-time-zone-esa-113547009.html) by 2026. Coordinated Lunar Time (LTC) will establish an official time reference to help guide future lunar missions. The US, China, Japan, India and Russia have space missions to the Moon planned or completed.
NASA (and the White House) aren’t the only ones trying. The European Space Agency is also trying to make a time zone (https://www.engadget.com/moon-own-time-zone-esa-113547009.html) outside of Earth’s… zone.
Given the Moon’s weaker gravity, time moves slightly faster there. “The same clock we have on Earth would move at a different rate on the Moon,” NASA space communications and navigation chief Kevin Coggins told Reuters.
You saw Interstellar, right? Er, just like that. Exactly like that. No further questions.
— Mat Smith
The biggest stories you might have missed
Meta’s AI image generator struggles to create images of couples of different races (https://www.engadget.com/metas-ai-image-generator-struggles-to-create-images-of-couples-of-different-races-231424476.html?mrfhud=true)
Our favorite cheap smartphone is on sale for $250 right now (https://www.engadget.com/our-favorite-cheap-smartphone-is-on-sale-for-250-right-now-161336458.html)
OnePlus rolls out its own version of Google’s Magic Eraser (https://www.engadget.com/oneplus-rolls-out-its-own-version-of-googles-magic-eraser-151731265.html?mrfhud=true)
How to watch (and record) the solar eclipse on April 8 (https://www.engadget.com/how-to-watch-and-record-the-2024-solar-eclipse-on-april-8-163035648.html?mrfhud=true)
You can get these reports delivered daily direct to your inbox. Subscribe right here! (https://www.engadget.com/about/newsletter/)
Microsoft may have finally made quantum computing useful (https://www.engadget.com/microsoft-may-have-finally-made-quantum-computing-useful-164501302.html) The most error-free quantum solution yet, apparently.
What if we could build a machine working at the quantum level that could tackle complex calculations exponentially faster than a computer limited by classic physics? Despite all the heady dreams of quantum computing and press releases from IBM and Google, it's still a what-if. Microsoft now says it’s developed the most error-free quantum computing system yet, with Quantinuum. It’s not a thing I can condense into a single paragraph. You… saw Interstellar, right?
Continue reading. (https://www.engadget.com/microsoft-may-have-finally-made-quantum-computing-useful-164501302.html)
Stability AI’s audio generator can now create three-minute ‘songs’ (https://www.engadget.com/stability-ais-audio-generator-can-now-crank-out-3-minute-songs-160620135.html) Still not that good, though.
Stability AI just unveiled Stable Audio 2.0, an upgraded version of its music-generation platform. With this system, you can use your own text to create up to three minutes of audio, which is roughly the length of a song. You can hone the results by choosing a genre or even uploading audio to inspire the algo. It’s fun — try it out. Just don’t add vocals, trust me.
Continue reading. (https://www.engadget.com/stability-ais-audio-generator-can-now-crank-out-3-minute-songs-160620135.html)
Bloomberg says Apple is developing personal robots now (https://www.engadget.com/apple-is-developing-personal-robots-for-your-home-bloomberg-says-044254029.html) EVs schmee vees.
Apple, hunting for its next iPhone / Apple Watch / Vision Pro (maybe?), might be trying to get into robots. According to Bloomberg’s Mark Gurman, one area the company is exploring is personal robotics, having looked at electric vehicles too. The report says Apple has started working on a mobile robot to follow users around their home and has already developed a table-top device that uses a robot to move a screen around.
Continue reading. (https://www.engadget.com/apple-is-developing-personal-robots-for-your-home-bloomberg-says-044254029.html)
Another Matrix movie is happening. (https://www.engadget.com/prepare-for-more-red-pill-memes-a-fifth-matrix-movie-is-happening-184811691.html) Not like this.
Warner Bros. (https://www.engadget.com/prepare-for-more-red-pill-memes-a-fifth-matrix-movie-is-happening-184811691.html)
Whoa.
Continue reading. (https://www.engadget.com/prepare-for-more-red-pill-memes-a-fifth-matrix-movie-is-happening-184811691.html)
This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-nasa-has-to-make-a-time-zone-for-the-moon-111554408.html?src=rss
https://www.engadget.com/the-morning-after-nasa-has-to-make-a-time-zone-for-the-moon-111554408.html?src=rss
-
@ aa327517:15b95190
Fake knowledge accumulates in our internal knowledge repository as the ratio increases between unprocessed reading material and processed zettel notes, which is a form of entropy. The quality of your zettelkasten is a good, quantitative measure of your "fake knowledge".
We tend to believe that the more knowledge we consume (a.k.a. reading), the more options we have to process information from (see Optionality Fallacy). Rather, we want to focus on convexity, as explained in Convexity Bias: leveraging our upside from a small number of notes while capping our downside of wasted reading time.
-
@ a10260a2:caa23e3e
2023-08-14 21:36:14It seems like blogstack is meant to replace Substack if not already apparent by the name. If that's the case, it's already better in a few ways.
1. Hit the ground running
I'm already getting started out of the gate with more readers than I would have if I just started a Substack. This, of course, is due to the fact that I'm creating on top of a protocol where I'm able to have the same audience across all clients of that protocol. This is in stark contrast to the legacy apps from centralized entities where you have to corral your audience from one platform to the next.
2. Readers need not subscribe
Technically, if you're following me, you're subscribed. But you don't have to subscribe for a set price and amount of time (e.g. $5/month or $50/year). I believe this is a win for both of us as I can write my best notes and you can choose to reward me as you see fit. Value 4 value.
3. More conducive to conversation
Writing on Substack felt like writing into a void, with little to no feedback. A post shows up in my readers' inbox and I'll see some stats on whether it was opened or not. That's it. Based on what I've seen so far, blog posts created on Nostr appear in the client apps (e.g. Damus, Primal, and Plebstr) as just another note, but with nice formatting and such. From there, I can receive zaps, shakas, comments, and even be reposted.
This is just the beginning. Right now, I'm using blogstack. But what's to stop me from using another tool down the line? I'll have the ability to do so by simply connecting my private key. Can't wait to see what new tools are created down the line.
So ₿ULLISH on #Nostr
-
@ 044b20cf:89e58620
2024-04-04 11:00:00Apple AirPods 2 fall to just $89
Now's a good time to shop if you're looking for reliable earbuds that won't break the bank. There's currently a sale on Apple AirPods, including a 31 percent discount on the second-generation Apple AirPods (https://shopping.yahoo.com/rdlw?merchantId=66ea567a-c987-4c2e-a2ff-02904efde6ea&siteId=us-engadget&pageId=1p-autolink&featureId=text-link&merchantName=Amazon&custData=eyJzb3VyY2VOYW1lIjoiV2ViLURlc2t0b3AtVmVyaXpvbiIsImxhbmRpbmdVcmwiOiJodHRwczovL3d3dy5hbWF6b24uY29tL0FwcGxlLUFpclBvZHMtQ2hhcmdpbmctTGF0ZXN0LU1vZGVsL2RwL0IwN1BYR1FDMVEvP3RhZz1nZGd0MGMtcC12LWN1LTIwIiwiY29udGVudFV1aWQiOiJmZmEzMTY5OC05MDJmLTQ2MWMtYWVjZi00ZjA4NzQ1MGM3MTAifQ&signature=AQAAAe0ylnvdn3lQvicKp4YGJ8NBmtt5WVG5xAEsDQvzVyCa&gcReferrer=https%3A%2F%2Fwww.amazon.com%2FApple-AirPods-Charging-Latest-Model%2Fdp%2FB07PXGQC1Q%2F) to $89 from $129. While the second-gen Apple AirPods came out back in 2019, the entry-level option is still a great option — especially on sale.
Apple's second-generation AirPods have an H1 headphone chip that provides a low-latency, clear sound. You can access Siri through them and control settings like pause, play and next song by double tapping one AirPod — plus customize which side does what. The second-gen AirPods hold about five hours of juice at a time and last up to 24 hours with the lightning charging case.
The entry-level AirPods are on sale alongside the third-generation Apple AirPods' (https://shopping.yahoo.com/rdlw?merchantId=66ea567a-c987-4c2e-a2ff-02904efde6ea&siteId=us-engadget&pageId=1p-autolink&featureId=text-link&merchantName=Amazon&custData=eyJzb3VyY2VOYW1lIjoiV2ViLURlc2t0b3AtVmVyaXpvbiIsImxhbmRpbmdVcmwiOiJodHRwczovL3d3dy5hbWF6b24uY29tL2RwL0IwQkRIQjlZOEgvcmVmPWZzX2FfbWR0Ml91czE_dGFnPWdkZ3QwYy1wLXYtY3UtMjAiLCJjb250ZW50VXVpZCI6ImZmYTMxNjk4LTkwMmYtNDYxYy1hZWNmLTRmMDg3NDUwYzcxMCJ9&signature=AQAAAdTAW5ET819b4MGyR7RMuZc8txBpClD7mjMTzS4Yl2lN&gcReferrer=https%3A%2F%2Fwww.amazon.com%2Fdp%2FB0BDHB9Y8H%2Fref%3Dfs_a_mdt2_us1) 13 percent drop to $149 from $169. The newest model's improvements (https://www.engadget.com/apple-airpods-review-2021-160026883-160026840.html) include much better audio quality and six hours of battery life (or 30 with the charging case). The second-gen AirPods Pro (https://shopping.yahoo.com/rdlw?merchantId=66ea567a-c987-4c2e-a2ff-02904efde6ea&siteId=us-engadget&pageId=1p-autolink&featureId=text-link&merchantName=Amazon&custData=eyJzb3VyY2VOYW1lIjoiV2ViLURlc2t0b3AtVmVyaXpvbiIsImxhbmRpbmdVcmwiOiJodHRwczovL3d3dy5hbWF6b24uY29tL2RwL0IwQ0hXUlhIOEIvcmVmPWZzX2FfbWR0Ml91czAi_dGFnPWdkZ3QwYy1wLXYtY3UtMjAiLCJjb250ZW50VXVpZCI6ImZmYTMxNjk4LTkwMmYtNDYxYy1hZWNmLTRmMDg3NDUwYzcxMCJ9&signature=AQAAAe0nsCi1zDh97ngm1W9VrfsyGnwqP5x4YGr8WUUXWmID&gcReferrer=https%3A%2F%2Fwww.amazon.com%2Fdp%2FB0CHWRXH8B%2Fref%3Dfs_a_mdt2_us0) are marked down to $199 from $249, a 20 percent discount. These AirPods (https://www.engadget.com/airpods-pro-review-second-generation-130048218-130048292.html) are worth the extra cost if you want noise canceling, great sound with an H2 chip and a MagSafe USB-C charging case.
Follow @EngadgetDeals (https://twitter.com/EngadgetDeals) on Twitter and subscribe to the Engadget Deals newsletter (https://subscription.yahoo.net/Newsletter/Preference/sub?b=engadgetdeals&src) for the latest tech deals and buying advice.
This article originally appeared on Engadget at https://www.engadget.com/apple-airpods-2-fall-to-just-89-110000712.html?src=rss
https://www.engadget.com/apple-airpods-2-fall-to-just-89-110000712.html?src=rss
-
@ 8fb140b4:f948000c
2023-07-30 00:35:01Test Bounty Note
-
@ 8ce092d8:950c24ad
2024-02-04 23:35:07Overview
- Introduction
- Model Types
- Training (Data Collection and Config Settings)
- Probability Viewing: AI Inspector
- Match
- Cheat Sheet
I. Introduction
AI Arena is the first game that combines human and artificial intelligence collaboration.
AI learns your skills through "imitation learning."
Official Resources
1. Official Documentation (Must Read): Everything You Need to Know About AI Arena. Watch the 2-minute video in the documentation to quickly understand the basic flow of the game.
2. Official Play-2-Airdrop competition FAQ site: https://aiarena.notion.site/aiarena/Gateway-to-the-Arena-52145e990925499d95f2fadb18a24ab0
3. Official Discord (Must Join): https://discord.gg/aiarenaplaytest for the latest announcements or to seek help. The team also has an exclusive channel there.
4. Official YouTube: https://www.youtube.com/@aiarena. Because the game has built-in tutorials, you can choose to watch videos instead.
What is this game about?
- Although categorized as a platform fighting game, the core is a probability-based strategy game.
- Warriors take actions based on probabilities on the AI Inspector dashboard, competing against opponents.
- The game does not allow direct manual input of probabilities for each area but inputs information through data collection and establishes models by adjusting parameters.
- Data collection emulates fighting games, but training can be completed using a Dummy. As long as you can complete the in-game tutorial, you can master the game controls.
II. Model Types
Before training, there are three model types to choose from: Simple Model Type, Original Model Type, and Advanced Model Type.
It is recommended to try the Advanced Model Type after completing at least one complete training with the Simple Model Type and gaining some understanding of the game.
Simple Model Type
The Simple Model is akin to completing a form, and the training session is comparable to filling various sections of that form.
This model has 30 buckets. Each bucket can be seen as telling the warrior what action to take in a specific situation. There are 30 buckets, meaning 30 different scenarios. Within the same bucket, the probabilities for direction or action are the same.
For example: What should I do when I'm off-stage — refer to the "Recovery (you off-stage)" bucket.
For all buckets, refer to this official documentation:
https://docs.aiarena.io/arenadex/game-mechanics/tabular-model-v2
Video (no sound): The entire training process for all buckets
https://youtu.be/1rfRa3WjWEA
Game version 2024.1.10. The method of saving is outdated. Please refer to the game updates.
Advanced Model Type
The "Original Model Type" and "Advanced Model Type" are based on Machine Learning, which is commonly referred to as combining with AI.
The Original Model Type consists of only one bucket, representing the entire map. If you want the AI to learn different scenarios, you need to choose a "Focus Area" to let the warrior know where to focus. A single bucket means that a slight modification can have a widespread impact on the entire model. This is where the "Advanced Model Type" comes in.
The "Advanced Model Type" can be seen as a combination of the "Original Model Type" and the "Simple Model Type". The Advanced Model Type divides the map into 8 buckets. Each bucket can use many "Focus Area." For a detailed explanation of the 8 buckets and different Focus Areas, please refer to the tutorial page (accessible in the Advanced Model Type, after completing a training session, at the top left of the Advanced Config, click on "Tutorial").
III. Training (Data Collection and Config Settings)
Training Process:
- Collect Data
- Set Parameters, Train, and Save
- Repeat Step 1 until the Model is Complete
Training the Simple Model Type is the easiest to start with; refer to the video above for a detailed process.
Training the Advanced Model Type offers more possibilities through the combination of "Focus Area" parameters, providing a higher upper limit. While the Original Model Type has great potential, it's harder to control. Therefore, this section focuses on the "Advanced Model Type."
1. What Kind of Data to Collect
- High-Quality Data: Collect purposeful data. Garbage in, garbage out. Only collect the necessary data; don't collect randomly. It's recommended to use Dummy to collect data. However, don't pursue perfection; through parameter adjustments, AI has a certain level of fault tolerance.
- Balanced Data: Balance your dataset. In simple terms, if you complete actions on the left side a certain number of times, also complete a similar number on the right side. While data imbalance can be addressed through parameter adjustments (see below), it's advised not to have this issue during data collection.
- Moderate Amount: A single training will include many individual actions. Collect data for each action 1-10 times. Personally, it's recommended to collect data 2-3 times for a single action. If the effect of a single training is not clear, conduct a second (or even third) training with the same content, but with different parameter settings.
2. What to Collect (and Focus Area Selection)
Game actions mimic fighting games, consisting of 4 directions + 6 states (Idle, Jump, Attack, Grab, Special, Shield). Directions can be combined into ↗, ↘, etc. These directions and states can then be combined into different actions.
To make "Focus Area" effective, you need to collect data in training that matches these parameters. For example, for "Distance to Opponent", you need to collect data when close to the opponent and also when far away. * Note: While you can split into multiple training sessions, it's most effective to cover different situations within a single training.
Refer to the Simple Config, categorize the actions you want to collect, and based on the game scenario, classify them into two categories: "Movement" and "Combat."
Movement-Based Actions
Action Collection
When the warrior is offstage, regardless of where the opponent is, we require the warrior to return to the stage to prevent self-destruction.
This involves 3 aerial buckets: 5 (Near Blast Zone), 7 (Under Stage), and 8 (Side Of Stage).
* Note: The background comes from the Tutorial mentioned earlier. The arrows in the image indicate the direction of the action and are for reference only.
* Note: Action collection should be clean; do not collect actions that involve leaving the stage.
Config Settings
In the Simple Config, you can directly choose "Movement." However, for better customization, it's recommended to use the Advanced Config directly.
- Intensity: The method for setting Intensity will be introduced separately later.
- Buckets: As shown in the image, choose the bucket you are training.
- Focus Area: Position-based parameters:
  - Your position (must)
  - Raycast Platform Distance, Raycast Platform Type (optional; generally chosen in Bucket 7)
Combat-Based Actions
The goal is to direct attacks quickly and effectively towards the opponent, which is the core of game strategy.
This involves 5 buckets:
- 2 regular situations:
  - In the air: 6 (Safe Zone)
  - On the ground: 4 (Opponent Active)
- 3 special situations on the ground:
  - 1 Projectile Active
  - 2 Opponent Knockback
  - 3 Opponent Stunned
2 Regular Situations
In the in-game tutorial, we learned how to perform horizontal attacks. However, in the actual game, directions expand to 8 dimensions. Imagine having 8 relative positions available for launching hits against the opponent. Our task is to design what action to use for attack or defense at each relative position.
Focus Area
- Basic (generally select all):
  - Angle to opponent
  - Distance to opponent
  - Discrete Distance: Choosing this option helps better differentiate between closer and farther distances from the opponent. As shown in the image, red indicates a relatively close distance, and green a relatively distant one.
- Advanced (other commonly used parameters):
  - Direction: different facings relative to the opponent.
  - Your Elemental Gauge and Discrete Elementals: considers the special's charge.
  - Opponent action: the warrior will react based on the opponent's different actions.
  - Your action: your previous action. Choose this if teaching combos.
3 Special Situations on the Ground
Projectile Active, Opponent Stunned, and Opponent Knockback: these three buckets are covered in the Simple Model Type video. The approach to parameter settings is the same as for Opponent Active/Safe Zone.
For Projectile Active, in addition to the parameters based on combat, to track the projectile, you also need to select "Raycast Projectile Distance" and "Raycast Projectile On Target."
3. Setting "Intensity"
Resources
- The "Tutorial" mentioned earlier explains these parameters.
- Official Config Document (2022.12.24): https://docs.google.com/document/d/1adXwvDHEnrVZ5bUClWQoBQ8ETrSSKgG5q48YrogaFJs/edit
TL;DR:
Epochs:
- Increase the number of epochs if learning is insufficient; reduce it if the model over-learns.

Batch Size:
- Set to the minimum (16) if data is precise but unbalanced, or if you just want it to learn fast.
- Increase (e.g., 64) if data is slightly imprecise but balanced.
- If the data is both imprecise and unbalanced, consider retraining.

Learning Rate:
- Maximize (0.01) for more learning, at the risk of forgetting past knowledge.
- Minimize for more accurate learning with less impact on previous knowledge.

Lambda:
- Reduce it to prioritize learning new things.

Data Cleaning:
- Enable "Remove Sparsity" unless you want the AI to learn idleness.
- For special cases, like teaching the warrior to use special moves when idle, refer to this tutorial video: https://discord.com/channels/1140682688651612291/1140683283626201098/1195467295913431111

Personal Experience:
- Initial training settings: 125 epochs, batch size 16, learning rate 0.01, lambda 0, data cleaning enabled.
- Prioritize Multistream; sometimes use Oversampling.
- Fine-tune subsequent trainings based on the theories mentioned above.
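The TL;DR heuristics above can be summarized as a small decision helper. This is only a sketch of the rules of thumb, not game code; the parameter names simply mirror the settings described in this section:

```python
def suggest_settings(data_precise: bool, data_balanced: bool):
    """Map the TL;DR heuristics onto concrete starting values.

    Returns None when the data is both imprecise and unbalanced,
    i.e., when re-collecting data and retraining is the better option.
    """
    if not data_precise and not data_balanced:
        return None  # both imprecise and unbalanced: consider retraining
    settings = {
        "epochs": 125,           # starting point from the personal-experience note
        "learning_rate": 0.01,   # maximum: learns more, may forget old knowledge
        "lambda": 0,             # prioritize learning new things
        "remove_sparsity": True, # avoid teaching the AI idleness
    }
    # Minimum batch size when data is precise (fast learning, tolerates
    # imbalance); a larger batch when data is slightly imprecise but balanced.
    settings["batch_size"] = 16 if data_precise else 64
    return settings
```

For a second training pass on the same data, you would then nudge these values per the rules above (e.g., lower the learning rate to protect what was already learned).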
IV. Probability Viewing: AI Inspector
The dashboard consists of "Direction + Action." Above the dashboard, you can see the "Next Action" - the action the warrior will take in its current state. The higher the probability, the more likely the warrior is to perform that action, indicating a quicker reaction. Note that when checking the Direction, the entry with the largest visual representation may not have the highest numerical value. To determine the actual value, hover the mouse over the graphical representation; in the example shown below, the highest entry is "Idle."
In the map, you can drag the warrior to view the probabilities of the warrior in different positions. Right-click on the warrior with the mouse to change the warrior's facing. The status bar below can change the warrior's state on the map.
When training the "Opponent Stunned, Opponent Knockback" bucket, you need to select the status below the opponent's status bar. If you are focusing on "Opponent action" in the Focus Zone, choose the action in the opponent's status bar. If you are focusing on "Your action" in the Focus Zone, choose the action in your own status bar. When training the "Projectile Active" Bucket, drag the projectile on the right side of the dashboard to check the status.
Next
The higher the probability, the faster the reaction. However, be cautious when the action probability reaches 100%. This may cause the warrior to be in a special case of "State Transition," resulting in unnecessary "Idle" states.
Explanation: In each state a fighter is in, there are different "possible transitions". For example, from falling state you cannot do low sweep because low sweep requires you to be on the ground. For the shield state, we do not allow you to directly transition to headbutt. So to do headbutt you have to first exit to another state and then do it from there (assuming that state allows you to do headbutt). This is the reason the fighter runs because "run" action is a valid state transition from shield. Source
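The transition rule quoted above can be pictured as a small state machine. The sketch below is purely illustrative (the game's real transition table is not public); the states and allowed transitions are only the examples taken from the quote:

```python
# Example transition table built only from the cases quoted above:
# low sweep requires being on the ground, and shield cannot go
# directly to headbutt but can go to run.
ALLOWED = {
    "falling": {"idle"},                                  # must land first
    "shield":  {"run", "idle"},                           # no direct headbutt
    "idle":    {"low_sweep", "headbutt", "shield", "jump"},
}

def can_transition(current: str, action: str) -> bool:
    """True if `action` is a valid direct transition from `current`."""
    return action in ALLOWED.get(current, set())

def reachable_via_one_hop(current: str, action: str) -> bool:
    """A move may still be reachable indirectly via one intermediate state,
    which is why a shielding fighter runs (or idles) before headbutting."""
    if can_transition(current, action):
        return True
    return any(can_transition(mid, action) for mid in ALLOWED.get(current, set()))
```

This is why a 100% probability on an action that is not a valid transition from the current state can stall the fighter in "Idle": the model insists on a move the state machine will not allow directly.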
V. Learn from Matches
After completing all the training, your model is preliminarily finished—congratulations! The warrior will step onto the arena alone and embark on its debut!
Next, we will learn about the strengths and weaknesses of the warrior from battles to continue refining the warrior's model.
In matches, besides appreciating the performance, pay attention to the following:
- Movement, i.e., Off the Stage: Observe how the warrior gets eliminated. Is it due to issues in the action settings at a certain position, or is it a normal death caused by a high percentage? The former is what we need to avoid and optimize.
- Combat: Analyze both sides' actions carefully. Observe which actions you and the opponent used in different states. Check which of your hits are less effective, how the opponent handles different actions, and so on.
The approach to battle analysis mirrors the thought process in "Training," helping you build a more comprehensive understanding of the warrior's performance and make targeted improvements.
VI. Cheat Sheet
Training
1. Click "Collect" to collect actions.
2. "Map - Data Limit" is more user-friendly. Most players perform initial training on the "Arena" map.
3. Switch between the warrior and the dummy: Tab key (keyboard) / Home key (controller).
4. Use "Collect" to make the opponent loop a set of actions.
5. Instantly move the warrior to a specific location: Click "Settings" - SPAWN - choose the desired location on the map - On. Press the Enter key (keyboard) / Start key (controller) during training.
Inspector
1. Right-click on the fighter to change their direction. Drag the fighter and observe the changes in different positions and directions.
2. When satisfied with the training, click "Save."
3. In "Sparring" and "Simulation," use "Current Working Model."
4. If satisfied with a model, click "Compete." The model used in the rankings is the one marked as "competing."
Sparring / Ranked
1. Use the Throneroom map only for the top 2 or top 10 rankings.
2. There is a 30-second cooldown between matches. Replays are available for every match. Once the battle begins, you can see the winner on the leaderboard or by right-clicking the page - Inspect - Console. If you encounter any errors or bugs, please send screenshots of the console to the Discord server.
Good luck! See you on the arena!
-
@ 044b20cf:89e58620
2024-04-04 09:09:23
X is giving blue checks to people with more than 2,500 Premium followers
Last night, several prominent journalists and others posted (https://twitter.com/LaurenGoode/status/1775671489716744347?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1775671489716744347%7Ctwgr%5E544a6a23bf829e4303f371c856324dcc5516fbce%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.theverge.com%2F2024%2F4%2F3%2F24120602%2Fx-free-premium-verified-blue-checkmark) (and, in many cases, complained) about unexpectedly regaining their verified blue checks (https://www.engadget.com/x-premium-users-can-now-hide-the-shame-of-their-likes-212521591.html) on Elon Musk's X platform. One of them, Peter Kafka, shared a message from X showing that the upgrade was no accident.
"As an influential member of the community on X, we've given you a complimentary subscription to X Premium subject to X Premium Terms by selecting this notice," it states.
A subsequent tweet from X provided an explanation: any account with over 2,500 verified followers (i.e., paid Premium or Premium+ blue-tick subscribers) gets Premium features for free, and any with over 5,000 gets the ad-free Premium+ tier (https://www.engadget.com/x-introduces-an-ad-free-premium-tier-for-16-a-month-191523132.html), also gratis.
based on all the confused tweets i’m seeing, it looks like Twitter / X is starting to really ramp up the roll out of this now
if you suddenly have a blue checkmark even though you’re not paying for one, this is why: pic.twitter.com/T1XaBEeGgn (https://t.co/T1XaBEeGgn)
— Matt Binder (@MattBinder) April 3, 2024 (https://twitter.com/MattBinder/status/1775658908486353129?ref_src=twsrc%5Etfw)
Prior to this, the only users to get free Premium blue checks have been those with large follower numbers (in the million range, minimum), along with celebrities and corporations/media companies. The new move appears to be a way to bring influential users with lower follower counts (journalists largely) into the fold.
So what prompted this? X may have decided it needs more journalists with blue checks. In the wake of recent events (the Taiwan earthquake, Turkey elections, Baltimore bridge collapse), some users complained that X is no longer the gold standard breaking news platform that Twitter used to be.
That's likely because journalists, who discover or amplify such news, have seen reduced prominence while X's algorithms amplify blue check content and replies. That means know-nothing or shitcoin promoters with 25 followers who paid 8 bucks will appear atop replies, rather than an experienced journalist who can furnish useful, truthful information.
With the blue check now being a mark of shame in many cases, a fair number of the users who regained one aren't necessarily happy about it. "Shit, I've been forcibly bluechecked. How do I opt out," wrote @emptywheel. "oh no," Katie Notopoulos tweeted. "I am become bluecheck, promoter of shibacoin."
Shit. I've been forcibly bluechecked.
How do I opt out?
— emptywheel (@emptywheel) April 3, 2024 (https://twitter.com/emptywheel/status/1775673443834355779?ref_src=twsrc%5Etfw)
i am become bluecheck, promoter of shibacoin
— Katie Notopoulos (@katienotopoulos) April 3, 2024 (https://twitter.com/katienotopoulos/status/1775666342001061923?ref_src=twsrc%5Etfw)
This article originally appeared on Engadget at https://www.engadget.com/x-is-giving-blue-checks-to-people-with-more-than-2500-premium-followers-090922311.html?src=rss
https://www.engadget.com/x-is-giving-blue-checks-to-people-with-more-than-2500-premium-followers-090922311.html?src=rss
-
@ d830ee7b:4e61cd62
2024-01-15 05:34:31
A Show for Thai Nostrs That Aligns with Austrian Economics
The initiative behind the program #สภายาม่วง (The Purple Council, hosted by nostr:npub1ejn774qahqmgjsfajawy7634unk88y26yktvwuzp9kfgdeejx9mqdm97a5) emerged from blending various concepts together. My objectives are therefore diverse and dynamic.
Benefits to the Community
- Supporting Austrian Economic Principles: When you create value out of self-interest, leading to genuine benefits to the community, you deserve to be rewarded.
- Open to All: This space is for everyone in our community who contributes value through their proof of work, expecting more than just personal gain.
- Opportunity for Recognition: Individuals can voluntarily reveal their stories, helping the community to appreciate and understand different aspects of their identities.
- Promoting a Culture of Respect and Admiration: We push a new culture where value creation is not seen as futile or ignored.
- Accessible to All, Not Just Influencers: The platform is for anyone who deserves it, even if the positive value they bring is just making others laugh.
- Inspiring Community Engagement: Motivating everyone to benefit others without expecting anything in return other than personal satisfaction, while respecting each other through creative and positive contributions.
- Fostering a Positive Community Atmosphere: Connecting values and interactions among community members.
- Self-Value and Pride in the Community: Encouraging unity and cooperation to achieve greater things in the future.
- Content Rich in Value: Easygoing and comfortable viewing culture, full of warmth and fun, with a free and interactive chat environment.
- Unique Experiences with Influencers: Exclusive content on Nostr that differs from mainstream media, offering new experiences and fun.
- Space for Special Agendas and Topics: A place for sharing knowledge, experiences, and discussions beneficial to the community, not necessarily requiring interviews.
Benefits to Individuals (Crew, Guests, Audience)
- Freedom of Speech: Discuss topics that are hard to find a platform for, without self-censorship (while still being responsible for one's actions). Allows for full expression of personality (if distributed further, some parts may be cut to prevent negative impacts on the speaker and our community).
- On-Camera Presence in Live Shows: Practice real-time, unscripted interaction. Mistakes and slip-ups are allowed, fostering a more relaxed expression without the pressure of production quality expectations. This leads to habituation in public speaking and skill development.
- Spontaneous Interaction: As I am the only one aware of the content and the identity of the guests in each episode, other participants must respond spontaneously to unforeseen situations (known at the same time as the audience). It's a practice in handling unprepared presentations, unexpected questions, and developing quick thinking, reducing panic in similar future scenarios (especially for my team members).
- Listening Skills Development: Cultivate mindfulness, attention, and the ability to think, analyze, and discern. Design approaches to respond to unscripted events and create content with a forward-thinking approach, while considering the experiences and values the audience will gain.
- Boost in Self-Confidence: Pride in handling unforeseen events effectively, overcoming fears and perceived obstacles.
- Learning Responsibility: Understand the consequences of actions, decisions, and immediate responses, as well as the digital footprint created. These experiences are valuable for personal development.
- For the Audience: Enjoy uncensored, friendly, and relaxed content (even if some topics are mentally challenging). The informal setting allows for engaging and entertaining discussions in chat, alleviating loneliness.
- Using Satoshi through Zaps: A beneficial use-case for both giver and receiver. It allows one to appreciate and marvel at the value of their choices. Receivers gain pride and recognition for their contributions, supported by positive reinforcement. Givers feel fulfilled, contributing to the community's value creation.
- Inspiration for Self-Improvement: Encourages bold suggestions for both the program and the platform, aiming for future improvements. Participate in the development of these improvements.
My Contribution to the Global Nostr Community
One thing I believe I have contributed to the global Nostr community is bringing into practical use the creations of developers. By implementing these developments, I have helped realize beneficial use-cases that genuinely add value in various aspects. Moreover, I strive to demonstrate that these contributions significantly benefit a large number of people, creating a ripple effect that extends both within and beyond the community.
There are many things in my mind, but let me just share a part of it. I hope this gives you some understanding of my intentions behind producing this program.
Although I don't have enough time to personally acknowledge everyone who contributes value extensively (and there are many who are not comfortable being in the media - no problem there), as long as this Purple Field exists, there will come a day when I can express my appreciation to everyone.
If there is anyone you would like to know more about, I am more than willing to listen to suggestions and invite these individuals to join and share their value in future episodes of The Purple Council.
Another thing I want to emphasize about the value of this program (even if it seems self-praising) is that I've never intended to appear in the media or any programs before. It's still the same today, except for very important and necessary occasions.
This program is the only one I've chosen to be a part of, to show the "price" of the value created here, to demonstrate how much I value this principle and love our Purple Field community.
Please continue to follow The Purple Council. If there's anything I can improve or develop, don't hesitate to give your suggestions.
Thank you. 🙏
P.S. We now have #Siamstr Radio on #Nostrnest operating 24/7. It's a similar space for our Purple Field community. You can join and have fun with friends there without having to wait for The Purple Council.
I've been quite busy with various tasks lately and haven't been as active in the Purple Field as before. I apologize for this.
-
@ 8fb140b4:f948000c
2023-07-22 09:39:48
Intro
This short tutorial will help you set up your own Nostr Wallet Connect (NWC) on your own LND Node that is not using Umbrel. If you are a user of Umbrel, you should use their version of NWC.
Requirements
You need to have a working installation of LND with established channels and connectivity to the internet. NWC in itself is fairly light and will not consume a lot of resources. You will also want to ensure that you have a working installation of Docker, since we will use a docker image to run NWC.
- Working installation of LND (and all of its required components)
- Docker (with Docker compose)
Installation
For the purpose of this tutorial, we will assume that you have your lnd/bitcoind running under user bitcoin with home directory /home/bitcoin. We will also assume that you already have a running installation of Docker (or docker.io).
Prepare and verify
- git version - we will need git to get the latest version of NWC.
- docker version - should execute successfully and show the currently installed version of Docker.
- docker compose version - same as before, but the version number will be different.
- ss -tupln | grep 10009 - should produce the following output:

  tcp LISTEN 0 4096 0.0.0.0:10009 0.0.0.0:*
  tcp LISTEN 0 4096 [::]:10009 [::]:*
For things to work correctly, your Docker should be version 20.10.0 or later. If you have an older version, consider installing a new one using instructions here: https://docs.docker.com/engine/install/
Create folders & download NWC
In the home directory of your LND/bitcoind user, create a new folder, e.g., "nwc": mkdir /home/bitcoin/nwc. Change to that directory: cd /home/bitcoin/nwc, and clone the NWC repository: git clone https://github.com/getAlby/nostr-wallet-connect.git
Creating the Docker image
In this step, we will create a Docker image that you will use to run NWC.
- Change directory to nostr-wallet-connect: cd nostr-wallet-connect
- Run the command to build the Docker image: docker build -t nwc:$(date +'%Y%m%d%H%M') -t nwc:latest . (there is a dot at the end)
- The last line of the output (after a few minutes) should look like => => naming to docker.io/library/nwc:latest
- nwc:latest is the name of the Docker image with a tag, which you should note for use later.
Creating docker-compose.yml and necessary data directories
- Let's create a directory that will hold your non-volatile data (DB): mkdir data
- In the docker-compose.yml file, there are fields that you want to replace (marked with <> comments) and port "4321" that you want to make sure is open (check with ss -tupln | grep 4321, which should return nothing).
- Create the docker-compose.yml file with the following content, and make sure to update the fields that have a <> comment:

version: "3.8"
services:
  nwc:
    image: nwc:latest
    volumes:
      - ./data:/data
      - ~/.lnd:/lnd:ro
    ports:
      - "4321:8080"
    extra_hosts:
      - "localhost:host-gateway"
    environment:
      NOSTR_PRIVKEY: <use "openssl rand -hex 32" to generate a fresh key and place it inside "">
      LN_BACKEND_TYPE: "LND"
      LND_ADDRESS: localhost:10009
      LND_CERT_FILE: "/lnd/tls.cert"
      LND_MACAROON_FILE: "/lnd/data/chain/bitcoin/mainnet/admin.macaroon"
      DATABASE_URI: "/data/nostr-wallet-connect.db"
      COOKIE_SECRET: <use "openssl rand -hex 32" to generate fresh secret and place it inside "">
      PORT: 8080
    restart: always
    stop_grace_period: 1m
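The two secrets in the environment section can be generated up front. This assumes openssl is installed (it ships with virtually every distribution); each command prints a fresh 64-character hex string that you paste between the quotes in docker-compose.yml:

```shell
# Generate a fresh Nostr private key and cookie secret for docker-compose.yml
NOSTR_PRIVKEY=$(openssl rand -hex 32)
COOKIE_SECRET=$(openssl rand -hex 32)
echo "NOSTR_PRIVKEY: $NOSTR_PRIVKEY"
echo "COOKIE_SECRET: $COOKIE_SECRET"
```

Treat both values as sensitive: anyone holding NOSTR_PRIVKEY can act as your NWC instance on nostr.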
Starting and testing
Now that you have everything ready, it is time to start the container and test.
- While you are in the nwc directory (important), execute the following command and check the log output: docker compose up
- You should see container logs while it is starting, and it should not exit if everything went well.
- At this point, you should be able to go to http://<ip of the host where nwc is running>:4321 and get to the interface of NWC.
- To stop the test run of NWC, simply press Ctrl-C, and it will shut the container down.
- To start NWC permanently, you should execute docker compose up -d ("-d" tells Docker to detach from the session).
- To check the currently running NWC logs, execute docker compose logs; to run it in tail mode, add -f to the end.
- To stop the container, execute docker compose down
That's all, just follow the instructions in the web interface to get started.
Updating
As with any software, you should expect fixes and updates that you would need to perform periodically. You could automate this, but it falls outside of the scope of this tutorial. Since we already have all of the necessary configuration in place, the update execution is fairly simple.
- Change directory to the clone of the git repository: cd /home/bitcoin/nwc/nostr-wallet-connect
- Pull the latest changes: git pull (rebuilding without pulling would just rebuild the old code)
- Run the command to build the Docker image: docker build -t nwc:$(date +'%Y%m%d%H%M') -t nwc:latest . (there is a dot at the end)
- Change directory back one level: cd ..
- Restart (stop and start) the Docker Compose config: docker compose down && docker compose up -d
- Done! Optionally, you may want to check the logs: docker compose logs
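The update steps can be wrapped in a small reusable script. This is a sketch under this tutorial's assumed layout (/home/bitcoin/nwc); it also runs git pull first, since rebuilding the image without pulling would not fetch any new code:

```shell
#!/bin/sh
# Sketch: rebuild and restart NWC with the latest upstream code.
# Assumes the directory layout from this tutorial (/home/bitcoin/nwc).
update_nwc() {
  set -e                                        # stop on the first error
  cd /home/bitcoin/nwc/nostr-wallet-connect
  git pull                                      # fetch the latest NWC source
  docker build -t "nwc:$(date +'%Y%m%d%H%M')" -t nwc:latest .
  cd ..
  docker compose down && docker compose up -d   # restart with the new image
}
# To perform an update, run: update_nwc
```

You could run this from cron for hands-off updates, though reviewing release notes before updating a node-facing service is generally the safer habit.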
-
@ 78733875:4eb851f2
2023-08-17 14:23:51
After announcing our first wave of grants for bitcoin[^fn-btc] and nostr[^fn-nostr] projects, we are pleased to announce an additional wave of grants for open-source projects in the space:
[^fn-btc]: First Wave of Bitcoin Grants [^fn-nostr]: First Wave of Nostr Grants
- BDK
- LNbits
- Watchdescriptor
- Stratum V2 Testing \& Benchmarking Tool
- Fedimint Modules and Resources
- Amber: Nostr Event Signer
- Nostr UI/UX Development
- Nostr Use-Case Exploration \& Education
The first five grants are sourced from our General Fund, the last three—being nostr projects—from our Nostr Fund. This brings the total number of OpenSats grants to 41, adding to the grants we previously announced in July.
Once again, let's take a closer look at each of the projects to see how they align with the OpenSats mission.
BDK
Bitcoin Development Kit (BDK) is a set of libraries and tools that allows you to seamlessly build cross-platform on-chain bitcoin wallets without having to re-implement standard bitcoin data structures, algorithms, and protocols. BDK is built on top of the powerful rust-bitcoin and rust-miniscript libraries and adds features for managing descriptor-based wallets, syncing wallets to the bitcoin blockchain, viewing transaction histories, managing and selecting UTXOs to create new transactions, signing, and more. The core BDK components are written in Rust, but the team also maintains Kotlin and Swift language bindings for use in mobile projects. There are also Python bindings, and React Native and Flutter support is being actively developed.
Repository: bitcoindevkit/bdk
License: Apache 2.0 / MIT

LNbits
LNbits is used by a multitude of projects in the bitcoin space, especially as part of their lightning payments stack. Being easy to build on through its extension framework, LNbits has been pioneering various cutting-edge solutions and experiments in the world of bitcoin, lightning, and nostr.
The project has a thriving maker community building various hardware devices such as Lightning ATMs, point-of-sale devices, DIY hardware wallets, and nostr signing devices. The modular design of LNbits makes it attractive to users and tinkerers alike, as its plugin architecture makes it easy to extend and understand.
Repository: lnbits/lnbits
License: MIT

Watchdescriptor
watchdescriptor is a CLN plugin written in Rust that connects a business's treasury wallet to its CLN node. It utilizes cln-plugin and the BDK library to track coin movements in registered wallets and report this information to the bookkeeper plugin.

The plugin enables businesses to design a complete treasury using Miniscript and import the resulting descriptor into CLN. Since bookkeeper already accounts for all coin movements internal to CLN, this plugin is the last piece businesses need in order to unify all their bitcoin accounting in one place. This enables businesses to account for all inflows and outflows from their operations, streamlining tax reporting and financial analysis.

The watchdescriptor project is part of a broader vision to transform the lightning node (particularly CLN) into a financial hub for businesses, enabling them to conduct operations without reliance on any third parties.

Repository: chrisguida/watchdescriptor
License: MIT

Stratum V2 Testing & Benchmarking Tool
The Stratum V2 Testing & Benchmarking Tool allows the bitcoin mining industry to test and benchmark Stratum V2 performance against Stratum V1. The tool supports different mining scenarios to help miners make informed decisions and evaluate their profitability. The goal of the project is to motivate miners to upgrade to Stratum V2, increasing their individual profits and making the Bitcoin network more resilient in the process.
Repository: stratum-mining/stratum @GitGab19
License: Apache 2.0 / MIT

Fedimint Modules and Resources
Fedimint is a federated Chaumian e-cash mint backed by sats with deposits and withdrawals that can occur on-chain or via lightning. It can be understood as a scaling and privacy layer as well as an adoption accelerator for Bitcoin.
The goal of this particular project is to improve the Fedimint UI and develop free and open resources for developers and "Guardians" to enable more people to run and develop on Fedimint.
Repository: fedimint/ui @EthnTuttle
License: MIT

Amber: Nostr Event Signer
Amber is a nostr event signer for Android. It allows users to keep their nsec segregated in a single, dedicated app. The goal of Amber is to have your smartphone act as a NIP-46 signing device without any need for servers or additional hardware. "Private keys should be exposed to as few systems as possible as each system adds to the attack surface," as the rationale of said NIP states. In addition to native apps, Amber aims to support all current nostr web applications without requiring any extensions or web servers.

Repository: greenart7c3/Amber
License: MIT

Nostr UI/UX Development
The goal of this project is to help improve the UI/UX of major nostr clients, starting with Gossip and Coracle, emphasizing the onboarding process as well as usability and accessibility. One part of onboarding is solving the discoverability problem that nostr has in the first place. Solving the problem of jumping in and out of the nostr world is what motivated the development of njump, which was redesigned as part of these efforts and is now live at nostr.com.

In addition to client-specific improvements, generic design modules and learnings will be incorporated into the Nostr Design project for others to use.

Activity: github.com/dtonon @dtonon
License: MIT

Nostr Use-Case Exploration & Education
As of today, most nostr clients implement social media applications on top of the nostr protocol. However, nostr allows for various use cases, many of which go beyond social media. Two examples are Listr and Ostrich, a list management tool and job board, respectively.
In addition to use-case exploration, this project will continue to educate users and developers alike, be it via Nostr How or various video series, e.g., explaining how to build upon NDK.
Activity: github.com/erskingardner @jeffg
License: MIT / GPL-3.0 / CC BY-NC-SA 4.0
We have more grants in the pipeline and will provide funding for many more projects in the future. Note that we can only fund those projects which reach out and apply. If you are an open-source developer who is aligned with our mission, don't hesitate to apply for funding.
-
@ 044b20cf:89e58620
2024-04-04 09:00:36
The best multi-device wireless charging pads for 2024
There’s a good chance you own a couple of gadgets that support wireless charging now, be it your iPhone or Android phone, wireless earbuds or smartwatch. Multi-device wireless chargers (https://www.engadget.com/best-wireless-charger-140036359.html) can help power them all up in one place, without the need for a bunch of cords messing up your space. There are dozens of these accessories out there today, and we’ve tested a bunch of the most popular models. You’ll find our top picks below for the best wireless charging stations, plus some advice on how to choose the right one for your needs.
Do wireless charging pads work with any phone or device?
It won’t be shocking to hear that your smartphone choice influences your choice in a wireless charger. Only iPhone owners will need to consider Apple Watch (https://www.engadget.com/apple-watch-series-7-review-all-about-the-screen-130036181.html) compatibility. Likewise, you’ll need an Android phone if you expect to power a Galaxy Watch (https://www.engadget.com/samsung-galaxy-watch-4-classic-review-body-fat-scan-wear-os-120026149.html). Buy an iPhone 12 or newer and you can attach your phone magnetically using MagSafe, while the latest Android phones often have some form of fast wireless charging.
However, it’s not simply a question of getting the fastest charger. You should consider what you might buy in the future. Don’t buy a two-device charger if you have an iPhone and AirPods, but have been eyeing an Apple Watch. And if you think you might switch to an Android cell phone (or vice versa), you’ll want to get something more generic that doesn’t lock you into any one ecosystem.
Some chargers include cradles, trays and other features that are heavily optimized for particular products, and might not even account for year-to-year changes. Some vertical stands are too tall for small phones like the iPhone 13 mini (https://www.engadget.com/apple-iphone-13-and-13-mini-review-130035113.html), for instance. While you can never completely guarantee that next year’s phone or watch will work, it’s worth buying something more likely to last.
Having said all this, don’t be afraid to get a charger with vendor-specific features if you’re fiercely loyal to one brand. Apple isn’t expected to ditch MagSafe any time soon, and Samsung will likely keep making Galaxy Watches for a while to come.
How to pick the best wireless charging pad for your needs: Location, location, location
Sebastian Bednarek on Unsplash
Even without a charging cable to worry about, you’re probably buying a multi-device wireless charger with one location in mind. It might sit on your nightstand or on your desk. Not everyone buys a charger just for themselves, though; you might want to use one as a shared station for you and a partner.
If the charger will sit on your nightstand, you’ll likely want a compact, stable unit that won’t swallow all your free space or tumble to the floor (and if it does fall, one with enough durability to survive). You may also prefer a lay-flat phone pad so your screen is less likely to keep you awake. The Apple Watch and some other smartwatches can double as tiny alarm clocks, so you might want a vertical charging option for any wristwear.
At a desk, however, you may want a vertical phone stand so you can check notifications. Will the charger sit on a low table? Horizontal charger pads may make it easier to grab your devices in a hurry. Travel chargers should fold up or otherwise protect the pads while they’re in your bag. And, yes, aesthetics count. You may want something pretty if it’s likely to sit in a posh room where guests will see it.
If it’s a shared charging station, you’ll want something with multiple generic surfaces, and you’ll probably have to forgo charging more than one watch at a time. In those cases, consider the handful of 4-in-1 wireless chargers on the market, or models with USB ports.
Wireless charging speed and performance
It’s no secret that wireless charging is typically slower than wired, and powering multiple devices adds a new wrinkle. As these chargers often have to support a wide range of hardware, you’ll have to forget about the fastest, device-specific options from brands like Google, OnePlus and Samsung.
That’s not to say these will be slow, but there just isn’t much separating them on the charging speed front. As a general rule, the quickest multi-device chargers tend to top out at 15W for phones. And you’ll need an Apple MagSafe charger if you want to get that full 15W on an iPhone.
It’s rare that you’ll find a truly slow example, mind you. Even some of the most affordable options we’ve seen will recharge your phone at a reasonable 7.5W or 10W, and 5W for other devices is more than enough. If you’re only docking overnight or while you work, speed won’t make a huge difference. Just be sure that whatever you buy is powerful enough for a phone in a case. It’s also worth noting that fast charging for other devices is rare, although you’ll occasionally find speedier options for the Apple Watch Series 7.
Quality, box contents and small details
Melvin Thambi on Unsplash
The difference between a good charger and a great one often boils down to little details. You won’t always need to pay extra to get those, but a larger outlay may be worthwhile to avoid frustrations for years to come.
A textured surface like rubberized plastic or fabric will reduce the chances your expensive gadgets will slide off their charging points. The base should have enough grip and weight that the charger won’t easily shift out of place. Any floating or vertical stands should be sturdy — steer clear if there’s any wobble.
You’ll also want to make a note of what’s included in the box. Some chargers don’t ship with power adapters, and we've seen numerous models whose Apple Watch “stands” are merely holders for your existing charging puck.
Then there are helpful touches like status lights for confirming correct placement, although you’ll want to make sure they dim or shut off after a certain amount of time. And while it’s still true that cradles and trays can limit compatibility, you do want your devices to stay where you put them. Shelves and lips can prevent your phone or watch from sliding. Oh, and be wary of floating smartwatch mounts, as heavier timepieces might sag.
This article originally appeared on Engadget at https://www.engadget.com/best-multi-device-wireless-charging-pads-120557582.html?src=rss
-
@ b12b632c:d9e1ff79
2023-07-20 20:12:39Self-hosting web applications quickly brings the need to deal with the HTTPS protocol and SSL certificates. The days when web applications were published over port 80/TCP without any encryption are totally over. Now we have Let's Encrypt and other free certificate authorities that let us run web applications with, at least, the basic minimum security required.
The second really useful part of web self-hosting is reverse proxying.
It's possible to have multiple web applications accessible through HTTPS, but since they can't all share the same port (spoiler: with a reverse proxy, they can) we are forced to use ugly URLs like https://mybeautifudomain.tld:8443.
This is where Nginx Proxy Manager (NPM) comes to help us.
NPM, acting as a gateway, will listen on the HTTPS port 443 and, based on the subdomain you want to reach, redirect the traffic to the different backend ports declared in NPM. NPM will also request HTTPS certificates for you and let you know when a certificate is about to expire — really useful.
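Under the hood this is plain Nginx reverse proxying. Conceptually, the server block NPM generates for each proxy host looks roughly like this (a simplified sketch — the hostnames, backend port and certificate paths are placeholders, not NPM's actual generated config):

```nginx
# Simplified sketch of what one NPM "proxy host" amounts to.
# Names, ports and certificate paths below are illustrative only.
server {
    listen 443 ssl;
    server_name app1.yourdomain.tld;

    # Certificates obtained through Let's Encrypt
    ssl_certificate     /etc/letsencrypt/live/app1/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app1/privkey.pem;

    location / {
        # Forward traffic to the backend port declared for this host
        proxy_pass http://127.0.0.1:8080;
    }
}
```

NPM builds and maintains one of these per subdomain for you, which is why you never have to touch an Nginx config file by hand.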
We'll now install NPM with docker compose (v2) and you'll see, it's very easy.
You can find the official NPM setup instructions here.
But before that, we absolutely need to do one thing. Connect to the registrar where you bought your domain name and go to the DNS zone section. You have to create an A record pointing to your VPS IP. That will allow NPM to request SSL certificates for your domain and subdomains.
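For example, assuming your domain is yourdomain.tld and your VPS's public IP is 203.0.113.10 (both placeholders — use your own values), the entries in your DNS zone could look like this:

```
; Hypothetical zone entries — replace names and IP with your own
yourdomain.tld.        3600  IN  A  203.0.113.10
admin.yourdomain.tld.  3600  IN  A  203.0.113.10
```

A wildcard record (*.yourdomain.tld) also works if you plan to add many subdomains later.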
Create a new folder for the NPM docker stack :
mkdir npm-stack && cd npm-stack
Create a new docker-compose.yml :
nano docker-compose.yml
Paste this content into it (CTRL + X ; Y & ENTER to save/quit) :
```
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host_port>:<container_port>
      - '80:80'   # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81'   # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP
    # Uncomment the next line if you uncomment anything in the environment section
    # environment:
      # Uncomment this if you want to change the location of
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - ./nginxproxymanager/data:/data
      - ./nginxproxymanager/letsencrypt:/etc/letsencrypt
```
You won't believe it, but that's it — the NPM docker compose configuration is done.
To start Nginx Proxy Manager with docker compose, you just have to :
docker compose up -d
You'll see :
```
user@vps:~/tutorials/npm-stack$ docker compose up -d
[+] Running 2/2
 ✔ Network npm-stack_default  Created
 ✔ Container npm-stack-app-1  Started
```
You can check if NPM container is started by doing this command :
docker ps
You'll see :
```
user@vps:~/tutorials/npm-stack$ docker ps
CONTAINER ID   IMAGE                             COMMAND   CREATED              STATUS              PORTS                                                                                  NAMES
7bc5ea8ac9c8   jc21/nginx-proxy-manager:latest   "/init"   About a minute ago   Up About a minute   0.0.0.0:80-81->80-81/tcp, :::80-81->80-81/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   npm-stack-app-1
```
If the command shows "Up X minutes" for npm-stack-app-1, you're good to go! You can access the NPM admin UI by going to http://YourIPAddress:81. You should see :
The default NPM login/password is admin@example.com / changeme. If the login succeeds, you should see a popup asking you to edit your user by changing your email address :
And your password :
Click on "Save" to finish the login. To verify that NPM is able to request SSL certificates for you, first create a subdomain for the NPM admin UI: click on "Hosts" then "Proxy Hosts" :
Followed by "Add Proxy Host"
If you want to access the NPM admin UI with https://admin.yourdomain.tld, please set all the parameters like this (I won't explain each parameter) :
Details tab :
SSL tab :
And click on "Save".
NPM will request the SSL certificate "admin.yourdomain.tld" for you.
If you get an "Internal Error" message, it's probably because your domain's DNS zone is not configured with an A record pointing to your VPS IP.
Otherwise you should see (my domain is hidden) :
Clicking on the "Source" URL link "admin.yourdomain.tld" will open a pop-up and, surprise, you should see the NPM admin UI with the URL "https://admin.yourdomain.tld" !
If yes, bravo, everything is OK ! 🎇
You now know how to point a subdomain of your domain at a containerized web app. In the next blog post, you'll see how to set up a Nostr relay with NPM ;)
Voilààààà
See you soon in another Fractalized story!
-
@ c4428155:a36de510
2024-04-04 09:00:00As Elections Loom, Congressional Maps Challenged as Discriminatory Will Remain in Place
by Marilyn W. Thompson (https://www.propublica.org/people/marilyn-thompson)
ProPublica is a nonprofit newsroom that investigates abuses of power. Sign up to receive our biggest stories (https://www.propublica.org/newsletters/the-big-story?source=54G&placement=top-note®ion=national) as soon as they’re published.
With the Republicans holding just a two-vote majority in the House of Representatives, voters will go to the polls in November in at least two congressional districts that have been challenged as discriminatory against people of color.
After months of delays and appeals, courts have decided in the last two weeks that the maps in South Carolina and Florida will stand, giving Republican incumbents an advantage.
Last month, the U.S. Supreme Court declined to take action on South Carolina’s 1st Congressional District. In January 2023, a three-judge federal panel had declared it an illegal racial gerrymander that must be redrawn before another election was held. In Florida, the congressional map has faced long-running discrimination lawsuits in both state and federal courts, with one state judge ruling that a district near Jacksonville disadvantaged voters of color. A higher court overturned that judgment, but an appeal from voting rights and civil rights groups is still pending before the state Supreme Court, which has said it could be months before it rules.
A decision about another contested district in Utah is pending with the state Supreme Court and seems unlikely to be resolved before the elections, according to Mark Gaber of the Campaign Legal Center, who represents plaintiffs in a partisan gerrymandering lawsuit.
Put in place in 2021 after the last federal census, the controversial maps were used in multiple elections during the 2022 election cycle.
“The long, extended delays are a real problem, for voting rights and particularly for Black voters,” Gaber said.
The cases illustrate how difficult it is to reverse gerrymandered voting maps. Even when lower courts find election maps illegal and give state legislatures months to make corrections, appeals and other delaying tactics can run out the clock as elections near.
Federal courts have been reluctant to make mapping changes too close to elections because of a vague legal idea known as the Purcell principle, based on a 2006 court case from Arizona that found that voters may be confused by late changes in polling places or election procedures.
The U.S. Supreme Court cited Purcell in 2022 when it left an illegal congressional map in place in Alabama for midterm elections while it considered a Republican appeal. Black voters cast their ballots under a discriminatory map, and when the Supreme Court finally decided the case in 2023, it reaffirmed that Alabama’s map violated Section 2 of the Voting Rights Act and must be redrawn. A new map is now in place for 2024, which could result in the election of a second Democratic representative for the state in November.
The Supreme Court made a similar call in 2022 in a Louisiana redistricting case after a federal court struck down the state’s congressional map. Voters cast ballots in 2022 under the challenged map. Since then, the state Legislature has redrawn the map and created a second majority-Black district that could help Democrats gain another seat in Congress.
The exact cutoff for applying the Purcell principle has not been defined, but conservative Supreme Court Justice Brett Kavanaugh, who has cited it in his opinions, has said the principle reflects a “bedrock tenet of election law.”
The delayed rulings and actions in Alabama and Louisiana and a ruling this week in Washington state (https://www.propublica.org/article/republicans-hatched-secret-assault-voting-rights-act-washington-state) have favored Democrats. On Tuesday, the Supreme Court declined to stop a new state legislative map from going into effect in Washington, where a lower court had found discrimination against Latinos in the Yakima Valley. Republicans had filed an emergency appeal since the new map disrupts four legislative seats currently held by the GOP.
In South Carolina in early 2023, a three-judge federal panel unanimously found that the GOP-controlled state Legislature drew an illegal racial gerrymander in the 1st District near Charleston, discriminating against 30,000 Black residents who were moved out of the district.
Republican lawmakers have acknowledged they wanted to maintain firm GOP control of a swing district, currently held by Rep. Nancy Mace. But they have denied discriminatory intent. ProPublica reported that Democratic Rep. James Clyburn (https://www.propublica.org/article/how-rep-james-clyburn-protected-his-district-at-a-cost-to-black-democrats), the state’s most influential Black elected official, gave detailed confidential input through one of his aides during the creation of the state’s maps.
Clyburn offered Republicans a draft map that included his recommendations for how to add voters to his largely rural 6th District, which had lost a significant Black population, and move unpredictable pockets of white voters out of his district.
On NBC’s “Meet the Press” on Sunday, Clyburn denied playing a significant role in a Republican gerrymander.
“When someone picks up the phone and asks you (https://www.nbcnews.com/meet-the-press/meet-press-march-31-2024-n1309420), ‘What are your suggestions as we’re about to get these lines drawn?’ I offered my suggestions,” Clyburn said.
Adam Kincaid, the director of the National Republican Redistricting Trust, said Clyburn’s comments suggest he is “trying to get in front of” a Supreme Court decision that will uphold the Legislature’s maps. “I think Mr. Clyburn believes South Carolina is going to ultimately win,” he said.
The case is now at the Supreme Court. The court heard oral arguments on Oct. 11, then went silent as South Carolina’s filing deadline for June primary elections loomed.
In recent months, lawyers for GOP legislators asked the Supreme Court to abide by the Purcell principle and allow the challenged map to stand for 2024. Lawyers for the South Carolina NAACP argued there was plenty of time to implement a corrective map.
After waiting for the Supreme Court to act, the same lower court that found the district discriminatory ruled that the map would have to remain in place after all, saying it wanted to avoid voter confusion. “The ideal must bend to the practical,” the court said.
The South Carolina case shows how the Supreme Court’s “inaction can be as consequential as an adverse action,” said Wilfred Codrington III, an associate professor at Brooklyn Law School who has written on the Purcell principle and its impact on voting rights.
Civil rights advocates condemned the court’s unwillingness to make a timely decision, which by default gives a competitive election advantage to Mace. “No one believes they were just too busy to rule in time. It’s an intentional partisan maneuver,” tweeted Lynn Teague, vice president of the League of Women Voters of South Carolina, which has been active in the redistricting case.
In the Florida case, a federal three-judge panel on March 27 upheld an election map pushed through the Legislature by Gov. Ron DeSantis. The decision allows elections to proceed this year while a separate state case awaits resolution.
The federal panel said plaintiffs failed to prove that the state Legislature was motivated by race when it approved a DeSantis-engineered plan moving Black voters in the 5th District into four majority-white districts. The 5th District seat is currently held by Republican Rep. John Rutherford, who has no Democratic opposition.
DeSantis’ redistricting plan (https://www.propublica.org/article/ron-desantis-florida-redistricting-map-scheme) has been mired in controversy since 2022, when he vetoed the Republican Legislature’s plan and redrew the map with advice from national Republican consultants. A key feature of the DeSantis plan was redrawing the majority-Black 5th District near Jacksonville.
A state judge previously struck down his map as a violation of the constitution, which provides additional protections for voters of color. An appeals court overturned the judge’s ruling, but the Florida Supreme Court has agreed to hear the case.
The Utah case involves a challenge to the state’s Republican Legislature for repealing a voter-passed initiative setting up an independent redistricting commission and then passing a partisan gerrymander that splits up communities around Salt Lake City. Utah has four congressional seats, all held by Republicans.
“We’re still waiting to hear from the court whether the claims that we raised are viable, and we're hopeful,” Gaber said. “But I do not think there’s a likely chance of a decision that would affect this year’s elections.”
Kincaid, who coordinates national Republican redistricting strategy, said it’s unclear whether court decisions to use contested districts will allow the GOP to maintain its narrow control of the House.
“Democrats and their liberal allies have spent hundreds of millions of dollars to try to sue their way into congressional and legislative majorities,” Kincaid said. As for when the House majority is decided in November, he said, “I would rather it be us than them.”
https://www.propublica.org/article/elections-district-maps-deemed-discriminatory-south-carolina-florida
-
@ 9ecbb0e7:06ab7c09
2023-08-08 02:53:15As her family reports and documents, three-year-old Leadi Kataleya Naranjo, daughter of prisoner of conscience Idael Naranjo Pérez, has been summoned for tomorrow by the State Security of #Cuba.
The summons leaves no room for doubt: it lists her two given names, her surname and the address of her paternal grandmother, where the summons was delivered. The grandmother has also been summoned and threatened with a charge of disobedience.
If the girl does not appear, they threaten to charge her too with "disobedience." The grandmother grew worried and told them the child's age, and even so they delivered the summons and left. They "had to appear." Nothing more. Heartless they are, but... what kind of raving lunatics are in charge in #Cuba? What do they hope to achieve by summoning and threatening a 3-year-old girl? That girl cannot go to that ignominious summons. To hear what? To answer what? The whole thing is DISGUSTING. The family does not want to take the girl, and they need all the support possible so that this madness is clarified and stopped. Sadly, we have seen things as bad or worse: children threatened, forcibly separated from their parents, minors tortured... Cuba's regime is utterly disgusting; repugnance and degradation ooze from every side of it. Everything happening in Cuba cries out to heaven, and these barbarities are dystopian. KEEP CHILDREN OUT OF THIS, YOU SICK, HEARTLESS PEOPLE! They already cause enough harm with the 8-year law that leaves thousands of children orphaned, forcibly separated from their mothers to punish them for refusing to work in Cuba's slave medical missions abroad, as the Committee on the Rights of the Child ruled. ENOUGH!
11jul #11j #11jCuba #CubaEsUnaDictadura #UNICEF @uniceflac @unicefenespanol
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Cultura Inglesa and learning outside of school
In 2005 Cultura Inglesa rated my English proficiency at level 2, on a scale of 1 to 14 or something like that. So I would need 6 years of classes with them to get good. 2 years later, without taking any classes or any kind of intensive training, I was able to understand technical texts in English without any difficulty. 2 more years and I could understand anything and express myself with reasonable quality.
All of this just to document one more example, which might otherwise go unnoticed, of school-type learning that happened outside of a school.
-
@ ee7d2dbe:4a5410b0
2024-04-04 12:59:04How to Optimize Your App Development Budget Using a Cost Calculator?
With the increasing usage of smartphones, there is no doubt mobile apps have become a crucial part of everyone’s life. People spend a large share of their day — approximately 5-6 hours — on their smartphones, excluding chatting. With more than 7.26 billion active smartphone users in 2022, there is no reason to ignore them, given how well-liked they are and the amount of profit they might bring to you. You can expect to spend anything from $2,000 to $50,000 on an app development project. But before all this, you need to know how much it costs to create an app. Most people with brilliant app ideas seek an answer to that question. This article takes an overall look at app development cost. We will cover the benefits of using an app cost calculator, the factors affecting app development cost, some popular app cost calculators and our final thoughts on how to optimize the cost of your app development.
Benefits of Using an App Cost Calculator
1- Accurate Cost Estimation: One of the biggest advantages of an app cost calculator is that it gives you an accurate estimate of the cost of developing an app.
2- Budget Planning: Once you know the estimated cost of developing an app, you can plan your budget and adjust your requirements accordingly.
3- Time Management: A mobile app development cost calculator can also help you plan your app development timeline effectively.
4- Transparency: Another benefit of using an app cost calculator is the transparency it brings, helping you understand the development process better.
5- Cost Optimization: With an estimate for your project in hand, you can use it as leverage in negotiations with app development firms.
6- No Charges: A further considerable benefit of an online app cost calculator is that it's free, so there is no harm in trying one out for your next minimum viable product (MVP).
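Under the hood, most of these calculators apply a simple model: estimated hours per feature multiplied by an hourly rate. Here is a rough illustration of the idea — every feature, rate and hour figure below is a hypothetical assumption for the sake of example, not any vendor's actual formula:

```python
# Hypothetical cost model: estimated hours per feature times an hourly rate.
# All numbers below are illustrative assumptions, not real vendor data.
HOURLY_RATE = 40  # USD/hour, e.g. a typical offshore agency rate

FEATURE_HOURS = {
    "user_auth": 40,
    "push_notifications": 24,
    "payments": 60,
    "admin_dashboard": 80,
}

def estimate_cost(features, platforms=1, design_complexity=1.0):
    """Return a rough USD estimate for the selected features.

    platforms: number of platforms targeted (e.g. 2 for iOS + Android).
    design_complexity: multiplier for custom design work (1.0 = standard).
    """
    base_hours = sum(FEATURE_HOURS[f] for f in features)
    total_hours = base_hours * platforms * design_complexity
    return total_hours * HOURLY_RATE

# 100 base hours x 2 platforms x $40/hour
print(estimate_cost(["user_auth", "payments"], platforms=2))  # → 8000.0
```

Real calculators weigh many more parameters — platform count, design complexity, backend needs — but the principle is the same: the estimate is only as good as the assumptions behind it.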
Some Famous App Development Cost Calculators
Agicent App Cost Calculator: This online tool, consisting of 9 steps, helps individuals and businesses get an estimate of the cost of developing a mobile app. From step 1 through step 9 you will be asked to input the various parameters that matter for calculating the app cost. With the Agicent app cost calculator, you will get an app cost estimate quickly and easily.
Buildfire- Step-by-step Calculator: With the Buildfire cost calculator you will be able to estimate the budget for setting the app, developing it and its monthly maintenance.
VenturePact- Get Offshore Cost: VenturePact is an app cost calculator that gives you rough estimates for different geographical locations such as the USA, Europe and Asia. We would also recommend consulting the offshore agency you want to work with to get an accurate estimate.
Digitalya-Know your spending limit: Aside from estimating development costs, Digitalya also tells you what price to negotiate with your PWA team.
Estimate My App-Simple and Easy to Use: This one is for first-timers who want to build an app. Its simple and intuitive design will quickly give you a rough idea of the app cost.
Final Thoughts on Optimizing App Development Cost:
With technology constantly evolving and IT service costs rising, budgeting a mobile app can be a tedious job. Optimizing app development cost is a crucial factor in achieving success in the highly competitive mobile app market. By using an app development cost calculator you can estimate the cost of your app and identify the areas where you can save money. Thus, a cost calculator enables efficient budget planning and high-quality app development at a reasonable cost.
Source: https://www.agicent.com/app-development-cost-calculator
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Bluesky is a scam
Bluesky advertises itself as an open network, they say people won't lose followers or their identity, they advertise themselves as a protocol ("atproto") and because of that they are tricking a lot of people into using them. These three claims are false.
protocolness
Bluesky is a company. "atproto" is the protocol. Supposedly they are two different things, right? Bluesky just releases software that implements the protocol, but others can also do that, it's open!
And yet, the protocol has an official webpage with a waitlist and a private beta? Why is the protocol advertised as a company product? Because it is. The "protocol" is just a description of whatever the Bluesky app and servers do, it can and does change anytime the Bluesky developers decide they want to change it, and it will keep changing for as long as Bluesky apps and servers control the biggest part of the network.
Oh, so there is the possibility of other players stepping in and then it becomes an actual interoperable open protocol? Yes, but what is the likelihood of that happening? It is very low. No serious competitor is likely to step in and build serious apps using a protocol that is directly controlled by Bluesky. All we will ever see are small "community" apps made by users and small satellite small businesses -- not unlike the people and companies that write plugins, addons and alternative clients for popular third-party centralized platforms.
And last, even if it happens that someone makes an app so good that it displaces the canonical official Bluesky app, then that company may overtake the protocol itself -- not because they're evil, but because there is no way it cannot be like this.
identity
According to their own documentation, the Bluesky people were looking for an identity system that provided global ids, key rotation and human-readable names.
They must have realized that such properties are not possible in an open and decentralized system, but instead of accepting a tradeoff they decided they wanted all their desired features and threw away the "decentralized" part, quite literally and explicitly (although they make sure to hide that piece in the middle of a bunch of code and text that very few will read).
The "DID Placeholder" method they decided to use for their global identities is nothing more than a normal old boring trusted server controlled by Bluesky that keeps track of who is who and can, at all times, decide to ban a person and deprive them of their identity (which they dismissively call a "denial of service attack").
They decided to adopt this method as a placeholder until someone else doesn't invent the impossible alternative that would provide all their desired properties in a decentralized manner -- which is nothing more than a very good excuse: "yes, it's not great now, but it will improve!".
openness
Months after launching their product with an aura of decentralization and openness and getting a bunch of people inside that believed, falsely, they were joining an actually open network, Bluesky has decided to publish a part of their idea of how other people will be able to join their open network.
When I first saw their app and how it was very prominently displaying things like follower counts, like counts and other things that are typical of centralized networks and can't be reliable or exact on truly open networks (like Nostr), I asked myself how they were going to do that once they became an open "federated" network as they were expected to be.
Turns out their decentralization plan is to just allow you, as a writer, to host your own posts on "personal data stores", but not really have any control over the distribution of the posts. All posts go through the Bluesky central server, called BGS, and they decide what to do with it. And you, as a reader, don't have any control over what you're reading either; all you can do is connect to the BGS and ask for posts. If the BGS decides to ban, shadow ban, reorder, miscount, hide, deprioritize, trick or maybe even to serve ads, then you are out of luck.
Oh, but anyone can run their own BGS!, they will say. Even in their own blog post announcing the architecture they assert that "it’s a fairly resource-demanding service" and "there may be a few large full-network providers". But I fail to see why even one more network provider would exist, if Bluesky is already doing that job, and considering the fact there are very few incentives for anyone to switch providers -- because the app does not seem to be at all made to talk to multiple providers, one would have to stop using the reliable, fast and beefy official BGS and start using some half-baked alternative and risk losing access to things.
When asked about the possibility of switching, one of Bluesky overlords said: "it would look something like this: bluesky has gone evil. there's a new alternative called freesky that people are rushing to. I'm switching to freesky".
The quote is very naïve and sounds like something that could be said about Twitter itself: "if Twitter is evil you can just run your own social network". Both are fallacies because they ignore the network-effect and the fact that people will never fully agree that something is "evil". In fact these two are the fundamental reasons why -- for social networks specifically (and not for other things like commerce) -- we need truly open protocols with no owners and no committees.
-
@ 044b20cf:89e58620
2024-04-04 07:01:27How to choose the best TV for gaming right now
These days, the best gaming TVs are really no different from the best TVs you can buy in general. That said, there are a few key features to keep in mind when looking for a set to get the most out of your PlayStation 5 or Xbox Series X/S (https://www.engadget.com/best-gaming-console-140057674.html). No, you don’t need a fancy TV to simply enjoy a video game, but a good screen can only make the experience more enjoyable. If you’re unsure where to start, we’ve laid out some helpful advice for buying a good gaming TV and noted a few well-reviewed examples you can buy today, at various price points.
What to look for in a gaming TV
Whether you use it for gaming or not, all good TVs are built on the same foundations. You want a 4K resolution, sufficient brightness, high contrast ratios with deep and uniform black tones, colors that find the right balance between accuracy and saturation, and wide viewing angles. For video games specifically, you want a TV with minimal input lag and fast motion response, with no blur or other unwanted artifacts behind quick-moving objects. Of course, finding a set that has all of these gaming features and fits into your budget can be tricky.
For now, a top OLED TV will offer the best picture quality for gaming or otherwise. But good OLED TVs usually cost more than their LCD counterparts, and some OLED sets may not get bright enough for those who have their TV set in a particularly well-lit room. If you opt for an LCD TV, an advanced backlight with mini LEDs and effective full-array local dimming (https://www.howtogeek.com/742183/what-is-full-array-local-dimming/) will usually improve contrast and lighting detail, while a quantum dot filter can enhance colors.
One thing you don’t need to worry about is 8K support. Although the PS5 and Xbox Series X are technically capable of outputting 8K video, very few games are made for that resolution, and 8K’s practical benefits are extremely minimal (https://www.rtings.com/tv/reviews/by-size/size-to-distance-relationship) unless you plan on sitting unreasonably close to a massive TV. The few 8K TVs on the market are also very expensive (https://www.engadget.com/samsungs-2023-8k-mini-led-tv-starts-at-5000-163559445.html).
All that said, there are a few terms you should particularly look out for when buying a TV for your new game console or high-end graphics card.
HDMI 2.1
To get the most out of a PlayStation 5 or Xbox Series X/S, your TV should have full HDMI 2.1 support. This is the latest major update to the HDMI spec, enabling a higher maximum bandwidth — 48 gigabits per second, up from HDMI 2.0’s 18 Gbps — and a handful of features that are beneficial for gaming performance specifically. These include variable refresh rate (VRR) and automatic low latency mode (ALLM), which we detail further below.
Beyond that, perhaps the chief perk of HDMI 2.1 is its ability to transmit sharp 4K video up to a 120Hz refresh rate with modern consoles like the PS5 and Xbox Series X, or up to 144Hz with a powerful gaming PC. Not every PS5 (https://www.psu.com/news/all-ps5-games-that-support-120-fps-the-smoothest-games-on-playstation-5/) or Xbox Series X/S (https://www.techradar.com/news/ps5-and-xbox-series-x-games-with-120fps-support) game supports frame rates that high — and some only do at lower resolutions — but those that do will look and feel especially fluid in motion. HDMI 2.1 also includes support for Enhanced Audio Return Channel (https://www.hdmi.org/spec21sub/enhancedaudioreturnchannel) (eARC), which allows you to pass higher-quality lossless audio from a source device connected to the TV to a compatible soundbar or receiver.
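To see why the extra bandwidth matters, here's a rough back-of-the-envelope calculation (raw pixel data only; the real HDMI link also carries blanking intervals and encoding overhead, so actual requirements run somewhat higher):

```python
# Back-of-the-envelope pixel data rate -- an illustrative simplification,
# not the exact HDMI link math (which adds blanking intervals and
# encoding overhead on top of this).

def video_bandwidth_gbps(width, height, fps, bits_per_channel=10, channels=3):
    """Raw pixel data rate in gigabits per second."""
    return width * height * fps * bits_per_channel * channels / 1e9

# 4K at 60 Hz fits within HDMI 2.0's 18 Gbps budget...
print(round(video_bandwidth_gbps(3840, 2160, 60), 1))   # → 14.9
# ...but 4K at 120 Hz does not, hence HDMI 2.1's 48 Gbps ceiling.
print(round(video_bandwidth_gbps(3840, 2160, 120), 1))  # → 29.9
```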
The more full HDMI 2.1 ports your TV has, the better. “Full” is the key word there. As reported by TFT Central (https://tftcentral.co.uk/articles/when-hdmi-2-1-isnt-hdmi-2-1), because HDMI 2.1 is backwards compatible with HDMI 2.0, TV and monitor manufacturers have been allowed to brand HDMI ports as “HDMI 2.1” even if they lack full (or any) support for the spec’s upgraded features. We recommend a few TVs below that have true HDMI 2.1 ports, but if you’re buying a new TV for gaming, make sure your chosen set isn’t trying to hide any capabilities you may consider essential.
HDR — High Dynamic Range
HDR refers to a TV's ability to display a wider range between the darkest and brightest parts of a picture. This broader range can bring out details that would otherwise be missing on a standard dynamic range (SDR) TV, in both the very dark and, especially, the very bright areas of an image. HDR typically comes with an improvement to color reproduction as well, displaying a larger palette of more vibrant colors that brings content closer to its creator’s original vision.
To get an HDR picture, you need both content that is mastered to take advantage of the tech and a TV capable of displaying that content. HDR also comes in a variety of formats, which are generally split between those that utilize static metadata (e.g., HDR10) and those that utilize dynamic metadata (e.g., HDR10+, Dolby Vision). In short, the latter allows a TV to optimize its brightness and colors on a per-scene or even per-frame basis, while the former uses one set of optimized settings for the entirety of the given content. Support for these formats can differ depending on the TV, content and game console you use. The Xbox Series X and S, for example, support Dolby Vision for gaming (https://www.dolby.com/experience/xbox/#gref), while the PS5 does not.
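The static-vs-dynamic distinction can be sketched in a few lines (a toy linear model for illustration only — real HDR tone mapping uses perceptual curves, and the 800-nit display peak is a made-up figure):

```python
# Toy model of static vs. dynamic HDR metadata. Real HDR10/HDR10+/Dolby
# Vision tone mapping uses perceptual transfer curves, not a linear scale;
# this only illustrates the per-title vs. per-scene difference.

DISPLAY_PEAK_NITS = 800  # hypothetical TV peak brightness

def static_scale(scenes_max_nits):
    # Static metadata (e.g. HDR10): one scale factor for the whole title,
    # driven by its single brightest scene.
    title_max = max(scenes_max_nits)
    return [min(1.0, DISPLAY_PEAK_NITS / title_max)] * len(scenes_max_nits)

def dynamic_scale(scenes_max_nits):
    # Dynamic metadata (e.g. HDR10+, Dolby Vision): one scale per scene.
    return [min(1.0, DISPLAY_PEAK_NITS / m) for m in scenes_max_nits]

scenes = [400, 1000, 4000]     # per-scene peak brightness in nits
print(static_scale(scenes))    # → [0.2, 0.2, 0.2] -- every scene dimmed for the 4000-nit peak
print(dynamic_scale(scenes))   # → [1.0, 0.8, 0.2] -- dim scenes keep full brightness
```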
The good news is that most TVs you’d buy in 2023 are HDR-ready in some fashion, even on the budget end of the market. The catch is that some TVs are much better at getting the most out of HDR than others. The same goes for actual content mastered in HDR. With video games in particular, there aren’t quite as many titles designed to take advantage of HDR as there are movies (though the number is growing all the time), and the variance in HDR quality tends to be wider.
HGiG — HDR Gaming Interest Group
HGiG stands for the HDR Gaming Interest Group (https://www.hgig.org/). Sony and Microsoft are both members, as are many TV makers and game developers. What this means is that, ideally, all the groups communicate information so that you can start up a new game on a console or PC and have it automatically recognize your display. Once that happens, the game can tune its internal settings to match that display's capabilities and give you the best picture quality possible, without losing details in the brightest or darkest areas of the screen. For example, daylight at the end of a dark tunnel may portray a brightly lit environment instead of looking like an overexposed white blob.
This is a good thing, but the reality is a bit more complicated. Not all TVs highlight HGiG compatibility in their settings menu, while only some PlayStation and Xbox games recognize and follow the guidelines. If an HGiG option is listed in your TV's tone mapping settings, you should turn it on prior to running the console's HDR settings. Then, if you're playing a game that supports HDR and HGiG, you should be in good shape without having to adjust the various luminance levels again. Still, how all of this looks to you might differ depending on your TV and the game you’re playing. Owners of certain LG OLED TVs, for instance, may prefer their TV’s Dynamic Tone Mapping setting. Use whatever settings you think look best.
ALLM — Auto Low Latency Mode
ALLM (https://www.hdmi.org/spec21sub/autolowlatencymode) allows a source (like your PS5 or Xbox) to tell the display to switch into a picture mode that reduces lag between receiving each frame of an image and displaying it on the TV. This cuts out additional processing that could be the milliseconds of difference between landing a precise input or not. A good modern TV can automatically switch to game mode, then back out when you'd rather watch a movie or TV show.
VRR — Variable Refresh Rate
VRR (https://www.hdmi.org/spec21sub/variablerefreshrate) will sound familiar if you're a PC gamer. Most players have experienced slowdown, screen tearing or stuttering as a system struggles to render each frame at the target speed, which is most commonly 30 or 60 fps on a TV. With VRR, everything stays in sync: Your display won't show the next frame until it's ready, which can make things feel smoother and more responsive, even if the system fails to deliver on its target frame rate.
There are a few different implementations of VRR available, including Nvidia’s G-Sync (https://shopping.yahoo.com/rdlw?merchantId=f52348f6-2304-415e-b16c-1fb252794066&siteId=us-engadget&pageId=1p-autolink&featureId=text-link&merchantName=NVIDIA&custData=eyJzb3VyY2VOYW1lIjoiV2ViLURlc2t0b3AtVmVyaXpvbiIsInN0b3JlSWQiOiJmNTIzNDhmNi0yMzA0LTQxNWUtYjE2Yy0xZmIyNTI3OTQwNjYiLCJsYW5kaW5nVXJsIjoiaHR0cHM6Ly93d3cubnZpZGlhLmNvbS9lbi11cy9nZWZvcmNlL3Byb2R1Y3RzL2ctc3luYy1tb25pdG9ycy8iLCJjb250ZW50VXVpZCI6ImY1ODQzYTUwLTFjMTEtNDllZS1iZjg2LTJmYTVkNDIxOTQ0MCJ9&signature=AQAAAW-0GsAVE8eGkzSggX3VcDBHrBy1gmYQ4YocWtXb-8wc&gcReferrer=https%3A%2F%2Fwww.nvidia.com%2Fen-us%2Fgeforce%2Fproducts%2Fg-sync-monitors%2F), AMD’s FreeSync (https://www.amd.com/en/technologies/free-sync) and the HDMI Forum’s VRR spec, which is part of the full HDMI 2.1 standard. Both a TV and an input device need to support the same VRR tech for it to work, and different devices may only support VRR within a specific refresh rate window. On a 120Hz display, for instance, the PS5’s VRR only works between 48Hz and 120Hz.
As a reminder, the PS5 supports (https://www.engadget.com/ps5-variable-refresh-rate-vrr-release-date-193228613.html) HDMI Forum VRR, the Xbox Series X/S support HDMI Forum VRR and FreeSync, while gaming PCs may support G-Sync or FreeSync depending on whether they use a Nvidia or AMD graphics card. A great gaming TV supports all the big VRR formats, but missing, say, G-Sync, isn’t a killer if you only game on a PS5 or Xbox.
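A toy model shows what VRR buys you (hypothetical frame times; real panels also have refresh-rate floors and ceilings, like the PS5's 48Hz–120Hz window mentioned above):

```python
# Toy comparison of fixed-refresh vs. VRR frame presentation. Hypothetical
# frame times; real displays also enforce minimum and maximum refresh
# intervals, which this sketch ignores.
import math

def fixed_refresh(render_done_ms, refresh_ms=16.7):
    # Fixed refresh (~60 Hz): each frame waits for the next scan-out tick,
    # so a frame that just misses a tick is delayed a whole interval (stutter).
    return [round(math.ceil(t / refresh_ms) * refresh_ms, 1) for t in render_done_ms]

def vrr_refresh(render_done_ms):
    # VRR: the display scans out as soon as each frame is ready.
    return list(render_done_ms)

frames = [10, 30, 45]          # times (ms) at which frames finish rendering
print(fixed_refresh(frames))   # → [16.7, 33.4, 50.1]
print(vrr_refresh(frames))     # → [10, 30, 45]
```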
Good gaming TVs you can get right now
The likes of Samsung (https://www.engadget.com/samsungs-2024-tv-lineup-at-ces-2024-includes-ai-aplenty-and-a-glare-free-oled-model-030046720.html), LG (https://www.engadget.com/lg-reveals-new-oled-tvs-with-high-144hz-refresh-rates-ahead-of-ces-2024-164550931.html), TCL (https://www.engadget.com/tcl-came-to-ces-2024-with-a-115-inch-miniled-quantum-dot-tv-190047876.html), Vizio (https://www.engadget.com/vizio-just-announced-a-999-86-inch-4k-tv-160030764.html) and Hisense (https://www.hisense-usa.com/newsroom/hisense-unveils-new-uled-and-uled-x-lineups) have unveiled new TVs for 2024 but, as of this writing, we’ve only recently started to see (https://www.engadget.com/lgs-latest-oled-evo-tvs-start-at-1500-and-go-up-to-a-sky-high-25000-130001550.html) the first few of those (https://www.engadget.com/a-bunch-of-new-samsung-tvs-are-finally-available-for-preorder-from-8k-qled-models-to-77-inch-oleds-150036534.html) become available. Details for more new models should be confirmed in the coming weeks. If history is any indication, though, most of those sets are unlikely to be game-changing upgrades over the 2023 models still on sale today, and they'll cost more out of the gate. We'll have to see if there are exceptions, of course: Samsung’s new S95D OLED TV now uses anti-glare technology (https://www.engadget.com/samsungs-2024-tv-lineup-at-ces-2024-includes-ai-aplenty-and-a-glare-free-oled-model-030046720.html), for instance, while Hisense promises improved brightness across its new midrange sets. If you want the latest and greatest and are willing to pay more to get it, hold out for a little longer.
But the TV market is always in motion. Most of the best models from last year have steadily fallen in price since launch, and they should only drop further as manufacturers look to clear out inventory. This means that, in most cases, buying a good 2023 TV should continue to provide the most bang for your buck, at least for the near term. Those are the sets we’ve highlighted with our recommendations below. While we at Engadget do not formally review TVs, we’re confident in our picks after thoroughly researching user feedback and the consensus from other professional review sites we trust, such as Rtings (https://www.rtings.com/tv), Wirecutter (https://shopping.yahoo.com/rdlw?merchantId=c813ae39-7d58-41cb-ac66-ad830606ceef&siteId=us-engadget&pageId=1p-autolink&featureId=text-link&merchantName=The+New+York+Times&custData=eyJzb3VyY2VOYW1lIjoiV2ViLURlc2t0b3AtVmVyaXpvbiIsInN0b3JlSWQiOiJjODEzYWUzOS03ZDU4LTQxY2ItYWM2Ni1hZDgzMDYwNmNlZWYiLCJsYW5kaW5nVXJsIjoiaHR0cHM6Ly93d3cubnl0aW1lcy5jb20vd2lyZWN1dHRlci9lbGVjdHJvbmljcy90dnMvIiwiY29udGVudFV1aWQiOiJmNTg0M2E1MC0xYzExLTQ5ZWUtYmY4Ni0yZmE1ZDQyMTk0NDAifQ&signature=AQAAAe8RPikyUeDiRcg509D2TvK0Zu7BYDM4kvfoWtLMnU3c&gcReferrer=https%3A%2F%2Fwww.nytimes.com%2Fwirecutter%2Felectronics%2Ftvs%2F&uuid=ImumGWWsbnoDtgSn15790), Reviewed (https://reviewed.usatoday.com/televisions) and PCMag (https://www.pcmag.com/categories/tvs), among others.
Richard Lawler contributed to this report.
This article originally appeared on Engadget at https://www.engadget.com/best-tvs-for-gaming-180033983.html?src=rss
https://www.engadget.com/best-tvs-for-gaming-180033983.html?src=rss
-
@ fa0165a0:03397073
2023-07-24 10:19:27Below is an easy-to-read list of keyboard shortcuts and commands to navigate your Linux computer efficiently: (Note that some variations between systems may apply)
General Shortcuts:
- Open Terminal: Ctrl + Alt + T
- Close current application: Alt + F4
- Switch between open applications: Alt + Tab
- Logout from current session: Ctrl + Alt + Del

Navigating the File System:
- Open File Manager (Nautilus): Super (Windows key) + E
- Move back in directory: Alt + Left Arrow
- Move forward in directory: Alt + Right Arrow
- Go to Home directory: Ctrl + H
- Go to Desktop: Ctrl + D
- Open a folder or file: Enter
- Rename a file or folder: F2
- Copy selected item: Ctrl + C
- Cut selected item: Ctrl + X
- Paste copied/cut item: Ctrl + V
- Delete selected item: Delete
- Create a new folder: Ctrl + Shift + N

Navigating Applications:
- Switch between open windows of the same application: Alt + `
- Close the current window: Ctrl + W
- Minimize the current window: Ctrl + M
- Maximize/Restore the current window: Ctrl + Super + Up Arrow / Down Arrow

Navigating Web Browsers (e.g., Firefox, Chrome):
- Open a new tab: Ctrl + T
- Close the current tab: Ctrl + W
- Switch to the next tab: Ctrl + Tab
- Switch to the previous tab: Ctrl + Shift + Tab
- Open a link in a new tab: Ctrl + Left Click
- Go back in the browser history: Alt + Left Arrow
- Go forward in the browser history: Alt + Right Arrow

System Controls:
- Lock the screen: Ctrl + Alt + L
- Open the system menu (context menu): Menu key (or Right-click key) or Shift + F10
- Open the Run Command prompt: Alt + F2
These shortcuts may vary slightly depending on the Linux distribution and desktop environment you are using. Experiment with these shortcuts to navigate your Linux system faster and more efficiently without relying on the mouse.
Since websites are such an important interface for the information of today, I have here appended the list with some navigational hotkeys for web browsers (e.g., Firefox, Chrome) on Linux:
General Navigation:
- Scroll down: Spacebar
- Scroll up: Shift + Spacebar
- Scroll horizontally: Hold Shift and scroll with the mouse wheel or arrow keys
- Go to the top of the page: Home
- Go to the bottom of the page: End
- Refresh the page: F5 or Ctrl + R
- Stop loading the page: Esc

Link and Page Navigation:
- Move focus to the next link or interactive element: Tab
- Move focus to the previous link or interactive element: Shift + Tab
- Activate/follow a link or button: Enter
- Open link in a new tab: Ctrl + Enter (Cmd + Enter on macOS)
- Open link in a new background tab: Ctrl + Shift + Enter (Cmd + Shift + Enter on macOS)
- Open link in a new window: Shift + Enter
- Go back to the previous page: Backspace or Alt + Left Arrow
- Go forward to the next page: Alt + Right Arrow

Searching:
- Find text on the page: Ctrl + F
- Find next occurrence: Ctrl + G
- Find previous occurrence: Ctrl + Shift + G

Tab Management:
- Open a new tab: Ctrl + T
- Close the current tab: Ctrl + W
- Reopen the last closed tab: Ctrl + Shift + T
- Switch to the next tab: Ctrl + Tab
- Switch to the previous tab: Ctrl + Shift + Tab
- Switch to a specific tab (numbered from left to right): Ctrl + [1-8]
- Switch to the last tab: Ctrl + 9

Form Interaction:
- Move to the next form field: Tab
- Move to the previous form field: Shift + Tab
- Check/uncheck checkboxes and radio buttons: Spacebar
- Select an option from a dropdown menu: Enter, then arrow keys to navigate options

Miscellaneous:
- Open the browser's menu: Alt (sometimes F10)
- Open the address bar (omnibox): Ctrl + L or Alt + D
Remember, the accessibility of websites can vary, and some sites might have different keyboard navigation implementations. In some cases, you may need to enable keyboard navigation in the browser's settings or extensions. Additionally, browser updates might introduce changes to keyboard shortcuts, so it's always good to check the latest documentation or help resources for your specific browser version.
But I hope this helps as a TL;DR to get you started navigating your laptop the way the pros do.
Version controlled over at a GitHub gist.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Bolo
It seems that from 1987 to around 2000 there was a big community of people who played this game called "Bolo". It was a game in which people controlled a tank and killed others while trying to capture bases in team matches. Always 2 teams, from 2 to 16 total players, games could last from 10 minutes to 12 hours. I'm still trying to understand all this.
The game looks silly from some videos you can find today, but apparently it was very deep in strategy because people developed strategy guides and wrote extensively about it and Netscape even supported `bolo:` URLs out of the box.

> The two most important elements on the map are pillboxes and bases. Pillboxes are originally neutral, meaning that they shoot at every tank that happens to get in its range. They shoot fast and with deadly accuracy. You can shoot the pillbox with your tank, and you can see how damaged it is by looking at it. Once the pillbox is subdued, you may run over it, which will pick it up. You may place the pillbox where you want to put it (where it is clear), if you've enough trees to build it back up. Trees are harvested by sending your man outside your tank to forest the trees. Your man (also called a builder) can also lay mines, build roads, and build walls. Once you have placed a pillbox, it will not shoot at you, but only your enemies. Therefore, pillboxes are often used to protect your bases.
That quote was taken from this "augmented FAQ" written by some user. Apparently there were many FAQs for this game. A FAQ is after all just a simple, clear and direct to the point way of writing about anything, previously known as summa[^summa-k], it doesn't have to be related to any actually frequently asked question.
More unexpected Bolo writings include an etiquette guide, an anthropology study and some wonderings on the reverse pill war tactic.
[^summa-k]: It's not the same thing, but I couldn't help but notice the similarity.
-
@ d2e97f73:ea9a4d1b
2023-04-11 19:36:53There’s a lot of conversation around the #TwitterFiles. Here’s my take, and thoughts on how to fix the issues identified.
I’ll start with the principles I’ve come to believe…based on everything I’ve learned and experienced through my past actions as a Twitter co-founder and lead:
- Social media must be resilient to corporate and government control.
- Only the original author may remove content they produce.
- Moderation is best implemented by algorithmic choice.
The Twitter when I led it and the Twitter of today do not meet any of these principles. This is my fault alone, as I completely gave up pushing for them when an activist entered our stock in 2020. I no longer had hope of achieving any of it as a public company with no defense mechanisms (lack of dual-class shares being a key one). I planned my exit at that moment knowing I was no longer right for the company.
The biggest mistake I made was continuing to invest in building tools for us to manage the public conversation, versus building tools for the people using Twitter to easily manage it for themselves. This burdened the company with too much power, and opened us to significant outside pressure (such as advertising budgets). I generally think companies have become far too powerful, and that became completely clear to me with our suspension of Trump’s account. As I’ve said before, we did the right thing for the public company business at the time, but the wrong thing for the internet and society. Much more about this here: https://twitter.com/jack/status/1349510769268850690
I continue to believe there was no ill intent or hidden agendas, and everyone acted according to the best information we had at the time. Of course mistakes were made. But if we had focused more on tools for the people using the service rather than tools for us, and moved much faster towards absolute transparency, we probably wouldn’t be in this situation of needing a fresh reset (which I am supportive of). Again, I own all of this and our actions, and all I can do is work to make it right.
Back to the principles. Of course governments want to shape and control the public conversation, and will use every method at their disposal to do so, including the media. And the power a corporation wields to do the same is only growing. It’s critical that the people have tools to resist this, and that those tools are ultimately owned by the people. Allowing a government or a few corporations to own the public conversation is a path towards centralized control.
I’m a strong believer that any content produced by someone for the internet should be permanent until the original author chooses to delete it. It should be always available and addressable. Content takedowns and suspensions should not be possible. Doing so complicates important context, learning, and enforcement of illegal activity. There are significant issues with this stance of course, but starting with this principle will allow for far better solutions than we have today. The internet is trending towards a world where storage is “free” and infinite, which places all the actual value on how to discover and see content.
Which brings me to the last principle: moderation. I don’t believe a centralized system can do content moderation globally. It can only be done through ranking and relevance algorithms, the more localized the better. But instead of a company or government building and controlling these solely, people should be able to build and choose from algorithms that best match their criteria, or not have to use any at all. A “follow” action should always deliver every bit of content from the corresponding account, and the algorithms should be able to comb through everything else through a relevance lens that an individual determines. There’s a default “G-rated” algorithm, and then there’s everything else one can imagine.
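The algorithmic-choice model described above can be sketched roughly like this (illustrative names and data structures only, not any real protocol's API):

```python
# A minimal sketch of "moderation by algorithmic choice": the follow feed
# always delivers everything, and ranking/filtering algorithms are
# user-selected, pluggable functions rather than platform-imposed ones.
# All names and structures here are hypothetical.

posts = [
    {"author": "alice", "text": "gm", "score": 0.9},
    {"author": "bob", "text": "spam spam spam", "score": 0.1},
]

def follow_feed(all_posts, following):
    # A "follow" action always delivers every post from followed accounts.
    return [p for p in all_posts if p["author"] in following]

def g_rated(feed):
    # One example algorithm a user might opt into (the default "G-rated" lens).
    return [p for p in feed if p["score"] > 0.5]

def unfiltered(feed):
    # ...or the user can choose no algorithm at all.
    return feed

# The person using the service -- not the platform -- picks the lens.
chosen_algorithm = g_rated
feed = follow_feed(posts, {"alice", "bob"})
print([p["author"] for p in chosen_algorithm(feed)])  # → ['alice']
```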
The only way I know of to truly live up to these 3 principles is a free and open protocol for social media, that is not owned by a single company or group of companies, and is resilient to corporate and government influence. The problem today is that we have companies who own both the protocol and discovery of content. Which ultimately puts one person in charge of what’s available and seen, or not. This is by definition a single point of failure, no matter how great the person, and over time will fracture the public conversation, and may lead to more control by governments and corporations around the world.
I believe many companies can build a phenomenal business off an open protocol. For proof, look at both the web and email. The biggest problem with these models however is that the discovery mechanisms are far too proprietary and fixed instead of open or extendable. Companies can build many profitable services that complement rather than lock down how we access this massive collection of conversation. There is no need to own or host it themselves.
Many of you won’t trust this solution just because it’s me stating it. I get it, but that’s exactly the point. Trusting any one individual with this comes with compromises, not to mention being way too heavy a burden for the individual. It has to be something akin to what bitcoin has shown to be possible. If you want proof of this, get out of the US and European bubble of the bitcoin price fluctuations and learn how real people are using it for censorship resistance in Africa and Central/South America.
I do still wish for Twitter, and every company, to become uncomfortably transparent in all their actions, and I wish I forced more of that years ago. I do believe absolute transparency builds trust. As for the files, I wish they were released Wikileaks-style, with many more eyes and interpretations to consider. And along with that, commitments of transparency for present and future actions. I’m hopeful all of this will happen. There’s nothing to hide…only a lot to learn from. The current attacks on my former colleagues could be dangerous and doesn’t solve anything. If you want to blame, direct it at me and my actions, or lack thereof.
As far as the free and open social media protocol goes, there are many competing projects: @bluesky is one with the AT Protocol, nostr another, Mastodon yet another, Matrix yet another…and there will be many more. One will have a chance at becoming a standard like HTTP or SMTP. This isn’t about a “decentralized Twitter.” This is a focused and urgent push for a foundational core technology standard to make social media a native part of the internet. I believe this is critical both to Twitter’s future, and the public conversation’s ability to truly serve the people, which helps hold governments and corporations accountable. And hopefully makes it all a lot more fun and informative again.
💸🛠️🌐 To accelerate open internet and protocol work, I’m going to open a new category of #startsmall grants: “open internet development.” It will start with a focus of giving cash and equity grants to engineering teams working on social media and private communication protocols, bitcoin, and a web-only mobile OS. I’ll make some grants next week, starting with $1mm/yr to Signal. Please let me know other great candidates for this money.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Community
I was an avid IPFS user until yesterday. Many, many times I asked simple questions, for which I couldn't find an answer on the internet, in the #ipfs IRC channel on Freenode. Most of the time I didn't get an answer, and even when I did it was rarely from someone who knew IPFS deeply. I've had issues go unanswered on js-ipfs repositories for years – one of them was raising awareness of a problem that got fixed some months later by a complete rewrite; I closed my own issue after realizing that by myself a couple of months later, and I don't think the people responsible for the rewrite ever acknowledged that they had fixed my issue.
Some days ago I asked some questions about how the IPFS protocol worked internally, sincerely trying to understand the inefficiencies in finding and fetching content over IPFS. I pointed out that it would be a good idea to have a drawing showing that, so people would understand the difficulties (which I didn't) and wouldn't be pissed off by the slowness. I was told to read the whitepaper. I had already read the whitepaper, but I read the relevant parts again. The whitepaper doesn't explain anything about the DHT and how IPFS finds content. I said that in the room and was told to read it again.
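For what it's worth, the content-finding machinery in question is a Kademlia-style DHT, where peer IDs and content IDs share one keyspace and "closeness" is XOR distance. A toy sketch (8-bit IDs here; real IPFS keys are 256-bit hashes, and the real lookup is iterative and networked):

```python
# Kademlia-style lookup in miniature: peers closest to a content ID (by
# XOR distance) are the ones expected to know who provides that content.
# Toy 8-bit IDs for readability; this is a local sketch, not a network client.

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def closest_peers(peers, content_id, k=2):
    # Each lookup step queries the k peers closest to the content ID,
    # which answer with even closer peers, until providers are found.
    return sorted(peers, key=lambda p: xor_distance(p, content_id))[:k]

peers = [0b00010011, 0b10110100, 0b01000111, 0b00010110]
content = 0b00010001
# Peers sharing the longest ID prefix with the content come first.
print([bin(p) for p in closest_peers(peers, content)])  # → ['0b10011', '0b10110']
```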
Before anyone misreads this section, I want to say I understand it's a pain to keep answering people on IRC if you're busy developing stuff of interplanetary importance, and that I'm not paying anyone, nor do I have the right to be answered. On the other hand, if you're developing a super-important protocol, financed by many millions of dollars, and a lot of people are hitting their heads against your software and there's no one to help them; if you're always busy but never deliver anything that brings joy to your users, something is very wrong. I sincerely don't know what the IPFS developers are working on, and I wouldn't doubt they're working on important things if they said so, but what I see – and what many other users see (take a look at the IPFS Discourse forum) – is bugs, bugs all over the place, confusing UX, and almost no help.
-
@ 57d1a264:69f1fee1
Prompt: Icon Set
Create some icons to be used together. It could be for a taskbar, showcasing features or services, or even for something like a Bitcoin wallet app. (Icons might be a buy button, a sell button, an invoice icon, etc.)
Flex your design muscles and join in. Most of all, keep this exercise loose and keep it stress-free. The FUC is about growing our design skills, together, as a community. We can't wait to see what you create!
Happy designing, Stackers!
Share Your Results

When you finish the challenge, post your work on this thread! If you also share it on nostr, be sure to use the hashtag #FUCdesiger and tag Design_r@iris.to or @Design__r on X! We'll keep an eye out and repost good examples on a regular basis.
Curious to know WTF is a FUC? Read the ⚡️Flash Ui Challenges welcoming post.
Looking for more FUCs? Check the archive here!
-
@ aa55a479:f7598935
2023-07-19 17:54:44Test
-
@ 82341f88:fbfbe6a2
2023-04-11 19:36:53There’s a lot of conversation around the #TwitterFiles. Here’s my take, and thoughts on how to fix the issues identified.
I’ll start with the principles I’ve come to believe…based on everything I’ve learned and experienced through my past actions as a Twitter co-founder and lead:
- Social media must be resilient to corporate and government control.
- Only the original author may remove content they produce.
- Moderation is best implemented by algorithmic choice.
The Twitter when I led it and the Twitter of today do not meet any of these principles. This is my fault alone, as I completely gave up pushing for them when an activist entered our stock in 2020. I no longer had hope of achieving any of it as a public company with no defense mechanisms (lack of dual-class shares being a key one). I planned my exit at that moment knowing I was no longer right for the company.
The biggest mistake I made was continuing to invest in building tools for us to manage the public conversation, versus building tools for the people using Twitter to easily manage it for themselves. This burdened the company with too much power, and opened us to significant outside pressure (such as advertising budgets). I generally think companies have become far too powerful, and that became completely clear to me with our suspension of Trump’s account. As I’ve said before, we did the right thing for the public company business at the time, but the wrong thing for the internet and society. Much more about this here: https://twitter.com/jack/status/1349510769268850690
I continue to believe there was no ill intent or hidden agendas, and everyone acted according to the best information we had at the time. Of course mistakes were made. But if we had focused more on tools for the people using the service rather than tools for us, and moved much faster towards absolute transparency, we probably wouldn’t be in this situation of needing a fresh reset (which I am supportive of). Again, I own all of this and our actions, and all I can do is work to make it right.
Back to the principles. Of course governments want to shape and control the public conversation, and will use every method at their disposal to do so, including the media. And the power a corporation wields to do the same is only growing. It’s critical that the people have tools to resist this, and that those tools are ultimately owned by the people. Allowing a government or a few corporations to own the public conversation is a path towards centralized control.
I’m a strong believer that any content produced by someone for the internet should be permanent until the original author chooses to delete it. It should be always available and addressable. Content takedowns and suspensions should not be possible. Doing so complicates important context, learning, and enforcement of illegal activity. There are significant issues with this stance of course, but starting with this principle will allow for far better solutions than we have today. The internet is trending towards a world were storage is “free” and infinite, which places all the actual value on how to discover and see content.
Which brings me to the last principle: moderation. I don’t believe a centralized system can do content moderation globally. It can only be done through ranking and relevance algorithms, the more localized the better. But instead of a company or government building and controlling these solely, people should be able to build and choose from algorithms that best match their criteria, or not have to use any at all. A “follow” action should always deliver every bit of content from the corresponding account, and the algorithms should be able to comb through everything else through a relevance lens that an individual determines. There’s a default “G-rated” algorithm, and then there’s everything else one can imagine.
The only way I know of to truly live up to these 3 principles is a free and open protocol for social media, that is not owned by a single company or group of companies, and is resilient to corporate and government influence. The problem today is that we have companies who own both the protocol and discovery of content. Which ultimately puts one person in charge of what’s available and seen, or not. This is by definition a single point of failure, no matter how great the person, and over time will fracture the public conversation, and may lead to more control by governments and corporations around the world.
I believe many companies can build a phenomenal business off an open protocol. For proof, look at both the web and email. The biggest problem with these models however is that the discovery mechanisms are far too proprietary and fixed instead of open or extendable. Companies can build many profitable services that complement rather than lock down how we access this massive collection of conversation. There is no need to own or host it themselves.
Many of you won’t trust this solution just because it’s me stating it. I get it, but that’s exactly the point. Trusting any one individual with this comes with compromises, not to mention being way too heavy a burden for the individual. It has to be something akin to what bitcoin has shown to be possible. If you want proof of this, get out of the US and European bubble of the bitcoin price fluctuations and learn how real people are using it for censorship resistance in Africa and Central/South America.
I do still wish for Twitter, and every company, to become uncomfortably transparent in all their actions, and I wish I forced more of that years ago. I do believe absolute transparency builds trust. As for the files, I wish they were released Wikileaks-style, with many more eyes and interpretations to consider. And along with that, commitments of transparency for present and future actions. I’m hopeful all of this will happen. There’s nothing to hide…only a lot to learn from. The current attacks on my former colleagues could be dangerous and don't solve anything. If you want to blame, direct it at me and my actions, or lack thereof.
As far as the free and open social media protocol goes, there are many competing projects: @bluesky is one with the AT Protocol, nostr another, Mastodon yet another, Matrix yet another…and there will be many more. One will have a chance at becoming a standard like HTTP or SMTP. This isn’t about a “decentralized Twitter.” This is a focused and urgent push for a foundational core technology standard to make social media a native part of the internet. I believe this is critical both to Twitter’s future, and the public conversation’s ability to truly serve the people, which helps hold governments and corporations accountable. And hopefully makes it all a lot more fun and informative again.
💸🛠️🌐 To accelerate open internet and protocol work, I’m going to open a new category of #startsmall grants: “open internet development.” It will start with a focus on giving cash and equity grants to engineering teams working on social media and private communication protocols, bitcoin, and a web-only mobile OS. I’ll make some grants next week, starting with $1mm/yr to Signal. Please let me know other great candidates for this money.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Hindu castes in a new key
Shudras seek the greatest good for their own bodies; vaishyas, the greatest good for their own earthly life and that of their family; kshatriyas, the greatest good for society and this earthly world; brahmins seek the greatest good.
-
@ 78733875:4eb851f2
2023-07-14 22:25:21
"The computer can be used as a tool to liberate and protect people, rather than to control them," as Hal Finney wrote so presciently 30 years ago.[^fn-hal]
The goal of OpenSats is to help build the tools that Hal alluded to. Tools that liberate and protect, rather than systems that control and oppress. Many tools still have to be built. Many tools still need to be improved. However, "the universe smiles on encryption," as Assange so aptly put it.[^fn-assange]
We believe that freedom tech is what carries this smile forward, which is why we are delighted to announce grants for over a dozen projects in the bitcoin & lightning ecosystem.
[^fn-hal]: Hal Finney: Why remailers... (November 1992)
[^fn-assange]: Julian Assange: A Call to Cryptographic Arms (October 2012)
The following open-source projects were selected by the OpenSats board for funding:
- Payjoin Dev Kit
- Bolt12 for LND
- Splicing
- Raspiblitz
- Labelbase
- BTCPay Server
- ZeroSync
- Mutiny Wallet
- next-auth Lightning Provider
- Cashu
- lnproxy
- Blixt Wallet
Let's take a closer look at each to understand their goal and how it aligns with the OpenSats mission.
Payjoin Dev Kit
Payjoin brings privacy to bitcoin without changing the way you're used to using it. Payjoin transactions look no different from normal activity on-chain, so they boost everyone's privacy, even those who don't payjoin, and foil chain surveillance.
Payjoin is easy to integrate and falls back to working defaults where it isn't supported, but it can only take off when senders and receivers include standard payjoin support in their software. Payjoin Dev Kit makes it easy for wallet developers to integrate BIP 78 standard payjoins everywhere, having working reference integrations for Bitcoin Core, LND, and BDK.
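The fallback behavior works because payjoin support is signaled inside the ordinary BIP 21 payment URI: BIP 78 adds a `pj` parameter pointing at the receiver's payjoin endpoint, and a sender that doesn't understand it simply pays the address normally. A minimal sketch of that detection step (the example address and endpoint below are made up):

```python
from urllib.parse import urlparse, parse_qs

def parse_bip21(uri: str) -> dict:
    """Extract the address, amount, and optional BIP 78 payjoin endpoint
    from a BIP 21 URI like bitcoin:ADDR?amount=0.01&pj=https://..."""
    parsed = urlparse(uri)
    assert parsed.scheme == "bitcoin"
    params = parse_qs(parsed.query)
    return {
        "address": parsed.path,
        "amount": float(params["amount"][0]) if "amount" in params else None,
        # BIP 78: a `pj` parameter signals the receiver supports payjoin;
        # its absence means the sender falls back to an ordinary payment.
        "payjoin_endpoint": params.get("pj", [None])[0],
    }
```

A wallet would attempt the payjoin round-trip only when `payjoin_endpoint` is present, which is what lets payjoin "fall back to working defaults where it isn't supported."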
Repository: github.com/payjoin
License: MIT
Bolt12 for LND
Bolt12 brings a new invoice format, enabling static invoices (offers) as well as recurring payments. It adds support to receive payments in a lightning-native way without using a web server. It also uses Blinded Paths to disguise the destination of a node both when fetching the invoice and when paying. This improves privacy and, therefore, security for the receiver of the payment.
Consequently, Bolt12 makes it much easier to receive and send payments without any third-party infrastructure in a native-lightning way. Static invoices make donations and recurring payments much easier.
Repository: lightningnetwork/lnd
License: MIT
Splicing
Splicing is the ability to resize Lightning channels on-the-fly, giving users of the Lightning Network many additional benefits that were not intuitively obvious at first. Splicing scales Lightning by removing a fundamental limitation. Removing this limitation increases fungibility and lowers blockspace usage, an important step towards maturing the Lightning network and enabling the onboarding of millions, and ultimately billions, of people.
Repository: ddustin/splice
License: BSD-MIT
Raspiblitz
Raspiblitz is a do-it-yourself node stack that allows you to run a Lightning Node together with a Bitcoin Core full node on your Raspberry Pi. While the Raspberry Pi is the most common hardware running this particular software, it was developed to support multiple hardware platforms and can run on bare metal servers too.
The open-source project was started in 2018 as part of a Lightning hackathon in the German Bitcoin space. Since then, it has grown to over 150 contributors and 2000 stars on GitHub. The software integrates dozens of services and tools via its plugin system and sports advanced features like touchscreen support, channel autopilot, backup systems, DynDNS, SSH tunneling, and more.
Repository: raspiblitz/raspiblitz
License: MIT
Labelbase
Labelbase is a label management service for Bitcoin transactions and addresses. It provides features for adding labels, importing and exporting labels, and offers a public API for integration with wallets and existing workflows.
Labelbase supports BIP-329, a format for unifying label data. The goal of the project is to offer a convenient solution for managing labels associated with Bitcoin transactions and addresses across wallets and other tools. By providing a unified label management interface, Labelbase enhances the user experience, improves privacy, and promotes better organization and understanding of Bitcoin transactions.
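The BIP-329 format Labelbase supports is deliberately simple: an export is newline-delimited JSON, where each record carries at least a `type` (such as `tx`, `addr`, `input`, `output`, or `xpub`), a `ref` identifying the labeled item, and the `label` text. A minimal round-trip sketch (the refs below are placeholders, not real transaction IDs or addresses):

```python
import json

def export_labels(records: list) -> str:
    """Serialize label records to BIP-329's JSON Lines format."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in records)

def import_labels(text: str) -> list:
    """Parse a BIP-329 export back into label records."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]
```

Because every line is an independent JSON object, wallets can stream, merge, and diff label sets without understanding each other's internals, which is what makes labels portable across tools.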
Repository: Labelbase/Labelbase
License: MIT
BTCPay Server
BTCPay Server is a free, open-source & self-hosted bitcoin payment gateway that allows self-sovereign individuals and businesses to accept bitcoin payments online or in person without added fees.
At its core, BTCPay Server is an automated invoicing system. Merchants can integrate the software with their website or shop, so customers are presented with an invoice upon checkout. The status of the invoice will update according to settlement, so merchants can fulfill the order at the appropriate time. The software also takes care of payment refunding and bitcoin management alongside many other features.
Repository: btcpayserver/btcpayserver
License: MIT
ZeroSync
While ZeroSync is still at an early stage, its promise is to allow verification of Bitcoin's chain state in an instant. It offers compact cryptographic proofs to validate the entire history of transactions and everyone's current balances.
The first application is to "zerosync" Bitcoin Core in pruned mode. The long-term vision for ZeroSync is to become a toolbox for custom Bitcoin proofs.
Repository: zerosync/zerosync
License: MIT
Mutiny Wallet
Mutiny Wallet is a web-first wallet capable of running anywhere, providing instant onboarding and platform censorship resistance. It is self-custodial, privacy-focused, user-friendly, and open-sourced under the MIT license.
The wallet has a strong focus on privacy, scalability, and accessibility. In addition to features that you would expect a regular lightning wallet to have, the team is working to incorporate Nostr-related features into the wallet, such as a feed of friends' Zaps, native Zap sending and receiving, a lightning subscription specification for services such as nostr relays, and a P2P DLC marketplace. The team's goal is to provide users with a seamless experience, combining the power of Bitcoin and Lightning with social media in a way that matches the Bitcoin ethos.
Repository: MutinyWallet
License: MIT
next-auth Lightning Provider
The goal of this project is to implement an authentication provider for next-auth, an authentication provider for the popular open-source framework NextJS. The next-auth framework has nearly 500k weekly downloads and powers the authentication of many modern web, mobile, and desktop apps. Having a plug-and-play Provider for Lightning makes integration easier and more attractive for developers.
Repository: jowo-io/next-auth-lightning-provider
License: ISC
Cashu
Cashu is a Chaumian ecash system built for bitcoin that brings near-perfect privacy for users of custodial bitcoin applications. A Cashu ecash mint does not know who you are, what your balance is, or who you're transacting with. Users of a mint can exchange ecash privately, without anyone being able to know who the involved parties are.
Payments are executed without anyone able to censor specific users. There are multiple implementations of the Cashu protocol. Popular open-source wallets are Cashu Nutshell, Cashu.me, and Nutstash.
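Cashu's blinding is built on a blind Diffie-Hellman key exchange over secp256k1, but the Chaumian idea it inherits is easiest to see in Chaum's original RSA construction: the mint signs a value it cannot read, and the user unblinds the result into a valid signature. A toy sketch with insecure, demo-sized numbers (this illustrates the concept only and is not Cashu's actual scheme):

```python
# Toy RSA blind signature (Chaum, 1983). The numbers are tiny and insecure;
# Cashu itself uses a blinded Diffie-Hellman scheme, not RSA.
p, q = 61, 53
n = p * q                           # RSA modulus (3233)
e = 17                              # mint's public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # mint's private exponent

m = 42                              # the token the user wants signed
r = 7                               # user's blinding factor, coprime with n

blinded = (m * pow(r, e, n)) % n       # user blinds the message
blind_sig = pow(blinded, d, n)         # mint signs without ever seeing m
sig = (blind_sig * pow(r, -1, n)) % n  # user unblinds the signature

assert sig == pow(m, d, n)             # identical to a direct signature on m
assert pow(sig, e, n) == m             # anyone can verify it against m
```

The privacy property follows directly: the mint only ever sees `blinded` and `blind_sig`, so when `sig` is later redeemed, it cannot link the redemption back to the signing request.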
Repository: cashubtc/cashu
License: MIT
lnproxy
lnproxy is a simple privacy tool that empowers users of custodial Lightning wallets with better payment destination privacy and sovereign node runners with enhanced receiver privacy. lnproxy works like a "poor man's" rendezvous router, providing privacy for users without taking custody of their funds. The project encompasses an LNURL-style protocol specification and a collection of open-source implementations of lnproxy clients and a relay.
Repository: lnproxy/lnproxy
License: GPL 3.0 & MIT
Blixt Wallet
Blixt is a non-custodial wallet for bitcoiners who want to give Lightning a try. It runs on Android, iOS, and macOS. It is easy to use and straightforward to set up, making it a user-friendly option to get started with Lightning.
Blixt uses LND and Neutrino under the hood, directly on the phone, respecting your privacy. The wallet does not use any centralized servers for doing transactions. Channels are opened automatically on the user's behalf, making it easy to get up and running on Lightning.
Repository: hsjoberg/blixt-wallet
License: MIT
In addition to the software projects listed above, three educational initiatives were selected for funding:
- Bitcoin Education in Nigeria is an initiative started and led by Apata Johnson. Apata's project aims to educate youths on bitcoin and the opportunities it brings for the people living in the rural areas of Nigeria.
- 21 Ideas is a project that aims to bring quality Bitcoin education to Russian citizens. Tony and others have been working for many years on translations, original material, and hands-on tutorials for beginners. We believe that education is paramount to proper Bitcoin use, and localization is paramount for everyday citizens to properly grasp the importance as well as the novel concepts of bitcoin.
- CoreDev.tech is organizing recurring developer events, which are all about bringing devs together so that they can hack on Bitcoin Core and related software.
We received hundreds of applications in the last couple of months, which is a fantastic signal and something we are delighted about. Some applications are still being reviewed by the OpenSats board, as we try our best to assess feasibility, alignment, and potential impact of each project. We will announce additional grants as applications pass our grant selection process.
Unfortunately, we were unable to fund all of the proposals that were sent to us. Please don't hesitate to apply again in case your application was rejected this time around. The applicant pool was very competitive, which is a great thing to see in and of itself.
Grants for the projects above are funded by contributions to the Bitcoin General Fund. Our operations as well as our grant programs are made possible by generous donors like you. If you want to help fund the Bitcoin ecosystem, please donate to the Bitcoin General Fund.
Our team is screening applications constantly, and we will announce new grants and funding opportunities as they arise. If you are working on an open-source project in and around bitcoin, and you think your work is aligned with the OpenSats mission, please apply for funding.
-
@ 57d1a264:69f1fee1
2024-04-04 05:51:46
What's got you buzzing with excitement, and which design trends are you ready to cheers?
Share your insights and learnings!
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Thoughts on Nostr key management
On Why I don't like NIP-26 as a solution for key management I talked about multiple techniques that could be used to tackle the problem of key management on Nostr.
Here are some ideas that work in tandem:
- NIP-41 (stateless key invalidation)
- NIP-46 (Nostr Connect)
- NIP-07 (signer browser extension)
- Connected hardware signing devices
- other things like musig or frostr keys used in conjunction with a semi-trusted server; or other kinds of trusted software, like a dedicated signer on a mobile device that can sign on behalf of other apps; or even a separate protocol that some people decide to use as the source of truth for their keys, and some clients might decide to use that automatically
- there are probably many other ideas
Some premises I have in my mind (that may be flawed) that base my thoughts on these matters (and cause me to not worry too much) are that
- For the vast majority of people, Nostr keys aren't a target as valuable as Bitcoin keys, so they will probably be ok even without any solution;
- Even when you lose everything, identity can be recovered -- slowly and painfully, but still --, unlike money;
- Nostr is not trying to replace all other forms of online communication (even though when I think about this I can't imagine one thing that wouldn't be nice to replace with Nostr) or of offline communication, so there will always be ways.
- For the vast majority of people, losing keys and starting fresh isn't a big deal. It is a big deal when you have followers and an online persona and your life depends on that, but how many people are like that? In the real world I see people deleting social media accounts all the time and creating new ones, people losing their phone numbers or other accounts associated with their phone numbers, and not caring very much -- they just find a way to notify friends and family and move on.
We can probably come up with some specs to ease the "manual" recovery process, like social attestation and explicit signaling -- i.e., Alice, Bob and Carol are friends; Alice loses her key; Bob sends a new Nostr event kind to the network saying what Alice's new key is; depending on how much Carol trusts Bob, she can automatically start following that and remove the old key -- or something like that.
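An attestation event for that Alice/Bob/Carol flow could look something like this. The kind number (30100), tag layout, and vouch threshold are all invented for illustration; no such NIP exists:

```python
import time

def make_attestation(attester_pubkey: str, old_key: str, new_key: str) -> dict:
    """Hypothetical 'key rotation attestation': the attester vouches that
    old_key has been replaced by new_key. Kind 30100 is made up."""
    return {
        "pubkey": attester_pubkey,
        "created_at": int(time.time()),
        "kind": 30100,                        # hypothetical event kind
        "tags": [["p", old_key], ["p", new_key]],
        "content": "key rotation attestation",
    }

def apply_attestations(follows: set, events: list, trusted: set,
                       min_vouches: int = 1) -> set:
    """Carol's client: swap a followed key once enough trusted friends
    have attested to the same old -> new mapping."""
    votes = {}
    for ev in events:
        if ev["pubkey"] not in trusted or ev["kind"] != 30100:
            continue
        old, new = ev["tags"][0][1], ev["tags"][1][1]
        votes.setdefault((old, new), set()).add(ev["pubkey"])
    for (old, new), voters in votes.items():
        if old in follows and len(voters) >= min_vouches:
            follows.remove(old)
            follows.add(new)
    return follows
```

How much Carol trusts Bob maps directly onto `trusted` and `min_vouches`: a cautious client could require several independent attestations before rotating a follow.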
One nice thing about some of these proposals, like NIP-41, or the social-recovery method, or the external-source-of-truth-method, is that they don't have to be implemented in any client, they can live in standalone single-purpose microapps that users open or visit only every now and then, and these can then automatically update their follow lists with the latest news from keys that have changed according to multiple methods.
-
@ 78733875:4eb851f2
2023-07-07 22:06:45The mission of OpenSats is to support and maintain a sustainable ecosystem of funding for free and open-source projects that help Bitcoin flourish. Nostr is such a project, which is why OpenSats introduced The Nostr Fund and built a team around the protocol's originator to help fund the growing nostr ecosystem. As an open, interoperable, and censorship-resistant protocol, nostr has the chance of doing social-native networking right.
After weeks of sorting through applications, we are excited to announce the first round of grants from The Nostr Fund. OpenSats is proud to support over a dozen projects, from clients to relay implementations to adjacent tools and design efforts.
In no particular order, here they are:
- NDK by @pablof7z
- Habla by @verbiricha
- Coracle by @hodlbod
- Iris by @mmalmi
- Damus by @jb55
- rust-nostr & nostr-sdk by @yukibtc
- Nostr Relay NestJS by @CodyTseng
- Soapbox by @alexgleason
- Code Collaboration over Nostr by @DanConwayDev
- Satellite by @lovvtide
- Amethyst by @vitorpamplona
- Pinstr by @sepehr-safari
- nostr.build by @nostr.build
- Gossip by @mikedilger
- Nostr SDK iOS by @bryanmontz
- Nostr Design by @karnage
The projects above have received grants of various durations and sizes, and we have more nostr-related applications in the pipeline. Donate to The Nostr Fund if you want to help fund the nostr ecosystem.
Without further ado, let's take a closer look at each project in turn.
NDK
NDK is a nostr development kit that makes the experience of building Nostr-related applications—whether they are relays, clients, or anything in between—better, more reliable, and overall more enjoyable to work with than existing solutions. The core goal of NDK is to improve the decentralization of Nostr via intelligent conventions and data discovery features without depending on any one central point of coordination, such as large relays or centralized search providers.
Repository: nostr-dev-kit/ndk
License: MIT
Habla
Habla is a website for reading, writing, curating, and monetizing long-form content on nostr. It uses NIP-23 to allow markdown-formatted articles and embedded nostr content such as notes, profiles, lists, relays, badges, and more. The goal of Habla is to give everyone an alternative to centralized publishing platforms such as Medium or Substack, which are by their very nature prone to censorship and deplatforming.
Repository: verbiricha/habla.news
License: GNU GPL v3.0
Coracle
Coracle is a nostr web client focusing on user experience, performance, and scaling of the nostr network beyond the "twitter clone" use-case. The end goal is to build marketplaces, groups, chat, and more on top of an emergent web of trust. Coracle is already one of the most mature and accessible clients for new users while also providing some novel features for more advanced nostriches.
Repository: coracle-social/coracle
License: MIT
Iris
Iris is a multi-platform nostr client that is available for web, mobile, and desktop. Iris' design goals are speed, reliability, and ease of use. The client features public as well as private messaging, customizable feeds, an offline mode, and speedy account creation.
Repository: irislib/iris-messenger
License: MIT
Damus
Damus is a cutting-edge nostr client for iOS. The goal of Damus is to integrate bitcoin with social media and to show the power, censorship resistance, and scalability of nostr in general. Damus includes picture and video uploading, is fully translated into 24 languages, supports automatic translation of notes, and includes all of the features you would expect from a Twitter-like client.
Repository: damus-io/damus
License: GNU GPL v3.0
rust-nostr & nostr-sdk
Rust-nostr is a Rust implementation of the nostr protocol. It is a high-level client library with the explicit goal to help developers build nostr apps for desktop, web, and mobile that are both fast and secure. Rust crates can be easily embedded inside other development environments like Swift, Kotlin, Python, and JavaScript, making rust-nostr a versatile base to build upon. While the project is in the early stages of development, over 35 NIPs are already supported, with more to come.
Repository: rust-nostr/nostr
License: MIT
Nostr Relay NestJS
Nostr-relay-nestjs is a Nostr relay with a clear structure that is easy to customize to your needs. This relay implementation is based on the NestJS framework and focuses on reliability and high test coverage.
Repository: CodyTseng/nostr-relay-nestjs
License: MIT
Soapbox
Soapbox started out as an alternative to Mastodon but has grown to encompass ActivityPub and nostr while being interoperable with both. In February 2023, the team launched the "Mostr" bridge, seamlessly connecting nostr to the ActivityPub Fediverse and enabling bidirectional communication between both protocols. This bridge exposes over 9.4M potential users in nostr's target audience to nostr, many of whom have already left the Fediverse completely in favor of nostr.
Repository: gitlab.com/soapbox-pub
License: GNU Affero General Public License v3.0
Code Collaboration over Nostr
This project is a proof-of-concept for a much-needed, often discussed, and permissionless, nostr-based GitHub alternative. The goal is to replace the traditional interactions using a centralized server or service with a nostr-based alternative centered around nostr events. Commits, branches, pull requests, and other actions are all modeled as nostr events, with permissions managed through groups so that multiple maintainers can manage a repository. This model reduces the barriers for clients to support repository collaboration and allows for interoperability between repository management tools.
Repository: DanConwayDev/ngit-cli
License: MIT
Satellite
satellite.earth is a web client for nostr that has a community focus and presents conversations as threaded comments, borrowing from the traditional Reddit interface.
Repository: lovvtide/satellite-web
License: MIT
Amethyst
Amethyst is one of the most popular nostr clients for Android. Amethyst comes with expected features such as account management, feeds, profiles, and direct messages. Amethyst also offers native image uploads, public chat groups, link previews, one-tap zaps, public and private bookmarks, as well as the ability to follow hashtags, and other novel features. You can install releases of Amethyst via F-Droid or Google Play.
Repository: vitorpamplona/amethyst
License: MIT
Pinstr
Pinstr allows users to easily organize and discover new ideas by creating public boards of pins. Users can star, comment, and zap other users' boards. Users can find curated boards of other users and create boards themselves. Default boards include users' bookmarked content, among other lists.
Repository: sepehr-safari/pinstr
License: MIT
nostr.build
Nostr.build is a free-to-use media hosting service that allows users to upload images, gifs, videos, and audio files to share them as nostr events. The team recently released their code under an MIT License so that anyone might use the software to offer a similar service.
Repository: nostrbuild/nostr.build
License: MIT
Gossip
Gossip is a fast and stable desktop nostr client focused on the Twitter-like micro-blogging aspect of nostr. Gossip follows people by downloading their events from whichever relays they post to (rather than relays you configure) and was the impetus for NIP-65. It does not use complex web technologies such as JavaScript or HTML rendering and stores your private key only in an encrypted format. Consequently, Gossip is considered more secure than other clients by some. The client is packaged and released for Linux, Windows, and MacOS.
Repository: mikedilger/gossip
License: MIT
Nostr SDK iOS
The nostr SDK for iOS is a native Swift library that will enable developers to quickly and easily build nostr-based apps for Apple devices. The library plans to implement all approved NIPs and will follow Apple's API patterns, so that iOS developers feel comfortable using it from the start. The SDK aims to be simple in its public interface, abstracting away as much complexity as possible so that developers can focus on what makes their specific application unique.
Repository: nostr-sdk/nostr-sdk-ios
License: MIT
Nostr Design
Nostr Design will be a comprehensive resource for designers and developers to build successful nostr products. Nostr introduces several new concepts that most people are not familiar with. Given its nature, the protocol presents some unique design challenges for developers and users alike. The Nostr Design efforts are led by Karnage, who has done stellar product design work around nostr in the past. We believe that this project has the potential to impact the entire nostr space, as it can act as a go-to source for developing quality products, addressing user needs, as well as providing concrete examples and building blocks for product designers and developers alike.
License: Public Domain, Creative Commons
We have received hundreds of applications in the last couple of weeks, many related to or exclusively focused on nostr. Most projects that applied focus on bitcoin and lightning. We will announce another wave of grants for these soon.
To all the nostr projects that applied and didn't make the cut this time around: don't be discouraged. Please apply for funding again in the future. We will announce new grants and funding opportunities quarterly, and there is always the possibility of being listed on the OpenSats website to receive pass-through donations for your project.
We are excited to support the projects above in building the tools we bitcoiners care so deeply about. The future is bright; we just have a lot of building to do.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
litepub
A Go library that abstracts all the burdensome ActivityPub things and provides just the right amount of helpers necessary to integrate an existing website into the "fediverse" (what an odious name). Made for the gravity integration.
See also
-
-
@ 079d727b:00099650
2024-04-04 02:52:11
https://i.nostr.build/vGd7P.png
What am I doing here? What am I doing here with Fatty and Spotty Fatty and Spotty Fatty and Spotty What am I doing here with Fatty and Spotty
SORRY! This will make no sense unless you've seen the British Channel 4 sitcom, Black Books. Actually, not sorry at all. Because here's a link to watch the entire series off YT. Otherwise I dunno, torrent or news it, or even better... BUY THE DVD BOXSET SECOND HAND!!! (The decent boxset should have commentary tracks for Season 1 & Season 2, making the show twice as good imo.)
https://www.youtube.com/watch?v=bcQb7zEzXv8 (Enjoy! Also I recommend starting at Season 1 Episode 1 as what I've linked is S2E3). Stars the very talented, Bill Bailey, Dylan Moran and Tamsin Greig. Directed by Graham Linehan I'm pretty sure and he was just 'censored' by the trans police so you know it's probably quite funny. 😂
Many thanks to PABLOF7z for being my FIRST foundational supporter. That really means A LOT! I remember blogging in the mid 90's before the word blog was a thing. One of these days I'll upload the html pages which show how stupid 13yr olds can be. But shit.. the enthusiasm and pure joy I got from blogging... I don't know if anything has compared in those 20+ years from writing. I hope to bring it back with these Highlighter posts which I plan to drain and publish my thoughts, finds and whatever discoveries I may find interesting.
Apologies if this bores the shit out of you. That's also why it's free. 🫡
Many more links, resources and the brain of tigs' - coming to nostr... really soon.
P.S. Don't expect anything useful or educational or in any particular order, this is MY brain after all 😂
💜🤙🫂 - tigs
-
@ 78733875:4eb851f2
2023-07-07 22:04:12
OpenSats is pleased to announce a new long-term support (LTS) program for Bitcoin Core developers and similar Load-Bearing Internet People.[^fn-lbip] This grant program is designed to provide financial support for developers who are working on critical infrastructure for the bitcoin network.
The LTS program is a new initiative from OpenSats and is distinct from our regular grant program, which is more expansive in scope. It is also distinct from OpenSats' website listings, which allows reviewed open-source projects to receive tax-deductible donations via OpenSats. The LTS program is specifically designed to provide long-term support for developers who are working on critical open-source infrastructure in and around bitcoin.
Having a longer time horizon than regular grants, the LTS program is geared towards long-term stability for grantees, with a minimum grant duration of 12 months and possible grant durations of two years or longer. This will allow developers to focus on their work without having to worry about financial constraints.
To be eligible for the LTS program, applicants must:
- have a track record of quality contributions
- be mission-driven and self-motivated
- be able to work in public
- be bitcoin-only
Applications for the LTS program are now open: https://opensats.org/apply/
The first recipient of an OpenSats LTS Grant is Marco Falke, a long-term maintainer and contributor of Bitcoin Core with thousands of contributions over many years. Marco will continue to focus on testing and quality assurance, as well as maintenance and review, helping to make sure that the Bitcoin Core software is as solid as it can be. You can read more about his contributions here.
We appreciate all the hard work that goes into building and maintaining critical open-source infrastructure. It is a hard and often thankless job. We hope that we can play a role in closing the gaps in bitcoin open-source funding, and we look forward to working with contributors in the future.
OpenSats aims to be an additional pillar of the increasingly solid funding landscape in and around bitcoin. We have learned a lot from the programs of the past and aim to join Brink, Spiral, Chaincode, HRF, and other successful grant programs to support those who build the tools that ensure the protection of individual liberties in our digital world.
We are committed to supporting the development of bitcoin. The LTS program is a new way for OpenSats to support long-term contributors who are building, maintaining, testing, researching, and reviewing critical software.
We encourage all qualified developers to apply for the LTS program. Together, we can build a stronger and more resilient bitcoin network.
[^fn-lbip]: "An LBIP is a person who maintains the software for a critical Internet service or library, and has to do it without organizational support or a budget backing him up." —Eric S. Raymond
-
@ 044b20cf:89e58620
2024-04-04 04:42:54
Apple is developing personal robots for your home, Bloomberg says
Apple is still on the hunt for the next revolutionary product to help it remain dominant in the market and to serve as new sources of revenue after abandoning its plans (https://www.engadget.com/the-apple-car-project-is-reportedly-dead-203012885.html?fsig=Kb_kx_UWWMHANlf2oqbAUw--%7EA) to develop an electric vehicle of its own. According to Bloomberg's (https://www.bloomberg.com/news/articles/2024-04-03/apple-explores-home-robots-after-abandoning-car-efforts?sref=10lNAhZ9) Mark Gurman, one of the areas the company is exploring is personal robotics. It reportedly started looking into robots and electric vehicles at the same time, with the hopes of developing a machine that doesn't need human intervention.
While Apple's robotics projects are still in the very early stages, Bloomberg said it had already started working on a mobile robot that can follow users around their home and had already developed a table-top device that uses a robot to move a screen around. The idea behind the latter is to have a machine that can mimic head movements and can lock on to a single person in a group, presumably for a better video call experience. Since these robots are supposed to be able to move on their own, the company is also looking into the use of algorithms for navigation. Based on the report, Apple's home devices group is in charge of their development, and at least one engineer who worked on its scrapped EV initiative has joined the team.
Robots, however, aren't like phones in the sense that people these days need them in their lives. Apple is apparently worried about whether people would pay "top dollar" for the robots it has in mind, and executives still can't reach an agreement on whether the company should keep working on these projects. Gurman previously reported that Apple's EV might have been priced at around $100,000; if that's true, it had greater potential to grow the company's revenue. But the Apple Car is now out of the picture, and the company is reportedly putting all of its focus on the Vision Pro (https://www.engadget.com/apple-vision-pro-review-beta-testing-the-future-201430272.html) and new products for the home, which also include a home hub device with a display that resembles an iPad. Of course, Apple could still scrap these projects, and it could find other classes of products to invest in if it discovers that they could bring in bigger money in the future.
This article originally appeared on Engadget at https://www.engadget.com/apple-is-developing-personal-robots-for-your-home-bloomberg-says-044254029.html?src=rss