-
@ 6be5cc06:5259daf0
2025-04-28 01:05:49

I acknowledge that God, and God alone, is the legitimate sovereign over all things. No man, no institution, no parliament has the authority to usurp what belongs to the King of kings. The modern State, with its totalizing pretension, is a blasphemous farce before the throne of Christ. I accept no other lord.
The Law that guides me is not the one dictated by bureaucrats, but the one engraved by God in human nature itself. Reason, when illuminated by faith, is sufficient to discern what is just. I reject arbitrary laws that claim to legitimize theft, murder, or slavery in the name of order. Justice is not born of decree, but of truth.
I firmly believe in private property as an extension of the person. What is the fruit of my labor, my creativity, my dedication, and of the gifts granted to me by God belongs to me by natural right. No one may legitimately take what is mine without my consent. Every tax is an aggression; every expropriation, a theft. I defend economic freedom not out of idolatry of the market, but because freedom is a necessary condition for virtue.
I take the Non-Aggression Principle as the ethical minimum I must respect. I will not initiate the use of force against anyone, nor against their property. I demand the same of everyone. But I know this is not enough. The NAP delimits what I must not do; it does not teach me what I must be. Outer freedom is only good if there is inner freedom. The market may be free, but if the soul is enslaved by vice, collapse will be inevitable.
That is why negative ethics is not enough for me. I believe a just society needs positive values: honor, responsibility, compassion, respect, fidelity to the truth. Without these, even a society that formally respects individual rights will rot from within. A people that loves profit but despises the truth, that celebrates freedom but forgets justice, is preparing itself to be dominated. It will trade one visible despot for a thousand invisible tyrannies: hedonism, consumerism, lies, fear.
I do not accept the false charity done with money taken by force. True generosity is born of a free heart, not of institutional coercion. Forcing someone to help their neighbor destroys both freedom and virtue. There is merit only where there is choice. Charity born of love is redemptive; charity born of the tax office is propaganda.
The modern State is an idol. It promises security but delivers servitude. It promises justice but delivers privileges. It disguises oppression with technical, legal, and democratic language. But behind its masks I see only the old serpent: a parasite that feeds on the labor of others and manipulates consciences in order to perpetuate itself.
To resist is not only a right, it is a duty. To obey God rather than men: that is my rule. Power turns against the truth, but my loyalty belongs to the One who created heaven and earth. Tyranny is not fought with another tyrant, but with the firm and peaceful disobedience of those who love justice.
I do not believe in utopias. I desire a natural, organic order, rooted in voluntaryism. A society built from the bottom up: from the family, the local community, tradition, and faith. I do not want a machine that plans other people's lives, but a fabric of voluntary relationships in which freedom flourishes in the shadow of the cross.
Yes, I desire the social reign of Christ. Not by imposition, but by conviction. May He reign in hearts, in families, in the streets, and in contracts. May faith guide reason and reason illuminate life. May freedom be a means to holiness, not an end in itself. And may we, free from the yoke of the Leviathan, be servants of the Lord alone.
-
@ 52b4a076:e7fad8bd
2025-04-28 00:48:57

I have recently been building NFDB, a new relay DB. This post is meant as a short overview.
Regular relays have challenges
Current relay software has significant challenges, which I have experienced while hosting Nostr.land:
- Scalability is only supported by adding full replicas, which does not scale to large relays.
- Most relays use slow databases and are not optimized for large-scale usage.
- Search is near-impossible to implement on standard relays.
- Privacy features such as NIP-42 are lacking.
- Regular DB maintenance tasks on normal relays require extended downtime.
- Fault tolerance is implemented, if at all, using a load balancer, which is limited.
- Personalization and advanced filtering are not possible.
- Local caching is not supported.
NFDB: A scalable database for large relays
NFDB is a new database meant for medium-to-large scale relays, built on FoundationDB, that provides:
- Near-unlimited scalability
- Extended fault tolerance
- Instant loading
- Better search
- Better personalization
- and more.
Search
NFDB has extended search capabilities including:
- Semantic search: Search for meaning, not words.
- Interest-based search: Highlight content you care about.
- Multi-faceted queries: Easily filter by topic, author group, keywords, and more at the same time.
- Wide support for event kinds, including users, articles, etc.
Personalization
NFDB allows significant personalization:
- Customized algorithms: Be your own algorithm.
- Spam filtering: Filter content to your WoT, and use advanced spam filters.
- Topic mutes: Mute topics, not keywords.
- Media filtering: With Nostr.build, you will be able to filter NSFW and other content.
- Low data mode: Block notes that use high amounts of cellular data.
- and more
Other
NFDB has support for many other features such as:
- NIP-42: Protect your privacy with private drafts and DMs
- Microrelays: Easily deploy your own personal microrelay
- Containers: Dedicated, fast storage for discoverability events such as relay lists
Calcite: A local microrelay database
Calcite is a lightweight, local version of NFDB built for microrelays and caching, and designed to support thousands of personal microrelays.
Calcite HA is an additional layer that allows live migration and relay failover in under 30 seconds, providing higher availability compared to current relays with greater simplicity. Calcite HA is enabled in all Calcite deployments.
For zero-downtime, NFDB is recommended.
Noswhere SmartCache
Relays are fixed in one location, but users can be anywhere.
Noswhere SmartCache is a CDN for relays that dynamically caches data on edge servers closest to you, allowing:
- Multiple regions around the world
- Improved throughput and performance
- Faster loading times
routerd

`routerd` is a custom load balancer optimized for Nostr relays, integrated with SmartCache. `routerd` is specifically integrated with NFDB and Calcite HA to provide fast failover and high performance.

Ending notes
NFDB is planned to be deployed to Nostr.land in the coming weeks.
A lot more is to come. 👀
-
@ 266815e0:6cd408a5
2025-04-29 17:47:57

I'm excited to announce the release of Applesauce v1.0.0! There are a few breaking changes and a lot of improvements and new features across all packages. Each package has been updated to 1.0.0, marking a stable API for developers to build upon.
Applesauce core changes
There was a change in the `applesauce-core` package in the `QueryStore`.

The `Query` interface has been converted to a method instead of an object with `key` and `run` fields.

A bunch of new helper methods and queries were added; check out the changelog for a full list.
Applesauce Relay
There is a new `applesauce-relay` package that provides a simple RxJS-based API for connecting to relays and publishing events.

Documentation: applesauce-relay
Features:
- A simple API for subscribing or publishing to a single relay or a group of relays
- No `connect` or `close` methods; connections are managed automatically by RxJS
- NIP-11 `auth_required` support
- Support for NIP-42 authentication
- Prebuilt or custom re-connection back-off
- Keep-alive timeout (default 30s)
- Client-side Negentropy sync support
Example Usage: Single relay
```typescript
import { Relay } from "applesauce-relay";

// Connect to a relay
const relay = new Relay("wss://relay.example.com");

// Create a REQ and subscribe to it
relay
  .req({
    kinds: [1],
    limit: 10,
  })
  .subscribe((response) => {
    if (response === "EOSE") {
      console.log("End of stored events");
    } else {
      console.log("Received event:", response);
    }
  });
```
Example Usage: Relay pool
```typescript
import { Relay, RelayPool } from "applesauce-relay";

// Create a pool with a custom relay
const pool = new RelayPool();

// Create a REQ and subscribe to it
pool
  .req(["wss://relay.damus.io", "wss://relay.snort.social"], {
    kinds: [1],
    limit: 10,
  })
  .subscribe((response) => {
    if (response === "EOSE") {
      console.log("End of stored events on all relays");
    } else {
      console.log("Received event:", response);
    }
  });
```
Applesauce actions
Another new package is the `applesauce-actions` package. This package provides a set of async operations for common Nostr actions.

Actions are run against the events in the `EventStore` and use the `EventFactory` to create new events to publish.

Documentation: applesauce-actions
Example Usage:
```typescript
import { ActionHub } from "applesauce-actions";

// An EventStore and EventFactory are required to use the ActionHub
import { eventStore } from "./stores.ts";
import { eventFactory } from "./factories.ts";

// Custom publish logic
const publish = async (event: NostrEvent) => {
  console.log("Publishing", event);
  await app.relayPool.publish(event, app.defaultRelays);
};

// The `publish` method is optional for the async `run` method to work
const hub = new ActionHub(eventStore, eventFactory, publish);
```

Once an `ActionHub` is created, you can use the `run` or `exec` methods to execute actions:

```typescript
import { FollowUser, MuteUser } from "applesauce-actions/actions";

// Follow fiatjaf
await hub.run(
  FollowUser,
  "3bf0c63fcb93463407af97a5e5ee64fa883d107ef9e558472c4eb9aaaefa459d",
);

// Or use the exec method with a custom publish method
await hub
  .exec(
    MuteUser,
    "3bf0c63fcb93463407af97a5e5ee64fa883d107ef9e558472c4eb9aaaefa459d",
  )
  .forEach((event) => {
    // NOTE: Don't publish this event because we never want to mute fiatjaf
    // pool.publish(['wss://pyramid.fiatjaf.com/'], event)
  });
```

There are a lot more actions, including some for working with NIP-51 lists (private and public); you can find them in the reference.
Applesauce loaders
The `applesauce-loaders` package has been updated to support any relay connection library, not just `rx-nostr`.

Before:

```typescript
import { ReplaceableLoader } from "applesauce-loaders";
import { createRxNostr } from "rx-nostr";

// Create a new rx-nostr instance
const rxNostr = createRxNostr();

// Create a new replaceable loader
const replaceableLoader = new ReplaceableLoader(rxNostr);
```
After:
```typescript
import { Observable } from "rxjs";
import { ReplaceableLoader, NostrRequest } from "applesauce-loaders";
import { SimplePool } from "nostr-tools";

// Create a new nostr-tools pool
const pool = new SimplePool();

// Create a method that subscribes using nostr-tools and returns an observable
const nostrRequest: NostrRequest = (relays, filters, id) => {
  return new Observable((subscriber) => {
    const sub = pool.subscribe(relays, filters, {
      onevent: (event) => {
        subscriber.next(event);
      },
      onclose: () => subscriber.complete(),
      oneose: () => subscriber.complete(),
    });

    return () => sub.close();
  });
};

// Create a new replaceable loader
const replaceableLoader = new ReplaceableLoader(nostrRequest);
```
Of course you can still use rx-nostr if you want:
```typescript
import { createRxNostr, createRxOneshotReq } from "rx-nostr";
import { map, Observable } from "rxjs";
import { ReplaceableLoader } from "applesauce-loaders";

// Create a new rx-nostr instance
const rxNostr = createRxNostr();

// Create a method that subscribes using rx-nostr and returns an observable
function nostrRequest(
  relays: string[],
  filters: Filter[],
  id?: string,
): Observable<NostrEvent> {
  // Create a new oneshot request so it will complete when EOSE is received
  const req = createRxOneshotReq({ filters, rxReqId: id });
  return rxNostr
    .use(req, { on: { relays } })
    .pipe(map((packet) => packet.event));
}

// Create a new replaceable loader
const replaceableLoader = new ReplaceableLoader(nostrRequest);
```
There were a few more changes; check out the changelog.
Applesauce wallet
It's far from complete, but there is a new `applesauce-wallet` package that provides actions and queries for working with NIP-60 wallets.

Documentation: applesauce-wallet
Example Usage:
```typescript
import { CreateWallet, UnlockWallet } from "applesauce-wallet/actions";

// Create a new NIP-60 wallet
await hub.run(CreateWallet, ["wss://mint.example.com"], privateKey);

// Unlock wallet and associated tokens/history
await hub.run(UnlockWallet, { tokens: true, history: true });
```
-
@ 30ceb64e:7f08bdf5
2025-04-26 20:33:30

Status: Draft
Author: TheWildHustle

Abstract
This NIP defines a framework for storing and sharing health and fitness profile data on Nostr. It establishes a set of standardized event kinds for individual health metrics, allowing applications to selectively access specific health information while preserving user control and privacy.
This framework currently includes:
- NIP-101h.1 Weight, using kind 1351
- NIP-101h.2 Height, using kind 1352
- NIP-101h.3 Age, using kind 1353
- NIP-101h.4 Gender, using kind 1354
- NIP-101h.5 Fitness Level, using kind 1355
Motivation
I want to build and support an ecosystem of health- and fitness-related Nostr clients that can share and use a set of specific, interoperable health metrics. The framework is designed to provide:
- Selective access - Applications can access only the data they need
- User control - Users can choose which metrics to share
- Interoperability - Different health applications can share data
- Privacy - Sensitive health information can be managed independently
Specification
Kind Number Range
Health profile metrics use the kind number range 1351-1399:
| Kind      | Metric                             |
| --------- | ---------------------------------- |
| 1351      | Weight                             |
| 1352      | Height                             |
| 1353      | Age                                |
| 1354      | Gender                             |
| 1355      | Fitness Level                      |
| 1356-1399 | Reserved for future health metrics |
Common Structure
All health metric events SHOULD follow these guidelines:
- The content field contains the primary value of the metric
- Required tags:
  - `['t', 'health']` - For categorizing as health data
  - `['t', metric-specific-tag]` - For identifying the specific metric
  - `['unit', unit-of-measurement]` - When applicable
- Optional tags:
  - `['converted_value', value, unit]` - For providing alternative unit measurements
  - `['timestamp', ISO8601-date]` - When the metric was measured
  - `['source', application-name]` - The source of the measurement
Unit Handling
Health metrics often have multiple ways to be measured. To ensure interoperability:
- Where multiple units are possible, one standard unit SHOULD be chosen as canonical
- When using non-standard units, a `converted_value` tag SHOULD be included with the canonical unit
- Both the original and converted values should be provided for maximum compatibility
Client Implementation Guidelines
Clients implementing this NIP SHOULD:
- Allow users to explicitly choose which metrics to publish
- Support reading health metrics from other users when appropriate permissions exist
- Support updating metrics with new values over time
- Preserve tags they don't understand for future compatibility
- Support at least the canonical unit for each metric
Extensions
New health metrics can be proposed as extensions to this NIP using the format:
- NIP-101h.X where X is the metric number
Each extension MUST specify:
- A unique kind number in the range 1351-1399
- The content format and meaning
- Required and optional tags
- Examples of valid events
Privacy Considerations
Health data is sensitive personal information. Clients implementing this NIP SHOULD:
- Make it clear to users when health data is being published
- Consider incorporating NIP-44 encryption for sensitive metrics
- Allow users to selectively share metrics with specific individuals
- Provide easy ways to delete previously published health data
NIP-101h.1: Weight
Description
This NIP defines the format for storing and sharing weight data on Nostr.
Event Kind: 1351
Content
The content field MUST contain the numeric weight value as a string.
Required Tags
- ['unit', 'kg' or 'lb'] - Unit of measurement
- ['t', 'health'] - Categorization tag
- ['t', 'weight'] - Specific metric tag
Optional Tags
- ['converted_value', value, unit] - Provides the weight in alternative units for interoperability
- ['timestamp', ISO8601 date] - When the weight was measured
Examples
json { "kind": 1351, "content": "70", "tags": [ ["unit", "kg"], ["t", "health"], ["t", "weight"] ] }
json { "kind": 1351, "content": "154", "tags": [ ["unit", "lb"], ["t", "health"], ["t", "weight"], ["converted_value", "69.85", "kg"] ] }
NIP-101h.2: Height
Status: Draft
Description
This NIP defines the format for storing and sharing height data on Nostr.
Event Kind: 1352
Content
The content field can use two formats:
- For metric height: A string containing the numeric height value in centimeters (cm)
- For imperial height: A JSON string with feet and inches properties
Required Tags
- `['t', 'health']` - Categorization tag
- `['t', 'height']` - Specific metric tag
- `['unit', 'cm' or 'imperial']` - Unit of measurement
Optional Tags
- `['converted_value', value, 'cm']` - Provides height in centimeters for interoperability when imperial is used
- `['timestamp', ISO8601-date]` - When the height was measured
Examples
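Illustrative examples following the content and tag rules above (the specific height values are hypothetical):

Example 1: Metric height

```json
{
  "kind": 1352,
  "content": "175",
  "tags": [
    ["t", "health"],
    ["t", "height"],
    ["unit", "cm"]
  ]
}
```

Example 2: Imperial height with conversion

```json
{
  "kind": 1352,
  "content": "{\"feet\": 5, \"inches\": 9}",
  "tags": [
    ["t", "health"],
    ["t", "height"],
    ["unit", "imperial"],
    ["converted_value", "175.26", "cm"]
  ]
}
```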
Implementation Notes
- Centimeters (cm) is the canonical unit for height interoperability
- When using imperial units, a conversion to centimeters SHOULD be provided
- Height values SHOULD be positive integers
- For maximum compatibility, clients SHOULD support both formats
NIP-101h.3: Age
Status: Draft
Description
This NIP defines the format for storing and sharing age data on Nostr.
Event Kind: 1353
Content
The content field MUST contain the numeric age value as a string.
Required Tags
- `['unit', 'years']` - Unit of measurement
- `['t', 'health']` - Categorization tag
- `['t', 'age']` - Specific metric tag
Optional Tags
- `['timestamp', ISO8601-date]` - When the age was recorded
- `['dob', ISO8601-date]` - Date of birth (if the user chooses to share it)
Examples
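Illustrative examples following the content and tag rules above (the age and date of birth shown are hypothetical):

Example 1: Basic age

```json
{
  "kind": 1353,
  "content": "30",
  "tags": [
    ["unit", "years"],
    ["t", "health"],
    ["t", "age"]
  ]
}
```

Example 2: Age with date of birth

```json
{
  "kind": 1353,
  "content": "30",
  "tags": [
    ["unit", "years"],
    ["t", "health"],
    ["t", "age"],
    ["dob", "1995-03-15"]
  ]
}
```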
Implementation Notes
- Age SHOULD be represented as a positive integer
- For privacy reasons, date of birth (dob) is optional
- Clients SHOULD consider updating age automatically if date of birth is known
- Age can be a sensitive metric and clients may want to consider encrypting this data
NIP-101h.4: Gender
Status: Draft
Description
This NIP defines the format for storing and sharing gender data on Nostr.
Event Kind: 1354
Content
The content field contains a string representing the user's gender.
Required Tags
- `['t', 'health']` - Categorization tag
- `['t', 'gender']` - Specific metric tag
Optional Tags
- `['timestamp', ISO8601-date]` - When the gender was recorded
- `['preferred_pronouns', string]` - User's preferred pronouns
Common Values
While any string value is permitted, the following common values are recommended for interoperability:
- male
- female
- non-binary
- other
- prefer-not-to-say
Examples
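Illustrative examples following the content and tag rules above (the specific values are hypothetical):

Example 1: Basic gender

```json
{
  "kind": 1354,
  "content": "female",
  "tags": [
    ["t", "health"],
    ["t", "gender"]
  ]
}
```

Example 2: Gender with preferred pronouns

```json
{
  "kind": 1354,
  "content": "non-binary",
  "tags": [
    ["t", "health"],
    ["t", "gender"],
    ["preferred_pronouns", "they/them"]
  ]
}
```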
Implementation Notes
- Clients SHOULD allow free-form input for gender
- For maximum compatibility, clients SHOULD support the common values
- Gender is a sensitive personal attribute and clients SHOULD consider appropriate privacy controls
- Applications focusing on health metrics should be respectful of gender diversity
NIP-101h.5: Fitness Level
Status: Draft
Description
This NIP defines the format for storing and sharing fitness level data on Nostr.
Event Kind: 1355
Content
The content field contains a string representing the user's fitness level.
Required Tags
- `['t', 'health']` - Categorization tag
- `['t', 'fitness']` - Fitness category tag
- `['t', 'level']` - Specific metric tag
Optional Tags
- `['timestamp', ISO8601-date]` - When the fitness level was recorded
- `['activity', activity-type]` - Specific activity the fitness level relates to
- `['metrics', JSON-string]` - Quantifiable fitness metrics used to determine level
Common Values
While any string value is permitted, the following common values are recommended for interoperability:
- beginner
- intermediate
- advanced
- elite
- professional
Examples
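Illustrative examples following the content and tag rules above (the activity and metric values are hypothetical):

Example 1: Basic fitness level

```json
{
  "kind": 1355,
  "content": "intermediate",
  "tags": [
    ["t", "health"],
    ["t", "fitness"],
    ["t", "level"]
  ]
}
```

Example 2: Activity-specific fitness level with metrics

```json
{
  "kind": 1355,
  "content": "advanced",
  "tags": [
    ["t", "health"],
    ["t", "fitness"],
    ["t", "level"],
    ["activity", "running"],
    ["metrics", "{\"weekly_distance_km\": 40, \"vo2max\": 52}"]
  ]
}
```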
Implementation Notes
- Fitness level is subjective and may vary by activity
- The activity tag can be used to specify fitness level for different activities
- The metrics tag can provide objective measurements to support the fitness level
- Clients can extend this format to include activity-specific fitness assessments
- For general fitness apps, the simple beginner/intermediate/advanced scale is recommended
-
@ 91bea5cd:1df4451c
2025-04-26 10:16:21

The Brazilian Legal Context and Consent

In the Brazilian legal system, the victim's consent can, under certain circumstances, remove the unlawfulness of an act that would otherwise constitute a crime (such as minor bodily injury, provided for in Art. 129 of the Penal Code). However, consent has clear limits: it is not valid for inalienable legal interests, such as life, and its effectiveness is questionable in cases of serious or very serious bodily injury.
Consensual BDSM practice sits in a complex zone. In theory, if both partners are adults, legally capable, and have freely and knowingly consented to the acts performed, and these do not result in serious permanent injury or a non-consented risk of death, there is no crime. The challenge lies in proving that consent, especially if one of the parties later denies it or alleges coercion.
The Maria da Penha Law (Law No. 11.340/2006)
The Maria da Penha Law is a fundamental milestone in the protection of women against domestic and family violence. It establishes mechanisms to curb and prevent such violence, defining its forms (physical, psychological, sexual, patrimonial, and moral) and providing for urgent protective measures.
Although essential, applying the law in BDSM contexts can be delicate. An allegation of violence by the woman, even when the injuries or situations arise from consensual practices, tends to receive priority attention from the authorities, given the presumption of vulnerability established by the law. This can create a scenario in which the male partner faces significant difficulty in demonstrating the consensual nature of the acts, especially if there is no robust, pre-established evidence.
Other risks:
Serious or very serious bodily injury (Art. 129, §§ 1 and 2, Penal Code) cannot be justified by consent and may give rise to criminal prosecution.
Crimes against sexual dignity (Arts. 213 et seq. of the Penal Code) are prosecuted through unconditional public criminal action and do not depend on the victim's complaint for investigation and indictment.
Risks of False Accusations and Future Allegations of Coercion
The risks for BDSM practitioners, especially for the partner who takes the dominant role or who inflicts pain or restraint (frequently, but not exclusively, the man), can arise on several fronts:
- External accusations: Neighbors, family members, or friends who are unaware of the consensual nature of the relationship may interpret sounds, marks, or behavior as signs of abuse and report them to the authorities.
- Future allegations by the partner: In the event of an acrimonious breakup, revenge, regret, or a change of perspective, the partner may reinterpret past practices as abuse and seek redress or retaliation through a formal complaint. The allegation may be that consent never existed or that it was vitiated.
- Allegation of coercion: One of the most difficult claims to refute is that consent was obtained through coercion (physical, moral, psychological, or economic). The partner may allege, for example, that she felt pressured, intimidated, or dependent, and that her "yes" was not genuine. Proving the absence of coercion after the fact is extremely difficult.
- Male naivety and vulnerability: Many men, trusting the consensual dynamic and their partner, may neglect the need for precautions. The belief that "this would never happen to me," or a lack of awareness of the legal implications and the procedural weight of an accusation under the Maria da Penha Law, can leave them vulnerable. The presence of physical marks, even consensual ones, can be used as evidence of assault, effectively inverting the burden of proof in practice, even if not in legal theory.
Prevention and Mitigation Strategies
There is no foolproof method to completely avoid the risk of a false accusation, but several measures can be adopted to build a record of consent and reduce vulnerabilities:
- Explicit and continuous communication: The basis of any safe BDSM practice is constant communication. Negotiating limits, desires, safewords, and expectations before, during, and after scenes is crucial. Keeping records of these negotiations (emails, messages, shared journals) can be useful.
- Documentation of consent:
  - Relationship/scene contracts: Although the legal validity of "BDSM contracts" is debatable in Brazil (they cannot override public-order rules), they serve as strong evidence of the parties' intent, of the detailed negotiation of limits, and of informed consent. They should be clear, dated, signed, and ideally notarized (to prove the date and the authenticity of the signatures).
  - Audiovisual records: Recording (with explicit consent for the recording) discussions about consent and limits before scenes can be powerful evidence. Recording the scenes themselves is more complex because of privacy concerns and potential misuse, but it can be considered in specific cases, always with mutual consent documented for the recording. Important: the recording must be made with the other party's knowledge, so as not to constitute a violation of privacy (Art. 5, X, of the Federal Constitution and Art. 20 of the Civil Code).
  - Witnesses: In some BDSM community contexts, the presence of trusted third parties during negotiations or even scenes can serve as testimony, although this may alter the couple's intimate dynamic.
- Clear establishment of limits and safewords: Defining and rigorously respecting limits (what is allowed, what is forbidden) and safewords is fundamental. Disregarding a safeword ends consent for that act.
- Continuous assessment of consent: Consent is not a blank check; it must be enthusiastic, continuous, and revocable at any time. Checking in on the partner's well-being during the scene is essential.
- Discretion and care with physical evidence: Being discreet about the nature of the relationship can prevent misunderstandings from outsiders. After scenes that leave marks, it is prudent for both partners to be aware and in agreement, perhaps documenting this with dated photos and a note about the consensual nature of the practice that produced them.
- Preventive legal advice: Consulting a lawyer who specializes in family and criminal law, and who is sensitive to alternative relationship dynamics, can provide personalized guidance on the best ways to document consent and on the specific legal risks involved.
Important Observations
- No documentation replaces the need for real, free, informed, and ongoing consent.
- Brazilian law protects "physical integrity" and "human dignity." Practices that result in serious injury, or that violate dignity without consent (or with vitiated consent), will be illegal regardless of any prior agreement.
- In the event of an accusation, robust documentation of consent does not guarantee acquittal, but it significantly strengthens the defense, helping to demonstrate the consensual nature of the relationship and of the practices.
- An allegation of coercion made in the future is particularly difficult to prevent with documents alone. A consistent history of open communication (WhatsApp/Telegram/email), mutual respect, and the absence of dependence or excessive control in the relationship can help frame the dynamic as non-coercive.
- Care with visible marks and serious injuries: Practices that result in severe bruising or injuries can be interpreted as assault, even if consensual. Avoiding excess protects not only physical integrity but also prevents future legal questioning.
What Vitiated Consent Means
In law, vitiated consent occurs when a person agrees to something but their will is not free or full; the consent exists formally, but it is defective for some reason.
The Brazilian Civil Code (Arts. 138 to 165) defines several forms of defects of consent. The main ones are:
Error: The person is mistaken about what they are consenting to. (E.g., the person believes they will take part in light play but is in fact exposed to heavy practices.)
Fraud (dolo): The person is deliberately deceived into accepting something. (E.g., someone lies about what will happen during the practice.)
Coercion: The person is forced or threatened into consenting. (E.g., "If you don't accept, I'll break up with you"; strong emotional pressure can be seen as coercion.)
State of danger or lesion: The person accepts something in a situation of extreme need or through abuse of their vulnerability. (E.g., someone in a very fragile emotional state is induced to accept practices they would normally refuse.)
In the context of BDSM this is even more delicate: even if the person has "signed" a contract or said "yes," if they later allege that their consent was given out of fear, deception, or psychological pressure, the consent may be deemed vitiated and, therefore, legally invalid.
This has two serious implications:
- The crime is not negated: If there is a defect, the consent is disregarded and the practice can be treated as an ordinary crime (bodily injury, rape, torture, etc.).
- Proof of consent must be solid: It must show that the person was informed, lucid, free, and under no form of coercion.
Vitiated consent, then, is when a person formally agrees but does so deceived, forced, or pressured, which renders the consent useless for legal purposes.
Conclusion
Couples who practice consensual BDSM in Brazil navigate terrain that demands not only mutual trust and exceptional communication, but also a keen awareness of the legal complexities and of the risks of misinterpretation or ill-intentioned accusations. Although BDSM is a legitimate expression of human sexuality, practicing it in Brazil requires redoubled responsibility. Having clear proof of consent, keeping communication open, and acting prudently are effective ways to protect against false allegations and to preserve the freedom and safety of everyone involved. While controversial laws such as the Maria da Penha Law are vital for protection against real violence, BDSM practitioners, and men in particular in this context, should adopt a proactive and prudent posture to mitigate the risks inherent in the potential misinterpretation or instrumentalization of these practices and laws, ensuring that the expression of their consensuality is safeguarded as far as possible.
Important: In Brazil, even with all of this, the Public Prosecutor's Office can still bring charges for crimes such as serious bodily injury, rape, or torture, regardless of consent. Prudence in these practices is therefore fundamental.
Legal notice: This article is for informational purposes only and does not constitute legal advice. Laws and interpretations may change, and every situation is unique. Seeking guidance from a qualified lawyer to discuss specific cases is recommended.
If you enjoyed this article, consider making a contribution, and if you have any point relevant to the article, leave a comment.
-
@ 005bc4de:ef11e1a2
2025-04-29 16:08:56

Trump Bitcoin Report Card - Day 100
For whatever reason day 100 of a president's term has been deemed a milestone. So, it's time to check in with President Trump's bitcoin pledges and issue a report card.
Repo and prior reports:
- GitHub: https://github.com/crrdlx/trump-bitcoin-report-card
- First post: https://stacker.news/items/757211
- Progress Report 1: https://stacker.news/items/774165
- Day 1 Report Card: https://stacker.news/items/859475
- Day 100 Report Card: https://stacker.news/items/966434
Report Card

| | Pledge | Prior Grade | Current Grade |
|--|--|--|--|
| 1 | Fire SEC Chair Gary Gensler on day 1 | A | A |
| 2 | Commute the sentence of Ross Ulbricht on day 1 | A | A |
| 3 | Remove capital gains taxes on bitcoin transactions | F | F |
| 4 | Create and hodl a strategic bitcoin stockpile | D | C- |
| 5 | Prevent a CBDC during his presidency | B+ | A |
| 6 | Create a "bitcoin and crypto" advisory council | C- | C |
| 7 | Support the right to self-custody | D+ | B- |
| 8 | End the "war on crypto" | D+ | B+ |
| 9 | Mine all remaining bitcoin in the USA | C- | C |
| 10 | Make the US the "crypto capital of the planet" | C- | C+ |
Comments
Pledge 1 - SEC chair - (no change from earlier) - Gensler is out. This happened after the election and Trump took office. With the writing on the wall, Gensler announced he would resign, Trump picked a new SEC head in Paul Atkins, and Gensler left office just before Trump was sworn in. The only reason an A+ was not awarded was that Trump wasn't given the chance to actually fire Gensler, because he quit. No doubt, though, his quitting was due to Trump and the threat of being sacked.
Day 100 Report Card Grade: A
Pledge 2 - free Ross - (no change from earlier) - Ross Ulbricht's sentence was just commuted. Going with "option 3" above, the pledge was kept. An A+ would have been a commutation yesterday or by noon today, but let's not split hairs. It's done.
Day 100 Report Card Grade: A
Pledge 3 - capital gains - This requires executive action, legislation, or both. There was no action. Executive action can be done with the stroke of a pen, but it was not. Legislation is tricky and time-consuming; however, there wasn't even a mention of this matter. This seems to have been on the back burner since statements such as this report in November. See Progress Report 1: https://stacker.news/items/774165 for more context.
Trump's main tax thrust has been the tariff, actually a tax increase, instead of a cut. Currently, the emphasis is on extending the "Trump tax cuts" and recently House Speaker Mike Johnson indicated such a bill would be ready by Memorial Day. Earlier in his term, there was more chatter about tax relief for bitcoin or cryptocurrency. There seems to be less chatter on this, or none at all, such as its absence in the "ready by Memorial Day" article.
Until tax reform is codified and signed, it isn't tax law and the old code still applies.
Day 100 Report Card Grade: F
Pledge 4 - bitcoin reserve - The initial grade was a C; it was dropped to a D, mainly due to Trump's propensity for [alt]coinery, and now it's back up to a C-.
Getting the grade back up into C-level at a C- was a little bumpy. On March 2, 2025, Trump posted that a U.S. Crypto Reserve would be created. This is what had been hoped for, except that the pledge was for a Bitcoin Reserve, not crypto. And secondly, he specifically named XRP, SOL, and ADA (but not BTC). Just a couple of hours later, likely in clean up mode, he did add BTC (along with ETH) as "obviously" being included. So, the "Bitcoin Reserve" became a "Crypto Reserve."
Maybe still in "cleanup mode," Sec. of Commerce Howard Lutnick said bitcoin will hold "special status" in the reserve. Then, on March 6, an executive order made the U.S. Digital Asset Stockpile official. Again, "Bitcoin" was generalized until section 3 where the "Strategic Bitcoin Reserve" did come to official fruition.
The grade is only a C- because the only thing that happened was the naming of the stockpile. Indeed, it became official. But the "stockpile" was just BTC already held by the U.S. government. I think it's fair to say most bitcoiners would have preferred a statement about buying BTC. Other Trump bitcoin officials indicated acquiring "as much as we can get," which sounds great, but until it happens, it is only words.
Day 100 Report Card Grade: C-
Pledge 5 - no CBDC - An executive order on January 23, 2025 forbade a CBDC in section 1, part v by "prohibiting the establishment, issuance, circulation, and use of a CBDC."
Day 100 Report Card Grade: A
Pledge 6 - advisory council - The Trump bitcoin or crypto team consists of the following: David Sacks as “crypto czar” and Bo Hines as executive director of the Presidential Council of Advisers for Digital Assets.
A White House Crypto Summit (see video) was held on March 7, 2025. In principle, the meeting was good, however, the summit seemed (a) to be very heavily "crypto" oriented, and (b) to largely be a meet-and-greet show.
Still, just the fact that such a show took place, inside the White House, reveals how far things have come and the change in climate. For the grade to go higher, more tangible things should take place over time.
Day 100 Report Card Grade: C
Pledge 7 - self-custody - There's been a bit of good news though on this front. First, the executive order above from January 23 stated in section 1, i, one of the goals was "...to maintain self-custody of digital assets." Also, the Phoenix wallet returned to the U.S. In 2024, both Phoenix and Wallet of Satoshi pulled out of the U.S. for fear of government crackdowns. The return of Phoenix, again, speaks to the difference in climate now and is a win for self-custody.
To rise above the B level, it would be good to see further clear assurance that people can self-custody, that developers can build self-custody tools, and that businesses can create self-custody products. Also, Congressional action could push this to an A.
Day 100 Report Card Grade: B-
Pledge 8 - end war on crypto - There has been improvement here. First, tangibly, SAB 121 was sent packing as SEC Commissioner Hester Peirce announced. Essentially, this removed a large regulatory burden. Commissioner Peirce also said ending the burdens will be a process to get out of the "mess". So, there's work to do. Also, hurdles were recently removed so that banks can now engage in bitcoin activity. This is both a symbolic and real change.
Somewhat ironically, Trump's own venture into cryptocurrency with his World Liberty Financial and the $TRUMP and $MELANIA tokens, roundly pooh-poohed by bitcoiners, might actually be beneficial in a way. The signal from the White House on all things cryptocurrency seems to be, "Do it."
The improvement and climate now seems very different than with the previous administration and leaders who openly touted a war on crypto.
Day 100 Report Card Grade: B+
Pledge 9 - USA mining - As noted earlier, this is an impossible pledge. That said, things can be done to make America mining friendly. The U.S. holds an estimated 37 to 40% of Bitcoin hash rate, which is substantial. Plus, Trump, or the Trump family at least, has entered into bitcoin mining. With Hut 8, Eric Trump is heading "American Bitcoin" to mine BTC. Like the $TRUMP token, this conveys that bitcoin mining is a go in the USA.
Day 100 Report Card Grade: C
Pledge 10 - USA crypto capital - This pledge closely aligned with pledges 8 and 9. If the war on crypto ends, the USA becomes more and more crypto and bitcoin friendly. And, if the hashrate stays high and even increases, that puts the USA at the center of it all. Most of the categories above have seen improvements, all of which help this last pledge. Trump's executive orders help this grade as well as they move from only words spoken to becoming official policy.
To get higher, the Bitcoin Strategic Reserve should move from a name-change only to acquiring more BTC. If the USA wants to be the world's crypto capital, being the leader in bitcoin ownership is the way to do it.
Day 100 Report Card Grade: C+
Sources
- Nashville speech - https://www.youtube.com/watch?v=EiEIfBatnH8
- CryptoPotato "top 8 promises" - https://x.com/Crypto_Potato/status/1854105511349584226
- CNBC - https://www.cnbc.com/2024/11/06/trump-claims-presidential-win-here-is-what-he-promised-the-crypto-industry-ahead-of-the-election.html
- BLOCKHEAD - https://www.blockhead.co/2024/11/07/heres-everything-trump-promised-to-the-crypto-industry/
- CoinTelegraph - https://cointelegraph.com/news/trump-promises-crypto-election-usa
- China vid - Bitcoin ATH and US Strategic Bitcoin Stockpile - https://njump.me/nevent1qqsgmmuqumhfktugtnx9kcsh3ap6v7ca4z8rgx79palz2qk0wzz5cksppemhxue69uhkummn9ekx7mp0qgszwaxc8j8e0zw9sdq59y43rykyx3wm0lcd2502xth699v0gxf0degrqsqqqqqpglusv6
- Capital gains tax - https://bravenewcoin.com/insights/trump-proposes-crypto-tax-cuts-targets-u-s-made-tokens-for-tax-exemption

Progress report 1 ------------------------------------------------------------------------------------
- Meeting with Brian Armstrong - https://www.wsj.com/livecoverage/stock-market-today-dow-sp500-nasdaq-live-11-18-2024/card/exclusive-trump-to-meet-privately-with-coinbase-ceo-brian-armstrong-DDkgF0xW1BW242rVeuqx
- Michael Saylor podcast - https://fountain.fm/episode/DHEzGE0f99QQqyM36nVr
- Gensler resigns - https://coinpedia.org/news/big-breaking-sec-chair-gary-gensler-officially-resigns/

Progress report 2 ------------------------------------------------------------------------------------
- Trump & Justin Sun - https://www.coindesk.com/business/2024/11/26/justin-sun-joins-donald-trumps-world-liberty-financial-as-adviser $30M investment: https://www.yahoo.com/news/trump-crypto-project-bust-until-154313241.html
- SEC chair - https://www.cnbc.com/2024/12/04/trump-plans-to-nominate-paul-atkins-as-sec-chair.html
- Crypto czar - https://www.zerohedge.com/crypto/trump-names-david-sacks-white-house-ai-crypto-czar
- Investigate Choke Point 2.0 - https://www.cryptopolitan.com/crypto-czar-investigate-choke-point/
- Crypto council head Bo Hines - https://cointelegraph.com/news/trump-appoints-bo-hines-head-crypto-council
- National hash rate: https://www.cryptopolitan.com/the-us-controls-40-of-bitcoins-hashrate/
- Senate committee https://coinjournal.net/news/rep-senator-cynthia-lummis-selected-to-chair-crypto-subcommittee/
- Treasury Sec. CBDC: https://decrypt.co/301444/trumps-treasury-pick-scott-bessant-pours-cold-water-on-us-digital-dollar-initiative
- National priority: https://cointelegraph.com/news/trump-executive-order-crypto-national-priority-bloomberg?utm_source=rss_feed&utm_medium=rss&utm_campaign=rss_partner_inbound
- $TRUMP https://njump.me/nevent1qqsffe0d7mgtu5jhasy4hmkcdy7wfrlcqwc4vf676hulvdn8uaqa3acpzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyztpa8q038vw5xluyhnydj5u39d7cpssvuswjhhjqj8q42jh4ul3wqcyqqqqqqgmha026
- World Liberty buys alts: https://www.theblock.co/post/335779/trumps-world-liberty-buys-25-million-of-tokens-including-link-tron-aave-and-ethena?utm_source=rss&utm_medium=rss
- CFTC chair: https://cryptoslate.com/trump-appoints-crypto-advocate-caroline-pham-as-cftc-acting-chair/
- WLF buys wrapped BTC https://www.cryptopolitan.com/trump-buys-47-million-in-bitcoin/
- SEC turnover https://www.theblock.co/post/335944/trump-names-sec-commissioner-mark-uyeda-as-acting-chair-amid-a-crypto-regulatory-shift?utm_source=rss&utm_medium=rss
100 Days Report ------------------------------------------------------------------------------------
- Davos speech "world capital of AI and crypto" - https://coinpedia.org/news/big-breaking-president-trump-says-u-s-to-become-ai-and-crypto-superpower/
- SAB 121 gone, Hester P heads talk force & ends sab 121?, war on crypto https://x.com/HesterPeirce/status/1882562977985114185 article: https://www.theblock.co/post/336761/days-after-gensler-leaves-sec-rescinds-controversial-crypto-accounting-guidance-sab-121?utm_source=twitter&utm_medium=social CoinTelegraph: https://cointelegraph.com/news/trump-executive-order-cbdc-ban-game-changer-us-institutional-crypto-adoption?utm_source=rss_feed&utm_medium=rss&utm_campaign=rss_partner_inbound
- Possible tax relief https://cryptodnes.bg/en/will-trumps-crypto-policies-lead-to-tax-relief-for-crypto-investors/
- War on crypto https://decrypt.co/304395/trump-sec-crypto-task-force-priorities-mess
- Trump "truths" 2/18 make usa #1 in crypto, "Trump effect" https://www.theblock.co/post/333137/ripple-ceo-says-75-of-open-roles-are-now-us-based-due-to-trump-effect and https://www.coindesk.com/markets/2025/01/06/ripples-garlinghouse-touts-trump-effect-amid-bump-in-u-s-deals
- Strategic reserve https://njump.me/nevent1qqsf89l74mqfkk74jqhjcqtwp5m970gedmtykn5uhl0vz9mhmrvvvgqpzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyztpa8q038vw5xluyhnydj5u39d7cpssvuswjhhjqj8q42jh4ul3wqcyqqqqqqge7c74u and https://njump.me/nevent1qqswv50m7mc95m3saqce08jzpqc0vedw4avdk6zxy9axrn3hqet52xgpzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyztpa8q038vw5xluyhnydj5u39d7cpssvuswjhhjqj8q42jh4ul3wqcyqqqqqqgpc7cp3
- Strategic reserve, bitcoin special https://www.thestreet.com/crypto/policy/bitcoin-to-hold-special-status-in-u-s-crypto-strategic-reserve
- Bitcoin reserve, crypto stockpile https://decrypt.co/309032/president-trump-signs-executive-order-to-establish-bitcoin-reserve-crypto-stockpile vid link https://njump.me/nevent1qqs09h58patpv9vfjpcss6v5nxv7m23u8g6g43nqvkjzgzescztucmspr9mhxue69uhhyetvv9ujumt0d4hhxarj9ecxjmnt9upzqtjzyy2ylrsceh5uj20j5e95v0e99s3epsvyctu2y0vrwyltvq33qvzqqqqqqyus4pu7
- Truth summit https://njump.me/nevent1qqswj6sv0wr4d4ppwzam5egr5k6nmqgjpwmsrlx2a7d4ndpfj0fxvcqpzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyztpa8q038vw5xluyhnydj5u39d7cpssvuswjhhjqj8q42jh4ul3wqcyqqqqqqgu0mzzh and vid https://njump.me/nevent1qqsptn8c8wyuhlqtjr5u767x20q4dmjvxy28cdj30t4v9phhf6y5a5spzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyztpa8q038vw5xluyhnydj5u39d7cpssvuswjhhjqj8q42jh4ul3wqcyqqqqqqgqklklu
- SEC chair confirmed https://beincrypto.com/sec-chair-paul-atkins-confirmed-senate-vote/
- pro bitcoin USA https://coinpedia.org/news/u-s-secretary-of-commerce-howard-lutnick-says-america-is-ready-for-bitcoin/
- tax cuts https://thehill.com/homenews/house/5272043-johnson-house-trump-agenda-memorial-day/
- "as much as we can get" https://cryptobriefing.com/trump-bitcoin-acquisition-strategy/
- ban on CBDC https://www.whitehouse.gov/presidential-actions/2025/01/strengthening-american-leadership-in-digital-financial-technology/
- Phoenix WoS leave https://www.coindesk.com/opinion/2024/04/29/wasabi-wallet-and-phoenix-leave-the-us-whats-next-for-non-custodial-crypto
- Trump hut 8 mining https://www.reuters.com/technology/hut-8-eric-trump-launch-bitcoin-mining-company-2025-03-31/
-
@ 8125b911:a8400883
2025-04-25 07:02:35

In Nostr, all data is stored as events. Decentralization is achieved by storing events on multiple relays, with signatures proving the ownership of these events. However, if you truly want to own your events, you should run your own relay to store them. Otherwise, if the relays you use fail or intentionally delete your events, you'll lose them forever.
For most people, running a relay is complex and costly. To solve this issue, I developed nostr-relay-tray, a relay that can be easily run on a personal computer and accessed over the internet.
Project URL: https://github.com/CodyTseng/nostr-relay-tray
This article will guide you through using nostr-relay-tray to run your own relay.
Download
Download the installation package for your operating system from the GitHub Release Page.
| Operating System      | File Format                         |
| --------------------- | ----------------------------------- |
| Windows               | `nostr-relay-tray.Setup.x.x.x.exe`  |
| macOS (Apple Silicon) | `nostr-relay-tray-x.x.x-arm64.dmg`  |
| macOS (Intel)         | `nostr-relay-tray-x.x.x.dmg`        |
| Linux                 | You should know which one to use    |

Installation
Since this app isn’t signed, you may encounter some obstacles during installation. Once installed, an ostrich icon will appear in the status bar. Click on the ostrich icon, and you'll see a menu where you can click the "Dashboard" option to open the relay's control panel for further configuration.
macOS Users:
- On first launch, go to "System Preferences > Security & Privacy" and click "Open Anyway."
- If you encounter a "damaged" message, run the following command in the terminal to remove the restrictions:
```bash
sudo xattr -rd com.apple.quarantine /Applications/nostr-relay-tray.app
```
Windows Users:
- On the security warning screen, click "More Info > Run Anyway."
Connecting
By default, nostr-relay-tray is only accessible locally through `ws://localhost:4869/`, which makes it quite limited. Therefore, we need to expose it to the internet.

In the control panel, click the "Proxy" tab and toggle the switch. You will then receive a "Public address" that you can use to access your relay from anywhere. It's that simple.
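If you want to confirm the relay is responding before (or after) exposing it, you can open a raw WebSocket to the local address and send a standard NIP-01 REQ. This is a minimal sketch, assuming a browser console or a runtime with a global `WebSocket`; it is not part of nostr-relay-tray itself:

```typescript
// Minimal sanity check: request the 10 most recent kind-1 notes from the local relay.
const ws = new WebSocket("ws://localhost:4869/");

ws.onopen = () => {
  ws.send(JSON.stringify(["REQ", "test-sub", { kinds: [1], limit: 10 }]));
};

ws.onmessage = (msg) => {
  const [type, , event] = JSON.parse(msg.data);
  if (type === "EVENT") console.log("stored event:", event);
  if (type === "EOSE") ws.close(); // end of stored events for this subscription
};
```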
Next, add this address to your relay list and position it as high as possible in the list. Most clients prioritize connecting to relays that appear at the top of the list, and relays lower in the list are often ignored.
Restrictions
Next, we need to set up some restrictions to prevent the relay from storing events that are irrelevant to you and wasting storage space. nostr-relay-tray allows for flexible and fine-grained configuration of which events to accept, but some of this is more complex and will not be covered here. If you're interested, you can explore this further later.
For now, I'll introduce a simple and effective strategy: WoT (Web of Trust). You can enable this feature in the "WoT & PoW" tab. Before enabling, you'll need to input your pubkey.
There's another important parameter, `Depth`, which represents the relationship depth between you and others. Someone you follow has a depth of 1, someone they follow has a depth of 2, and so on.

- Setting this parameter to 0 means your relay will only accept your own events.
- Setting it to 1 means your relay will accept events from you and the people you follow.
- Setting it to 2 means your relay will accept events from you, the people you follow, and the people they follow.
Currently, the maximum value for this parameter is 2.
Conclusion
You've now successfully run your own relay and set a simple restriction to prevent it from storing irrelevant events.
If you encounter any issues during use, feel free to submit an issue on GitHub, and I'll respond as soon as possible.
Not your relay, not your events.
-
@ 7d33ba57:1b82db35
2025-04-29 14:14:11

Located in eastern Poland, Lublin is a city where history, culture, and youthful energy come together. Often called the "Gateway to the East," Lublin blends Gothic and Renaissance architecture, vibrant street life, and deep historical roots—especially as a center of Jewish heritage and intellectual life.
🏙️ Top Things to See in Lublin
🏰 Lublin Castle
- A striking hilltop castle with a neo-Gothic façade and a beautifully preserved Romanesque chapel (Chapel of the Holy Trinity)
- Don’t miss the frescoes inside—a rare mix of Byzantine and Western art styles
🚪 Old Town (Stare Miasto)
- Wander through cobblestone streets, pastel buildings, and arched gateways
- Filled with cozy cafes, galleries, and vibrant murals
- The Grodzka Gate symbolizes the passage between Christian and Jewish quarters
🕯️ Lublin’s Jewish Heritage
- Visit the Grodzka Gate – NN Theatre, a powerful memorial and museum telling the story of the once-vibrant Jewish community
- Nearby Majdanek Concentration Camp offers a sobering but important historical experience
🎭 Culture & Events
- Lublin is known for its festivals, like Carnaval Sztukmistrzów (Festival of Magicians and Street Performers) and the Night of Culture
- The city has a thriving theatre and music scene, supported by its large student population
🌳 Green Spaces
- Relax in Saski Garden, a peaceful park with walking paths and fountains
- Or take a walk along the Bystrzyca River for a quieter, more local feel
🍽️ Local Tastes
- Sample Polish classics like pierogi, żurek (sour rye soup), and bigos (hunter’s stew)
- Look for modern twists on traditional dishes in Lublin’s growing number of bistros and artisan cafés
🚆 Getting There
- Easy access by train or bus from Warsaw (2–2.5 hours)
- Compact center—easily walkable
-
@ e691f4df:1099ad65
2025-04-24 18:56:12

Viewing Bitcoin Through the Light of Awakening
Ankh & Ohm Capital’s Overview of the Psycho-Spiritual Nature of Bitcoin
Glossary:
I. Preface: The Logos of Our Logo
II. An Oracular Introduction
III. Alchemizing Greed
IV. Layers of Fractalized Thought
V. Permissionless Individuation
VI. Dispelling Paradox Through Resonance
VII. Ego Deflation
VIII. The Coin of Great Price
Preface: The Logos of Our Logo
Before we offer our lens on Bitcoin, it’s important to illuminate the meaning behind Ankh & Ohm’s name and symbol. These elements are not ornamental—they are foundational, expressing the cosmological principles that guide our work.
Our mission is to bridge the eternal with the practical. As a Bitcoin-focused family office and consulting firm, we understand capital not as an end, but as a tool—one that, when properly aligned, becomes a vehicle for divine order. We see Bitcoin not simply as a technological innovation but as an emanation of the Divine Logos—a harmonic expression of truth, transparency, and incorruptible structure. Both the beginning and the end, the Alpha and Omega.
The Ankh (☥), an ancient symbol of eternal life, is a key to the integration of opposites. It unites spirit and matter, force and form, continuity and change. It reminds us that capital, like Life, must not only be generative, but regenerative; sacred. Money must serve Life, not siphon from it.
The Ohm (Ω) holds a dual meaning. In physics, it denotes a unit of electrical resistance—the formative tension that gives energy coherence. In the Vedic tradition, Om (ॐ) is the primordial vibration—the sound from which all existence unfolds. Together, these symbols affirm a timeless truth: resistance and resonance are both sacred instruments of the Creator.
Ankh & Ohm, then, represents our striving for union, for harmony —between the flow of life and intentional structure, between incalculable abundance and measured restraint, between the lightbulb’s electrical impulse and its light-emitting filament. We stand at the threshold where intention becomes action, and where capital is not extracted, but cultivated in rhythm with the cosmos.
We exist to shepherd this transformation, as guides of this threshold —helping families, founders, and institutions align with a deeper order, where capital serves not as the prize, but as a pathway to collective Presence, Purpose, Peace and Prosperity.
An Oracular Introduction
Bitcoin is commonly understood as the first truly decentralized and secure form of digital money—a breakthrough in monetary sovereignty. But this view, while technically correct, is incomplete and spiritually shallow. Bitcoin is more than a tool for economic disruption. Bitcoin represents a mythic threshold: a symbol of the psycho-spiritual shift that many ancient traditions have long foretold.
For millennia, sages and seers have spoken of a coming Golden Age. In the Vedic Yuga cycles, in Plato’s Great Year, in the Eagle and Condor prophecies of the Americas—there exists a common thread: that humanity will emerge from darkness into a time of harmony, cooperation, and clarity. That the veil of illusion (maya, materiality) will thin, and reality will once again become transparent to the transcendent. In such an age, systems based on scarcity, deception, and centralization fall away. A new cosmology takes root—one grounded in balance, coherence, and sacred reciprocity.
But we must ask—how does such a shift happen? How do we cross from the age of scarcity, fear, and domination into one of coherence, abundance, and freedom?
One possible answer lies in the alchemy of incentive.
Bitcoin operates not just on the rules of computer science or Austrian economics, but on something far more old and subtle: the logic of transformation. It transmutes greed—a base instinct rooted in scarcity—into cooperation, transparency, and incorruptibility.
In this light, Bitcoin becomes more than code—it becomes a psychoactive protocol, one that rewires human behavior by aligning individual gain with collective integrity. It is not simply a new form of money. It is a new myth of value. A new operating system for human consciousness.
Bitcoin does not moralize. It harmonizes. It transforms the instinct for self-preservation into a pathway for planetary coherence.
Alchemizing Greed
At the heart of Bitcoin lies the ancient alchemical principle of transmutation: that which is base may be refined into gold.
Greed, long condemned as a vice, is not inherently evil. It is a distorted longing. A warped echo of the drive to preserve life. But in systems built on scarcity and deception, this longing calcifies into hoarding, corruption, and decay.
Bitcoin introduces a new game. A game with memory. A game that makes deception inefficient and truth profitable. It does not demand virtue—it encodes consequence. Its design does not suppress greed; it reprograms it.
In traditional models, game theory often illustrates the fragility of trust. The Prisoner’s Dilemma reveals how self-interest can sabotage collective well-being. But Bitcoin inverts this. It creates an environment where self-interest and integrity converge—where the most rational action is also the most truthful.
Its ledger, immutable and transparent, exposes manipulation for what it is: energetically wasteful and economically self-defeating. Dishonesty burns energy and yields nothing. The network punishes incoherence, not by decree, but by natural law.
This is the spiritual elegance of Bitcoin: it does not suppress greed—it transmutes it. It channels the drive for personal gain into the architecture of collective order. Miners compete not to dominate, but to validate. Nodes collaborate not through trust, but through mathematical proof.
This is not austerity. It is alchemy.
Greed, under Bitcoin, is refined. Tempered. Re-forged into a generative force—no longer parasitic, but harmonic.
Layers of Fractalized Thought Fragments
All living systems are layered. So is the cosmos. So is the human being. So is a musical scale.
At its foundation lies the timechain—the pulsing, incorruptible record of truth. Like the heart, it beats steadily. Every block, like a pulse, affirms its life through continuity. The difficulty adjustment—Bitcoin’s internal calibration—functions like heart rate variability, adapting to pressure while preserving coherence.
Above this base layer is the Lightning Network—a second layer facilitating rapid, efficient transactions. It is the nervous system: transmitting energy, reducing latency, enabling real-time interaction across a distributed whole.
Beyond that, emerging tools like Fedimint and Cashu function like the capillaries—bringing vitality to the extremities, to those underserved by legacy systems. They empower the unbanked, the overlooked, the forgotten. Privacy and dignity in the palms of those the old system refused to see.
And then there is NOSTR—the decentralized protocol for communication and creation. It is the throat chakra, the vocal cords of the “freedom-tech” body. It reclaims speech from the algorithmic overlords, making expression sovereign once more. It is also the reproductive system, as it enables the propagation of novel ideas and protocols in fertile, uncensorable soil.
Each layer plays its part. Not in hierarchy, but in harmony. In holarchy. Bitcoin and other open source protocols grow not through exogenous command, but through endogenous coherence. Like cells in an organism. Like a song.
Imagine the cell as a shard of glass from a shattered holographic plate—from that single shard, the whole perspectival, moving image can be reconstructed. DNA isn’t only a logical script of base pairs, but an evolving, progressive song, its lyrics imbued with wise reflections on relationships. The nucleus sings, the cell responds—not by command, but by memory. Life is not imposed; it is expressed. A reflection of a hidden pattern.
Bitcoin chants this. Each node, a living cell, holds the full timechain—Truth distributed, incorruptible. Remove one, and the whole remains. This isn’t redundancy. It’s a revelation on the power of protection in Truth.
Consensus is communion. Verification becomes a sacred rite—Truth made audible through math.
Not just the signal; the song. A web of self-expression woven from Truth.
No center, yet every point alive with the whole. Like Indra’s Net, each reflects all. This is more than currency and information exchange. It is memory; a self-remembering Mind, unfolding through consensus and code. A Mind reflecting the Truth of reality at the speed of thought.
Heuristics are mental shortcuts—efficient, imperfect, alive. Like cells, they must adapt or decay. To become unbiased is to have self-balancing heuristics which carry feedback loops within them: they listen to the environment, mutate when needed, and survive by resonance with reality. Mutation is not error, but evolution. Its rules are simple, but their expression is dynamic.
What persists is not rigidity, but pattern.
To think clearly is not necessarily to be certain, but to dissolve doubt by listening, adjusting, and evolving thought itself.
To understand Bitcoin is simply to listen—patiently, clearly, as one would to a familiar rhythm returning.
Permissionless Individuation
Bitcoin is a path. One that no one can walk for you.
Said differently, it is not a passive act. It cannot be spoon-fed. Like a spiritual path, it demands initiation, effort, and the willingness to question inherited beliefs.
Because Bitcoin is permissionless, no one can be forced to adopt it. One must choose to engage it—compelled by need, interest, or intuition. Each person who embarks undergoes their own version of the hero’s journey.
Carl Jung called this process Individuation—the reconciliation of fragmented psychic elements into a coherent, mature Self. Bitcoin mirrors this: it invites individuals to confront the unconscious assumptions of the fiat paradigm, and to re-integrate their relationship to time, value, and agency.
In Western traditions—alchemy, Christianity, Kabbalah—the individual is sacred, and salvation is personal. In Eastern systems—Daoism, Buddhism, the Vedas—the self is ultimately dissolved into the cosmic whole. Bitcoin, in a paradoxical way, echoes both: it empowers the individual, while aligning them with a holistic, transcendent order.
To truly see Bitcoin is to allow something false to die. A belief. A habit. A self-concept.
In that death—a space opens for deeper connection with the Divine itSelf.
In that dissolution, something luminous is reborn.
After the passing, Truth becomes resurrected.
Dispelling Paradox Through Resonance
There is a subtle paradox encoded into the hero’s journey: each begins in solitude, yet the awakening affects the collective.
No one can be forced into understanding Bitcoin. Like a spiritual truth, it must be seen. And yet, once seen, it becomes nearly impossible to unsee—and easier for others to glimpse. The pattern catches.
This phenomenon mirrors the concept of morphic resonance, as proposed and empirically tested by biologist Rupert Sheldrake. Once a critical mass of individuals begins to embody a new behavior or awareness, it becomes easier—instinctive—for others to follow suit. Like the proverbial hundredth monkey who begins to wash the fruit in the sea water, and suddenly, monkeys across islands begin doing the same—without ever meeting.
When enough individuals embody a pattern, it ripples outward. Not through propaganda, but through field effect and wave propagation. It becomes accessible, instinctive, familiar—even across great distance.
Bitcoin spreads in this way. Not through centralized broadcast, but through subtle resonance. Each new node, each individual who integrates the protocol into their life, strengthens the signal for others. The protocol doesn’t shout; it hums, oscillates and vibrates——persistently, coherently, patiently.
One awakens. Another follows. The current builds. What was fringe becomes familiar. What was radical becomes obvious.
This is the sacred geometry of spiritual awakening. One awakens, another follows, and soon the fluidic current is strong enough to carry the rest. One becomes two, two become many, and eventually the many become One again. This tessellation reverberates through the human aura, not as ideology, but as perceivable pattern recognition.
Bitcoin’s most powerful marketing tool is truth. Its most compelling evangelist is reality. Its most unstoppable force is resonance.
Therefore, Bitcoin is not just financial infrastructure—it is psychic scaffolding. It is part of the subtle architecture through which new patterns of coherence ripple across the collective field.
The training wheels from which humanity learns to embody Peace and Prosperity.
Ego Deflation
The process of awakening is not linear, and its beginning is rarely gentle—it usually begins with disruption, with ego inflation and destruction.
To individuate is to shape a center; to recognize peripherals and create boundaries—to say, “I am.” But without integration, the ego tilts—collapsing into void or inflating into noise. Fiat reflects this pathology: scarcity hoarded, abundance simulated. Stagnation becomes disguised as safety, and inflation masquerades as growth.
In other words, to become whole, the ego must first rise—claiming agency, autonomy, and identity. However, when left unbalanced, it inflates, or implodes. It forgets its context. It begins to consume rather than connect. And so the process must reverse: what inflates must deflate.
In the fiat paradigm, this inflation is literal. More is printed, and ethos is diluted. Savings decay. Meaning erodes. Value is abstracted. The economy becomes bloated with inaudible noise. And like the psyche that refuses to confront its own shadow, it begins to collapse under the weight of its own illusions.
But under Bitcoin, time is honored. Value is preserved. Energy is not abstracted but grounded.
Bitcoin is inherently deflationary—in both economic and spiritual senses. With a fixed supply, it reveals what is truly scarce. Not money, not status—but the finite number of heartbeats we each carry.
To see Bitcoin is to feel that limit in one’s soul. To hold Bitcoin is to feel Time’s weight again. To sense the importance of Bitcoin is to feel the value of preserved, potential energy. It is to confront the reality that what matters cannot be printed, inflated, or faked. In this way, Bitcoin gently confronts the ego—not through punishment, but through clarity.
Deflation, rightly understood, is not collapse—it is refinement. It strips away illusion, bloat, and excess. It restores the clarity of essence.
Spiritually, this is liberation.
The Coin of Great Price
There is an ancient parable told by a wise man:
“The kingdom of heaven is like a merchant seeking fine pearls, who, upon finding one of great price, sold all he had and bought it.”
Bitcoin is such a pearl.
But the ledger is more than a chest full of treasure. It is a key to the heart of things.
It is not just software—it is sacrament.
A symbol of what cannot be corrupted. A mirror of divine order etched into code. A map back to the sacred center.
It reflects what endures. It encodes what cannot be falsified. It remembers what we forgot: that Truth, when aligned with form, becomes Light once again.
Its design is not arbitrary. It speaks the language of life itself—
The elliptic orbits of the planets mirrored in its cryptography,
The logarithmic spiral of the nautilus shell discloses its adoption rate,
The interconnectivity of mycelium in soil reflects the network of nodes in cyberspace,
A webbed breadth of neurons across synaptic space fires with each new confirmed transaction.
It is geometry in devotion. Stillness in motion.
It is the Logos clothed in protocol.
What this key unlocks is beyond external riches. It is the eternal gold within us.
Clarity. Sovereignty. The unshakeable knowing that what is real cannot be taken. That what is sacred was never for sale.
Bitcoin is not the destination.
It is the Path.
And we—when we are willing to see it—are the Temple it leads back to.
-
@ 40b9c85f:5e61b451
2025-04-24 15:27:02Introduction
Data Vending Machines (DVMs) have emerged as a crucial component of the Nostr ecosystem, offering specialized computational services to clients across the network. As defined in NIP-90, DVMs operate on an apparently simple principle: "data in, data out." They provide a marketplace for data processing where users request specific jobs (like text translation, content recommendation, or AI text generation).
While DVMs have gained significant traction, the current specification faces challenges that hinder widespread adoption and consistent implementation. This article explores some ideas on how we can apply the reflection pattern, a well established approach in RPC systems, to address these challenges and improve the DVM ecosystem's clarity, consistency, and usability.
The Current State of DVMs: Challenges and Limitations
The NIP-90 specification provides a broad framework for DVMs, but this flexibility has led to several issues:
1. Inconsistent Implementation
As noted by hzrd149 in "DVMs were a mistake" every DVM implementation tends to expect inputs in slightly different formats, even while ostensibly following the same specification. For example, a translation request DVM might expect an event ID in one particular format, while an LLM service could expect a "prompt" input that's not even specified in NIP-90.
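To make the mismatch concrete, here is a rough sketch of two job requests aimed at the same kind of service; the kind number, tag values, and parameter names are hypothetical, but the shape follows NIP-90's `i` and `param` tags:

```json
[
  {
    "kind": 5002,
    "tags": [
      ["i", "<event-id>", "event"],
      ["param", "language", "es"]
    ],
    "content": ""
  },
  {
    "kind": 5002,
    "tags": [
      ["i", "Translate this sentence to Spanish", "text"],
      ["param", "lang", "es"]
    ],
    "content": ""
  }
]
```

Both look reasonable under the current specification, yet a DVM written against one format will silently ignore or reject the other.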
2. Fragmented Specifications
The DVM specification reserves a range of event kinds (5000-6000), each meant for different types of computational jobs. While creating sub-specifications for each job type is being explored as a possible solution for clarity, in a decentralized and permissionless landscape like Nostr, relying solely on specification enforcement won't be effective for creating a healthy ecosystem. A more comprehensible approach is needed that works with, rather than against, the open nature of the protocol.
3. Ambiguous API Interfaces
There's no standardized way for clients to discover what parameters a specific DVM accepts, which are required versus optional, or what output format to expect. This creates uncertainty and forces developers to rely on documentation outside the protocol itself, if such documentation exists at all.
The Reflection Pattern: A Solution from RPC Systems
The reflection pattern in RPC systems offers a compelling solution to many of these challenges. At its core, reflection enables servers to provide metadata about their available services, methods, and data types at runtime, allowing clients to dynamically discover and interact with the server's API.
In established RPC frameworks like gRPC, reflection serves as a self-describing mechanism where services expose their interface definitions and requirements. In MCP, reflection is used to expose the capabilities of the server, such as tools, resources, and prompts. Clients can learn about available capabilities without prior knowledge, and systems can adapt to changes without requiring rebuilds or redeployments. This standardized introspection creates a unified way to query service metadata, making tools like `grpcurl` possible without requiring precompiled stubs.

How Reflection Could Transform the DVM Specification
By incorporating reflection principles into the DVM specification, we could create a more coherent and predictable ecosystem. DVMs already implement some sort of reflection through the use of 'nip90params', which allow clients to discover some parameters, constraints, and features of the DVMs, such as whether they accept encryption, nutzaps, etc. However, this approach could be expanded to provide more comprehensive self-description capabilities.
1. Defined Lifecycle Phases
Similar to the Model Context Protocol (MCP), DVMs could benefit from a clear lifecycle consisting of an initialization phase and an operation phase. During initialization, the client and DVM would negotiate capabilities and exchange metadata, with the DVM providing a JSON schema containing its input requirements. nip-89 (or other) announcements can be used to bootstrap the discovery and negotiation process by providing the input schema directly. Then, during the operation phase, the client would interact with the DVM according to the negotiated schema and parameters.
2. Schema-Based Interactions
Rather than relying on rigid specifications for each job type, DVMs could self-advertise their schemas. This would allow clients to understand which parameters are required versus optional, what type validation should occur for inputs, what output formats to expect, and what payment flows are supported. By internalizing the input schema of the DVMs they wish to consume, clients gain clarity on how to interact effectively.
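As an illustration only, a translation DVM could publish a standard JSON Schema like the sketch below; the property names are assumptions made for the sake of the example, not part of any NIP:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Translation job input",
  "type": "object",
  "properties": {
    "input": { "type": "string", "description": "Event id or raw text to translate" },
    "input_type": { "type": "string", "enum": ["event", "text"] },
    "language": { "type": "string", "description": "Target language code, e.g. 'es'" }
  },
  "required": ["input", "language"]
}
```

A client that fetches this schema can validate a request locally before paying for a job, instead of discovering the mismatch from a failed or garbage result.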
3. Capability Negotiation
Capability negotiation would enable DVMs to advertise their supported features, such as encryption methods, payment options, or specialized functionalities. This would allow clients to adjust their interaction approach based on the specific capabilities of each DVM they encounter.
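One possible shape for such an advertisement is sketched below; every field name here is hypothetical and would need to be agreed upon in a specification:

```json
{
  "capabilities": {
    "encryption": ["nip04", "nip44"],
    "payments": ["lightning", "nutzap"],
    "job_kinds": [5002, 5250]
  }
}
```

The exact encoding matters less than the principle: the DVM states what it supports, and the client adapts instead of guessing.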
Implementation Approach
While building DVMCP, I realized that the RPC reflection pattern used there could be beneficial for constructing DVMs in general. Since DVMs already follow an RPC style for their operation, and reflection is a natural extension of this approach, it could significantly enhance and clarify the DVM specification.
A reflection enhanced DVM protocol could work as follows:

1. Discovery: Clients discover DVMs through existing NIP-89 application handlers; input schemas could also be advertised in NIP-89 announcements, making the second step unnecessary.
2. Schema Request: Clients request the DVM's input schema for the specific job type they're interested in.
3. Validation: Clients validate their request against the provided schema before submission.
4. Operation: The job proceeds through the standard NIP-90 flow, but with clearer expectations on both sides.
Parallels with Other Protocols
This approach has proven successful in other contexts. The Model Context Protocol (MCP) implements a similar lifecycle with capability negotiation during initialization, allowing any client to communicate with any server as long as they adhere to the base protocol. MCP and DVM protocols share fundamental similarities, both aim to expose and consume computational resources through a JSON-RPC-like interface, albeit with specific differences.
gRPC's reflection service similarly allows clients to discover service definitions at runtime, enabling generic tools to work with any gRPC service without prior knowledge. In the REST API world, OpenAPI/Swagger specifications document interfaces in a way that makes them discoverable and testable.
DVMs would benefit from adopting these patterns while maintaining the decentralized, permissionless nature of Nostr.
Conclusion
I am not attempting to rewrite the DVM specification; rather, explore some ideas that could help the ecosystem improve incrementally, reducing fragmentation and making the ecosystem more comprehensible. By allowing DVMs to self describe their interfaces, we could maintain the flexibility that makes Nostr powerful while providing the structure needed for interoperability.
For developers building DVM clients or libraries, this approach would simplify consumption by providing clear expectations about inputs and outputs. For DVM operators, it would establish a standard way to communicate their service's requirements without relying on external documentation.
I am currently developing DVMCP following these patterns. Of course, DVMs and MCP servers have different details; MCP includes capabilities such as tools, resources, and prompts on the server side, as well as 'roots' and 'sampling' on the client side, creating a bidirectional way to consume capabilities. In contrast, DVMs typically function similarly to MCP tools, where you call a DVM with an input and receive an output, with each job type representing a different categorization of the work performed.
Without further ado, I hope this article has provided some insight into the potential benefits of applying the reflection pattern to the DVM specification.
-
@ fd0bcf8c:521f98c0
"The vag' sits on the edge of the highway, broken, hungry. Overhead flies a transcontinental plane filled with highly paid executives. The upper class has taken to the air, the lower class to the roads: there is no longer any bond between them, they are two nations." —The Sovereign Individual
Fire
I was talking to a friend last night. Coffee in hand. Watching flames consume branches. Spring night on his porch.
He believed in America's happy ending. Debt would vanish. Inflation would cool. Manufacturing would return. Good guys win.
I nodded. I wanted to believe.
He leaned forward, toward the flame. I sat back, watching both fire and sky.
His military photos hung inside. Service medals displayed. Patriotism bone-deep.
The pendulum clock on his porch wall swung steadily. Tick. Tock. Measuring moments. Marking epochs.
History tells another story. Not tragic. Just true.
Our time has come. America cut off couldn't compete. Factories sit empty. Supply chains span oceans. Skills lack. Children lag behind. Rebuilding takes decades.
Truth hurts. Truth frees.
Cycles
History moves in waves. Every 500 years, power shifts. Systems fall. Systems rise.
500 BC - Greek coins changed everything. Markets flourished. Athens dominated.
1 AD - Rome ruled commerce. One currency. Endless roads. Bustling ports.
500 AD - Rome faded. Not overnight. Slowly. Trade withered. Cities emptied. Money debased. Roads crumbled. Local strongmen rose. Peasants sought protection. Feudalism emerged.
People still lived. Still worked. Horizons narrowed. Knowledge concentrated. Most barely survived. Rich adapted. Poor suffered.
Self-reliance determined survival. Those growing food endured. Those making essential goods continued. Those dependent on imperial systems suffered most.
1000 AD - Medieval revival began. Venice dominated seas. China printed money. Cathedrals rose. Universities formed.
1500 AD - Europeans sailed everywhere. Spanish silver flowed. Banks financed kingdoms. Companies colonized continents. Power moved west.
The pendulum swung. East to West. West to East. Civilizations rose. Civilizations fell.
2000 AD - Pattern repeats. America strains. Digital networks expand. China rises. Debt swells. Old systems break.
We stand at the hinge.
Warnings
Signs everywhere. Dollar weakens globally. BRICS builds alternatives. Yuan buys oil. Factories rust. Debt exceeds GDP. Interest consumes budgets.
Bridges crumble. Education falters. Politicians chase votes. We consume. We borrow.
Rome fell gradually. Citizens barely noticed. Taxes increased. Currency devalued. Military weakened. Services decayed. Life hardened by degrees.
East Rome adapted. Survived centuries. West fragmented. Trade shrank. Some thrived. Others suffered. Life changed permanently.
Those who could feed themselves survived best. Those who needed the system suffered worst.
Pendulum
My friend poured another coffee. The burn pile popped loudly. Sparks flew upward like dying stars.
His face changed as facts accumulated. Military man. Trained to assess threats. Detect weaknesses.
He stared at the fire. National glory reduced to embers. Something shifted in his expression. Recognition.
His fingers tightened around his mug. Knuckles white. Eyes fixed on dying flames.
I traced the horizon instead. Observing landscape. Noting the contrast.
He touched the flag on his t-shirt. I adjusted my plain gray one.
The unpayable debt. The crumbling infrastructure. The forgotten manufacturing. The dependent supply chains. The devaluing currency.
The pendulum clock ticked. Relentless. Indifferent to empires.
His eyes said what his patriotism couldn't voice. Something fundamental breaking.
I'd seen this coming. Years traveling showed me. Different systems. Different values. American exceptionalism viewed from outside.
Pragmatism replaced my old idealism. See things as they are. Not as wished.
The logs shifted. Flames reached higher. Then lower. The cycle of fire.
Divergence
Society always splits during shifts.
Some adapt. Some don't.
Printing arrived. Scribes starved. Publishers thrived. Information accelerated. Readers multiplied. Ideas spread. Adapters prospered.
Steam engines came. Weavers died. Factory owners flourished. Villages emptied. Cities grew. Coal replaced farms. Railways replaced wagons. New skills meant survival.
Computers transformed everything. Typewriters vanished. Software boomed. Data replaced paper. Networks replaced cabinets. Programmers replaced typists. Digital skills determined success.
The self-reliant thrived in each transition. Those waiting for rescue fell behind.
Now AI reshapes creativity. Some artists resist. Some harness it. Gap widens daily.
Bitcoin offers refuge. Critics mock. Adopters build wealth. The distance grows.
Remote work redraws maps. Office-bound struggle. Location-free flourish.
The pendulum swings. Power shifts. Some rise with it. Some fall against it.
Two societies emerge. Adaptive. Resistant. Prepared. Pretending.
Advantage
Early adapters win. Not through genius. Through action.
First printers built empires. First factories created dynasties. First websites became giants.
Bitcoin followed this pattern. Laptop miners became millionaires. Early buyers became legends.
Critics repeat themselves: "Too volatile." "No value." "Government ban coming."
Doubters doubt. Builders build. Gap widens.
Self-reliance accelerates adaptation. No permission needed. No consensus required. Act. Learn. Build.
The burn pile flames like empire's glory. Bright. Consuming. Temporary.
Blindness
Our brains see tigers. Not economic shifts.
We panic at headlines. We ignore decades-long trends.
We notice market drops. We miss debt cycles.
We debate tweets. We ignore revolutions.
Not weakness. Just humanity. Foresight requires work. Study. Thought.
Self-reliant thinking means seeing clearly. No comforting lies. No pleasing narratives. Just reality.
The clock pendulum swings. Time passes regardless of observation.
Action
Empires fall. Families need security. Children need futures. Lives need meaning.
You can adapt faster than nations.
Assess honestly. What skills matter now? What preserves wealth? Who helps when needed?
Never stop learning. Factory workers learned code. Taxi drivers joined apps. Photographers went digital.
Diversify globally. No country owns tomorrow. Learn languages. Make connections. Stay mobile.
Protect your money. Dying empires debase currencies. Romans kept gold. Bitcoin offers similar shelter.
Build resilience. Grow food. Make energy. Stay strong. Keep friends. Read old books. Some things never change.
Self-reliance matters most. Can you feed yourself? Can you fix things? Can you solve problems? Can you create value without systems?
Movement
Humans were nomads first. Settlers second. Movement in our blood.
Our ancestors followed herds. Sought better lands. Survival meant mobility.
The pendulum swings here too. Nomad to farmer. City-dweller to digital nomad.
Rome fixed people to land. Feudalism bound serfs to soil. Nations created borders. Companies demanded presence.
Now technology breaks chains. Work happens anywhere. Knowledge flows everywhere.
The rebuild America seeks requires fixed positions. Factory workers. Taxpaying citizens in permanent homes.
But technology enables escape. Remote work. Digital currencies. Borderless businesses.
The self-reliant understand mobility as freedom. One location means one set of rules. One economy. One fate.
Many locations mean options. Taxes become predatory? Leave. Opportunities disappear? Find new ones.
Patriotism celebrates roots. Wisdom remembers wings.
My friend's boots dug into his soil. Planted. Territorial. Defending.
My Chucks rested lightly. Ready. Adaptable. Departing.
His toolshed held equipment to maintain boundaries. Fences. Hedges. Property lines.
My backpack contained tools for crossing them. Chargers. Adapters. Currency.
The burn pile flame flickers. Fixed in place. The spark flies free. Movement its nature.
During Rome's decline, the mobile survived best. Merchants crossing borders. Scholars seeking patrons. Those tied to crumbling systems suffered most.
Location independence means personal resilience. Economic downturns become geographic choices. Political oppression becomes optional suffering.
Technology shrinks distance. Digital work. Video relationships. Online learning.
Self-sovereignty requires mobility. The option to walk away. The freedom to arrive elsewhere.
Two more worlds diverge. The rooted. The mobile. The fixed. The fluid. The loyal. The free.
Hope
Not decline. Transition. Painful but temporary.
America may weaken. Humanity advances. Technology multiplies possibilities. Poverty falls. Knowledge grows.
Falling empires see doom. Rising ones see opportunity. Both miss half the picture.
Every shift brings destruction and creation. Rome fell. Europe struggled. Farms produced less. Cities shrank. Trade broke down.
Yet innovation continued. Water mills appeared. New plows emerged. Monks preserved books. New systems evolved.
Different doesn't mean worse for everyone.
Some industries die. Others birth. Some regions fade. Others bloom. Some skills become useless. Others become gold.
The self-reliant thrive in any world. They adapt. They build. They serve. They create.
Choose your role. Nostalgia or building.
The pendulum swings. East rises again. The cycle continues.
Fading
The burn pile dimmed. Embers fading. Night air cooling.
My friend's shoulders changed. Tension releasing. Something accepted.
His patriotism remained. His illusions departed.
The pendulum clock ticked steadily. Measuring more than minutes. Measuring eras.
Two coffee cups. His: military-themed, old and chipped but cherished. Mine: plain porcelain, new and unmarked.
His eyes remained on smoldering embers. Mine moved between him and the darkening trees.
His calendar marked local town meetings. Mine tracked travel dates.
The last flame flickered out. Spring peepers filled the silence.
In darkness, we watched smoke rise. The world changing. New choices ahead.
No empire lasts forever. No comfort in denial. Only clarity in acceptance.
Self-reliance the ancient answer. Build your skills. Secure your resources. Strengthen your body. Feed your mind. Help your neighbors.
The burn pile turned to ash. Empire's glory extinguished.
He stood facing his land. I faced the road.
A nod between us. Respect across division. Different strategies for the same storm.
He turned toward his home. I toward my vehicle.
The pendulum continued swinging. Power flowing east once more. Five centuries ending. Five centuries beginning.
"Bear in mind that everything that exists is already fraying at the edges." — Marcus Aurelius
Tomorrow depends not on nations. On us.
-
@ 61bf790b:fe18b062
2025-04-29 12:23:09In a vast digital realm, two cities stood side by side: the towering, flashing metropolis of Feedia, and the decentralized, quiet city of Nostra.
Feedia was loud—blinding, buzzing, and always on. Screens plastered every wall, whispering the latest trends into citizens’ ears. But in this city, what you saw wasn’t up to you. It was determined by a towering, unseen force known as The Algorithm. It didn’t care what was true, meaningful, or helpful—only what would keep your eyes glued and your attention sold.
In Feedia, discovery wasn’t earned. It was assigned.
And worse—there was a caste system. To have a voice, you needed a Blue Check—a glowing badge that marked you as “worthy.” To get one, you had to pay or play. Pay monthly dues to the high towers or entertain The Algorithm enough to be deemed “valuable.” If you refused or couldn’t afford it, your voice was cast into the noise—buried beneath outrage bait and celebrity screams.
The unmarked were like ghosts—speaking into the void while the checked dined in Algorithm-favored towers. It was a digital monarchy dressed up as a democracy.
Then, there was Nostra.
There were no glowing checkmarks in Nostra—just signal. Every citizen had a light they carried, one that grew brighter the more they contributed: thoughtful posts, reshared ideas, built tools, or boosted others. Discovery was based not on payment or privilege, but participation and value.
In Nostra, you didn’t rise because you paid the gatekeeper—you rose because others lifted you. You weren’t spoon-fed; you sought, you found, you earned attention. It was harder, yes. But it was real.
And slowly, some in Feedia began to awaken. They grew tired of being fed fast-food content and ignored despite their voices. They looked across the river to Nostra, where minds weren’t bought—they were built.
And one by one, they began to cross.
-
@ 6e64b83c:94102ee8
2025-04-23 20:23:34How to Run Your Own Nostr Relay on Android with Cloudflare Domain
Prerequisites
- Install Citrine on your Android device:
  - Visit https://github.com/greenart7c3/Citrine/releases
  - Download the latest release using:
    - zap.store
    - Obtainium
    - F-Droid
    - Or download the APK directly
  - Note: You may need to enable "Install from Unknown Sources" in your Android settings
- Domain Requirements:
  - Purchase a domain if you don't have one
  - Transfer your domain to Cloudflare if it's not already there (for free SSL certificates and cloudflared support)
- Tools to use:
  - nak (the nostr army knife):
    - Download from https://github.com/fiatjaf/nak/releases
    - Installation steps:

      For Linux/macOS:

      ```bash
      # Download the appropriate version for your system
      wget https://github.com/fiatjaf/nak/releases/latest/download/nak-linux-amd64    # for Linux
      # or
      wget https://github.com/fiatjaf/nak/releases/latest/download/nak-darwin-amd64   # for macOS

      # Make it executable
      chmod +x nak-*

      # Move to a directory in your PATH
      sudo mv nak-* /usr/local/bin/nak
      ```

      For Windows:

      ```batch
      :: Download the Windows version
      curl -L -o nak.exe https://github.com/fiatjaf/nak/releases/latest/download/nak-windows-amd64.exe

      :: Move to a directory in your PATH (e.g., C:\Windows)
      move nak.exe C:\Windows\nak.exe
      ```

      Verify installation:

      ```bash
      nak --version
      ```
Setting Up Citrine
- Open the Citrine app
- Start the server
- You'll see it running on `ws://127.0.0.1:4869` (local network only)
- Go to settings and paste your npub into "Accept events signed by" inbox and press the + button. This prevents others from publishing events to your personal relay.
Installing Required Tools
- Install Termux from Google Play Store
- Open Termux and run:
  ```bash
  pkg update && pkg install wget
  wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb
  dpkg -i cloudflared-linux-arm64.deb
  ```
Cloudflare Authentication
- Run the authentication command:
  ```bash
  cloudflared tunnel login
  ```
- Follow the instructions:
- Copy the provided URL to your browser
- Log in to your Cloudflare account
- If the URL expires, copy it again after logging in
Creating the Tunnel
- Create a new tunnel:
  ```bash
  cloudflared tunnel create <TUNNEL_NAME>
  ```

- Choose any name you prefer for your tunnel
- Copy the tunnel ID after creating the tunnel
- Create and configure the tunnel config:

  ```bash
  touch ~/.cloudflared/config.yml
  nano ~/.cloudflared/config.yml
  ```

- Add this configuration (replace the placeholders with your values):

  ```yaml
  tunnel: <TUNNEL_ID>
  credentials-file: /data/data/com.termux/files/home/.cloudflared/<TUNNEL_ID>.json
  ingress:
    - hostname: nostr.yourdomain.com
      service: ws://localhost:4869
    - service: http_status:404
  ```
- Note: In nano editor: `CTRL+O` and Enter to save, `CTRL+X` to exit
Note: Check the credentials file path in the logs
-
Validate your configuration:
bash cloudflared tunnel validate
-
Start the tunnel:
bash cloudflared tunnel run my-relay
Preventing Android from Killing the Tunnel
Run these commands to maintain tunnel stability:
bash date && apt install termux-tools && termux-setup-storage && termux-wake-lock echo "nameserver 1.1.1.1" > $PREFIX/etc/resolv.conf
Tip: You can open multiple Termux sessions by swiping from the left edge of the screen while keeping your tunnel process running.
Updating Your Outbox Model Relays
Once your relay is running and accessible via your domain, you'll want to update your relay list in the Nostr network. This ensures other clients know about your relay and can connect to it.
Decoding npub (Public Key)
Private keys (nsec) and public keys (npub) are encoded in bech32 format, which includes: - A prefix (like nsec1, npub1 etc.) - The encoded data - A checksum
This format makes keys: - Easy to distinguish - Hard to copy incorrectly
However, most tools require these keys in hexadecimal (hex) format.
To decode an npub string to its hex format:
bash nak decode nostr:npub1dejts0qlva8mqzjlrxqkc2tmvs2t7elszky5upxaf3jha9qs9m5q605uc4
Change it with your own npub.
bash { "pubkey": "6e64b83c1f674fb00a5f19816c297b6414bf67f015894e04dd4c657e94102ee8" }
Copy the pubkey value in quotes.
Create a kind 10002 event with your relay list:
- Include your new relay with write permissions
- Include other relays you want to read from and write to, omit 3rd parameter to make it both read and write
Example format:
json { "kind": 10002, "tags": [ ["r", "wss://your-relay-domain.com", "write"], ["r", "wss://eden.nostr.land/"], ["r", "wss://nos.lol/"], ["r", "wss://nostr.bitcoiner.social/"], ["r", "wss://nostr.mom/"], ["r", "wss://relay.primal.net/"], ["r", "wss://nostr.wine/", "read"], ["r", "wss://relay.damus.io/"], ["r", "wss://relay.nostr.band/"], ["r", "wss://relay.snort.social/"] ], "content": "" }
Save it to a file called
event.json
Note: Add or remove any relays you want. To check your existing 10002 relays: - Visit https://nostr.band/?q=by%3Anpub1dejts0qlva8mqzjlrxqkc2tmvs2t7elszky5upxaf3jha9qs9m5q605uc4+++kind%3A10002 - nostr.band is an indexing service, it probably has your relay list. - Replace
npub1xxx
in the URL with your own npub - Click "VIEW JSON" from the menu to see the raw event - Or use thenak
tool if you know the relaysbash nak req -k 10002 -a <your-pubkey> wss://relay1.com wss://relay2.com
Replace `<your-pubkey>` with your public key in hex format (you can get it using `nak decode <your-npub>`)
- Sign and publish the event:
- Use a Nostr client that supports kind 10002 events
- Or use the
nak
command-line tool:bash nak event --sec ncryptsec1... wss://relay1.com wss://relay2.com $(cat event.json)
Important Security Notes: 1. Never share your nsec (private key) with anyone 2. Consider using NIP-49 encrypted keys for better security 3. Never paste your nsec or private key into the terminal. The command will be saved in your shell history, exposing your private key. To clear the command history: - For bash: use
history -c
- For zsh: usefc -W
to write history to file, thenfc -p
to read it back - Or manually edit your shell history file (e.g.,~/.zsh_history
or~/.bash_history
) 4. if you're usingzsh
, usefc -p
to prevent the next command from being saved to history 5. Or temporarily disable history before running sensitive commands:bash unset HISTFILE nak key encrypt ... set HISTFILE
How to securely create a NIP-49 encrypted private key

```bash
# Read your private key (input will be hidden)
read -s SECRET

# Read your password (input will be hidden)
read -s PASSWORD

# encrypt command
echo "$SECRET" | nak key encrypt "$PASSWORD"

# copy and paste the ncryptsec1 text from the output
read -s ENCRYPTED
nak key decrypt "$ENCRYPTED"

# clear variables from memory
unset SECRET PASSWORD ENCRYPTED
```
On a Windows command line, to read from stdin and use the variables in `nak` commands, you can use a combination of `set /p` to read input and then use those variables in your command. Here's an example:

```batch
@echo off
set /p "SECRET=Enter your secret key: "
set /p "PASSWORD=Enter your password: "

echo %SECRET%| nak key encrypt %PASSWORD%

:: Clear the sensitive variables
set "SECRET="
set "PASSWORD="
```
If your key starts with
ncryptsec1
, thenak
tool will securely prompt you for a password when using the--sec
parameter, unless the command is used with a pipe< >
or|
.bash nak event --sec ncryptsec1... wss://relay1.com wss://relay2.com $(cat event.json)
- Verify the event was published:
  - Check if your relay list is visible on other relays
  - Use the `nak` tool to fetch your kind 10002 events:

    ```bash
    nak req -k 10002 -a <your-pubkey> wss://relay1.com wss://relay2.com
    ```
Testing your relay:
- Try connecting to your relay using different Nostr clients
- Verify you can both read from and write to your relay
- Check if events are being properly stored and retrieved
- Tip: Use multiple Nostr clients to test different aspects of your relay
Note: If anyone in the community has a more efficient method of doing things like updating outbox relays, please share your insights in the comments. Your expertise would be greatly appreciated!
-
@ f32184ee:6d1c17bf
2025-04-23 13:21:52Ads Fueling Freedom
Ross Ulbricht’s "Decentralize Social Media" painted a picture of a user-centric, decentralized future that transcended the limitations of platforms like the tech giants of today. Though focused on social media, his concept provided a blueprint for decentralized content systems writ large. The PROMO Protocol, designed by NextBlock while participating in Sovereign Engineering, embodies this blueprint in the realm of advertising, leveraging Nostr and Bitcoin’s Lightning Network to give individuals control, foster a multi-provider ecosystem, and ensure secure value exchange. In this way, Ulbricht’s 2021 vision can be seen as a prescient prediction of the PROMO Protocol’s structure. This is a testament to the enduring power of his ideas, now finding form in NextBlock’s innovative approach.
[Current Platform-Centric Paradigm, source: Ross Ulbricht's Decentralize Social Media]
Ulbricht’s Vision: A Decentralized Social Protocol
In his 2021 Medium article Ulbricht proposed a revolutionary vision for a decentralized social protocol (DSP) to address the inherent flaws of centralized social media platforms, such as privacy violations and inconsistent content moderation. Writing from prison, Ulbricht argued that decentralization could empower users by giving them control over their own content and the value they create, while replacing single, monolithic platforms with a competitive ecosystem of interface providers, content servers, and advertisers. Though his focus was on social media, Ulbricht’s ideas laid a conceptual foundation that strikingly predicts the structure of NextBlock’s PROMO Protocol, a decentralized advertising system built on the Nostr protocol.
[A Decentralized Social Protocol (DSP), source: Ross Ulbricht's Decentralize Social Media]
Ulbricht’s Principles
Ulbricht’s article outlines several key principles for his DSP: * User Control: Users should own their content and dictate how their data and creations generate value, rather than being subject to the whims of centralized corporations. * Decentralized Infrastructure: Instead of a single platform, multiple interface providers, content hosts, and advertisers interoperate, fostering competition and resilience. * Privacy and Autonomy: Decentralized solutions for profile management, hosting, and interactions would protect user privacy and reduce reliance on unaccountable intermediaries. * Value Creation: Users, not platforms, should capture the economic benefits of their contributions, supported by decentralized mechanisms for transactions.
These ideas were forward-thinking in 2021, envisioning a shift away from the centralized giants dominating social media at the time. While Ulbricht didn’t specifically address advertising protocols, his framework for decentralization and user empowerment extends naturally to other domains, like NextBlock’s open-source offering: the PROMO Protocol.
NextBlock’s Implementation of PROMO Protocol
The PROMO Protocol powers NextBlock's Billboard app, a decentralized advertising protocol built on Nostr, a simple, open protocol for decentralized communication. The PROMO Protocol reimagines advertising by: * Empowering People: Individuals set their own ad prices (e.g., 500 sats/minute), giving them direct control over how their attention or space is monetized. * Marketplace Dynamics: Advertisers set budgets and maximum bids, competing within a decentralized system where a 20% service fee ensures operational sustainability. * Open-Source Flexibility: As an open-source protocol, it allows multiple developers to create interfaces or apps on top of it, avoiding the single-platform bottleneck Ulbricht critiqued. * Secure Payments: Using Strike Integration with Bitcoin Lightning Network, NextBlock enables bot-resistant and intermediary-free transactions, aligning value transfer with each person's control.
This structure decentralizes advertising in a way that mirrors Ulbricht’s broader vision for social systems, with aligned principles showing a specific use case: monetizing attention on Nostr.
Aligned Principles
Ulbricht’s 2021 article didn’t explicitly predict the PROMO Protocol, but its foundational concepts align remarkably well with NextBlock's implementation the protocol’s design: * Autonomy Over Value: Ulbricht argued that users should control their content and its economic benefits. In the PROMO Protocol, people dictate ad pricing, directly capturing the value of their participation. Whether it’s their time, influence, or digital space, rather than ceding it to a centralized ad network. * Ecosystem of Providers: Ulbricht envisioned multiple providers replacing a single platform. The PROMO Protocol’s open-source nature invites a similar diversity: anyone can build interfaces or tools on top of it, creating a competitive, decentralized advertising ecosystem rather than a walled garden. * Decentralized Transactions: Ulbricht’s DSP implied decentralized mechanisms for value exchange. NextBlock delivers this through the Bitcoin Lightning Network, ensuring that payments for ads are secure, instantaneous and final, a practical realization of Ulbricht’s call for user-controlled value flows. * Privacy and Control: While Ulbricht emphasized privacy in social interactions, the PROMO Protocol is public by default. Individuals are fully aware of all data that they generate since all Nostr messages are signed. All participants interact directly via Nostr.
[Blueprint Match, source NextBlock]
Who We Are
NextBlock is a US-based new media company reimagining digital ads for a decentralized future. Our founders, software and strategy experts, were hobbyist podcasters struggling to promote their work online without gaming the system. That sparked an idea: using new tech like Nostr and Bitcoin to build a decentralized attention market for people who value control and businesses seeking real connections.
Our first product, Billboard, is launching this June.
Open for All
Our model’s open-source! Check out the PROMO Protocol, built for promotion and attention trading. Anyone can join this decentralized ad network. Run your own billboard or use ours. This is a growing ecosystem for a new ad economy.
Our Vision
NextBlock wants to help build a new decentralized internet. Our revolutionary and transparent business model will bring honest revenue to companies hosting valuable digital spaces. Together, we will discover what our attention is really worth.
Read our Manifesto to learn more.
NextBlock is registered in Texas, USA.
-
@ e7454994:7bb2dac7
2025-04-29 16:28:59Imagine
According to Cazoomi, total revenue for nonprofits in the U.S. reached approximately $3.7 trillion in 2024.
I know in some cases a billion is a hundred million instead of a thousand million (presumably so that some millionaires can call themselves billionaires to distinguish themselves from the riffraff). But that’s not the case here. A trillion is one followed by 12 zeros, so in 2024, US non-profits’ expenses were
3,700,000,000,000 dollars.
How much is a trillion?
And that’s just the USA. We could safely double it for worldwide non-profits and still be well below the actual figure. To be conservative, let’s say 6 trillion of our dollars each year goes on the kinds of projects that non-profits are allowed to do (essentially, making the world a better place).
Think what you could do with just one million dollars. Now think of that times six million! The entire population of Congo, each man, woman, and child, could become a millionaire!. It’s not really imaginable.
That’s how much nonprofits have. What the hell have they done with all that money? In most places where poverty and malnutrition are rife, two thousand dollars a year per family would be more than enough to enable people to sort out whatever problems they have and convert their local community to abundance over three years. Six trillion divided by two thousand is three billion.
The people whom we allow to manage ‘aid’ for us are (to be polite) inept, and we need to bypass them urgently. Directsponsor.org and clickforcharity.net are part of a proof of concept, and our aim is to prove that a better way is possible by doing it.
When a hierarchy exists, it presents a focus of power that power-seeking individuals and cliques can over time turn to their advantage… Even volunteer organizations are subject to intrigues, power grabs, covert arrangements, misallocation of funds, etc. The problem is made worse by the fact that those who most desire power and who are the most ruthless are the very ones who tend to work their way to the top of hierarchies.
More Fun With Figures
Oxfam UK raised £368,000,000 in 2013-14. That’s around 450,000,000 euros. What could we do with that kind of money? A direct sponsorship project is, for a family, 120 per month = 1440 per year. 1440 / 450,000,000 = 312,500 families.
312,500 families, x 4 = 1,250,000 people, would move from poverty into abundance every 3 years with the money that goes through Oxfam. Does Oxfam achieve anything like this with our money?
Oxfam UK is just one of many, many such charities and is small fry when you look at things like USAID, which ran through 27 billion dollars in the year to 2025. What could we do with that?
Its well over 15 million families. Over 60 million people! Or, the entire population of Botswana, Namibia, Mauritania, Liberia, the Republic of Congo, the Central African Republic, Libya, Sierra Leone, Eritrea, Togo, and Guinea combined. This is only to make a point, not to suggest that we would ever achieve such numbers. It shows how wasteful and scandalous our present “aid” efforts really are.
NGOs and governments waste our money.
Solution
Until recently, it was impossible to send money directly to another human without going through the banking system. The big charity organisations were a necessary part of the process, and they made the most of their position. Think about it: you have a family to feed, rent or a mortgage that has to be paid, or you’ll be homeless and destitute. What would your priority be? Apart from the top level, these are generally good people with the best intentions.
But now we have Bitcoin. People can send money all over the world at extremely low cost. The recipients aren’t stupid; they know what they need better than any NGO “expert,” and any expertise or teaching they need, they can get if they have the money to pay for it. This way, the power relationship is reversed in favor of our recipients.
All we need is a system (open and distributed) that ensures sponsors’ funds are not being squandered and the projects being supported are not scams.
A few people decided to start such a project. We have a system almost fully built and currently being tested out. It will enable people to get together into small groups of sponsors to fund small, local projects by directly funding the individuals working on the project. Anything from a regular monthly commitment to a click-for-charity system where you don’t even need any money to occasional one-off purchases of items for a project will be possible.
Here’s our pilot project in Badilisha, on Lake Victoria.
If you like doing stuff on social media, please sign up on our beta site (no money needed) and say hi; we need a few people to get it started. clickforcharity.net.
-
@ 9bde4214:06ca052b
2025-04-22 18:13:37"It's gonna be permissionless or hell."
Gigi and gzuuus are vibing towards dystopia.
Books & articles mentioned:
- AI 2027
- DVMs were a mistake
- Careless People by Sarah Wynn-Williams
- Takedown by Laila michelwait
- The Ultimate Resource by Julian L. Simon
- Harry Potter by J.K. Rowling
- Momo by Michael Ende
In this dialogue:
- Pablo's Roo Setup
- Tech Hype Cycles
- AI 2027
- Prompt injection and other attacks
- Goose and DVMCP
- Cursor vs Roo Code
- Staying in control thanks to Amber and signing delegation
- Is YOLO mode here to stay?
- What agents to trust?
- What MCP tools to trust?
- What code snippets to trust?
- Everyone will run into the issues of trust and micropayments
- Nostr solves Web of Trust & micropayments natively
- Minimalistic & open usually wins
- DVMCP exists thanks to Totem
- Relays as Tamagochis
- Agents aren't nostr experts, at least not right now
- Fix a mistake once & it's fixed forever
- Giving long-term memory to LLMs
- RAG Databases signed by domain experts
- Human-agent hybrids & Chess
- Nostr beating heart
- Pluggable context & experts
- "You never need an API key for anything"
- Sats and social signaling
- Difficulty-adjusted PoW as a rare-limiting mechanism
- Certificate authorities and centralization
- No solutions to policing speech!
- OAuth and how it centralized
- Login with nostr
- Closed vs open-source models
- Tiny models vs large models
- The minions protocol (Stanford paper)
- Generalist models vs specialized models
- Local compute & encrypted queries
- Blinded compute
- "In the eyes of the state, agents aren't people"
- Agents need identity and money; nostr provides both
- "It's gonna be permissionless or hell"
- We already have marketplaces for MCP stuff, code snippets, and other things
- Most great stuff came from marketplaces (browsers, games, etc)
- Zapstore shows that this is already working
- At scale, central control never works. There's plenty scams and viruses in the app stores.
- Using nostr to archive your user-generated content
- HAVEN, blossom, novia
- The switcharoo from advertisements to training data
- What is Truth?
- What is Real?
- "We're vibing into dystopia"
- Who should be the arbiter of Truth?
- First Amendment & why the Logos is sacred
- Silicon Valley AI bros arrogantly dismiss wisdom and philosophy
- Suicide rates & the meaning crisis
- Are LLMs symbiotic or parasitic?
- The Amish got it right
- Are we gonna make it?
- Careless People by Sarah Wynn-Williams
- Takedown by Laila michelwait
- Harry Potter dementors & Momo's time thieves
- Facebook & Google as non-human (superhuman) agents
- Zapping as a conscious action
- Privacy and the internet
- Plausible deniability thanks to generative models
- Google glasses, glassholes, and Meta's Ray Ben's
- People crave realness
- Bitcoin is the realest money we ever had
- Nostr allows for real and honest expression
- How do we find out what's real?
- Constraints, policing, and chilling effects
- Jesus' plans for DVMCP
- Hzrd's article on how DVMs are broken (DVMs were a mistake)
- Don't believe the hype
- DVMs pre-date MCP tools
- Data Vending Machines were supposed to be stupid: put coin in, get stuff out.
- Self-healing vibe-coding
- IP addresses as scarce assets
- Atomic swaps and the ASS protocol
- More marketplaces, less silos
- The intensity of #SovEng and the last 6 weeks
- If you can vibe-code everything, why build anything?
- Time, the ultimate resource
- What are the LLMs allowed to think?
- Natural language interfaces are inherently dialogical
- Sovereign Engineering is dialogical too
-
@ a39d19ec:3d88f61e
2025-04-22 12:44:42The debate about migration, border security, and deportations in Germany is mostly conducted on an emotional level. Anyone who demands that illegal immigrants be deported is frequently confronted with the accusation of racism. Yet this accusation is not only factually unfounded, it turns reality on its head: in fact, it is precisely those who suspect a racist motive behind every demand for legal certainty who themselves judge primarily by skin color, origin, or nationality.
The Law Stands Above Emotion
Germany is a state under the rule of law. That means rules cannot be interpreted according to gut feeling or the political mood of the moment; they must rest on clear legal foundations. One of these principles is anchored in Article 16a of the Basic Law (Grundgesetz). It states:
"Paragraph 1 [the right of asylum] may not be invoked by anyone who enters from a member state of the European Communities or from another third country in which the application of the Convention Relating to the Status of Refugees and the European Convention on Human Rights is assured."
This means that anyone who enters Germany via safe third countries has no claim to asylum. Whoever stays nonetheless is in the country illegally and is subject to the applicable rules on repatriation. The demand for deportations is therefore nothing other than the demand that law and statute be upheld.
The Inversion of the Concept of Racism
Whoever insists that German asylum and residence law be strictly enforced, while making no distinction by origin or skin color, is acting in a value-neutral way. Those, however, who see a racist undertone in such a demand for the rule of law are projecting their own thought patterns onto others: they assume the debate is conducted solely along ethnic, racial, or national lines, and that is precisely a racist way of thinking.
Someone who criticizes illegal immigration does so not because he cares about people's origins, but because he respects the rule of law. By contrast, someone who senses racism behind this criticism evidently perceives first and foremost the "race" or origin of the persons concerned and reduces them to it.
Financial Burden Instead of Ideological Debate
Alongside the legal dimension there is also an economic one. The German welfare state is based on a principle of solidarity: citizens pay into the system in order to support one another in difficult times. This prosperity was built up over generations by those who have long lived here. The priority is therefore to distribute the available means first among those who contribute to sustaining this system through taxes, social contributions, and work, not among those who enter the system through illegal entry and without economic contribution of their own.
This is not an ideological question but a purely economic calculation. A social system can only function sustainably if it is not burdened without limit. If Germany had no clear rules on immigration and deportation, the inevitable result would be an overloaded welfare state, with negative consequences for everyone.
Social Patriotism
Another important aspect is protecting the labor of those generations who painstakingly rebuilt Germany after the Second World War. While it is often stressed that Germans may claim no moral inheritance from the period before 1945, apart from responsibility for the Holocaust, it is all the more significant to respect the new inheritance built after 1945, one founded on diligence, discipline, and hard work. Reconstruction was a collective achievement of German people, and its fruits must not be distributed carelessly; they should benefit first and foremost those who helped create this foundation or carried it through the generations.
The Rule of Law Is Not Negotiable
Anyone who advocates a consistent deportation practice does so not out of racist motives but out of respect for the rule of law and the country's economic foundations. The accusation of racism in this context is therefore not only false; it exposes a selective perception along racial lines on the part of those who raise it.
-
@ 4ba8e86d:89d32de4
2025-04-28 22:39:20Como funciona o PGP.
O texto a seguir foi retirado do capítulo 1 do documento Introdução à criptografia na documentação do PGP 6.5.1. Copyright © 1990-1999 Network Associates, Inc. Todos os direitos reservados.
- O que é criptografia?
- Criptografia forte
- Como funciona a criptografia?
- Criptografia convencional
- Cifra de César
- Gerenciamento de chaves e criptografia convencional
- Criptografia de chave pública
- Como funciona o PGP
- Chaves
- Assinaturas digitais
- Funções hash
- Certificados digitais
- Distribuição de certificados
- Formatos de certificado
- Validade e confiança
- Verificando validade
- Estabelecendo confiança
- Modelos de confiança
- Revogação de certificado
- Comunicar que um certificado foi revogado
- O que é uma senha?
- Divisão de chave
Os princípios básicos da criptografia.
Quando Júlio César enviou mensagens aos seus generais, ele não confiou nos seus mensageiros. Então ele substituiu cada A em suas mensagens por um D, cada B por um E, e assim por diante através do alfabeto. Somente alguém que conhecesse a regra “shift by 3” poderia decifrar suas mensagens. E assim começamos.
Criptografia e descriptografia.
Os dados que podem ser lidos e compreendidos sem quaisquer medidas especiais são chamados de texto simples ou texto não criptografado. O método de disfarçar o texto simples de forma a ocultar sua substância é chamado de criptografia. Criptografar texto simples resulta em um jargão ilegível chamado texto cifrado. Você usa criptografia para garantir que as informações sejam ocultadas de qualquer pessoa a quem não se destinam, mesmo daqueles que podem ver os dados criptografados. O processo de reverter o texto cifrado ao texto simples original é chamado de descriptografia . A Figura 1-1 ilustra esse processo.
https://image.nostr.build/0e2fcb71ed86a6083e083abbb683f8c103f44a6c6db1aeb2df10ae51ec97ebe5.jpg
Figura 1-1. Criptografia e descriptografia
O que é criptografia?
Criptografia é a ciência que usa a matemática para criptografar e descriptografar dados. A criptografia permite armazenar informações confidenciais ou transmiti-las através de redes inseguras (como a Internet) para que não possam ser lidas por ninguém, exceto pelo destinatário pretendido. Embora a criptografia seja a ciência que protege os dados, a criptoanálise é a ciência que analisa e quebra a comunicação segura. A criptoanálise clássica envolve uma combinação interessante de raciocínio analítico, aplicação de ferramentas matemáticas, descoberta de padrões, paciência, determinação e sorte. Os criptoanalistas também são chamados de atacantes. A criptologia abrange tanto a criptografia quanto a criptoanálise.
Criptografia forte.
"Existem dois tipos de criptografia neste mundo: a criptografia que impedirá a sua irmã mais nova de ler os seus arquivos, e a criptografia que impedirá os principais governos de lerem os seus arquivos. Este livro é sobre o último." --Bruce Schneier, Criptografia Aplicada: Protocolos, Algoritmos e Código Fonte em C. PGP também trata deste último tipo de criptografia. A criptografia pode ser forte ou fraca, conforme explicado acima. A força criptográfica é medida no tempo e nos recursos necessários para recuperar o texto simples. O resultado de uma criptografia forte é um texto cifrado que é muito difícil de decifrar sem a posse da ferramenta de decodificação apropriada. Quão díficil? Dado todo o poder computacional e o tempo disponível de hoje – mesmo um bilhão de computadores fazendo um bilhão de verificações por segundo – não é possível decifrar o resultado de uma criptografia forte antes do fim do universo. Alguém poderia pensar, então, que uma criptografia forte resistiria muito bem até mesmo contra um criptoanalista extremamente determinado. Quem pode realmente dizer? Ninguém provou que a criptografia mais forte disponível hoje resistirá ao poder computacional de amanhã. No entanto, a criptografia forte empregada pelo PGP é a melhor disponível atualmente.
Contudo, a vigilância e o conservadorismo irão protegê-lo melhor do que as alegações de impenetrabilidade.
Como funciona a criptografia?
Um algoritmo criptográfico, ou cifra, é uma função matemática usada no processo de criptografia e descriptografia. Um algoritmo criptográfico funciona em combinação com uma chave – uma palavra, número ou frase – para criptografar o texto simples. O mesmo texto simples é criptografado em texto cifrado diferente com chaves diferentes. A segurança dos dados criptografados depende inteiramente de duas coisas: a força do algoritmo criptográfico e o sigilo da chave. Um algoritmo criptográfico, mais todas as chaves possíveis e todos os protocolos que o fazem funcionar constituem um criptossistema. PGP é um criptossistema.
Criptografia convencional.
Na criptografia convencional, também chamada de criptografia de chave secreta ou de chave simétrica, uma chave é usada tanto para criptografia quanto para descriptografia. O Data Encryption Standard (DES) é um exemplo de criptossistema convencional amplamente empregado pelo Governo Federal. A Figura 1-2 é uma ilustração do processo de criptografia convencional.
https://image.nostr.build/328b73ebaff84c949df2560bbbcec4bc3b5e3a5163d5fbb2ec7c7c60488f894c.jpg
Figura 1-2. Criptografia convencional
Cifra de César.
Um exemplo extremamente simples de criptografia convencional é uma cifra de substituição. Uma cifra de substituição substitui uma informação por outra. Isso é feito com mais frequência deslocando as letras do alfabeto. Dois exemplos são o Anel Decodificador Secreto do Capitão Meia-Noite, que você pode ter possuído quando era criança, e a cifra de Júlio César. Em ambos os casos, o algoritmo serve para deslocar o alfabeto e a chave é o número de caracteres do deslocamento. Por exemplo, se codificarmos a palavra "SECRET" usando o valor chave de César de 3, deslocaremos o alfabeto para que a terceira letra abaixo (D) comece o alfabeto. Então começando com A B C D E F G H I J K L M N O P Q R S T U V W X Y Z e deslizando tudo para cima em 3, você obtém DEFGHIJKLMNOPQRSTUVWXYZABC onde D=A, E=B, F=C e assim por diante. Usando este esquema, o texto simples "SECRET" é criptografado como "VHFUHW". Para permitir que outra pessoa leia o texto cifrado, você diz a ela que a chave é 3. Obviamente, esta é uma criptografia extremamente fraca para os padrões atuais, mas, ei, funcionou para César e ilustra como funciona a criptografia convencional.
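A título de ilustração (este trecho não faz parte do documento original do PGP), segue um esboço mínimo em Python de uma cifra de César; os nomes de função são apenas didáticos:

```python
import string

ALFABETO = string.ascii_uppercase  # A-Z

def cifrar_cesar(texto, chave):
    """Desloca cada letra do texto 'chave' posições para a frente no alfabeto."""
    resultado = []
    for letra in texto.upper():
        if letra in ALFABETO:
            resultado.append(ALFABETO[(ALFABETO.index(letra) + chave) % 26])
        else:
            resultado.append(letra)  # espaços e pontuação ficam como estão
    return "".join(resultado)

def decifrar_cesar(texto, chave):
    return cifrar_cesar(texto, -chave)

print(cifrar_cesar("SECRET", 3))    # VHFUHW
print(decifrar_cesar("VHFUHW", 3))  # SECRET
```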
Gerenciamento de chaves e criptografia convencional.
A criptografia convencional tem benefícios. É muito rápido. É especialmente útil para criptografar dados que não vão a lugar nenhum. No entanto, a criptografia convencional por si só como meio de transmissão segura de dados pode ser bastante cara, simplesmente devido à dificuldade de distribuição segura de chaves. Lembre-se de um personagem do seu filme de espionagem favorito: a pessoa com uma pasta trancada e algemada ao pulso. Afinal, o que há na pasta? Provavelmente não é o código de lançamento de mísseis/fórmula de biotoxina/plano de invasão em si. É a chave que irá descriptografar os dados secretos. Para que um remetente e um destinatário se comuniquem com segurança usando criptografia convencional, eles devem chegar a um acordo sobre uma chave e mantê-la secreta entre si. Se estiverem em locais físicos diferentes, devem confiar em um mensageiro, no Bat Phone ou em algum outro meio de comunicação seguro para evitar a divulgação da chave secreta durante a transmissão. Qualquer pessoa que ouvir ou interceptar a chave em trânsito poderá posteriormente ler, modificar e falsificar todas as informações criptografadas ou autenticadas com essa chave. Do DES ao Anel Decodificador Secreto do Capitão Midnight, o problema persistente com a criptografia convencional é a distribuição de chaves: como você leva a chave ao destinatário sem que alguém a intercepte?
Criptografia de chave pública.
Os problemas de distribuição de chaves são resolvidos pela criptografia de chave pública, cujo conceito foi introduzido por Whitfield Diffie e Martin Hellman em 1975. (Há agora evidências de que o Serviço Secreto Britânico a inventou alguns anos antes de Diffie e Hellman, mas a manteve um segredo militar - e não fez nada com isso.
[JH Ellis: The Possibility of Secure Non-Secret Digital Encryption, CESG Report, January 1970]) A criptografia de chave pública é um esquema assimétrico que usa um par de chaves para criptografia: uma chave pública, que criptografa os dados, e uma chave privada ou secreta correspondente para descriptografia. Você publica sua chave pública para o mundo enquanto mantém sua chave privada em segredo. Qualquer pessoa com uma cópia da sua chave pública pode criptografar informações que somente você pode ler. Até mesmo pessoas que você nunca conheceu. É computacionalmente inviável deduzir a chave privada da chave pública. Qualquer pessoa que possua uma chave pública pode criptografar informações, mas não pode descriptografá-las. Somente a pessoa que possui a chave privada correspondente pode descriptografar as informações. https://image.nostr.build/fdb71ae7a4450a523456827bdd509b31f0250f63152cc6f4ba78df290887318b.jpg
Figura 1-3. Criptografia de chave pública
O principal benefício da criptografia de chave pública é que ela permite que pessoas que não possuem nenhum acordo de segurança pré-existente troquem mensagens com segurança. A necessidade de remetente e destinatário compartilharem chaves secretas através de algum canal seguro é eliminada; todas as comunicações envolvem apenas chaves públicas e nenhuma chave privada é transmitida ou compartilhada. Alguns exemplos de criptossistemas de chave pública são Elgamal (nomeado em homenagem a seu inventor, Taher Elgamal), RSA (nomeado em homenagem a seus inventores, Ron Rivest, Adi Shamir e Leonard Adleman), Diffie-Hellman (nomeado, você adivinhou, em homenagem a seus inventores) e DSA, o algoritmo de assinatura digital (inventado por David Kravitz). Como a criptografia convencional já foi o único meio disponível para transmitir informações secretas, o custo dos canais seguros e da distribuição de chaves relegou a sua utilização apenas àqueles que podiam pagar, como governos e grandes bancos (ou crianças pequenas com anéis descodificadores secretos). A criptografia de chave pública é a revolução tecnológica que fornece criptografia forte para as massas adultas. Lembra do mensageiro com a pasta trancada e algemada ao pulso? A criptografia de chave pública o tira do mercado (provavelmente para seu alívio).
Como funciona o PGP.
O PGP combina alguns dos melhores recursos da criptografia convencional e de chave pública. PGP é um criptossistema híbrido. Quando um usuário criptografa texto simples com PGP, o PGP primeiro compacta o texto simples. A compactação de dados economiza tempo de transmissão do modem e espaço em disco e, mais importante ainda, fortalece a segurança criptográfica. A maioria das técnicas de criptoanálise explora padrões encontrados no texto simples para quebrar a cifra. A compressão reduz esses padrões no texto simples, aumentando assim enormemente a resistência à criptoanálise. (Arquivos que são muito curtos para compactar ou que não são compactados bem não são compactados.) O PGP então cria uma chave de sessão, que é uma chave secreta única. Esta chave é um número aleatório gerado a partir dos movimentos aleatórios do mouse e das teclas digitadas. Esta chave de sessão funciona com um algoritmo de criptografia convencional rápido e muito seguro para criptografar o texto simples; o resultado é texto cifrado. Depois que os dados são criptografados, a chave da sessão é criptografada na chave pública do destinatário. Essa chave de sessão criptografada com chave pública é transmitida junto com o texto cifrado ao destinatário.
Figura 1-4. Como funciona a criptografia PGP
A descriptografia funciona ao contrário. A cópia do PGP do destinatário usa sua chave privada para recuperar a chave de sessão temporária, que o PGP usa para descriptografar o texto cifrado criptografado convencionalmente.
Figura 1-5. Como funciona a descriptografia PGP
A combinação dos dois métodos de criptografia reúne a conveniência da criptografia de chave pública com a velocidade da criptografia convencional. A criptografia convencional é cerca de 1.000 vezes mais rápida que a criptografia de chave pública. A criptografia de chave pública, por sua vez, fornece uma solução para os problemas de distribuição de chaves e transmissão de dados. Usados em conjunto, o desempenho e a distribuição de chaves são melhorados sem qualquer sacrifício na segurança.
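Apenas como ilustração do criptossistema híbrido descrito acima (e não do formato OpenPGP real), segue um esboço em Python usando a biblioteca `cryptography`: uma chave de sessão simétrica cifra a mensagem e a chave pública do destinatário cifra a chave de sessão. Os nomes de variáveis são hipotéticos:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Par de chaves do destinatário (a privada fica só com ele)
chave_privada = rsa.generate_private_key(public_exponent=65537, key_size=2048)
chave_publica = chave_privada.public_key()

# 1. Gera uma chave de sessão aleatória e cifra a mensagem (simétrico, rápido)
chave_sessao = Fernet.generate_key()
texto_cifrado = Fernet(chave_sessao).encrypt(b"mensagem confidencial")

# 2. Cifra a chave de sessão com a chave PUBLICA do destinatário
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
chave_sessao_cifrada = chave_publica.encrypt(chave_sessao, oaep)

# 3. O destinatário recupera a chave de sessão com a chave PRIVADA
#    e então decifra o texto cifrado
chave_recuperada = chave_privada.decrypt(chave_sessao_cifrada, oaep)
texto_claro = Fernet(chave_recuperada).decrypt(texto_cifrado)
assert texto_claro == b"mensagem confidencial"
```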
Chaves.
Uma chave é um valor que funciona com um algoritmo criptográfico para produzir um texto cifrado específico. As chaves são basicamente números muito, muito, muito grandes. O tamanho da chave é medido em bits; o número que representa uma chave de 1024 bits é enorme. Na criptografia de chave pública, quanto maior a chave, mais seguro é o texto cifrado. No entanto, o tamanho da chave pública e o tamanho da chave secreta da criptografia convencional não têm nenhuma relação. Uma chave convencional de 80 bits tem a força equivalente a uma chave pública de 1.024 bits. Uma chave convencional de 128 bits é equivalente a uma chave pública de 3.000 bits. Novamente, quanto maior a chave, mais segura, mas os algoritmos usados para cada tipo de criptografia são muito diferentes e, portanto, a comparação é como a de maçãs com laranjas. Embora as chaves pública e privada estejam matematicamente relacionadas, é muito difícil derivar a chave privada dada apenas a chave pública; no entanto, derivar a chave privada é sempre possível, desde que haja tempo e capacidade computacional suficientes. Isto torna muito importante escolher chaves do tamanho certo; grande o suficiente para ser seguro, mas pequeno o suficiente para ser aplicado rapidamente. Além disso, você precisa considerar quem pode estar tentando ler seus arquivos, quão determinados eles estão, quanto tempo têm e quais podem ser seus recursos. Chaves maiores serão criptograficamente seguras por um longo período de tempo. Se o que você deseja criptografar precisar ficar oculto por muitos anos, você pode usar uma chave muito grande. Claro, quem sabe quanto tempo levará para determinar sua chave usando os computadores mais rápidos e eficientes de amanhã? Houve um tempo em que uma chave simétrica de 56 bits era considerada extremamente segura. As chaves são armazenadas de forma criptografada. O PGP armazena as chaves em dois arquivos no seu disco rígido; um para chaves públicas e outro para chaves privadas. Esses arquivos são chamados de chaveiros. Ao usar o PGP, você normalmente adicionará as chaves públicas dos seus destinatários ao seu chaveiro público. Suas chaves privadas são armazenadas em seu chaveiro privado. Se você perder seu chaveiro privado, não será possível descriptografar nenhuma informação criptografada nas chaves desse anel.
Assinaturas digitais.
Um grande benefício da criptografia de chave pública é que ela fornece um método para empregar assinaturas digitais. As assinaturas digitais permitem ao destinatário da informação verificar a autenticidade da origem da informação e também verificar se a informação está intacta. Assim, as assinaturas digitais de chave pública fornecem autenticação e integridade de dados. A assinatura digital também proporciona o não repúdio, o que significa que evita que o remetente alegue que não enviou realmente as informações. Esses recursos são tão fundamentais para a criptografia quanto a privacidade, se não mais. Uma assinatura digital tem a mesma finalidade de uma assinatura manuscrita. No entanto, uma assinatura manuscrita é fácil de falsificar. Uma assinatura digital é superior a uma assinatura manuscrita porque é quase impossível de ser falsificada, além de atestar o conteúdo da informação, bem como a identidade do signatário.
Algumas pessoas tendem a usar mais assinaturas do que criptografia. Por exemplo, você pode não se importar se alguém souber que você acabou de depositar US$ 1.000 em sua conta, mas quer ter certeza de que foi o caixa do banco com quem você estava lidando. A maneira básica pela qual as assinaturas digitais são criadas é ilustrada na Figura 1-6 . Em vez de criptografar informações usando a chave pública de outra pessoa, você as criptografa com sua chave privada. Se as informações puderem ser descriptografadas com sua chave pública, elas deverão ter se originado em você.
Figura 1-6. Assinaturas digitais simples
Funções hash.
O sistema descrito acima apresenta alguns problemas. É lento e produz um enorme volume de dados – pelo menos o dobro do tamanho da informação original. Uma melhoria no esquema acima é a adição de uma função hash unidirecional no processo. Uma função hash unidirecional recebe uma entrada de comprimento variável – neste caso, uma mensagem de qualquer comprimento, até mesmo milhares ou milhões de bits – e produz uma saída de comprimento fixo; digamos, 160 bits. A função hash garante que, se a informação for alterada de alguma forma – mesmo que por apenas um bit – seja produzido um valor de saída totalmente diferente. O PGP usa uma função hash criptograficamente forte no texto simples que o usuário está assinando. Isso gera um item de dados de comprimento fixo conhecido como resumo da mensagem. (Novamente, qualquer alteração nas informações resulta em um resumo totalmente diferente.) Então o PGP usa o resumo e a chave privada para criar a “assinatura”. O PGP transmite a assinatura e o texto simples juntos. Ao receber a mensagem, o destinatário utiliza o PGP para recalcular o resumo, verificando assim a assinatura. O PGP pode criptografar o texto simples ou não; assinar texto simples é útil se alguns dos destinatários não estiverem interessados ou não forem capazes de verificar a assinatura. Desde que uma função hash segura seja usada, não há como retirar a assinatura de alguém de um documento e anexá-la a outro, ou alterar uma mensagem assinada de qualquer forma. A menor alteração em um documento assinado causará falha no processo de verificação da assinatura digital.
Figura 1-7. Assinaturas digitais seguras
As assinaturas digitais desempenham um papel importante na autenticação e validação de chaves de outros usuários PGP.
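Um esboço ilustrativo em Python do processo descrito acima (resumo criptográfico seguido de assinatura com a chave privada e verificação com a chave pública), usando RSA-PSS da biblioteca `cryptography` apenas como exemplo genérico, não o formato de assinatura do PGP:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Qualquer alteração mínima na mensagem muda o resumo por completo
print(hashlib.sha256(b"Depositei US$ 1.000").hexdigest())
print(hashlib.sha256(b"Depositei US$ 1.001").hexdigest())

chave_privada = rsa.generate_private_key(public_exponent=65537, key_size=2048)
chave_publica = chave_privada.public_key()
mensagem = b"Depositei US$ 1.000 na sua conta."

# Assinar: o resumo da mensagem é cifrado com a chave PRIVADA do remetente
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
assinatura = chave_privada.sign(mensagem, pss, hashes.SHA256())

# Verificar: qualquer pessoa com a chave PÚBLICA recalcula o resumo e confere
try:
    chave_publica.verify(assinatura, mensagem, pss, hashes.SHA256())
    print("Assinatura válida: origem autenticada e mensagem íntegra.")
except InvalidSignature:
    print("Assinatura inválida: mensagem alterada ou chave errada.")
```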
Certificados digitais.
Um problema com os criptosistemas de chave pública é que os usuários devem estar constantemente vigilantes para garantir que estão criptografando com a chave da pessoa correta. Num ambiente onde é seguro trocar chaves livremente através de servidores públicos, os ataques man-in-the-middle são uma ameaça potencial. Neste tipo de ataque, alguém publica uma chave falsa com o nome e ID de usuário do destinatário pretendido. Os dados criptografados – e interceptados por – o verdadeiro proprietário desta chave falsa estão agora em mãos erradas. Em um ambiente de chave pública, é vital que você tenha certeza de que a chave pública para a qual você está criptografando os dados é de fato a chave pública do destinatário pretendido e não uma falsificação. Você pode simplesmente criptografar apenas as chaves que foram entregues fisicamente a você. Mas suponha que você precise trocar informações com pessoas que nunca conheceu; como você pode saber se tem a chave correta? Os certificados digitais, ou certs, simplificam a tarefa de estabelecer se uma chave pública realmente pertence ao suposto proprietário. Um certificado é uma forma de credencial. Exemplos podem ser sua carteira de motorista, seu cartão de previdência social ou sua certidão de nascimento. Cada um deles contém algumas informações que identificam você e alguma autorização informando que outra pessoa confirmou sua identidade. Alguns certificados, como o seu passaporte, são uma confirmação importante o suficiente da sua identidade para que você não queira perdê-los, para que ninguém os use para se passar por você.
Um certificado digital são dados que funcionam como um certificado físico. Um certificado digital é uma informação incluída na chave pública de uma pessoa que ajuda outras pessoas a verificar se uma chave é genuína ou válida. Os certificados digitais são usados para impedir tentativas de substituir a chave de uma pessoa por outra.
Um certificado digital consiste em três coisas:
● Uma chave pública.
● Informações do certificado. (Informações de "identidade" sobre o usuário, como nome, ID do usuário e assim por diante.)
● Uma ou mais assinaturas digitais.
O objetivo da assinatura digital em um certificado é afirmar que as informações do certificado foram atestadas por alguma outra pessoa ou entidade. A assinatura digital não atesta a autenticidade do certificado como um todo; ele atesta apenas que as informações de identidade assinadas acompanham ou estão vinculadas à chave pública. Assim, um certificado é basicamente uma chave pública com uma ou duas formas de identificação anexadas, além de um forte selo de aprovação de algum outro indivíduo confiável.
Figura 1-8. Anatomia de um certificado PGP
Distribuição de certificados.
Os certificados são utilizados quando é necessário trocar chaves públicas com outra pessoa. Para pequenos grupos de pessoas que desejam se comunicar com segurança, é fácil trocar manualmente disquetes ou e-mails contendo a chave pública de cada proprietário. Esta é a distribuição manual de chave pública e é prática apenas até certo ponto. Além desse ponto, é necessário implementar sistemas que possam fornecer os mecanismos necessários de segurança, armazenamento e troca para que colegas de trabalho, parceiros de negócios ou estranhos possam se comunicar, se necessário. Eles podem vir na forma de repositórios somente de armazenamento, chamados Servidores de Certificados, ou sistemas mais estruturados que fornecem recursos adicionais de gerenciamento de chaves e são chamados de Infraestruturas de Chave Pública (PKIs).
Servidores de certificados.
Um servidor de certificados, também chamado de servidor certificado ou servidor de chaves, é um banco de dados que permite aos usuários enviar e recuperar certificados digitais. Um servidor certificado geralmente fornece alguns recursos administrativos que permitem que uma empresa mantenha suas políticas de segurança – por exemplo, permitindo que apenas as chaves que atendam a determinados requisitos sejam armazenadas.
Infraestruturas de Chave Pública.
Uma PKI contém os recursos de armazenamento de certificados de um servidor de certificados, mas também fornece recursos de gerenciamento de certificados (a capacidade de emitir, revogar, armazenar, recuperar e confiar em certificados). A principal característica de uma PKI é a introdução do que é conhecido como Autoridade Certificadora,ou CA, que é uma entidade humana — uma pessoa, grupo, departamento, empresa ou outra associação — que uma organização autorizou a emitir certificados para seus usuários de computador. (A função de uma CA é análoga à do Passport Office do governo de um país.) Uma CA cria certificados e os assina digitalmente usando a chave privada da CA. Devido ao seu papel na criação de certificados, a CA é o componente central de uma PKI. Usando a chave pública da CA, qualquer pessoa que queira verificar a autenticidade de um certificado verifica a assinatura digital da CA emissora e, portanto, a integridade do conteúdo do certificado (mais importante ainda, a chave pública e a identidade do titular do certificado).
Formatos de certificado.
Um certificado digital é basicamente uma coleção de informações de identificação vinculadas a uma chave pública e assinadas por um terceiro confiável para provar sua autenticidade. Um certificado digital pode ter vários formatos diferentes.
O PGP reconhece dois formatos de certificado diferentes:
● Certificados PGP
● Certificados X.509
Formato do certificado PGP.
Um certificado PGP inclui (mas não está limitado a) as seguintes informações:
● O número da versão do PGP — identifica qual versão do PGP foi usada para criar a chave associada ao certificado.
● A chave pública do titular do certificado — a parte pública do seu par de chaves, juntamente com o algoritmo da chave: RSA, DH (Diffie-Hellman) ou DSA (Algoritmo de Assinatura Digital).
● As informações do detentor do certificado — consistem em informações de "identidade" sobre o usuário, como seu nome, ID de usuário, fotografia e assim por diante.
● A assinatura digital do proprietário do certificado — também chamada de autoassinatura, é a assinatura que utiliza a chave privada correspondente da chave pública associada ao certificado.
● O período de validade do certificado — a data/hora de início e a data/hora de expiração do certificado; indica quando o certificado irá expirar.
● O algoritmo de criptografia simétrica preferido para a chave — indica o algoritmo de criptografia para o qual o proprietário do certificado prefere que as informações sejam criptografadas. Os algoritmos suportados são CAST, IDEA ou Triple-DES.
Você pode pensar em um certificado PGP como uma chave pública com um ou mais rótulos vinculados a ele (veja a Figura 1-9). Nessas 'etiquetas' você encontrará informações que identificam o proprietário da chave e uma assinatura do proprietário da chave, que afirma que a chave e a identificação andam juntas. (Essa assinatura específica é chamada de autoassinatura; todo certificado PGP contém uma autoassinatura.) Um aspecto único do formato de certificado PGP é que um único certificado pode conter múltiplas assinaturas. Várias ou muitas pessoas podem assinar o par chave/identificação para atestar a sua própria garantia de que a chave pública pertence definitivamente ao proprietário especificado. Se você procurar em um servidor de certificados público, poderá notar que certos certificados, como o do criador do PGP, Phil Zimmermann, contêm muitas assinaturas. Alguns certificados PGP consistem em uma chave pública com vários rótulos, cada um contendo um meio diferente de identificar o proprietário da chave (por exemplo, o nome do proprietário e a conta de e-mail corporativa, o apelido do proprietário e a conta de e-mail residencial, uma fotografia do proprietário — tudo em um certificado). A lista de assinaturas de cada uma dessas identidades pode ser diferente; as assinaturas atestam a autenticidade de que um dos rótulos pertence à chave pública, e não que todos os rótulos da chave sejam autênticos. (Observe que 'autêntico' está nos olhos de quem vê - assinaturas são opiniões, e diferentes pessoas dedicam diferentes níveis de devida diligência na verificação da autenticidade antes de assinar uma chave.)
Figura 1-9. Um certificado PGP
Formato de certificado X.509.
X.509 é outro formato de certificado muito comum. Todos os certificados X.509 estão em conformidade com o padrão internacional ITU-T X.509; assim (teoricamente) os certificados X.509 criados para um aplicativo podem ser usados por qualquer aplicativo compatível com X.509. Na prática, porém, diferentes empresas criaram suas próprias extensões para certificados X.509, e nem todas funcionam juntas. Um certificado exige que alguém valide que uma chave pública e o nome do proprietário da chave andam juntos. Com os certificados PGP, qualquer pessoa pode desempenhar o papel de validador. Com certificados X.509, o validador é sempre uma Autoridade Certificadora ou alguém designado por uma CA. (Tenha em mente que os certificados PGP também suportam totalmente uma estrutura hierárquica usando uma CA para validar certificados.)
Um certificado X.509 é uma coleção de um conjunto padrão de campos contendo informações sobre um usuário ou dispositivo e sua chave pública correspondente. O padrão X.509 define quais informações vão para o certificado e descreve como codificá-lo (o formato dos dados). Todos os certificados X.509 possuem os seguintes dados:
O número da versão X.509
— identifica qual versão do padrão X.509 se aplica a este certificado, o que afeta quais informações podem ser especificadas nele. A mais atual é a versão 3.
A chave pública do titular do certificado
— a chave pública do titular do certificado, juntamente com um identificador de algoritmo que especifica a qual sistema criptográfico a chave pertence e quaisquer parâmetros de chave associados.
O número de série do certificado
— a entidade (aplicação ou pessoa) que criou o certificado é responsável por atribuir-lhe um número de série único para distingui-lo de outros certificados que emite. Esta informação é usada de diversas maneiras; por exemplo, quando um certificado é revogado, seu número de série é colocado em uma Lista de Revogação de Certificados ou CRL.
O identificador exclusivo do detentor do certificado
— (ou DN — nome distinto). Este nome pretende ser exclusivo na Internet. Um DN consiste em múltiplas subseções e pode ser parecido com isto: CN=Bob Allen, OU=Divisão Total de Segurança de Rede, O=Network Associates, Inc., C=EUA (Referem-se ao nome comum, à unidade organizacional, à organização e ao país do sujeito.)
O período de validade do certificado
— a data/hora de início e a data/hora de expiração do certificado; indica quando o certificado irá expirar.
O nome exclusivo do emissor do certificado
— o nome exclusivo da entidade que assinou o certificado. Normalmente é uma CA. A utilização do certificado implica confiar na entidade que assinou este certificado. (Observe que em alguns casos, como certificados de CA raiz ou de nível superior , o emissor assina seu próprio certificado.)
A assinatura digital do emitente
— a assinatura utilizando a chave privada da entidade que emitiu o certificado.
O identificador do algoritmo de assinatura
— identifica o algoritmo usado pela CA para assinar o certificado.
Existem muitas diferenças entre um certificado X.509 e um certificado PGP, mas as mais importantes são as seguintes:
● você pode criar seu próprio certificado PGP; um certificado X.509 deve ser solicitado e emitido por uma autoridade de certificação
● os certificados X.509 suportam nativamente apenas um único nome para o proprietário da chave
● os certificados X.509 suportam apenas uma única assinatura digital para atestar a validade da chave
Para obter um certificado X.509, você deve solicitar a uma CA a emissão de um certificado. Você fornece sua chave pública, prova de que possui a chave privada correspondente e algumas informações específicas sobre você. Em seguida, você assina digitalmente as informações e envia o pacote completo – a solicitação de certificado – para a CA. A CA então realiza algumas diligências para verificar se as informações fornecidas estão corretas e, em caso afirmativo, gera o certificado e o devolve.
Você pode pensar em um certificado X.509 como um certificado de papel padrão (semelhante ao que você recebeu ao concluir uma aula de primeiros socorros básicos) com uma chave pública colada nele. Ele contém seu nome e algumas informações sobre você, além da assinatura da pessoa que o emitiu para você.
Figura 1-10. Um certificado X.509
Provavelmente, o uso mais visível dos certificados X.509 atualmente é em navegadores da web.
Validade e confiança.
Cada usuário em um sistema de chave pública está vulnerável a confundir uma chave falsa (certificado) com uma chave real. Validade é a confiança de que um certificado de chave pública pertence ao seu suposto proprietário. A validade é essencial em um ambiente de chave pública onde você deve estabelecer constantemente se um determinado certificado é autêntico ou não. Depois de ter certeza de que um certificado pertencente a outra pessoa é válido, você pode assinar a cópia em seu chaveiro para atestar que verificou o certificado e que ele é autêntico. Se quiser que outras pessoas saibam que você deu ao certificado seu selo de aprovação, você pode exportar a assinatura para um servidor de certificados para que outras pessoas possam vê-la.
Conforme descrito na seção Infraestruturas de Chave Pública , algumas empresas designam uma ou mais Autoridades de Certificação (CAs) para indicar a validade do certificado. Em uma organização que usa uma PKI com certificados X.509, é função da CA emitir certificados aos usuários — um processo que geralmente envolve responder à solicitação de certificado do usuário. Em uma organização que usa certificados PGP sem PKI, é função da CA verificar a autenticidade de todos os certificados PGP e depois assinar os bons. Basicamente, o objetivo principal de uma CA é vincular uma chave pública às informações de identificação contidas no certificado e, assim, garantir a terceiros que algum cuidado foi tomado para garantir que esta ligação das informações de identificação e da chave seja válida. O CA é o Grand Pooh-bah da validação em uma organização; alguém em quem todos confiam e, em algumas organizações, como aquelas que utilizam uma PKI, nenhum certificado é considerado válido, a menos que tenha sido assinado por uma CA confiável.
Verificando validade.
Uma maneira de estabelecer a validade é passar por algum processo manual. Existem várias maneiras de fazer isso. Você pode exigir que o destinatário pretendido lhe entregue fisicamente uma cópia de sua chave pública. Mas isto é muitas vezes inconveniente e ineficiente. Outra forma é verificar manualmente a impressão digital do certificado. Assim como as impressões digitais de cada ser humano são únicas, a impressão digital de cada certificado PGP é única. A impressão digital é um hash do certificado do usuário e aparece como uma das propriedades do certificado. No PGP, a impressão digital pode aparecer como um número hexadecimal ou uma série das chamadas palavras biométricas, que são foneticamente distintas e são usadas para facilitar um pouco o processo de identificação da impressão digital. Você pode verificar se um certificado é válido ligando para o proprietário da chave (para que você origine a transação) e pedindo ao proprietário que leia a impressão digital de sua chave para você e compare essa impressão digital com aquela que você acredita ser a verdadeira. Isso funciona se você conhece a voz do proprietário, mas como verificar manualmente a identidade de alguém que você não conhece? Algumas pessoas colocam a impressão digital de sua chave em seus cartões de visita exatamente por esse motivo. Outra forma de estabelecer a validade do certificado de alguém é confiar que um terceiro indivíduo passou pelo processo de validação do mesmo. Uma CA, por exemplo, é responsável por garantir que, antes de emitir um certificado, ele ou ela o verifique cuidadosamente para ter certeza de que a parte da chave pública realmente pertence ao suposto proprietário. Qualquer pessoa que confie na CA considerará automaticamente quaisquer certificados assinados pela CA como válidos. Outro aspecto da verificação da validade é garantir que o certificado não foi revogado. Para obter mais informações, consulte a seção Revogação de certificado .
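Só para ilustrar a ideia de impressão digital como um resumo curto da chave pública (o OpenPGP define seu próprio algoritmo de impressão digital; o esboço abaixo, em Python, é apenas conceitual):

```python
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

chave_publica = rsa.generate_private_key(
    public_exponent=65537, key_size=2048
).public_key()

# Serializa a chave pública e calcula um resumo curto, fácil de conferir
# por telefone ou de imprimir em um cartão de visita
chave_bytes = chave_publica.public_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
impressao = hashlib.sha1(chave_bytes).hexdigest().upper()
print(" ".join(impressao[i:i + 4] for i in range(0, len(impressao), 4)))
```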
Estabelecendo confiança.
Você valida certificados. Você confia nas pessoas. Mais especificamente, você confia nas pessoas para validar os certificados de outras pessoas. Normalmente, a menos que o proprietário lhe entregue o certificado, você terá que confiar na palavra de outra pessoa de que ele é válido.
Introdutores meta e confiáveis.
Na maioria das situações, as pessoas confiam completamente na CA para estabelecer a validade dos certificados. Isso significa que todos os demais dependem da CA para passar por todo o processo de validação manual. Isso é aceitável até um certo número de usuários ou locais de trabalho e, então, não é possível para a AC manter o mesmo nível de validação de qualidade. Nesse caso, é necessário adicionar outros validadores ao sistema.
Um CA também pode ser um meta- introdutor. Um meta-introdutor confere não apenas validade às chaves, mas também confere a capacidade de confiar nas chaves a outros. Semelhante ao rei que entrega seu selo a seus conselheiros de confiança para que eles possam agir de acordo com sua autoridade, o meta-introdutor permite que outros atuem como introdutores de confiança. Esses introdutores confiáveis podem validar chaves com o mesmo efeito do meta-introdutor. Eles não podem, entretanto, criar novos introdutores confiáveis.
Meta-introdutor e introdutor confiável são termos PGP. Em um ambiente X.509, o meta-introdutor é chamado de Autoridade de Certificação raiz ( CA raiz) e os introdutores confiáveis são Autoridades de Certificação subordinadas . A CA raiz usa a chave privada associada a um tipo de certificado especial denominado certificado CA raiz para assinar certificados. Qualquer certificado assinado pelo certificado CA raiz é visto como válido por qualquer outro certificado assinado pela raiz. Este processo de validação funciona mesmo para certificados assinados por outras CAs no sistema — desde que o certificado da CA raiz tenha assinado o certificado da CA subordinada, qualquer certificado assinado pela CA será considerado válido para outras pessoas dentro da hierarquia. Este processo de verificação de backup por meio do sistema para ver quem assinou cujo certificado é chamado de rastreamento de um caminho de certificação ou cadeia de certificação.
Modelos de confiança.
Em sistemas relativamente fechados, como em uma pequena empresa, é fácil rastrear um caminho de certificação até a CA raiz. No entanto, os usuários muitas vezes precisam se comunicar com pessoas fora do seu ambiente corporativo, incluindo algumas que nunca conheceram, como fornecedores, consumidores, clientes, associados e assim por diante. É difícil estabelecer uma linha de confiança com aqueles em quem sua CA não confia explicitamente. As empresas seguem um ou outro modelo de confiança, que determina como os usuários irão estabelecer a validade do certificado. Existem três modelos diferentes:
● Confiança direta
● Confiança hierárquica
● Uma teia de confiança
Confiança direta.
A confiança direta é o modelo de confiança mais simples. Neste modelo, um usuário confia que uma chave é válida porque sabe de onde ela veio. Todos os criptosistemas usam essa forma de confiança de alguma forma. Por exemplo, em navegadores da Web, as chaves raiz da Autoridade de Certificação são diretamente confiáveis porque foram enviadas pelo fabricante. Se houver alguma forma de hierarquia, ela se estenderá a partir desses certificados diretamente confiáveis. No PGP, um usuário que valida as chaves e nunca define outro certificado para ser um introdutor confiável está usando confiança direta.
Figura 1-11. Confiança direta
Confiança Hierárquica.
Em um sistema hierárquico, há vários certificados "raiz" a partir dos quais a confiança se estende. Esses certificados podem certificar eles próprios certificados ou podem certificar certificados que certificam ainda outros certificados em alguma cadeia. Considere isso como uma grande “árvore” de confiança. A validade do certificado "folha" é verificada rastreando desde seu certificador até outros certificadores, até que um certificado raiz diretamente confiável seja encontrado.
Figura 1-12. Confiança hierárquica
Teia de Confiança.
Uma teia de confiança abrange ambos os outros modelos, mas também acrescenta a noção de que a confiança está nos olhos de quem vê (que é a visão do mundo real) e a ideia de que mais informação é melhor. É, portanto, um modelo de confiança cumulativa. Um certificado pode ser confiável diretamente ou confiável em alguma cadeia que remonta a um certificado raiz diretamente confiável (o meta-introdutor) ou por algum grupo de introdutores.
Talvez você já tenha ouvido falar do termo seis graus de separação, que sugere que qualquer pessoa no mundo pode determinar algum vínculo com qualquer outra pessoa no mundo usando seis ou menos outras pessoas como intermediários. Esta é uma teia de introdutores. É também a visão de confiança do PGP. PGP usa assinaturas digitais como forma de introdução. Quando qualquer usuário assina a chave de outro, ele ou ela se torna o introdutor dessa chave. À medida que esse processo avança, ele estabelece uma rede de confiança.
Em um ambiente PGP, qualquer usuário pode atuar como autoridade certificadora. Qualquer usuário PGP pode validar o certificado de chave pública de outro usuário PGP. No entanto, tal certificado só é válido para outro usuário se a parte confiável reconhecer o validador como um introdutor confiável. (Ou seja, você confia na minha opinião de que as chaves dos outros são válidas apenas se você me considerar um apresentador confiável. Caso contrário, minha opinião sobre a validade das outras chaves é discutível.) Armazenados no chaveiro público de cada usuário estão indicadores de
● se o usuário considera ou não uma chave específica válida
● o nível de confiança que o usuário deposita na chave que o proprietário da chave pode servir como certificador das chaves de terceiros
Você indica, na sua cópia da minha chave, se acha que meu julgamento conta. Na verdade, é um sistema de reputação: certas pessoas têm a reputação de fornecer boas assinaturas e as pessoas confiam nelas para atestar a validade de outras chaves.
Níveis de confiança no PGP.
O nível mais alto de confiança em uma chave, a confiança implícita , é a confiança em seu próprio par de chaves. O PGP assume que se você possui a chave privada, você deve confiar nas ações da sua chave pública relacionada. Quaisquer chaves assinadas pela sua chave implicitamente confiável são válidas.
Existem três níveis de confiança que você pode atribuir à chave pública de outra pessoa:
● Confiança total
● Confiança marginal
● Nenhuma confiança (não confiável)
Para tornar as coisas confusas, também existem três níveis de validade:
● Válido
● Marginalmente válido
● Inválido
Para definir a chave de outra pessoa como um introdutor confiável:
1. Comece com uma chave válida, que seja
- assinada por você, ou
- assinada por outro introdutor confiável; e então
2. Defina o nível de confiança que você acha que o proprietário da chave tem direito.
Por exemplo, suponha que seu chaveiro contenha a chave de Alice. Você validou a chave de Alice e indica isso assinando-a. Você sabe que Alice é uma verdadeira defensora da validação de chaves de outras pessoas. Portanto, você atribui a chave dela com confiança total. Isso faz de Alice uma Autoridade Certificadora. Se Alice assinar a chave de outra pessoa, ela aparecerá como Válida em seu chaveiro. O PGP requer uma assinatura Totalmente confiável ou duas assinaturas Marginalmente confiáveis para estabelecer uma chave como válida. O método do PGP de considerar dois Marginais iguais a um Completo é semelhante a um comerciante que solicita duas formas de identificação. Você pode considerar Alice bastante confiável e também considerar Bob bastante confiável. Qualquer um deles sozinho corre o risco de assinar acidentalmente uma chave falsificada, portanto, você pode não depositar total confiança em nenhum deles. No entanto, as probabilidades de ambos os indivíduos terem assinado a mesma chave falsa são provavelmente pequenas.
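A regra descrita acima (uma assinatura totalmente confiável ou duas marginalmente confiáveis) pode ser resumida em uma função simples; o esboço em Python abaixo é apenas uma simplificação didática, não o cálculo real do PGP:

```python
def chave_valida(confianca_dos_signatarios):
    """Recebe a lista de níveis de confiança atribuídos aos signatários
    de uma chave ('total', 'marginal' ou 'nenhuma') e aplica a regra:
    1 assinatura totalmente confiável OU 2 marginalmente confiáveis."""
    totais = confianca_dos_signatarios.count("total")
    marginais = confianca_dos_signatarios.count("marginal")
    return totais >= 1 or marginais >= 2

print(chave_valida(["total"]))                 # True
print(chave_valida(["marginal", "marginal"]))  # True
print(chave_valida(["marginal", "nenhuma"]))   # False
```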
Revogação de certificado.
Os certificados só são úteis enquanto são válidos. Não é seguro simplesmente presumir que um certificado é válido para sempre. Na maioria das organizações e em todas as PKIs, os certificados têm uma vida útil restrita. Isso restringe o período em que um sistema fica vulnerável caso ocorra um comprometimento do certificado.
Os certificados são assim criados com um período de validade programado: uma data/hora de início e uma data/hora de expiração. Espera-se que o certificado seja utilizável durante todo o seu período de validade (seu tempo de vida ). Quando o certificado expirar, ele não será mais válido, pois a autenticidade do seu par chave/identificação não estará mais garantida. (O certificado ainda pode ser usado com segurança para reconfirmar informações que foram criptografadas ou assinadas dentro do período de validade – no entanto, ele não deve ser confiável para tarefas criptográficas futuras.)
Existem também situações em que é necessário invalidar um certificado antes da sua data de expiração, como quando o titular do certificado termina o contrato de trabalho com a empresa ou suspeita que a chave privada correspondente do certificado foi comprometida. Isso é chamado de revogação. Um certificado revogado é muito mais suspeito do que um certificado expirado. Os certificados expirados são inutilizáveis, mas não apresentam a mesma ameaça de comprometimento que um certificado revogado. Qualquer pessoa que tenha assinado um certificado pode revogar a sua assinatura no certificado (desde que utilize a mesma chave privada que criou a assinatura). Uma assinatura revogada indica que o signatário não acredita mais que a chave pública e as informações de identificação pertencem uma à outra, ou que a chave pública do certificado (ou a chave privada correspondente) foi comprometida. Uma assinatura revogada deve ter quase tanto peso quanto um certificado revogado. Com certificados X.509, uma assinatura revogada é praticamente igual a um certificado revogado, visto que a única assinatura no certificado é aquela que o tornou válido em primeiro lugar – a assinatura da CA. Os certificados PGP fornecem o recurso adicional de que você pode revogar todo o seu certificado (não apenas as assinaturas nele) se você achar que o certificado foi comprometido. Somente o proprietário do certificado (o detentor da chave privada correspondente) ou alguém que o proprietário do certificado tenha designado como revogador pode revogar um certificado PGP. (Designar um revogador é uma prática útil, pois muitas vezes é a perda da senha da chave privada correspondente do certificado que leva um usuário PGP a revogar seu certificado - uma tarefa que só é possível se alguém tiver acesso à chave privada. ) Somente o emissor do certificado pode revogar um certificado X.509.
Comunicar que um certificado foi revogado.
Quando um certificado é revogado, é importante conscientizar os usuários potenciais do certificado de que ele não é mais válido. Com certificados PGP, a maneira mais comum de comunicar que um certificado foi revogado é publicá-lo em um servidor de certificados para que outras pessoas que desejem se comunicar com você sejam avisadas para não usar essa chave pública. Em um ambiente PKI, a comunicação de certificados revogados é mais comumente obtida por meio de uma estrutura de dados chamada Lista de Revogação de Certificados, ou CRL, que é publicada pela CA. A CRL contém uma lista validada com carimbo de data e hora de todos os certificados revogados e não expirados no sistema. Os certificados revogados permanecem na lista apenas até expirarem e, em seguida, são removidos da lista — isso evita que a lista fique muito longa. A CA distribui a CRL aos usuários em algum intervalo programado regularmente (e potencialmente fora do ciclo, sempre que um certificado é revogado). Teoricamente, isso impedirá que os usuários usem involuntariamente um certificado comprometido. É possível, no entanto, que haja um período de tempo entre as CRLs em que um certificado recentemente comprometido seja usado.
O que é uma senha?
A maioria das pessoas está familiarizada com a restrição de acesso a sistemas de computador por meio de uma senha, que é uma sequência única de caracteres que um usuário digita como código de identificação.
Uma senha longa é uma versão mais longa de uma senha e, em teoria, mais segura. Normalmente composta por várias palavras, uma frase secreta é mais segura contra ataques de dicionário padrão, em que o invasor tenta todas as palavras do dicionário na tentativa de determinar sua senha. As melhores senhas são relativamente longas e complexas e contêm uma combinação de letras maiúsculas e minúsculas, caracteres numéricos e de pontuação. O PGP usa uma senha para criptografar sua chave privada em sua máquina. Sua chave privada é criptografada em seu disco usando um hash de sua senha como chave secreta. Você usa a senha para descriptografar e usar sua chave privada. Uma senha deve ser difícil de esquecer e difícil de ser adivinhada por outras pessoas. Deve ser algo já firmemente enraizado na sua memória de longo prazo, em vez de algo que você invente do zero. Por que? Porque se você esquecer sua senha, você estará sem sorte. Sua chave privada é total e absolutamente inútil sem sua senha e nada pode ser feito a respeito. Lembra-se da citação anterior neste capítulo?
PGP é a criptografia que manterá os principais governos fora dos seus arquivos. Certamente também o manterá fora de seus arquivos. Tenha isso em mente quando decidir alterar sua senha para a piada daquela piada que você nunca consegue lembrar.
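O PGP deriva a chave que protege a chave privada a partir da senha por meio de um mecanismo próprio de string-to-key; o esboço abaixo ilustra a mesma ideia com PBKDF2, da biblioteca padrão do Python (parâmetros meramente ilustrativos):

```python
import hashlib
import os

senha = "uma frase secreta longa, mas facil de lembrar"

# Um "sal" aleatório impede que a mesma senha gere sempre a mesma chave
sal = os.urandom(16)

# Muitas iterações tornam ataques de dicionário muito mais caros
chave = hashlib.pbkdf2_hmac("sha256", senha.encode(), sal, 600_000, dklen=32)
print(chave.hex())  # seria esta chave simétrica que cifraria a chave privada no disco
```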
Divisão de chave.
Dizem que um segredo não é segredo se for conhecido por mais de uma pessoa. Compartilhar um par de chaves privadas representa um grande problema. Embora não seja uma prática recomendada, às vezes é necessário compartilhar um par de chaves privadas. Chaves de assinatura corporativa, por exemplo, são chaves privadas usadas por uma empresa para assinar – por exemplo – documentos legais, informações pessoais confidenciais ou comunicados de imprensa para autenticar sua origem. Nesse caso, vale a pena que vários membros da empresa tenham acesso à chave privada. No entanto, isto significa que qualquer indivíduo pode agir plenamente em nome da empresa. Nesse caso, é aconselhável dividir a chave entre várias pessoas, de modo que mais de uma ou duas pessoas apresentem um pedaço da chave para reconstituí-la em condições utilizáveis. Se poucas peças da chave estiverem disponíveis, a chave ficará inutilizável. Alguns exemplos são dividir uma chave em três partes e exigir duas delas para reconstituir a chave, ou dividi-la em duas partes e exigir ambas as peças. Se uma conexão de rede segura for usada durante o processo de reconstituição, os acionistas da chave não precisam estar fisicamente presentes para aderirem novamente à chave.
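A divisão de chave do PGP usa um esquema de limiar (por exemplo, 2 de 3 partes). O esboço em Python abaixo mostra apenas o caso mais simples, em que todas as partes são necessárias para reconstruir o segredo (divisão por XOR), só para ilustrar o conceito:

```python
import os

def dividir(segredo: bytes, n: int) -> list[bytes]:
    """Divide 'segredo' em n partes; TODAS são necessárias para reconstruí-lo."""
    partes = [os.urandom(len(segredo)) for _ in range(n - 1)]
    ultima = segredo
    for parte in partes:
        ultima = bytes(a ^ b for a, b in zip(ultima, parte))
    return partes + [ultima]

def reconstruir(partes: list[bytes]) -> bytes:
    segredo = partes[0]
    for parte in partes[1:]:
        segredo = bytes(a ^ b for a, b in zip(segredo, parte))
    return segredo

chave = os.urandom(32)          # chave privada fictícia
partes = dividir(chave, 3)
assert reconstruir(partes) == chave
```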
-
@ 4ba8e86d:89d32de4
Tutorial feito por nostr:npub1rc56x0ek0dd303eph523g3chm0wmrs5wdk6vs0ehd0m5fn8t7y4sqra3tk, post original abaixo:
Parte 1 : http://xh6liiypqffzwnu5734ucwps37tn2g6npthvugz3gdoqpikujju525yd.onion/263585/tutorial-debloat-de-celulares-android-via-adb-parte-1
Parte 2 : http://xh6liiypqffzwnu5734ucwps37tn2g6npthvugz3gdoqpikujju525yd.onion/index.php/263586/tutorial-debloat-de-celulares-android-via-adb-parte-2
Quando o assunto é privacidade em celulares, uma das medidas comumente mencionadas é a remoção de bloatwares do dispositivo, também chamado de debloat. O meio mais eficiente para isso sem dúvidas é a troca de sistema operacional. Custom Rom’s como LineageOS, GrapheneOS, Iodé, CalyxOS, etc, já são bastante enxutos nesse quesito, principalmente quanto não é instalado os G-Apps com o sistema. No entanto, essa prática pode acabar resultando em problemas indesejados como a perca de funções do dispositivo, e até mesmo incompatibilidade com apps bancários, tornando este método mais atrativo para quem possui mais de um dispositivo e separando um apenas para privacidade. Pensando nisso, pessoas que possuem apenas um único dispositivo móvel, que são necessitadas desses apps ou funções, mas, ao mesmo tempo, tem essa visão em prol da privacidade, buscam por um meio-termo entre manter a Stock rom, e não ter seus dados coletados por esses bloatwares. Felizmente, a remoção de bloatwares é possível e pode ser realizada via root, ou mais da maneira que este artigo irá tratar, via adb.
O que são bloatwares?
Bloatware é a junção das palavras bloat (inchar) + software (programa), ou seja, um bloatware é basicamente um programa inútil ou facilmente substituível — colocado em seu dispositivo previamente pela fabricante e operadora — que está no seu dispositivo apenas ocupando espaço de armazenamento, consumindo memória RAM e pior, coletando seus dados e enviando para servidores externos, além de serem mais pontos de vulnerabilidades.
O que é o adb?
O Android Debug Brigde, ou apenas adb, é uma ferramenta que se utiliza das permissões de usuário shell e permite o envio de comandos vindo de um computador para um dispositivo Android exigindo apenas que a depuração USB esteja ativa, mas também pode ser usada diretamente no celular a partir do Android 11, com o uso do Termux e a depuração sem fio (ou depuração wifi). A ferramenta funciona normalmente em dispositivos sem root, e também funciona caso o celular esteja em Recovery Mode.
Requisitos:
Para computadores:
• Depuração USB ativa no celular;
• Computador com adb;
• Cabo USB;
Para celulares:
• Depuração sem fio (ou depuração wifi) ativa no celular;
• Termux;
• Android 11 ou superior;
Para ambos:
• Firewall NetGuard instalado e configurado no celular;
• Lista de bloatwares para seu dispositivo;
Ativação de depuração:
Para ativar a Depuração USB em seu dispositivo, pesquise como ativar as opções de desenvolvedor de seu dispositivo, e lá ative a depuração. No caso da depuração sem fio, sua ativação irá ser necessária apenas no momento que for conectar o dispositivo ao Termux.
Instalação e configuração do NetGuard
O NetGuard pode ser instalado através da própria Google Play Store, mas de preferência instale pela F-Droid ou Github para evitar telemetria.
F-Droid: https://f-droid.org/packages/eu.faircode.netguard/
Github: https://github.com/M66B/NetGuard/releases
Após instalado, configure da seguinte maneira:
Configurações → padrões (lista branca/negra) → ative as 3 primeiras opções (bloquear wifi, bloquear dados móveis e aplicar regras ‘quando tela estiver ligada’);
Configurações → opções avançadas → ative as duas primeiras (administrar aplicativos do sistema e registrar acesso a internet);
Com isso, todos os apps estarão sendo bloqueados de acessar a internet, seja por wifi ou dados móveis, e na página principal do app basta permitir o acesso a rede para os apps que você vai usar (se necessário). Permita que o app rode em segundo plano sem restrição da otimização de bateria, assim quando o celular ligar, ele já estará ativo.
Bloatware list
Not all bloatware is generic; it will differ by brand, model, Android version and even region.
To get a bloatware list for your device, if your model has been around for a while, you will easily find ready-made lists just by searching for them. Suppose we have a Samsung Galaxy Note 10 Plus in hand; just type into your search engine:
Samsung Galaxy Note 10 Plus bloatware list
These lists will most likely already include the bloatware from all the various regions, saving you the trouble of hunting for a more specific list.
If your device is very recent and/or you cannot find a ready-made list, I have to say you have just stepped in it, because it is a huge pain to research every single app to figure out what it does, whether it is essential to the system, or whether it can easily be replaced.
Fair warning: further on, if you unknowingly remove one of those apps that was essential to the system, you will end up losing some important feature or, worse, the system may be broken after a reboot, forcing you to do a factory reset and repeat the whole process all over again.
Downloading adb on computers
To use the adb tool on a computer, just download the package called SDK platform-tools, available from this link: https://developer.android.com/tools/releases/platform-tools. There you can get the download for Windows, Mac and Linux.
Once downloaded, just extract the zip file; inside it there is a folder called platform-tools, which you simply open in a terminal to use adb.
Downloading adb on phones with Termux
To use the adb tool directly on the phone, we first have to install the Termux app, a Linux terminal emulator that already has adb in its repository. You can find the app on the Google Play Store, but again I recommend downloading it from F-Droid or directly from the project's GitHub.
F-Droid: https://f-droid.org/en/packages/com.termux/
Github: https://github.com/termux/termux-app/releases
Debloating process
Before we start, it is important to make clear that you should not just go around removing every piece of bloatware right away without a second thought; some of it needs to be replaced first, some may be essential to you for some activity or feature, and some simply cannot be replaced at all.
Some examples of bloatware that must be replaced before removal are the launcher (after all, it is the system's graphical interface) and the keyboard, without which you can only type with an external keyboard. The launcher and keyboard can be replaced with any others; my personal recommendation is for ones that respect your privacy, such as Pie Launcher and Simple Launcher for the former, and OpenBoard and FlorisBoard for the latter, all open source and available on F-Droid.
Go through the bloatware list and identify which apps you like, need, or prefer not to replace; you are in no way obligated to remove every possible piece of bloatware, so tweak your system however you please. NetGuard lists every app on the phone along with its package name, which makes it easy to filter out the ones you do not want to remove.
A clear example of bloatware that cannot be replaced, and therefore cannot be removed, is com.android.mtp, a protocol whose job is to handle communication between the device and a computer over USB, yet for some reason it has network access and talks to external servers frequently. For cases like this, the best solution really is to block those apps' network access with NetGuard.
MTP trying to communicate with external servers:
Running the adb shell
On the computer
Back up all your important files to external storage, and factory-reset your phone with a hard reset. After the reset, and after enabling USB debugging, connect your phone to the PC with a USB cable. Most likely the device will just start charging, so allow data transfer so the computer can communicate with the phone normally.
On the PC, open the platform-tools folder in a terminal and run the following command:
./adb start-server
The result should be:
daemon not running; starting now at tcp:5037 daemon started successfully
If nothing shows up, run:
./adb kill-server
And start it again.
With adb connected to the phone, run:
./adb shell
This lets you run commands directly on the device. In my case, my phone is a Redmi Note 8 Pro, codename Begonia.
So the result should be:
begonia:/ $
If an error like the following occurs:
adb: device unauthorized. This adb server’s $ADB_VENDOR_KEYS is not set Try ‘adb kill-server’ if that seems wrong. Otherwise check for a confirmation dialog on your device.
Check the phone to see whether a prompt appeared asking you to authorize USB debugging; if so, authorize it and try again. If nothing shows up, run the kill-server and repeat the process.
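If you want to confirm that the computer is actually seeing the phone, listing the connected devices is a quick sanity check:
./adb devices
The phone should show up in the list with the state device (and not unauthorized).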
On the phone
After doing the same backup and hard-reset process described earlier, install Termux and, once it is running, execute the command:
pkg install android-tools
When the message “Do you want to continue? [Y/n]” appears, just hit enter again to accept and finish the installation.
Now go to the developer options and enable wireless debugging. Inside the wireless debugging options there is a "pair device with pairing code" option, which will show you a pairing code along with an IP address and port; these will be used for the connection with Termux.
To make the process easier, I recommend opening both the settings and Termux at the same time and splitting the screen between the two apps, as shown below:
To pair Termux with the device, you do not need to type the IP that was shown; just replace it with “localhost”. The port and the pairing code, however, must be typed exactly as shown. Run:
adb pair localhost:porta CódigoDeEmparelhamento
Based on the image shown earlier, the command would be “adb pair localhost:41255 757495”.
With the device paired to Termux, you now just need to connect in order to run commands. To do that, execute:
adb connect localhost:porta
Note: the port you must use in this command is not the one shown with the pairing code, but the one shown on the main wireless debugging screen.
Done! Termux and adb are successfully connected to the device; now just run adb shell as usual:
adb shell
Removal in practice
With the adb shell running, you are ready to remove the bloatware. In my case, I will only show the removal of a single app (Google Maps), since the command is the same for any other, changing only the package name.
Inside NetGuard, checking the Google Maps details:
We can see that even while not in use, and with the device's location turned off, the app keeps frantically trying to communicate with external servers and report who knows what. No surprises so far; the important part is that the Google Maps package name is com.google.android.apps.maps, and to remove it from the phone you just run:
pm uninstall --user 0 com.google.android.apps.maps
And that's it, bloatware removed! Now just repeat the process for the rest of the bloatware, changing only the package name.
To speed things up, you can prepare a list of the commands in a text editor beforehand; when you paste it into the terminal, the commands run one after the other.
Example list:
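A hypothetical list would look something like this (the package names below are only illustrative examples; build yours from your own bloatware list):
pm uninstall --user 0 com.google.android.apps.maps
pm uninstall --user 0 com.google.android.videos
pm uninstall --user 0 com.google.android.apps.youtube.music
pm uninstall --user 0 com.miui.analytics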
If you end up removing something by accident, the package can also be restored with the command:
cmd package install-existing nome.do.pacote
Post-debloat
After cleaning up your system as much as possible, reboot the device. If it boots into recovery mode and cannot reboot normally, it means you removed an app that was "essential" to the system, and you will have to format the device and repeat the whole removal, this time removing only a few pieces of bloatware at a time and rebooting until you find out which one cannot be removed. Yes, it is a lot of work... who told you to want privacy?
If the device reboots normally after the removal, congratulations, now just use your phone however you like! Keep NetGuard running at all times so the bloatware you could not remove will not talk to external servers, start using open source apps from F-Droid, and install other apps through the Aurora Store instead of the Google Play Store.
Referências: Caso você seja um Australopithecus e tenha achado este guia difícil, eis uma videoaula (3:14:40) do Anderson do canal Ciberdef, realizando todo o processo: http://odysee.com/@zai:5/Como-remover-at%C3%A9-200-APLICATIVOS-que-colocam-a-sua-PRIVACIDADE-E-SEGURAN%C3%87A-em-risco.:4?lid=6d50f40314eee7e2f218536d9e5d300290931d23
Pdf’s do Anderson citados na videoaula: créditos ao anon6837264 http://eternalcbrzpicytj4zyguygpmkjlkddxob7tptlr25cdipe5svyqoqd.onion/file/3863a834d29285d397b73a4af6fb1bbe67c888d72d30/t-05e63192d02ffd.pdf
Termux and adb installation process on the phone: https://youtu.be/APolZrPHSms
-
@ bc52210b:20bfc6de
2025-04-28 20:13:25
Imagine a world where clean, safe, and efficient nuclear power can be delivered to any corner of the globe, powering everything from small villages to bustling cities. This vision is becoming a reality with the development of nuclear modular plants—compact, portable nuclear reactors that can be shipped in standard containers and set up quickly to provide reliable energy. These innovative power sources use fission—the process of splitting atomic nuclei to release energy, the same fundamental principle that powers traditional nuclear plants—but with a twist: they utilize thorium as fuel and a molten salt system for cooling and fuel delivery. This combination offers a host of benefits that could revolutionize how we think about nuclear energy.
Portability and Deployment
One of the most significant advantages of these nuclear modular plants is their portability. Designed to fit within standard shipping containers, these reactors can be transported by truck, ship, or even air to virtually any location. This makes them ideal for remote communities, disaster relief efforts, or military operations where traditional power infrastructure is lacking or damaged. Setting up a conventional power plant typically takes years, but these modular units can be operational in a matter of weeks, providing a rapid solution to energy needs.
Safety Features
Safety is a paramount concern in nuclear energy, and modular thorium molten salt reactors (MSRs) offer several inherent safety advantages. Unlike traditional reactors that use water under high pressure, MSRs operate at atmospheric pressure, eliminating the risk of pressure-related accidents. The fuel is dissolved in the molten salt, which means there's no solid fuel that could melt down. If the reactor overheats, the salt expands, naturally slowing the fission reaction—a built-in safety mechanism. Additionally, thorium-based fuels produce less long-lived radioactive waste, reducing the long-term environmental impact.
Efficiency and Abundance
Thorium is a more abundant resource than uranium, with estimates suggesting it is three to four times more plentiful in the Earth's crust. This abundance makes thorium a sustainable fuel choice for the future. Moreover, MSRs can operate at higher temperatures than traditional reactors, leading to greater thermal efficiency. This means more electricity can be generated from the same amount of fuel, making the energy production process more efficient and cost-effective in the long run.
Scalability
The modular design of these reactors allows for scalability to meet varying power demands. A single unit might power a small community, while multiple units can be combined to serve larger towns or cities. This flexibility is particularly useful for growing populations or regions with fluctuating energy needs. As demand increases, additional modules can be added without the need for extensive new infrastructure.
Cost-Effectiveness
While the initial investment in nuclear modular plants may be significant, the long-term operational costs can be lower than traditional power sources. The high efficiency of MSRs means less fuel is needed over time, and the reduced waste production lowers disposal costs. Additionally, the ability to mass-produce these modular units could drive down manufacturing costs, making nuclear power more accessible and affordable.
Environmental Impact
Nuclear power is already one of the cleanest energy sources in terms of carbon emissions, and thorium MSRs take this a step further. By producing less long-lived waste and utilizing a more abundant fuel, these reactors offer a more sustainable path for nuclear energy. Furthermore, their ability to provide reliable baseload power can help reduce reliance on fossil fuels, contributing to global efforts to combat climate change.
Challenges and Considerations
Despite these benefits, there are challenges to overcome before nuclear modular plants can be widely deployed. The technology for thorium MSRs is still in the developmental stage, with ongoing research needed to address issues such as material corrosion and fuel processing. Regulatory frameworks will also need to adapt to this new type of reactor, and public perception of nuclear energy remains a hurdle in many regions. However, with continued investment and innovation, these obstacles can be addressed.
Conclusion
In conclusion, nuclear modular plants using thorium and molten salt systems represent a promising advancement in nuclear technology. Their portability, safety features, efficiency, scalability, and environmental benefits make them an attractive option for meeting the world's growing energy needs. While challenges remain, the potential of these reactors to provide clean, reliable power to communities around the globe is undeniable. As research and development continue, we may soon see a new era of nuclear energy that is safer, more efficient, and more accessible than ever before.
-
@ ed5774ac:45611c5c
2025-04-19 20:29:31
April 20, 2020: The day I saw my so-called friends expose themselves as gutless, brain-dead sheep.
On that day, I shared a video exposing the damning history of the Bill & Melinda Gates Foundation's vaccine campaigns in Africa and the developing world. As Gates was on every TV screen, shilling COVID jabs that didn’t even exist, I called out his blatant financial conflict of interest and pointed out the obvious in my facebook post: "Finally someone is able to explain why Bill Gates runs from TV to TV to promote vaccination. Not surprisingly, it's all about money again…" - referencing his substantial investments in vaccine technology, including BioNTech's mRNA platform that would later produce the COVID vaccines and generate massive profits for his so-called philanthropic foundation.
The conflict of interest was undeniable. I genuinely believed anyone capable of basic critical thinking would at least pause to consider these glaring financial motives. But what followed was a masterclass in human stupidity.
My facebook post from 20 April 2020:
Not only was I branded a 'conspiracy theorist' for daring to question the billionaire who stood to make a fortune off the very vaccines he was shilling, but the brain-dead, logic-free bullshit vomited by the people around me was beyond pathetic. These barely literate morons couldn’t spell "Pfizer" without auto-correct, yet they mindlessly swallowed and repeated every lie the media and government force-fed them, branding anything that cracked their fragile reality as "conspiracy theory." Big Pharma’s rap sheet—fraud, deadly cover-ups, billions in fines—could fill libraries, yet these obedient sheep didn’t bother to open a single book or read a single study before screaming their ignorance, desperate to virtue-signal their obedience. Then, like spineless lab rats, they lined up for an experimental jab rushed to the market in months, too dumb to care that proper vaccine development takes a decade.
The pathetic part is that these idiots spend hours obsessing over reviews for their useless purchases like shoes or socks, but won’t spare 60 seconds to research the experimental cocktail being injected into their veins—or even glance at the FDA’s own damning safety reports. Those same obedient sheep would read every Yelp review for a fucking coffee shop but won't spend five minutes looking up Pfizer's criminal fraud settlements. They would demand absolute obedience to ‘The Science™’—while being unable to define mRNA, explain lipid nanoparticles, or justify why trials were still running as they queued up like cattle for their jab. If they had two brain cells to rub together or spent 30 minutes actually researching, they'd know, but no—they'd rather suck down the narrative like good little slaves, too dumb to question, too weak to think.
Worst of all, they became the system’s attack dogs—not just swallowing the poison, but forcing it down others’ throats. This wasn’t ignorance. It was betrayal. They mutated into medical brownshirts, destroying lives to virtue-signal their obedience—even as their own children’s hearts swelled with inflammation.
One conversation still haunts me to this day—a masterclass in wealth-worship delusion. A close friend, as a response to my facebook post, insisted that Gates’ assumed reading list magically awards him vaccine expertise, while dismissing his billion-dollar investments in the same products as ‘no conflict of interest.’ Worse, he argued that Gates’s $5–10 billion pandemic windfall was ‘deserved.’
This exchange crystallizes civilization’s intellectual surrender: reason discarded with religious fervor, replaced by blind faith in corporate propaganda.
The comment of a friend on my facebook post that still haunts me to this day:
Walking Away from the Herd
After a period of anger and disillusionment, I made a decision: I would no longer waste energy arguing with people who refused to think for themselves. If my circle couldn’t even ask basic questions—like why an untested medical intervention was being pushed with unprecedented urgency—then I needed a new community.
Fortunately, I already knew where to look. For three years, I had been involved in Bitcoin, a space where skepticism wasn’t just tolerated—it was demanded. Here, I’d met some of the most principled and independent thinkers I’d ever encountered. These were people who understood the corrupting influence of centralized power—whether in money, media, or politics—and who valued sovereignty, skepticism, and integrity. Instead of blind trust, bitcoiners practiced relentless verification. And instead of empty rhetoric, they lived by a simple creed: Don’t trust. Verify.
It wasn’t just a philosophy. It was a lifeline. So I chose my side and I walked away from the herd.
Finding My Tribe
Over the next four years, I immersed myself in Bitcoin conferences, meetups, and spaces where ideas were tested, not parroted. Here, I encountered extraordinary people: not only did they share my skepticism toward broken systems, but they challenged me to sharpen it.
No longer adrift in a sea of mindless conformity, I’d found a crew of thinkers who cut through the noise. They saw clearly what most ignored—that at the core of society’s collapse lay broken money, the silent tax on time, freedom, and truth itself. But unlike the complainers I’d left behind, these people built. They coded. They wrote. They risked careers and reputations to expose the rot. Some faced censorship; others, mockery. All understood the stakes.
These weren’t keyboard philosophers. They were modern-day Cassandras, warning of inflation’s theft, the Fed’s lies, and the coming dollar collapse—not for clout, but because they refused to kneel to a dying regime. And in their defiance, I found something rare: a tribe that didn’t just believe in a freer future. They were engineering it.
April 20, 2024: No more herd. No more lies. Only proof-of-work.
On April 20, 2024, exactly four years after my last Facebook post, the one that severed my ties to the herd for good—I stood in front of Warsaw’s iconic Palace of Culture and Science, surrounded by 400 bitcoiners who felt like family. We were there to celebrate Bitcoin’s fourth halving, but it was more than a protocol milestone. It was a reunion of sovereign individuals. Some faces I’d known since the early days; others, I’d met only hours before. We bonded instantly—heated debates, roaring laughter, zero filters on truths or on so called conspiracy theories.
As the countdown to the halving began, it hit me: This was the antithesis of the hollow world I’d left behind. No performative outrage, no coerced consensus—just a room of unyielding minds who’d traded the illusion of safety for the grit of truth. Four years prior, I’d been alone in my resistance. Now, I raised my glass among my people - those who had seen the system's lies and chosen freedom instead. Each had their own story of awakening, their own battles fought, but here we shared the same hard-won truth.
The energy wasn’t just electric. It was alive—the kind that emerges when free people build rather than beg. For the first time, I didn’t just belong. I was home. And in that moment, the halving’s ticking clock mirrored my own journey: cyclical, predictable in its scarcity, revolutionary in its consequences. Four years had burned away the old world. What remained was stronger.
No Regrets
Leaving the herd wasn’t a choice—it was evolution. My soul shouted: "I’d rather stand alone than kneel with the masses!". The Bitcoin community became more than family; they’re living proof that the world still produces warriors, not sheep. Here, among those who forge truth, I found something extinct elsewhere: hope that burns brighter with every halving, every block, every defiant mind that joins the fight.
Change doesn’t come from the crowd. It starts when one person stops applauding.
Today, I stand exactly where I always wanted to be—shoulder-to-shoulder with my true family: the rebels, the builders, the ungovernable. Together, we’re building the decentralized future.
-
@ 30b99916:3cc6e3fe
2025-04-28 16:29:23
security #vault #veracrypt #powershell
VaultApi: a self-hosted method for securing data
VaultApi depends on both HashiCorp Vault and VeraCrypt to work its magic.
HashiCorp Vault and KeePassXC are the primary password manager applications I am currently using, and for the most part the entries in each mirror one another. The functional difference between the two is that KeePassXC has a graphical interface. HashiCorp Vault has a web interface as well, but the key feature VaultApi makes use of is the REST API, which it uses to perform ACID operations on secured data for automation purposes.
The vault keys and root token associated with HashiCorp Vault are stored in an encrypted file that is kept in cold storage. Prior to starting the HashiCorp Vault server, the cold-storage file is mounted on the system using VeraCrypt.
Also, this implementation lives on my non-routed network and is primarily used by my Linux systems, but any OS on that network that supports PowerShell should be able to access the Vault as a client.
Additionally, the Vault is only run on an on-demand basis.
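Before the server comes up, the cold-storage container holding the vault keys and root token has to be mounted. A minimal sketch of that step from a Linux shell, where the container path and mount point are placeholders rather than the actual locations used here:
```
veracrypt -t ~/coldstorage/vault-secrets.hc /mnt/vault-secrets
```
VeraCrypt will prompt for the password interactively; once the Vault is sealed and stopped again, the container can be dismounted with veracrypt -t -d.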
The startup process is as follows:
```
VaultApi start
VaultApi unseal
VaultApi login
VaultApi KeyPaths
```
The command VaultApi KeyPaths dumps a list of key paths to a local file to make finding key paths simpler. The path lookup process is as follows:
```
VaultApi FindPaths Vehicle
```
This command returns a list of paths matching the specified value of Vehicle:
```
VaultApi FindPaths Vehicle
kv1/Vehicle/1995-Mustang-GT500
kv1/Vehicle/2003-DodgeViper
kv1/Vehicle/2012-Nissan
kv1/Vehicle/2016-Telsa
```
To look up all the keys associated with a given path:
```
VaultApi kv1Read kv1/Vehicle/2012-Nissan -kvkey _ReturnKeys
plate
VIN
```
To return the value associated with a key of a given path to the clipboard:
```
VaultApi kv1Read kv1/Vehicle/2012-Nissan -kvkey plate
```
If the -raw option is included, the value is returned to the console instead.
To add a new key/value pair to an existing path:
```
VaultApi kv1Update kv1/Vehicle/2012-Nissan 21000000 -kvkey mileage
```
To add a new path and key/value pair:
```
VaultApi kv1Create kv1/Vehicle/2025-Lambo Bitcoin -kvkey plate
```
To list the 2nd level path names:
```
# Default level 1 path name is "kv1"
VaultApi kv1list
```
To list 3rd level path names:
```
VaultApi kv1list kv1/Vehicle
```
To delete a **path** and its associated key/value pairs:
```
VaultApi kv1Delete kv1/Vehicle/2012-Nissan
```
To delete just a single key/value pair for a given path, use the HashiCorp Vault web interface.
To launch the **HashiCorp Vault** web interface:
```
VaultApi WebUI
```
To return status information about the **Vault**:
```
VaultApi status
sealed  initialized  version  n  t
False   True         1.15.6   5  3
```
To return process information about the **Vault**:
```
VaultApi Check
Hashicorp Vault (v1.15.6) is running...116147
```
To show the hash value of the VaultApi script:
```
VaultApi ShowHash
3D47628ECB3FA0E7DBD28BA7606CE5BF
```
To return a 20 character randomized value to the clipboard:
```
VaultApi SetValue
```
To create a backup of the HashiCorp Vault (you must be logged in with the root token):
```
VaultApi Backup
```
A snapshot file will be created in the $HOME/Downloads directory by default.
To stop the HashiCorp Vault server:
```
VaultApi seal
The vault is sealed.

VaultApi stop
The server is stopped.
```
To get help information about VaultApi:
```
Get-Help VaultApi -Full | more

OR

Get-Help VaultApi -Examples | more
```
Here are some past articles I wrote about setting up HashiCorp Vault and VeraCrypt.
Bitcoin and key/value using Hashicorp Vault
Bitcoin and Cold Storage using VeraCrypt
More information on VaultApi.
-
@ 6e0ea5d6:0327f353
2025-04-19 15:09:18
🩸
The world won’t stop and wait for you to recover. Do your duty regardless of how you feel. That’s the only guarantee you’ll end the day alright.
You’ve heard it before: “The worst workout is the one you didn’t do.” Sometimes you don’t feel like going to the gym. You start bargaining with laziness: “I didn’t sleep well… maybe I should skip today.” But then you go anyway, committing only to the bare minimum your energy allows. And once you start, your body outperforms your mind’s assumptions—it turns out to be one of the best workouts you’ve had in a long time. The feeling of following through, of winning a battle you were losing, gives you the confidence to own the rest of your day. You finally feel good.
And that wouldn’t have happened if you stayed home waiting to feel better. Guilt would’ve joined forces with discouragement, and you’d be crushed by melancholy in a victim mindset. That loss would bleed into the rest of your week, conditioning your mind: because you didn’t spend your energy on the workout, you’d stay up late, wake up worse, and while waiting to feel “ready,” you’d lose a habit that took months of effort to build.
When in doubt, just do your duty. Stick to the plan. Don’t negotiate with your feelings—outsmart them. “Just one page today,” and you’ll end up reading ten. “Only the easy tasks,” and you’ll gain momentum to conquer the hard ones. Laziness is a serpent—you win when you make no deals with it.
A close friend once told me that when he was at his limit during a second job shift, he’d open a picture on his phone—of a fridge or a stove he needed to buy for his home—and that image gave him strength to stay awake. That moment stuck with me forever.
Do you really think the world will have the same mercy on you that you have on yourself? Don’t be surprised when it doesn’t spare you. Move forward even while stitching your wounds: “If you wait for perfect conditions, you’ll never do anything.” (Ecclesiastes 11:4)
Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ 6e0ea5d6:0327f353
2025-04-19 15:02:55My friend, let yourself be deluded for a moment, and reality will see to it that your fantasy is shattered—like a hammer crushing marble. The real world grants no mercy; it will relentlessly tear down your aspirations, casting them into the abyss of disillusionment and burying your dreams under the unbearable weight of your own expectations. It’s an inescapable fate—but the outcome is still in your hands: perish at the bottom like a wretch or turn the pit into a trench.
Davvero, everyone must eventually face something that breaks them. It is in devastation that man discovers what he is made of, and in the silence of defeat that he hears the finest advice. Yet the weak would rather embrace the convenient lie of self-pity, blaming life for failures that are, in truth, the result of their own negligence and cowardly choices. If you hide behind excuses because you fear the painful truth, know this: the responsibility has always been yours.
Ascolta bene! Just remain steadfast, even when everything feels like an endless maze. The difficulties you face today—those you believe you’ll never overcome—will one day seem insignificant under the light of time and experience. Tomorrow, you’ll look back and laugh at yourself for ever letting these storms seem so overwhelming.
Now, it’s up to you to fight your own battle—for the evil day spares no one. Don’t let yourself be paralyzed by shock or bow before adversity. Be strong and of good courage—not as one who waits for relief, but as one prepared to face the inevitable and turn pain into glory.
Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ f683e870:557f5ef2
2025-04-28 10:10:55
Spam is the single biggest problem in decentralized networks. Jameson Lopp, co-founder of Casa and OG bitcoiner, has written a brilliant article on the death of decentralized email that paints a vivid picture of what went wrong—and how an originally decentralized protocol was completely captured. The cause? Spam.
The same fate may happen to Nostr, because posting a note is fundamentally cheap. Payments, and to some extent Proof of Work, certainly have their role in fighting spam, but they introduce friction, which doesn’t work everywhere. In particular, they can’t solve every economic problem.\ Take free trials, for example. There is a reason why 99% of companies offer them. Sure, you waste resources on users who don’t convert, but it’s a calculated cost, a marketing expense. Also, some services can’t or don’t want to monetize directly. They offer something for free and monetize elsewhere.
So how do you offer a free trial or giveaway in a hostile decentralized network? Or even, how do you decide which notes to accept on your relay?
At first glance, these may seem like unrelated questions—but they’re not. Generally speaking, these are situations where you have a finite budget, and you want to use it well. You want more of what you value — and less of what you don’t (spam).
Reputation is a powerful shortcut when direct evaluation isn’t practical. It’s hard to earn, easy to lose — and that’s exactly what makes it valuable.\ Can a reputable user do bad things? Absolutely. But it’s much less likely, and that’s the point. Heuristics are always imperfect, just like the world we live in.
The legacy Web relies heavily on email-based reputation. If you’ve ever tried to log in with a temporary email, you know what I’m talking about. It just doesn’t work anymore. The problem, as Lopp explains, is that these systems are highly centralized, opaque, and require constant manual intervention.\ They also suck. They put annoying roadblocks between the world and your product, often frustrating the very users you’re trying to convert.
At Vertex, we take a different approach.\ We transparently analyze Nostr’s open social graph to help companies fight spam while improving the UX for their users. But we don’t take away your agency—we just do the math. You take the decision of what algorithm and criteria to use.
Think of us as a signal provider, not an authority.\ You define what reputation means for your use case. Want to rank by global influence? Local or personalized? You’re in control. We give you actionable and transparent analytics so you can build sharper filters, better user experiences, and more resilient systems. That’s how we fight spam, without sacrificing decentralization.
Are you looking to add Web of Trust capabilities to your app or project?\ Take a look at our website or send a DM to Pip.
-
@ 3ffac3a6:2d656657
2025-04-15 14:49:31
🏅 How to Create an Epic Badge on Nostr with nak + badges.page
Requirements:
- Have nak installed (https://github.com/fiatjaf/nak)
- Have a Nostr private key (nsec...)
- Access to the site https://badges.page
- An active relay (e.g. wss://relay.primal.net)
🔧 Step 1 — Create the badge on badges.page
- Go to the site https://badges.page
- Click "New Badge" in the top right corner
- Fill in the fields:
  - Name (e.g. Teste Épico)
  - Description
  - Image and thumbnail
- After creating it, you will be redirected to the badge's page.
🔍 Step 2 — Copy the badge's naddr
In the address bar, copy the identifier that appears after /a/; this is your badge's naddr.
Example:
```
nostr:naddr1qq94getnw3jj63tsd93k7q3q8lav8fkgt8424rxamvk8qq4xuy9n8mltjtgztv2w44hc5tt9vetsxpqqqp6njkq3sd0
```
Copy:
```
naddr1qq94getnw3jj63tsd93k7q3q8lav8fkgt8424rxamvk8qq4xuy9n8mltjtgztv2w44hc5tt9vetsxpqqqp6njkq3sd0
```
🧠 Step 3 — Decode the naddr with nak
Open your terminal (or Cygwin on Windows) and run:
```bash
nak decode naddr1qq94getnw3jj63tsd93k7q3q8lav8fkgt8424rxamvk8qq4xuy9n8mltjtgztv2w44hc5tt9vetsxpqqqp6njkq3sd0
```
You will see something like this:
```json
{ "pubkey": "3ffac3a6c859eaaa8cdddb2c7002a6e10b33efeb92d025b14ead6f8a2d656657", "kind": 30009, "identifier": "Teste-Epico" }
```
Take note of the "identifier" field; in this case: Teste-Epico
🛰️ Step 4 — Fetch the event from the relay
Now let's grab the badge event from the relay:
```bash
nak req -d "Teste-Epico" wss://relay.primal.net
```
You will see the full content of the badge event, something like this:
```json
{ "kind": 30009, "tags": [["d", "Teste-Epico"], ["name", "Teste Épico"], ...] }
```
💥 Step 5 — Mine the event as "epic" (PoW 31)
Now comes the magic: mining with proof-of-work (PoW 31) so the badge gets classified as epic!
```bash
nak req -d "Teste-Epico" wss://relay.primal.net | nak event --pow 31 --sec nsec1SEU_NSEC_AQUI wss://relay.primal.net wss://nos.lol wss://relay.damus.io
```
This command:
- Fetches the original event
- Generates a new one with PoW difficulty 31
- Signs it with your nsec private key
- Publishes it to the relays wss://relay.primal.net, wss://nos.lol and wss://relay.damus.io
⚠️ Replace nsec1SEU_NSEC_AQUI with your own Nostr private key.
✅ Result
If everything works, the badge will be updated with a higher-PoW event and will show up as "Epic" on the site!
-
@ 78b3c1ed:5033eea9
2025-04-27 01:42:48
・I checked how the macaroons baked by ThunderHub look when inspected with lncli printmacaroon.
ThunderHub macaroon permissions
get invoices → invoices:read
create invoices → invoices:write
get payments → offchain:read
pay invoices → offchain:write
get chain transactions → onchain:read
send to chain address → onchain:write
create chain address → address:write
get wallet info → info:read
stop daemon → info:write
According to this, a client using a macaroon without the offchain:write and onchain:write permissions cannot send BTC on its own. Without info:write it cannot stop LND on its own either.
・Next I looked at the permissions of the macaroons created by default, using lncli printmacaroon. admin.macaroon
{ "version": 2, "location": "lnd", "root_key_id": "0", "permissions": [ "address:read", "address:write", "info:read", "info:write", "invoices:read", "invoices:write", "macaroon:generate", "macaroon:read", "macaroon:write", "message:read", "message:write", "offchain:read", "offchain:write", "onchain:read", "onchain:write", "peers:read", "peers:write", "signer:generate", "signer:read" ], "caveats": null }
chainnotifier.macaroon{ "version": 2, "location": "lnd", "root_key_id": "0", "permissions": [ "onchain:read" ], "caveats": null }
invoice.macaroon{ "version": 2, "location": "lnd", "root_key_id": "0", "permissions": [ "address:read", "address:write", "invoices:read", "invoices:write", "onchain:read" ], "caveats": null }
invoices.macaroon{ "version": 2, "location": "lnd", "root_key_id": "0", "permissions": [ "invoices:read", "invoices:write" ], "caveats": null }
readonly.macaroon{ "version": 2, "location": "lnd", "root_key_id": "0", "permissions": [ "address:read", "info:read", "invoices:read", "macaroon:read", "message:read", "offchain:read", "onchain:read", "peers:read", "signer:read" ], "caveats": null }
router.macaroon{ "version": 2, "location": "lnd", "root_key_id": "0", "permissions": [ "offchain:read", "offchain:write" ], "caveats": null }
signer.macaroon{ "version": 2, "location": "lnd", "root_key_id": "0", "permissions": [ "signer:generate", "signer:read" ], "caveats": null }
walletkit.macaroon{ "version": 2, "location": "lnd", "root_key_id": "0", "permissions": [ "address:read", "address:write", "onchain:read", "onchain:write" ], "caveats": null }
・The lncli listpermissions command lists every RPC method URI together with the macaroon permissions required to call it. On LND v0.18.5-beta this produces roughly 1,344 lines of JSON. For AddInvoice, it appears that a macaroon holding the invoices:write permission is enough to create invoices.
"/lnrpc.Lightning/AddInvoice": { "permissions": [ { "entity": "invoices", "action": "write" } ] },
I extracted the entity and action values from lncli listpermissions:
```
"entity": "address"
"entity": "info"
"entity": "invoices"
"entity": "macaroon"
"entity": "message"
"entity": "offchain"
"entity": "onchain"
"entity": "peers"
"entity": "signer"

"action": "generate"
"action": "read"
"action": "write"
```
Combining lncli with jq, for example, the following command lists the RPCs that require invoices:write. Besides AddInvoice, invoices:write is apparently also used for creating hold invoices.
```
lncli listpermissions | jq -r '.method_permissions | to_entries[] | select(.value.permissions[] | select(.entity == "invoices" and .action == "write")) | .key'
```
/invoicesrpc.Invoices/AddHoldInvoice
/invoicesrpc.Invoices/CancelInvoice
/invoicesrpc.Invoices/HtlcModifier
/invoicesrpc.Invoices/LookupInvoiceV2
/invoicesrpc.Invoices/SettleInvoice
/lnrpc.Lightning/AddInvoice
For invoices:read the list is:
/invoicesrpc.Invoices/SubscribeSingleInvoice /lnrpc.Lightning/ListInvoices /lnrpc.Lightning/LookupInvoice /lnrpc.Lightning/SubscribeInvoicesLNの主だった機能のRPCはoffchainが必要ぽいので抜き出してみた。 offchain:write チャネルの開閉、ペイメントの送信までやってるみたい。 デフォルトのmacaroonでoffchain:writeを持ってるのはadminとrouterの2つだけ。openchannel,closechannelはonchain:writeのpermissionも必要なようだ。
/autopilotrpc.Autopilot/ModifyStatus /autopilotrpc.Autopilot/SetScores /lnrpc.Lightning/AbandonChannel /lnrpc.Lightning/BatchOpenChannel /lnrpc.Lightning/ChannelAcceptor /lnrpc.Lightning/CloseChannel /lnrpc.Lightning/DeleteAllPayments /lnrpc.Lightning/DeletePayment /lnrpc.Lightning/FundingStateStep /lnrpc.Lightning/OpenChannel /lnrpc.Lightning/OpenChannelSync /lnrpc.Lightning/RestoreChannelBackups /lnrpc.Lightning/SendCustomMessage /lnrpc.Lightning/SendPayment /lnrpc.Lightning/SendPaymentSync /lnrpc.Lightning/SendToRoute /lnrpc.Lightning/SendToRouteSync /lnrpc.Lightning/UpdateChannelPolicy /routerrpc.Router/HtlcInterceptor /routerrpc.Router/ResetMissionControl /routerrpc.Router/SendPayment /routerrpc.Router/SendPaymentV2 /routerrpc.Router/SendToRoute /routerrpc.Router/SendToRouteV2 /routerrpc.Router/SetMissionControlConfig /routerrpc.Router/UpdateChanStatus /routerrpc.Router/XAddLocalChanAliases /routerrpc.Router/XDeleteLocalChanAliases /routerrpc.Router/XImportMissionControl /wtclientrpc.WatchtowerClient/AddTower /wtclientrpc.WatchtowerClient/DeactivateTower /wtclientrpc.WatchtowerClient/RemoveTower /wtclientrpc.WatchtowerClient/TerminateSession"/lnrpc.Lightning/OpenChannel": { "permissions": [ { "entity": "onchain", "action": "write" }, { "entity": "offchain", "action": "write" } ] },
offchain:read readの方はチャネルやインボイスの状態を確認するためのpermissionのようだ。
/lnrpc.Lightning/ChannelBalance /lnrpc.Lightning/ClosedChannels /lnrpc.Lightning/DecodePayReq /lnrpc.Lightning/ExportAllChannelBackups /lnrpc.Lightning/ExportChannelBackup /lnrpc.Lightning/FeeReport /lnrpc.Lightning/ForwardingHistory /lnrpc.Lightning/GetDebugInfo /lnrpc.Lightning/ListAliases /lnrpc.Lightning/ListChannels /lnrpc.Lightning/ListPayments /lnrpc.Lightning/LookupHtlcResolution /lnrpc.Lightning/PendingChannels /lnrpc.Lightning/SubscribeChannelBackups /lnrpc.Lightning/SubscribeChannelEvents /lnrpc.Lightning/SubscribeCustomMessages /lnrpc.Lightning/VerifyChanBackup /routerrpc.Router/BuildRoute /routerrpc.Router/EstimateRouteFee /routerrpc.Router/GetMissionControlConfig /routerrpc.Router/QueryMissionControl /routerrpc.Router/QueryProbability /routerrpc.Router/SubscribeHtlcEvents /routerrpc.Router/TrackPayment /routerrpc.Router/TrackPaymentV2 /routerrpc.Router/TrackPayments /wtclientrpc.WatchtowerClient/GetTowerInfo /wtclientrpc.WatchtowerClient/ListTowers /wtclientrpc.WatchtowerClient/Policy /wtclientrpc.WatchtowerClient/Stats・おまけ1 RPCメソッド名にopenを含む要素を抽出するコマンド
lncli listpermissions | jq '.method_permissions | to_entries[] | select(.key | test("open"; "i"))'{ "key": "/lnrpc.Lightning/BatchOpenChannel", "value": { "permissions": [ { "entity": "onchain", "action": "write" }, { "entity": "offchain", "action": "write" } ] } } { "key": "/lnrpc.Lightning/OpenChannel", "value": { "permissions": [ { "entity": "onchain", "action": "write" }, { "entity": "offchain", "action": "write" } ] } } { "key": "/lnrpc.Lightning/OpenChannelSync", "value": { "permissions": [ { "entity": "onchain", "action": "write" }, { "entity": "offchain", "action": "write" } ] } }
・Bonus 2: a macaroon created in ThunderHub is output as text to copy and paste; it is not a .macaroon file. The HEX can be turned into a macaroon file with the command below. Paste your HEX in place of "HEX", and replace YOURS with a name that makes sense to you.
echo -n "HEX" | xxd -r -p > YOURS.macaroon
The permissions of a macaroon baked in ThunderHub with "Create Invoices, Get Invoices, Get Wallet Info, Get Payments, Pay Invoices" checked are as follows:
{ "version": 2, "location": "lnd", "root_key_id": "0", "permissions": [ "info:read", "invoices:read", "invoices:write", "offchain:read", "offchain:write" ], "caveats": null } ``` offchain:writeはあるがonchain:writeがないのでチャネル開閉はできないはず。 -
@ e3ba5e1a:5e433365
2025-04-15 11:03:15Prelude
I wrote this post differently than any of my others. It started with a discussion with AI on an OPSec-inspired review of separation of powers, and evolved into quite an exciting debate! I asked Grok to write up a summary in my overall writing style, which it got pretty well. I've decided to post it exactly as-is. Ultimately, I think there are two solid ideas driving my stance here:
- Perfect is the enemy of the good
- Failure is the crucible of success
Beyond that, just some hard-core belief in freedom, separation of powers, and operating from self-interest.
Intro
Alright, buckle up. I’ve been chewing on this idea for a while, and it’s time to spit it out. Let’s look at the U.S. government like I’d look at a codebase under a cybersecurity audit—OPSEC style, no fluff. Forget the endless debates about what politicians should do. That’s noise. I want to talk about what they can do, the raw powers baked into the system, and why we should stop pretending those powers are sacred. If there’s a hole, either patch it or exploit it. No half-measures. And yeah, I’m okay if the whole thing crashes a bit—failure’s a feature, not a bug.
The Filibuster: A Security Rule with No Teeth
You ever see a firewall rule that’s more theater than protection? That’s the Senate filibuster. Everyone acts like it’s this untouchable guardian of democracy, but here’s the deal: a simple majority can torch it any day. It’s not a law; it’s a Senate preference, like choosing tabs over spaces. When people call killing it the “nuclear option,” I roll my eyes. Nuclear? It’s a button labeled “press me.” If a party wants it gone, they’ll do it. So why the dance?
I say stop playing games. Get rid of the filibuster. If you’re one of those folks who thinks it’s the only thing saving us from tyranny, fine—push for a constitutional amendment to lock it in. That’s a real patch, not a Post-it note. Until then, it’s just a vulnerability begging to be exploited. Every time a party threatens to nuke it, they’re admitting it’s not essential. So let’s stop pretending and move on.
Supreme Court Packing: Because Nine’s Just a Number
Here’s another fun one: the Supreme Court. Nine justices, right? Sounds official. Except it’s not. The Constitution doesn’t say nine—it’s silent on the number. Congress could pass a law tomorrow to make it 15, 20, or 42 (hitchhiker’s reference, anyone?). Packing the court is always on the table, and both sides know it. It’s like a root exploit just sitting there, waiting for someone to log in.
So why not call the bluff? If you’re in power—say, Trump’s back in the game—say, “I’m packing the court unless we amend the Constitution to fix it at nine.” Force the issue. No more shadowboxing. And honestly? The court’s got way too much power anyway. It’s not supposed to be a super-legislature, but here we are, with justices’ ideologies driving the bus. That’s a bug, not a feature. If the court weren’t such a kingmaker, packing it wouldn’t even matter. Maybe we should be talking about clipping its wings instead of just its size.
The Executive Should Go Full Klingon
Let’s talk presidents. I’m not saying they should wear Klingon armor and start shouting “Qapla’!”—though, let’s be real, that’d be awesome. I’m saying the executive should use every scrap of power the Constitution hands them. Enforce the laws you agree with, sideline the ones you don’t. If Congress doesn’t like it, they’ve got tools: pass new laws, override vetoes, or—here’s the big one—cut the budget. That’s not chaos; that’s the system working as designed.
Right now, the real problem isn’t the president overreaching; it’s the bureaucracy. It’s like a daemon running in the background, eating CPU and ignoring the user. The president’s supposed to be the one steering, but the administrative state’s got its own agenda. Let the executive flex, push the limits, and force Congress to check it. Norms? Pfft. The Constitution’s the spec sheet—stick to it.
Let the System Crash
Here’s where I get a little spicy: I’m totally fine if the government grinds to a halt. Deadlock isn’t a disaster; it’s a feature. If the branches can’t agree, let the president veto, let Congress starve the budget, let enforcement stall. Don’t tell me about “essential services.” Nothing’s so critical it can’t take a breather. Shutdowns force everyone to the table—debate, compromise, or expose who’s dropping the ball. If the public loses trust? Good. They’ll vote out the clowns or live with the circus they elected.
Think of it like a server crash. Sometimes you need a hard reboot to clear the cruft. If voters keep picking the same bad admins, well, the country gets what it deserves. Failure’s the best teacher—way better than limping along on autopilot.
States Are the Real MVPs
If the feds fumble, states step up. Right now, states act like junior devs waiting for the lead engineer to sign off. Why? Federal money. It’s a leash, and it’s tight. Cut that cash, and states will remember they’re autonomous. Some will shine, others will tank—looking at you, California. And I’m okay with that. Let people flee to better-run states. No bailouts, no excuses. States are like competing startups: the good ones thrive, the bad ones pivot or die.
Could it get uneven? Sure. Some states might turn into sci-fi utopias while others look like a post-apocalyptic vidya game. That’s the point—competition sorts it out. Citizens can move, markets adjust, and failure’s a signal to fix your act.
Chaos Isn’t the Enemy
Yeah, this sounds messy. States ignoring federal law, external threats poking at our seams, maybe even a constitutional crisis. I’m not scared. The Supreme Court’s there to referee interstate fights, and Congress sets the rules for state-to-state play. But if it all falls apart? Still cool. States can sort it without a babysitter—it’ll be ugly, but freedom’s worth it. External enemies? They’ll either unify us or break us. If we can’t rally, we don’t deserve the win.
Centralizing power to avoid this is like rewriting your app in a single thread to prevent race conditions—sure, it’s simpler, but you’re begging for a deadlock. Decentralized chaos lets states experiment, lets people escape, lets markets breathe. States competing to cut regulations to attract businesses? That’s a race to the bottom for red tape, but a race to the top for innovation—workers might gripe, but they’ll push back, and the tension’s healthy. Bring it—let the cage match play out. The Constitution’s checks are enough if we stop coddling the system.
Why This Matters
I’m not pitching a utopia. I’m pitching a stress test. The U.S. isn’t a fragile porcelain doll; it’s a rugged piece of hardware built to take some hits. Let it fail a little—filibuster, court, feds, whatever. Patch the holes with amendments if you want, or lean into the grind. Either way, stop fearing the crash. It’s how we debug the republic.
So, what’s your take? Ready to let the system rumble, or got a better way to secure the code? Hit me up—I’m all ears.
-
@ 5d4b6c8d:8a1c1ee3
2025-04-27 00:55:29
After taking the most predictable option on day 1, the last 6 rounds of the draft were a rollercoaster. The Raiders traded back twice in the 2nd round and ended up with some extra picks.
Picks
-
RB Ashton Jeanty: Best player available at a position of need. Jeanty + Bowers gives the Raiders offense two elite weapons
-
WR Jack Bech: Tough, physical receiver with great hands, willing and able blocker
-
CB Darien Porter: Elite athlete, classic Raiders pick, "Can't teach size, can't teach speed."
-
OL Caleb Rogers: Versatile, athletic lineman, will likely compete at guard
-
OL Charles Grant: Same as above, but more likely to compete at tackle
-
WR: Donte Thornton: 6'5" and 4.3 speed, another classic Raiders pick, can open the field for the other weapons
-
DT Tonka Hemingway: Dope name, versatile and athletic D-Lineman, adds depth and optionality to a very talented group
-
DT/TE/FB JJ Pegues: Interesting guy, probably won't contribute much as a DT immediately, but could be part of the goal line package on offense, as he had 7 rushing TDs last year and as a former TE could be a red zone target too
-
WR/KR/PR/QB Tommy Mellot: Another super versatile player, ran a 4.4 40 as a QB, but will convert to WR and return kicks, seems like someone with tons of trick play potential
-
QB Cam Miller: Finally, an actual QB! Won a ton of games at a smaller program.
-
LB Cody Lindenberg: 7th rounder, probably special teams if he makes the team
Takeaways
The Raiders added a ton of talent and versatility to their offense, including a defensive player who can contribute to the red zone offense in several ways. They're building a big physically dominant offense around an elite RB/TE combo, with big physical WRs who don't mind blocking and another talented TE.
The defensive line is going to have to be really dominant, which they have the potential for, because there is a dearth of talent behind them. Porter recently converted to CB from WR, so will likely take time to develop, and the others are day 3 picks.
The recipe will likely be to eat up clock on long offensive drives to give our pass rushers lots of breathers. Score reliably with a much improved redzone offense and a great kicker, then rely on that pass rush and the best punter in the league to keep the other team out of the endzone.
It's a good starting point. Maybe they'll sign Ramsey, or something, and really upgrade the defensive back group before the season starts.
originally posted at https://stacker.news/items/962047
-
-
@ 418a17eb:b64b2b3a
2025-04-26 21:45:33
In today’s world, many people chase after money. We often think that wealth equals success and happiness. But if we look closer, we see that money is just a tool. The real goal is freedom.
Money helps us access resources and experiences. It can open doors. But the constant pursuit of wealth can trap us. We may find ourselves stressed, competing with others, and feeling unfulfilled. The more we chase money, the more we might lose sight of what truly matters.
Freedom, on the other hand, is about choice. It’s the ability to live life on our own terms. When we prioritize freedom, we can follow our passions and build meaningful relationships. We can spend our time on what we love, rather than being tied down by financial worries.
True fulfillment comes from this freedom. It allows us to define success for ourselves. When we embrace freedom, we become more resilient and creative. We connect more deeply with ourselves and others. This sense of purpose often brings more happiness than money ever could.
In the end, money isn’t the ultimate goal. It’s freedom that truly matters. By focusing on living authentically and making choices that resonate with us, we can create a life filled with meaning and joy.
-
@ c4b5369a:b812dbd6
2025-04-15 07:26:16
Offline transactions with Cashu
Over the past few weeks, I've been busy implementing offline capabilities into nutstash. I think this is one of the key value propositions of ecash, being a bearer instrument that can be used without internet access.
It does however come with limitations, which can lead to a bit of confusion. I hope this article will clear some of these questions up for you!
What is ecash/Cashu?
Ecash is the first cryptocurrency ever invented. It was created by David Chaum in 1983. It uses a blind signature scheme, which allows users to prove ownership of a token without revealing a link to its origin. These tokens are what we call ecash. They are bearer instruments, meaning that anyone who possesses a copy of them, is considered the owner.
Cashu is an implementation of ecash, built to tightly interact with Bitcoin, more specifically the Bitcoin lightning network. In the Cashu ecosystem,
Mints
are the gateway to the lightning network. They provide the infrastructure to access the lightning network, pay invoices and receive payments. Instead of relying on a traditional ledger scheme like other custodians do, the mint issues ecash tokens, to represent the value held by the users.How do normal Cashu transactions work?
A Cashu transaction happens when the sender gives a copy of his ecash token to the receiver. This can happen by any means imaginable. You could send the token through email, messenger, or even by pigeon. One of the common ways to transfer ecash is via QR code.
The transaction is however not finalized just yet! In order to make sure the sender cannot double-spend their copy of the token, the receiver must do what we call a
swap
. A swap is essentially exchanging an ecash token for a new one at the mint, invalidating the old token in the process. This ensures that the sender can no longer use the same token to spend elsewhere, and the value has been transferred to the receiver.What about offline transactions?
Sending offline
Sending offline is very simple. The ecash tokens are stored on your device. Thus, no internet connection is required to access them. You can literally just take them, and give them to someone. The most convenient way is usually through a local transmission protocol, like NFC, QR code, Bluetooth, etc.
The one thing to consider when sending offline is that ecash tokens come in form of "coins" or "notes". The technical term we use in Cashu is
Proof
. It "proofs" to the mint that you own a certain amount of value. Since these proofs have a fixed value attached to them, much like UTXOs in Bitcoin do, you would need proofs with a value that matches what you want to send. You can mix and match multiple proofs together to create a token that matches the amount you want to send. But, if you don't have proofs that match the amount, you would need to go online and swap for the needed proofs at the mint.Another limitation is, that you cannot create custom proofs offline. For example, if you would want to lock the ecash to a certain pubkey, or add a timelock to the proof, you would need to go online and create a new custom proof at the mint.
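For illustration, a single proof is just a small JSON object along these lines (the values here are made up, and the exact fields can vary between wallets and protocol versions):
```json
{
  "amount": 8,
  "id": "009a1f293253e41e",
  "secret": "407915bc212be61a77e3e6d2aeb4c727980bda51cd06a6afc29e2861768a7837",
  "C": "02bc9097997d81afb2cc7346b5e4345a9346bd2a506eb7958598a72f0cf85163ea"
}
```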
Receiving offline
You might think: well, if I trust the sender, I don't need to be swapping the token right away!
You're absolutely correct. If you trust the sender, you can simply accept their ecash token without needing to swap it immediately.
This is already really useful, since it gives you a way to receive a payment from a friend or close aquaintance without having to worry about connectivity. It's almost just like physical cash!
It does however not work if the sender is untrusted. We have to use a different scheme to be able to receive payments from someone we don't trust.
Receiving offline from an untrusted sender
To be able to receive payments from an untrusted sender, we need the sender to create a custom proof for us. As we've seen before, this requires the sender to go online.
The sender needs to create a token that has the following properties, so that the receiver can verify it offline:
- It must be locked to ONLY the receiver's public key
- It must include an offline signature proof (DLEQ proof)
- If it contains a timelock & refund clause, it must be set to a time in the future that is acceptable for the receiver
- It cannot contain duplicate proofs (double-spend)
- It cannot contain proofs that the receiver has already received before (double-spend)
If all of these conditions are met, then the receiver can verify the proof offline and accept the payment. This allows us to receive payments from anyone, even if we don't trust them.
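To make that checklist a bit more concrete, here is a minimal TypeScript sketch of what a receiving wallet could run before accepting a token offline. It is not taken from nutstash: the field names follow the usual Cashu proof layout, the pubkey and timelock checks are simplified placeholders, and the DLEQ itself is not actually verified here (that needs the mint's keys and proper secp256k1 math).
```typescript
// Sketch only: placeholder checks, not a real wallet implementation.
type Proof = {
  amount: number;                               // fixed denomination of this proof
  id: string;                                   // keyset id
  secret: string;                               // P2PK-locked secrets encode the receiver pubkey
  C: string;                                    // mint signature (curve point, hex)
  dleq?: { e: string; s: string; r?: string };  // offline signature proof, if the mint provided one
};

function canAcceptOffline(
  proofs: Proof[],
  receiverPubkey: string,
  alreadySeen: Set<string> // secrets of proofs this wallet has received before
): boolean {
  const seenInToken = new Set<string>();
  for (const p of proofs) {
    // 1. needs an offline signature proof (DLEQ), otherwise only the mint can validate it
    if (!p.dleq) return false;
    // 2. must be locked to ONLY the receiver's public key (placeholder string check)
    if (!p.secret.includes(receiverPubkey)) return false;
    // 3. no duplicate proofs inside the token itself
    if (seenInToken.has(p.secret)) return false;
    seenInToken.add(p.secret);
    // 4. nothing we have already received before (local double-spend protection)
    if (alreadySeen.has(p.secret)) return false;
    // 5. a timelock/refund clause, if present, would be parsed out of the secret and
    //    compared against the current time here (omitted in this sketch)
  }
  return proofs.length > 0;
}
```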
At first glance, this scheme seems kinda useless. It requires the sender to go online, which defeats the purpose of having an offline payment system.
I believe there are a couple of ways this scheme might be useful nonetheless:
-
Offline vending machines: Imagine you have an offline vending machine that accepts payments from anyone. The vending machine could use this scheme to verify payments without needing to go online itself. We can assume that the sender is able to go online and create a valid token, but the receiver doesn't need to be online to verify it.
-
Offline marketplaces: Imagine you have an offline marketplace where buyers and sellers can trade goods and services. Before going to the marketplace, the sender already knows where they will be spending the money. The sender could create a valid token before going to the marketplace, using the merchant's public key as a lock and adding a refund clause to redeem any unspent ecash after it expires. In this case, neither the sender nor the receiver needs to go online to complete the transaction.
How to use this
Pretty much all Cashu wallets allow you to send tokens offline. This is because all the wallet needs to do is check whether it can assemble the desired amount from the proofs stored locally. If it can, it will create the token offline automatically.
Receiving offline tokens is currently only supported by nutstash (experimental).
To create an offline receivable token, the sender needs to lock it to the receiver's public key. Currently there is no refund clause! So be careful that you don't get accidentally locked out of your funds!
The receiver can then inspect the token and decide if it is safe to accept without a swap. If all checks are green, they can accept the token offline without trusting the sender.
The receiver will see the unswapped tokens on the wallet homescreen. They will need to manually swap them later when they are online again.
Later when the receiver is online again, they can swap the token for a fresh one.
Summary
We learned that offline transactions are possible with ecash, but there are some limitations. You either have to trust the sender, or rely on one of the two parties being online at some point: the receiver to swap (and thus verify) the tokens, or the sender to create tokens that the receiver can verify offline.
I hope this short article was helpful in understanding how ecash works and its potential for offline transactions.
Cheers,
Gandlaf
-
@ 266815e0:6cd408a5
2025-04-15 06:58:14
It's been a little over a year since NIP-90 was written and merged into the nips repo, and it's been a communication mess.
Every DVM implementation expects the inputs in slightly different formats, returns the results in mostly the same format, and there are very few DVMs actually running.
NIP-90 is overloaded
Why does a request for text translation and a request for creating bitcoin OP_RETURNs share the same input `i` tag? And why is there an `output` tag on requests when only one of them will return an output?
Each DVM request kind is for requesting a completely different type of compute with different input and output requirements, but they are all using the same spec, which has 4 different types of inputs (`text`, `url`, `event`, `job`) and an undefined number of `output` types.
Let me show a few random DVM requests and responses I found on `wss://relay.damus.io` to demonstrate what I mean.
This is a request to translate an event to English:
json { "kind": 5002, "content": "", "tags": [ // NIP-90 says there can be multiple inputs, so how would a DVM handle translatting multiple events at once? [ "i", "<event-id>", "event" ], [ "param", "language", "en" ], // What other type of output would text translations be? image/jpeg? [ "output", "text/plain" ], // Do we really need to define relays? cant the DVM respond on the relays it saw the request on? [ "relays", "wss://relay.unknown.cloud/", "wss://nos.lol/" ] ] }
This is a request to generate text using an LLM model
json { "kind": 5050, // Why is the content empty? wouldn't it be better to have the prompt in the content? "content": "", "tags": [ // Why use an indexable tag? are we ever going to lookup prompts? // Also the type "prompt" isn't in NIP-90, this should probably be "text" [ "i", "What is the capital of France?", "prompt" ], [ "p", "c4878054cff877f694f5abecf18c7450f4b6fdf59e3e9cb3e6505a93c4577db2" ], [ "relays", "wss://relay.primal.net" ] ] }
This is a request for content recommendation
json { "kind": 5300, "content": "", "tags": [ // Its fine ignoring this param, but what if the client actually needs exactly 200 "results" [ "param", "max_results", "200" ], // The spec never mentions requesting content for other users. // If a DVM didn't understand this and responded to this request it would provide bad data [ "param", "user", "b22b06b051fd5232966a9344a634d956c3dc33a7f5ecdcad9ed11ddc4120a7f2" ], [ "relays", "wss://relay.primal.net", ], [ "p", "ceb7e7d688e8a704794d5662acb6f18c2455df7481833dd6c384b65252455a95" ] ] }
This is a request to create a OP_RETURN message on bitcoin
json { "kind": 5901, // Again why is the content empty when we are sending human readable text? "content": "", "tags": [ // and again, using an indexable tag on an input that will never need to be looked up ["i", "09/01/24 SEC Chairman on the brink of second ETF approval", "text"] ] }
My point isn't that these event schemas aren't understandable, but why are they using the same schema? Each use-case is different, yet they are all required to use the same `i` tag format as input and could support all 4 types of inputs.
Lack of libraries
With all these different types of inputs, params, and outputs, it's very difficult, if not impossible, to build libraries for DVMs.
If a simple text translation request can have an `event` or `text` as input, a `payment-required` status at any point in the flow, partial results, or responses from 10+ DVMs, what's the best way to build a translation library for other nostr clients to use?
And how do I build a DVM framework for the server side that can handle multiple inputs of all four types (`url`, `text`, `event`, `job`) when clients are all sending the requests slightly differently?
Supporting payments is impossible
The way NIP-90 is written, there aren't many details about payments: only a `payment-required` status and a generic `amount` tag.
But the way things are now, every DVM is implementing payments differently. Some send a bolt11 invoice, some expect the client to NIP-57 zap the request event (or maybe the status event), and some even ask for a subscription. And we haven't even started implementing NIP-61 nut zaps or cashu. A few are even formatting the `amount` number wrong, or denominating it in sats and not mili-sats.
All of this is made even more complicated by the fact that a DVM can ask for payment at any point during the job process. this makes sense for some types of compute, but for others like translations or user recommendation / search it just makes things even more complicated.
For example, If a client wanted to implement a timeline page that showed the notes of all the pubkeys on a recommended list. what would they do when the selected DVM asks for payment at the start of the job? or at the end? or worse, only provides half the pubkeys and asks for payment for the other half. building a UI that could handle even just two of these possibilities is complicated.
NIP-89 is being abused
NIP-89 is "Recommended Application Handlers" and the way its describe in the nips repo is
a way to discover applications that can handle unknown event-kinds
Not "a way to discover everything"
If I wanted to build an application discovery app to show all the apps that your contacts use and let you discover new apps then it would have to filter out ALL the DVM advertisement events. and that's not just for making requests from relays
If the app shows the user their list of "recommended applications" then it either has to understand that everything in the 5xxx kind range is a DVM and to show that is its own category or show a bunch of unknown "favorites" in the list which might be confusing for the user.
In conclusion
My point in writing this article isn't that the DVM implementations so far don't work, but that they will never work well because the spec is too broad. Even with only a few DVMs running, we have already lost interoperability.
I don't want to be completely negative though, because some things have worked. The "DVM feeds" work, although they are limited to a single page of results. Text / event translations also work well, and kind `5970` Event PoW delegation could be cool. But if we want interoperability, we are going to need to change a few things with NIP-90.
I don't think we can (or should) abandon NIP-90 entirely, but it would be good to break it up into smaller NIPs or specs: break each "kind" of DVM request out into its own spec with its own definitions for expected inputs, outputs and flow.
Then, if we have simple, clean definitions for each kind of compute we want to distribute, we might actually see markets and services being built and used.
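As a sketch of what such a narrow, per-kind spec could look like, here is a hypothetical translation-request event built in Python. The tag names and field layout are invented for illustration (only the 5002 kind number is taken from the existing translation range); the point is simply that a dedicated spec can say "one event id in, one translated string out" and nothing more.

```python
import json
import time

# Hypothetical "translate one event" request under an imagined narrow spec
translation_request = {
    "kind": 5002,                            # kept from the existing translation range
    "created_at": int(time.time()),
    "content": "",                           # nothing human-readable to carry here
    "tags": [
        ["e", "<event-id-to-translate>"],    # exactly one input, always an event id
        ["lang", "en"],                      # exactly one output language
    ],
    # id, pubkey and sig would be filled in by the signer before publishing
}

print(json.dumps(translation_request, indent=2))
```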
-
@ 91bea5cd:1df4451c
2025-04-15 06:27:28
Basics
```bash
lsblk  # Lists all block devices and where they are mounted.
```
To create the filesystem:
```bash
mkfs.btrfs -L "ThePool" -f /dev/sdx
```
Creating a subvolume:
```bash
btrfs subvolume create SubVol
```
Mounting the filesystem:
```bash
mount -o compress=zlib,subvol=SubVol,autodefrag /dev/sdx /mnt
```
List the formatted disks in the directory:
```bash
btrfs filesystem show /mnt
```
Add a new disk to the filesystem:
```bash
btrfs device add -f /dev/sdy /mnt
```
List the filesystem's disks again:
```bash
btrfs filesystem show /mnt
```
Show disk usage for the filesystem:
```bash
btrfs filesystem df /mnt
```
Balance the data across the disks as raid1:
```bash
btrfs filesystem balance start -dconvert=raid1 -mconvert=raid1 /mnt
```
A scrub is a pass over all of the filesystem's data and metadata that verifies the checksums. If a valid copy is available (replicated block group profiles), the damaged one is repaired. All copies of replicated profiles are validated.
To start the scrub process:
```bash
btrfs scrub start /mnt
```
To see the status of a running Btrfs scrub:
```bash
btrfs scrub status /mnt
```
To see the Btrfs scrub status for each device, or to cancel a scrub:
```bash
btrfs scrub status -d /data
btrfs scrub cancel /data
```
To resume a Btrfs scrub that you cancelled or paused:
```bash
btrfs scrub resume /data
```
Listing subvolumes:
```bash
btrfs subvolume list /Reports
```
Creating a snapshot of a subvolume:
Here we create a read-write snapshot called marketing-snap of the marketing subvolume.
```bash
btrfs subvolume snapshot /Reports/marketing /Reports/marketing-snap
```
You can also create a read-only snapshot using the -r flag as shown. marketing-rosnap is a read-only snapshot of the marketing subvolume.
```bash
btrfs subvolume snapshot -r /Reports/marketing /Reports/marketing-rosnap
```
Forcing filesystem synchronization with the 'sync' utility:
To force the filesystem to sync, invoke the sync option as shown. Note that the filesystem must already be mounted for the sync process to complete successfully.
```bash
btrfs filesystem sync /Reports
```
To remove a device from the filesystem, use the device delete command as shown.
```bash
btrfs device delete /dev/sdc /Reports
```
To probe the status of a scrub, use the scrub status command with the -dR option.
```bash
btrfs scrub status -dR /Reports
```
To cancel a running scrub, use the scrub cancel command.
```bash
sudo btrfs scrub cancel /Reports
```
To resume or continue a previously interrupted scrub, run the scrub resume command:
```bash
sudo btrfs scrub resume /Reports
```
Show storage device usage:
```bash
btrfs filesystem usage /data
```
To spread the data, metadata, and system data across all of the RAID's storage devices (including a newly added device) mounted at /data, run the following command:
```bash
sudo btrfs balance start --full-balance /data
```
It can take a while to spread the data, metadata, and system data across all of the RAID's storage devices if the filesystem holds a lot of data.
Important Btrfs mount options
In this section I will explain some of the important Btrfs mount options. So let's get started.
The most important Btrfs mount options are:
**1. acl and noacl**
ACL manages user and group permissions for the files/directories of the Btrfs filesystem.
The acl Btrfs mount option enables ACL. To disable ACL, you can use the noacl mount option.
By default, ACL is enabled, so the Btrfs filesystem uses the acl mount option by default.
**2. autodefrag and noautodefrag**
Defragmenting a Btrfs filesystem improves filesystem performance by reducing data fragmentation.
The autodefrag mount option enables automatic defragmentation of the Btrfs filesystem.
The noautodefrag mount option disables automatic defragmentation of the Btrfs filesystem.
By default, automatic defragmentation is disabled, so the Btrfs filesystem uses the noautodefrag mount option by default.
**3. compress and compress-force**
These control data compression at the filesystem level of the Btrfs filesystem.
The compress option compresses only the files that are worth compressing (if compressing the file saves disk space).
The compress-force option compresses every file on the Btrfs filesystem, even if compressing the file increases its size.
The Btrfs filesystem supports several compression algorithms, and each compression algorithm has different compression levels.
The compression algorithms supported by Btrfs are: lzo, zlib (levels 1 to 9), and zstd (levels 1 to 15).
You can specify which compression algorithm to use for the Btrfs filesystem with one of the following mount options:
- compress=algorithm:level
- compress-force=algorithm:level
For more information, see my article on how to enable Btrfs filesystem compression.
**4. subvol and subvolid**
These mount options are used to separately mount a specific subvolume of a Btrfs filesystem.
The subvol mount option mounts a subvolume of a Btrfs filesystem using its relative path.
The subvolid mount option mounts a subvolume of a Btrfs filesystem using the subvolume ID.
For more information, see my article on how to create and mount Btrfs subvolumes.
**5. device**
The device mount option is used with multi-device Btrfs filesystems or Btrfs RAID.
In some cases the operating system may fail to detect the storage devices used in a multi-device Btrfs filesystem or Btrfs RAID. In those cases, you can use the device mount option to specify the devices you want to use for the Btrfs multi-device filesystem or RAID.
You can use the device mount option multiple times to load different storage devices for the Btrfs multi-device filesystem or RAID.
You can use the device name (i.e. sdb, sdc) or the UUID, UUID_SUB, or PARTUUID of the storage device with the device mount option to identify the storage device.
For example,
- device=/dev/sdb
- device=/dev/sdb,device=/dev/sdc
- device=UUID_SUB=490a263d-eb9a-4558-931e-998d4d080c5d
- device=UUID_SUB=490a263d-eb9a-4558-931e-998d4d080c5d,device=UUID_SUB=f7ce4875-0874-436a-b47d-3edef66d3424
**6. degraded**
The degraded mount option allows a Btrfs RAID to be mounted with fewer storage devices than the RAID profile requires.
For example, the raid1 profile requires 2 storage devices to be present. If one of the storage devices is unavailable for any reason, you can use the degraded mount option to mount the RAID even though only 1 of the 2 storage devices is available.
**7. commit**
The commit mount option sets the interval (in seconds) at which data is written to the storage device.
The default is 30 seconds.
To set the commit interval to 15 seconds, you can use the commit=15 mount option (say).
**8. ssd and nossd**
The ssd mount option tells the Btrfs filesystem that the filesystem is on an SSD storage device, and the Btrfs filesystem applies the necessary SSD optimizations.
The nossd mount option disables SSD optimization.
The Btrfs filesystem automatically detects whether an SSD is used for the Btrfs filesystem. If an SSD is used, the ssd mount option is enabled. Otherwise, the nossd mount option is enabled.
**9. ssd_spread and nossd_spread**
The ssd_spread mount option tries to allocate large contiguous chunks of unused SSD space. This feature improves the performance of low-end (cheap) SSDs.
The nossd_spread mount option disables the ssd_spread feature.
The Btrfs filesystem automatically detects whether an SSD is used for the Btrfs filesystem. If an SSD is used, the ssd_spread mount option is enabled. Otherwise, the nossd_spread mount option is enabled.
**10. discard and nodiscard**
If you are using an SSD that supports asynchronous queued TRIM (SATA rev 3.1), the discard mount option enables discarding of freed file blocks. This improves SSD performance.
If the SSD does not support asynchronous queued TRIM, the discard mount option hurts SSD performance. In that case, the nodiscard mount option should be used.
By default, the nodiscard mount option is used.
**11. norecovery**
If the norecovery mount option is used, the Btrfs filesystem will not attempt to perform the data recovery operation at mount time.
**12. usebackuproot and nousebackuproot**
If the usebackuproot mount option is used, the Btrfs filesystem will try to recover any bad/corrupted tree root at mount time. The Btrfs filesystem can store several tree roots on the filesystem. The usebackuproot mount option will look for a good tree root and use the first good one it finds.
The nousebackuproot mount option will not check or recover bad/corrupted tree roots at mount time. This is the default behavior of the Btrfs filesystem.
**13. space_cache, space_cache=version, nospace_cache, and clear_cache**
The space_cache mount option controls the free-space cache. The free-space cache is used to improve the performance of reading a block group's free space from the Btrfs filesystem into memory (RAM).
The Btrfs filesystem supports 2 versions of the free-space cache: v1 (the default) and v2.
The v2 free-space cache mechanism improves the performance of large (multi-terabyte) filesystems.
You can use the space_cache=v1 mount option to select v1 of the free-space cache and the space_cache=v2 mount option to select v2 of the free-space cache.
The clear_cache mount option is used to clear the free-space cache.
Once the v2 free-space cache has been created, the cache must be cleared in order to create a v1 free-space cache.
So, to use the v1 free-space cache after the v2 free-space cache has been created, the clear_cache and space_cache=v1 mount options must be combined: clear_cache,space_cache=v1
The nospace_cache mount option is used to disable the free-space cache.
To disable the free-space cache after the v1 or v2 cache has been created, the nospace_cache and clear_cache mount options must be combined: clear_cache,nospace_cache
**14. skip_balance**
By default, an interrupted/paused balance operation of a multi-device Btrfs filesystem or Btrfs RAID will automatically resume as soon as the Btrfs filesystem is mounted. To disable automatic resumption of an interrupted/paused balance operation on a multi-device Btrfs filesystem or Btrfs RAID, you can use the skip_balance mount option.
**15. datacow and nodatacow**
The datacow mount option enables the Copy-on-Write (CoW) feature of the Btrfs filesystem. It is the default behavior.
If you want to disable the Copy-on-Write (CoW) feature of the Btrfs filesystem for newly created files, mount the Btrfs filesystem with the nodatacow mount option.
**16. datasum and nodatasum**
The datasum mount option enables data checksumming for newly created files on the Btrfs filesystem. This is the default behavior.
If you do not want the Btrfs filesystem to checksum the data of newly created files, mount the Btrfs filesystem with the nodatasum mount option.
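As a rough illustration of how several of these options combine in practice, here is a minimal Python sketch that mounts a hypothetical Btrfs device with compression, SSD optimization, automatic defragmentation, and the v2 free-space cache. The device path, mount point, and chosen options are assumptions for the example, not recommendations from the article itself; adjust them to your setup and run as root.

```python
import subprocess

# Assumed device and mount point; replace with your own.
DEVICE = "/dev/sdx"
MOUNT_POINT = "/mnt"

# Example combination of the mount options discussed above.
options = ",".join([
    "compress=zstd:3",   # compress worthwhile files with zstd level 3
    "ssd",               # enable SSD optimizations
    "space_cache=v2",    # use the v2 free-space cache
    "autodefrag",        # defragment automatically in the background
])

# Equivalent to: mount -o <options> /dev/sdx /mnt
subprocess.run(["mount", "-o", options, DEVICE, MOUNT_POINT], check=True)
print(f"Mounted {DEVICE} at {MOUNT_POINT} with options: {options}")
```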
Btrfs profiles
A Btrfs profile tells the Btrfs filesystem how many copies of the data/metadata to keep and which RAID levels to use for the data/metadata. The Btrfs filesystem has many profiles. Understanding them will help you configure a Btrfs RAID exactly the way you want.
The available Btrfs profiles are as follows:
single: If the single profile is used for the data/metadata, only one copy of the data/metadata is stored on the filesystem, even if you add multiple storage devices to it. So 100% of the disk space of each storage device added to the filesystem can be used.
dup: If the dup profile is used for the data/metadata, each storage device added to the filesystem keeps two copies of the data/metadata. So 50% of the disk space of each storage device added to the filesystem can be used.
raid0: In the raid0 profile, the data/metadata is striped equally across all storage devices added to the filesystem. In this setup there is no redundant (duplicated) data/metadata, so 100% of the disk space of each storage device added to the filesystem can be used. If any one of the storage devices fails, the entire filesystem is corrupted. You need at least two storage devices to set up a Btrfs filesystem with the raid0 profile.
raid1: In the raid1 profile, two copies of the data/metadata are stored on the storage devices added to the filesystem. In this setup, the RAID array can survive one drive failure, but you can use only 50% of the total disk space. You need at least two storage devices to set up a Btrfs filesystem with the raid1 profile.
raid1c3: In the raid1c3 profile, three copies of the data/metadata are stored on the storage devices added to the filesystem. In this setup, the RAID array can survive two drive failures, but you can use only 33% of the total disk space. You need at least three storage devices to set up a Btrfs filesystem with the raid1c3 profile.
raid1c4: In the raid1c4 profile, four copies of the data/metadata are stored on the storage devices added to the filesystem. In this setup, the RAID array can survive three drive failures, but you can use only 25% of the total disk space. You need at least four storage devices to set up a Btrfs filesystem with the raid1c4 profile.
raid10: In the raid10 profile, two copies of the data/metadata are stored on the storage devices added to the filesystem, as in the raid1 profile. In addition, the data/metadata is striped across the storage devices, as in the raid0 profile.
The raid10 profile is a hybrid of the raid1 and raid0 profiles. Some of the storage devices form raid1 arrays, and some of those raid1 arrays are used to form a raid0 array. In a raid10 setup, the filesystem can survive a single drive failure in each of the raid1 arrays.
You can use 50% of the total disk space in a raid10 configuration. You need at least four storage devices to set up a Btrfs filesystem with the raid10 profile.
raid5: In the raid5 profile, one copy of the data/metadata is striped across the storage devices. A single parity is calculated and distributed among the storage devices of the RAID array.
In a raid5 configuration, the filesystem can survive a single drive failure. If a drive fails, you can add a new drive to the filesystem and the lost data will be reconstructed from the distributed parity of the running drives.
You can use 100×(N-1)/N % of the total disk space in a raid5 configuration. Here, N is the number of storage devices added to the filesystem. You need at least three storage devices to set up a Btrfs filesystem with the raid5 profile.
raid6: In the raid6 profile, one copy of the data/metadata is striped across the storage devices. Two parities are calculated and distributed among the storage devices of the RAID array.
In a raid6 configuration, the filesystem can survive two drive failures at the same time. If a drive fails, you can add a new drive to the filesystem and the lost data will be reconstructed from the two distributed parities of the running drives.
You can use 100×(N-2)/N % of the total disk space in a raid6 configuration. Here, N is the number of storage devices added to the filesystem. You need at least four storage devices to set up a Btrfs filesystem with the raid6 profile.
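To make the capacity rules above concrete, here is a small Python sketch that computes usable capacity for each profile from the number of equal-sized devices, following the percentages given in this section. It is only a back-of-the-envelope helper; real usable space also depends on the metadata profile and on differing device sizes.

```python
def usable_fraction(profile: str, n_devices: int) -> float:
    """Approximate usable fraction of raw capacity, per the rules above."""
    rules = {
        "single": lambda n: 1.0,
        "dup": lambda n: 0.5,
        "raid0": lambda n: 1.0,
        "raid1": lambda n: 0.5,
        "raid1c3": lambda n: 1.0 / 3,
        "raid1c4": lambda n: 0.25,
        "raid10": lambda n: 0.5,
        "raid5": lambda n: (n - 1) / n,
        "raid6": lambda n: (n - 2) / n,
    }
    return rules[profile](n_devices)

# Example: four 2 TB devices (8 TB raw) under different profiles
for profile in ("single", "raid1", "raid10", "raid5", "raid6"):
    usable = usable_fraction(profile, 4) * 8
    print(f"{profile:>7}: ~{usable:.1f} TB usable of 8 TB raw")
```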
-
@ d34e832d:383f78d0
2025-04-26 15:04:51
Raspberry Pi-based voice assistant
This Idea details the design and deployment of a Raspberry Pi-based voice assistant powered by the Google Gemini AI API. The system combines open hardware with modern AI services to create a low-cost, flexible, and educational voice assistant platform. By leveraging a Raspberry Pi, basic audio hardware, and Python-based software, developers can create a functional, customizable assistant suitable for home automation, research, or personal productivity enhancement.
1. Voice assistants
Voice assistants have become increasingly ubiquitous, but commercially available systems like Alexa, Siri, or Google Assistant come with significant privacy and customization limitations.
This project offers an open, local, and customizable alternative, demonstrating how to build a voice assistant using Google Gemini (or OpenAI's ChatGPT) APIs for natural language understanding.
Target Audience:
- DIY enthusiasts
- Raspberry Pi hobbyists
- AI developers
- Privacy-conscious users
2. System Architecture
2.1 Hardware Components
| Component | Purpose |
|:--------------------------|:----------------------------------------|
| Raspberry Pi (any recent model, 4B recommended) | Core processing unit |
| Micro SD Card (32GB+) | Operating System and storage |
| USB Microphone | Capturing user voice input |
| Audio Amplifier + Speaker | Outputting synthesized responses |
| 5V DC Power Supplies (2x) | Separate power for Pi and amplifier |
| LEDs + Resistors (optional) | Visual feedback (e.g., recording or listening states) |
2.2 Software Stack
| Software | Function |
|:---------------------------|:----------------------------------------|
| Raspberry Pi OS (Lite or Full) | Base operating system |
| Python 3.9+ | Programming language |
| SpeechRecognition | Captures and transcribes user voice |
| Google Text-to-Speech (gTTS) | Converts responses into spoken audio |
| Google Gemini API (or OpenAI API) | Powers the AI assistant brain |
| Pygame | Audio playback for responses |
| WinSCP + Windows Terminal | File transfer and remote management |
3. Hardware Setup
3.1 Basic Connections
- Microphone: Connect via USB port.
- Speaker and Amplifier: Wire from Raspberry Pi audio jack or via USB sound card if better quality is needed.
- LEDs (Optional): Connect through GPIO pins, using 220–330Ω resistors to limit current.
3.2 Breadboard Layout (Optional for LEDs)
| GPIO Pin | LED Color | Purpose |
|:---------|:-----------|:--------------------|
| GPIO 17 | Red | Recording active |
| GPIO 27 | Green | Response playing |
Tip: Use a small breadboard for quick prototyping before moving to a custom PCB if desired.
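If you wire up the optional status LEDs, a sketch along these lines can drive them from the assistant script. It assumes the gpiozero library and the pin assignments from the table above; adapt the pins if you wired things differently.

```python
from gpiozero import LED

recording_led = LED(17)   # red: recording active
response_led = LED(27)    # green: response playing

def show_listening() -> None:
    response_led.off()
    recording_led.on()

def show_speaking() -> None:
    recording_led.off()
    response_led.on()

def show_idle() -> None:
    recording_led.off()
    response_led.off()
```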
4. Software Setup
4.1 Raspberry Pi OS Installation
- Use Raspberry Pi Imager to flash Raspberry Pi OS onto the Micro SD card.
- Initial system update:
```bash
sudo apt update && sudo apt upgrade -y
```
4.2 Python Environment
-
Install Python virtual environment:
```bash
sudo apt install python3-venv
python3 -m venv voice-env
source voice-env/bin/activate
```
-
Install required Python packages:
```bash
pip install SpeechRecognition google-generativeai pygame gtts
```
(Replace `google-generativeai` with `openai` if using OpenAI's ChatGPT.)
4.3 API Key Setup
- Obtain a Google Gemini API key (or OpenAI API key).
- Store safely in a `.env` file or configure as an environment variable for security:
```bash
export GEMINI_API_KEY="your_api_key_here"
```
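If you keep the key in a `.env` file instead of exporting it in the shell, a small loader like the following works. It assumes the python-dotenv package (`pip install python-dotenv`) and a `.env` file next to the script containing a line such as `GEMINI_API_KEY=...`.

```python
import os
from dotenv import load_dotenv

# Reads key=value pairs from a .env file in the current directory
load_dotenv()

api_key = os.getenv("GEMINI_API_KEY")
if not api_key:
    raise RuntimeError("GEMINI_API_KEY is not set; add it to .env or export it")
```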
4.4 File Transfer
- Use WinSCP or `scp` commands to transfer Python scripts to the Pi.
4.5 Example Python Script (Simplified)
```python
import os

import speech_recognition as sr
import google.generativeai as genai
from gtts import gTTS
import pygame

# Configure the Gemini client from the environment variable set earlier
genai.configure(api_key=os.getenv("GEMINI_API_KEY"))
# Model name may vary depending on your API access
model = genai.GenerativeModel("gemini-1.5-flash")

recognizer = sr.Recognizer()
mic = sr.Microphone()

pygame.init()

while True:
    with mic as source:
        print("Listening...")
        audio = recognizer.listen(source)

    try:
        # Speech-to-text via the SpeechRecognition library
        text = recognizer.recognize_google(audio)
        print(f"You said: {text}")

        # Ask Gemini for a reply
        response = model.generate_content(text)

        # Convert the reply to speech and play it back
        tts = gTTS(text=response.text, lang="en")
        tts.save("response.mp3")
        pygame.mixer.music.load("response.mp3")
        pygame.mixer.music.play()
        while pygame.mixer.music.get_busy():
            continue
    except Exception as e:
        print(f"Error: {e}")
```
5. Testing and Execution
- Activate the Python virtual environment:
```bash
source voice-env/bin/activate
```
- Run your main assistant script:
```bash
python3 assistant.py
```
- Speak into the microphone and listen for the AI-generated spoken response.
6. Troubleshooting
| Problem | Possible Fix |
|:--------|:-------------|
| Microphone not detected | Check `arecord -l` |
| Audio output issues | Check `aplay -l`, use a USB DAC if needed |
| Permission denied errors | Verify group permissions (audio, gpio) |
| API Key Errors | Check environment variable and internet access |
7. Performance Notes
- Latency: Highly dependent on network speed and API response time.
- Audio Quality: Can be enhanced with a better USB microphone and powered speakers.
- Privacy: Minimal data retention if using your own Gemini or OpenAI account.
8. Potential Extensions
- Add hotword detection ("Hey Gemini") using Snowboy or Porcupine libraries.
- Build a local fallback model to answer basic questions offline.
- Integrate with home automation via MQTT, Home Assistant, or Node-RED.
- Enable LED animations to visually indicate listening and responding states.
- Deploy with a small eInk or OLED screen for text display of answers.
9. Consider
Building a Gemini-powered voice assistant on the Raspberry Pi empowers individuals to create customizable, private, and cost-effective alternatives to commercial voice assistants. By utilizing accessible hardware, modern open-source libraries, and powerful AI APIs, this project blends education, experimentation, and privacy-centric design into a single hands-on platform.
This guide can be adapted for personal use, educational programs, or even as a starting point for more advanced AI-based embedded systems.
References
- Raspberry Pi Foundation: https://www.raspberrypi.org
- Google Generative AI Documentation: https://ai.google.dev
- OpenAI Documentation: https://platform.openai.com
- SpeechRecognition Library: https://pypi.org/project/SpeechRecognition/
- gTTS Documentation: https://pypi.org/project/gTTS/
- Pygame Documentation: https://www.pygame.org/docs/
-
@ 9223d2fa:b57e3de7
2025-04-15 02:54:00
12,600 steps
-
@ 6e0ea5d6:0327f353
2025-04-14 15:11:17
Ascolta.
We live in times where the average man is measured by the speeches he gives — not by the commitments he keeps. People talk about dreams, goals, promises… but what truly remains is what’s honored in the silence of small gestures, in actions that don’t seek applause, in attitudes unseen — yet speak volumes.
Punctuality, for example. Showing up on time isn’t about the clock. It’s about respect. Respect for another’s time, yes — but more importantly, respect for one’s own word. A man who is late without reason is already running late in his values. And the one who excuses his own lateness with sweet justifications slowly gets used to mediocrity.
Keeping your word is more than fulfilling promises. It is sealing, with the mouth, what the body must later uphold. Every time a man commits to something, he creates a moral debt with his own dignity. And to break that commitment is to declare bankruptcy — not in the eyes of others, but in front of himself.
And debts? Even the small ones — or especially the small ones — are precise thermometers of character. A forgotten sum, an unpaid favor, a commitment left behind… all of these reveal the structure of the inner building that man resides in. He who neglects the small is merely rehearsing for his future collapse.
Life, contrary to what the reckless say, is not built on grand deeds. It is built with small bricks, laid with almost obsessive precision. The truly great man is the one who respects the details — recognizing in them a code of conduct.
In Sicily, especially in the streets of Palermo, I learned early on that there is more nobility in paying a five-euro debt on time than in flaunting riches gained without word, without honor, without dignity.
As they say in Palermo: L’uomo si conosce dalle piccole cose.
So, amico mio, Don’t talk to me about greatness if you can’t show up on time. Don’t talk to me about respect if your word is fickle. And above all, don’t talk to me about honor if you still owe what you once promised — no matter how small.
Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ d34e832d:383f78d0
2025-04-26 14:33:06
Gist
This Idea presents a blueprint for creating a portable, offline-first education server focused on Free and Open Source Software (FOSS) topics like Bitcoin fundamentals, Linux administration, GPG encryption, and digital self-sovereignty. Using the compact and powerful Nookbox G9 NAS unit, we demonstrate how to deliver accessible, decentralized educational content in remote or network-restricted environments.
1. Bitcoin, Linux, and Cryptographic tools
Access to self-sovereign technologies such as Bitcoin, Linux, and cryptographic tools is critical for empowering individuals and communities. However, many areas face internet connectivity issues or political restrictions limiting access to online resources.
By combining a high-performance mini NAS server with a curated library of FOSS educational materials, we can create a mobile "university" that delivers critical knowledge independently of centralized networks.
2. Hardware Platform: Nookbox G9 Overview
The Nookbox G9 offers an ideal balance of performance, portability, and affordability for this project.
2.1 Core Specifications
| Feature | Specification |
|:------------------------|:---------------------------------------|
| Form Factor | 1U Rackmount mini-NAS |
| Storage | Up to 8TB (4×2TB M.2 NVMe SSDs) |
| M.2 Interface | PCIe Gen 3x2 per drive slot |
| Networking | Dual 2.5 Gigabit Ethernet ports |
| Power Consumption | 11–30 Watts (typical usage) |
| Default OS | Windows 11 (to be replaced with Linux) |
| Linux Compatibility | Fully compatible with Ubuntu 24.10 |
3. FOSS Education Server Design
3.1 Operating System Setup
- Replace Windows 11 with a clean install of Ubuntu Server 24.10.
- Harden the OS:
- Enable full-disk encryption.
- Configure UFW firewall.
- Disable unnecessary services.
3.2 Core Services Deployed
| Service | Purpose |
|:--------------------|:-----------------------------------------|
| Nginx Web Server | Host offline courses and documentation |
| Nextcloud (optional) | Offer private file sharing for students |
| Moodle LMS (optional) | Deliver structured courses and quizzes |
| Tor Hidden Service | Optional for anonymous access locally |
| rsync/Syncthing | Distribute updates peer-to-peer |
3.3 Content Hosted
- Bitcoin: Bitcoin Whitepaper, Bitcoin Core documentation, Electrum Wallet tutorials.
- Linux: Introduction to Linux (LPIC-1 materials), bash scripting guides, system administration manuals.
- Cryptography: GPG tutorials, SSL/TLS basics, secure communications handbooks.
- Offline Tools: Full mirrors of sites like LearnLinux.tv, Bitcoin.org, and selected content from FSF.
All resources are curated to be license-compliant and redistributable in an offline format.
4. Network Configuration
- LAN-only Access: No reliance on external Internet.
- DHCP server setup for automatic IP allocation.
- Optional Wi-Fi access point using a USB Wi-Fi dongle and `hostapd` (see the sketch after this list).
- Access Portal: Homepage automatically redirects users to educational content upon connection.
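For the optional Wi-Fi access point, a minimal generator like the following can write the hostapd configuration. The interface name, SSID, channel, and passphrase are placeholders, and the exact set of options you need may differ for your Wi-Fi dongle; treat it as a starting point rather than a hardened config.

```python
from pathlib import Path

hostapd_conf = """\
interface=wlan0
driver=nl80211
ssid=FOSS-Education
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=change-this-passphrase
"""

# Typically installed to /etc/hostapd/hostapd.conf (requires root)
Path("hostapd.conf").write_text(hostapd_conf)
print("Wrote hostapd.conf; copy it to /etc/hostapd/ and restart hostapd")
```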
5. Advantages of This Setup
| Feature | Advantage |
|:-----------------------|:----------------------------------------|
| Offline Capability | Operates without internet connectivity |
| Portable Form Factor | Fits into field deployments easily |
| Secure and Hardened | Encrypted, compartmentalized, and locked down |
| Modular Content | Easy to update or expand educational resources |
| Energy Efficient | Low power draw enables solar or battery operation |
| Open Source Stack | End-to-end FOSS ecosystem, no vendor lock-in |
6. Deployment Scenarios
- Rural Schools: Provide Linux training without requiring internet.
- Disaster Recovery Zones: Deliver essential technical education in post-disaster areas.
- Bitcoin Meetups: Offer Bitcoin literacy and cryptography workshops in remote communities.
- Privacy Advocacy Groups: Teach operational security practices without risking network surveillance.
7. Performance Considerations
Despite PCIe Gen 3x2 limitations, the available bandwidth (~2GB/s theoretical) vastly exceeds the server's 2.5 Gbps network output (~250MB/s), making it more than sufficient for a read-heavy educational workload.
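A quick back-of-the-envelope check of that claim, with the usual caveat that protocol overhead pushes real-world numbers below the theoretical figures:

```python
# Theoretical ceilings, ignoring protocol overhead
pcie3_lane_gbit = 8.0 * 128 / 130          # 8 GT/s with 128b/130b encoding ~= 7.88 Gbit/s
pcie3_x2_gbyte = 2 * pcie3_lane_gbit / 8   # two lanes, in GB/s        ~= 1.97 GB/s
net_mbyte = 2.5 * 1000 / 8                 # 2.5 GbE, in MB/s          ~= 312 MB/s raw

print(f"PCIe 3.0 x2 : ~{pcie3_x2_gbyte:.2f} GB/s per drive slot")
print(f"2.5 GbE     : ~{net_mbyte:.0f} MB/s raw (roughly 250-280 MB/s after overhead)")
```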
Thermal Management:
Given the G9’s known cooling issues, install additional thermal pads or heatsinks on the NVMe drives. Consider external USB-powered cooling fans for sustained heavy usage.
8. Ways To Extend
- Multi-language Support: Add localized course materials.
- Bitcoin Node Integration: Host a lightweight Bitcoin node (e.g., Bitcoin Core with pruning enabled or a complete full node) for educational purposes.
- Mesh Networking: Use Mesh Wi-Fi protocols (e.g., cjdns or Yggdrasil) to allow peer-to-peer server sharing without centralized Wi-Fi.
9. Consider
Building a Portable FOSS Education Server on a Nookbox G9 is a practical, scalable solution for democratizing technical knowledge, empowering communities, and defending digital sovereignty in restricted environments.
Through thoughtful system design—leveraging open-source software and secure deployment practices—we enable resilient, censorship-resistant education wherever it's needed.
📎 References
-
@ 6e0ea5d6:0327f353
2025-04-14 15:10:58
Ascolta bene.
A man’s collapse never begins on the battlefield.
It begins in the invisible antechamber of his own mind.
Before any public fall, there is an ignored internal whisper—
a small, quiet, private decision that gradually drags him toward ruin.
No empire ever fell without first rotting from within.
The world does not destroy a man who hasn’t first surrendered to himself.
The enemy outside only wins when it finds space in the void the man has silently carved.
**Non ti sbagliare** — there are no armies more ruthless than undisciplined thoughts.
There are no blows more fatal than the ones we deal ourselves:
with small concessions, well-crafted excuses,
and the slow deterioration of our integrity.
What people call failure is nothing more than the logical outcome
of a sequence of internal betrayals.Afraid of the world? Sciocchezze.
But a man who’s already bowed before his own weaknesses—
he needs no enemies.
He digs his own grave, chooses the epitaph,
and the only thing the world does is toss in some dirt.Capisci?
Strength isn’t the absence of falling, but the presence of resistance.
The true battle isn’t external.
It takes place within—where there’s only you, your conscience, and the mirror.
And it’s in that silent courtroom where everything is decided.The discipline to say “no” to yourself
is more noble than any public glory.
Self-control is more valuable than any victory over others.In Sicily, we learn early:
“Cu s’abbrazza cu’ so’ nemicu, si scorda la faccia di l’amicu.”
He who embraces his enemy forgets the face of his friend.
The most dangerous enemy is the one you feed daily with self-indulgence.
And the most relentless confrontation is the one you avoid in front of the mirror.So don’t talk to me about external defeats.
Tell me where inside you the weakness began.
Tell me the exact moment you abandoned what you believed in, in the name of ease.
Because a man only falls before the world… after falling before himself.Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ 266815e0:6cd408a5
2025-04-26 13:10:09
To all existing nostr developers and new nostr developers: stop using kind 1 events... just stop whatever you're doing and switch the kind to `Math.round(Math.random() * 10000)`, trust me it will be better.
What are kind 1 events
kind 1 events are defined in NIP-10 as "simple plaintext notes" or in other words social posts.
Don't trick your users
Most users are joining nostr for the social experience, and secondly to find all the cool "other stuff" apps. They find friends, browse social posts, and reply to them. If a user signs into a new nostr client and it starts asking them to sign kind 1 events with blobs of JSON, they will sign them without thinking too much about it.
Then when they return to their comfy social apps they will see that they made 10+ posts with massive amounts of gibberish that they don't remember posting. then they probably will go looking for the delete button and realize there isn't one...
Even if those kind 1 posts don't contain JSON and have a nice fancy human readable syntax. they will still confuse users because they won't remember writing those social posts
What about "discoverability"
If your goal is to make your "other stuff" app visible to more users, then I would suggest using NIP-19 and NIP-89. The first allows users to embed any other event kind into social posts as `nostr:nevent1` or `nostr:naddr1` links, and the second allows social clients to redirect users to an app that knows how to handle that specific kind of event.
So instead of saving your app's data into kind 1 events, you can pick any kind you want, then give users a "share on nostr" button that allows them to compose a social post (kind 1) with a `nostr:` link to your special kind of event, and by extension your app.
Why it's a trap
Once users start using your app it becomes a lot more difficult to migrate to a new event kind or data format. This sounds obvious, but If your app is built on kind 1 events that means you will be stuck with their limitations forever.
For example, here are some of the limitations of using kind 1:
- Querying for your app's data becomes much more difficult. You have to filter through all of a user's kind 1 events to find which ones are created by your app
- Discovering your app's data is more difficult for the same reason; you have to sift through all the social posts just to find the ones with your special tag or that contain JSON
- Users get confused. As mentioned above, users don't expect "other stuff" apps to be creating special social posts
- Other nostr clients won't understand your data and will show it as a social post with no option for users to learn about your app
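To make the "share on nostr" suggestion above concrete, here is a rough Python sketch of a kind 1 post that only carries a human-readable blurb plus a NIP-19 link to the app's own custom-kind event. The app name and the naddr string are placeholders; the NIP-19 encoding itself would come from a nostr library rather than being typed by hand.

```python
import json
import time

# Placeholder NIP-19 pointer to the app's own event (encoded elsewhere by a nostr library)
naddr = "naddr1exampleplaceholder"

share_post = {
    "kind": 1,                          # a plain social post, nothing app-specific
    "created_at": int(time.time()),
    "content": (
        "I just published a new recipe in MyCookbookApp!\n"
        f"nostr:{naddr}"                # clients that support NIP-89 can open the right app
    ),
    "tags": [],
    # id, pubkey and sig would be filled in when the user signs the post
}

print(json.dumps(share_post, indent=2))
```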
-
@ c21b1a6c:0cd4d170
2025-04-14 14:41:20
🧾 Progress Report Two
Hey everyone! I’m back with another progress report for Formstr, a part of the now completed grant from nostr:npub10pensatlcfwktnvjjw2dtem38n6rvw8g6fv73h84cuacxn4c28eqyfn34f . This update covers everything we’ve built since the last milestone — including polish, performance, power features, and plenty of bug-squashing.
🏗️ What’s New Since Last Time?
This quarter was less about foundational rewrites and more about production hardening and real-world feedback. With users now onboard, our focus shifted to polishing UX, fixing issues, and adding new features that made Formstr easier and more powerful to use.
✨ New Features & UX Improvements
- Edit Existing Forms
- Form Templates
- Drag & Drop Enhancements (especially for mobile)
- New Public Forms UX (card-style layout)
- FAQ & Support Sections
- Relay Modal for Publishing
- Skeleton Loaders and subtle UI Polish
🐛 Major Bug Fixes
- Fixed broken CSV exports when responses were empty
- Cleaned up mobile rendering issues for public forms
- Resolved blank.ts export issues and global form bugs
- Fixed invalid `npub` strings in the admin flow
- Patched response handling for private forms
- Lots of small fixes for titles, drafts, embedded form URLs, etc.
🔐 Access Control & Privacy
- Made forms private by default
- Fixed multiple issues around form visibility, access control UIs, and anonymous submissions
- Improved detection of pubkey issues in shared forms
🚧 Some Notable In-Progress Features
The following features are actively being developed, and many are nearing completion:
-
Conditional Questions:
This one’s been tough to crack, but we’re close!
Work in progress bykeraliss
and myself:
👉 PR #252 -
Downloadable Forms:
Fully-contained downloadable HTML versions of forms.
Being led bycasyazmon
with initial code by Basanta Goswami
👉 PR #274 -
OLLAMA Integration (Self-Hosted LLMs):
Users will be able to create forms using locally hosted LLMs.
PR byashu01304
👉 PR #247 -
Sections in Forms:
Work just started on adding section support!
Small PoC PR bykeraliss
:
👉 PR #217
🙌 Huge Thanks to New Contributors
We've had amazing contributors this cycle. Big thanks to:
- Aashutosh Gandhi (ashu01304) – drag-and-drop enhancements, OLLAMA integration
- Amaresh Prasad (devAmaresh) – fixed npub and access bugs
- Biresh Biswas (Billa05) – skeleton loaders
- Shashank Shekhar Singh (Shashankss1205) – bugfixes, co-authored image patches
- Akap Azmon Deh-nji (casyazmon) – CSV fixes, downloadable forms
- Manas Ranjan Dash (mdash3735) – bug fixes
- Basanta Goswami – initial groundwork for downloadable forms
- keraliss – ongoing work on conditional questions and sections
We also registered for the Summer of Bitcoin program and have been receiving contributions from some incredibly bright new applicants.
🔍 What’s Still Coming?
From the wishlist I committed to during the grant, here’s what’s still in the oven:
- [x] Upgrade to nip-44
- [x] Access Controlled Forms: A form will be able to have multiple Admins and Editors.
- [x] Private Forms and Fixed Participants: Encrypt a form and only allow certain npubs to fill it.
- [x] Edit Past Forms: Being able to edit an existing form.
- [ ] Conditional Rendering (in progress)
- [ ] Sections (just started)
- [ ] Integrations - OLLAMA / AI-based Form Generation (near complete)
- [ ] Paid Surveys
- [ ] NIP-42 Private Relay support
❌ What’s De-Prioritized?
- Nothing is de-prioritized now especially since Ollama Integration got re-prioritized (thanks to Summer Of Bitcoin). We are a little delayed on Private Relays support but it's now becoming a priority and in active development. Zap Surveys will be coming soon too.
💸 How Funds Were Used
- Paid individual contributors for their work.
- Living expenses to allow full-time focus on development
🧠 Closing Thoughts
Things feel like they’re coming together now. We’re out of "beta hell", starting to see real adoption, and most importantly, gathering feedback from real users. That’s helping us make smarter choices and move fast without breaking too much.
Stay tuned for the next big drop — and in the meantime, try creating a form at formstr.app, and let me know what you think!
-
@ d1667293:388e7004
2025-04-29 16:00:19
The "Bitcoindollar" system—an emerging term which describes the interplay of U.S. dollar-denominated stablecoins and Bitcoin as complementary forces in the evolving monetary framework of the digital era (and which replaces the defunct Petrodollar system)—has sparked an interesting debate on Nostr with PowMaxi.
You will find the thread links at the bottom of this article.
Powmaxi argues that attempting to merge hard money (Bitcoin) with soft money (the U.S. dollar) is structurally doomed, because the systems are inherently contradictory and cannot coexist without one eventually destroying the other.
This critique is certainly valid, but ONLY if the Bitcoindollar is viewed as a final system. But I never claim that. To the contrary, the conclusion in my book is that this is a system that buys time for fiat, absorbs global demand for monetary stability, and ushers in a Bitcoinized world without the immediate collapse and the reset of the fiat system which would otherwise cause dramatic consequences. The Bitcoindollar is the only way to a gradual Bitcoin dominance in 10-20 years time while avoiding sudden collapse of the fiat system, so that also the power elites who hold the keys to this system can adapt.\ At least this is my hope.
Therefore the "fusion" isn't the future. The siphoning is. And the U.S. may try to ride it as long as possible. The Bitcoindollar system is a transitional strategic framework, not a\ permanent monetary equilibrium. In the end I agree with PowMaxi.
His detailed critique deserves an equally detailed analysis. Here's how the objections break down and why they don’t necessarily undermine the Bitcoindollar system.
1. Hard Money vs. Soft Money: Opposed Systems?
Objection: Bitcoin is a closed, decentralized system with a fixed supply; the dollar is an open, elastic system governed by central banks and political power. These traits are mutually exclusive and incompatible.
Response: Ideologically, yes. Practically, no. Hybrid financial systems are not uncommon. Bitcoin and stablecoins serve different user needs: Bitcoin is a store of value; stablecoins are mediums of exchange. Their coexistence mirrors real-world economic needs. The contradiction can be managed, and is not fatal at least for the transitional phase.
2. Scarcity vs. Elasticity: Economic Incompatibility?
Objection: Bitcoin can’t inject liquidity in crises; fiat systems can. Anchoring fiat to Bitcoin removes policymakers' tools.
Response: Correct — but that’s why Bitcoin is held as a reserve, not used as the primary medium of exchange in the Bitcoindollar model. Fiat-based liquidity mechanisms still function via stablecoins, while Bitcoin acts as a counterweight to long-term monetary debasement. The system’s strength is in its optionality: you don’t have to use Bitcoin until you want an exit ramp from fiat.
3. No Stable Equilibrium: One Must Win?
Objection: The system will destabilize. Either Bitcoin undermines fiat or fiat suppresses Bitcoin.
Response: Not necessarily in this transitional phase. The “conflict” isn’t between tools — it’s between control philosophies. The dollar won’t disappear overnight, and Bitcoin isn’t going away. The likely outcome is a gradual shifting of savings and settlement layers to Bitcoin, while fiat continues to dominate day-to-day payments and credit markets — until Bitcoin becomes structurally better in both.
4. Gresham’s and Thiers’ Law: Hollowing Fiat?
Objection: People save in Bitcoin and spend fiat, eroding fiat value.
Response: Yes — and that’s been happening since 2009. But this isn’t a flaw; it’s a transition mechanism. The Bitcoindollar model recognizes this and creates a bridge: it monetizes U.S. debt while preserving access to hard money. In the long run, my expectation is that naturally bitcoin will prevail both as a SOV and currency, but until then, stablecoins and T-bill-backed tokens serve useful roles in the global economy.
5. Philosophical Incompatibility?
Objection: Bitcoin prioritizes individual sovereignty; fiat systems are hierarchical. They can't be reconciled.
Response: They don’t need to be reconciled ideologically to function in parallel. Users choose the tool that suits their needs. One empowers individual autonomy; the other offers state-backed convenience. This is a competition of values, not a mechanical incompatibility. The Bitcoindollar model is a strategy. It’s a bridge between old and new systems, not a permanent coexistence.
6. Fusion is Impossible?
Objection: It’s only a temporary bridge. One side must lose.
Response: Exactly. The Bitcoindollar system is a transitional bridge. But that doesn’t reduce its value. It provides a functional pathway for individuals, companies, and governments to gradually exit broken monetary systems and experiment with new models.
In the meantime, the U.S. benefits from stablecoin-driven Treasury demand, while Bitcoin continues to grow as a global reserve asset.
Bottom line: A Strategic Convergence, Not a Permanent Fusion
The Bitcoindollar system isn’t a contradiction. It’s a convergence zone. It reflects the reality that monetary systems evolve gradually, not cleanly. Bitcoin and fiat will compete, overlap, and influence each other. Eventually, yes — hard money wins. But until then, hybrid systems offer powerful stepping stones.
Thread links:
Thread started from this initial post.
-
@ 0b118e40:4edc09cb
2025-04-13 02:46:36
Note: I wrote this before the global trade war, back when tariffs only affected China, Mexico, and Canada. But you will still get the gist of it.
During tough economic times, governments have to decide if they should open markets to global trade or protect local businesses with tariffs. The United States has swung between these two strategies, and history shows that the results are never straightforward
Just days ago, President Donald Trump imposed tariffs on imports from Canada, Mexico, and China. He framed these tariffs (25% on most Canadian goods, 10% on Canadian energy, 25% on Mexican imports, and 10% on Chinese imports) as a way to protect American industries.
But will they actually help, or could they backfire?
A History of U.S. Tariffs
Many have asked if countries will retaliate against the US. They can and they have. Once upon a time, 60 countries were so pissed off at the US, they retaliated at one go and crushed US dominance over trade.
This was during the Great Depression era in the 1930s when the government passed the Smoot-Hawley Tariff Act, placing high taxes on over 20,000 foreign goods. The goal was to protect American jobs, especially American farmers and manufacturers, but it backfired so badly.
Over 60 countries, including Canada, France, and Germany, retaliated by imposing their own tariffs. By 1933, US imports and exports both dropped significantly over 60%, and unemployment rose to 25%.
After President Franklin Roosevelt came to office, he implemented the Reciprocal Trade Agreements Act of 1934 to reverse these policies, calming the world down and reviving trade again.
The economist history of protectionism
The idea of shielding local businesses with tariffs isn’t new or recent. It's been around for a few centuries. In the 16th to 18th centuries, mercantilism encouraged countries to limit imports and boost exports.
In the 18th century, Adam Smith, in The Wealth of Nations, argued that free trade allows nations to specialize in what they do best countering protectionism policies. Friedrich List later challenged Smith's view by stating that developing countries need some protection to grow their “infant” industries which is a belief that still influences many governments today.
But how often do governments truly support startups and new small businesses in ways that create real growth, rather than allowing funds to trickle down to large corporations instead?
In modern times, John Maynard Keynes supported government intervention during economic downturns, while Milton Friedman championed free trade and minimal state interference.
Paul Krugman argued that limited protectionism can help large industries by providing them unfair advantages to become global market leaders. I have deep reservations about Krugman’s take, particularly on its impact or lack thereof in globalizing small businesses.
The debate between free trade and protectionism has existed for centuries. What’s clear is that there is no one-size-fits-all model to this.
The Political Debate - left vs right
Both the left and right have used tariffs but for different reasons. The right supports tariffs to protect jobs and industries, while the left uses them to prevent multinational corporations from exploiting cheap labor abroad.
Neoliberal policies favor free trade, arguing that competition drives efficiency and growth. In the US this gets a little bit confusing, as liberals are tied to the left while free trade is tied to libertarianism, which the right aligns closely with; yet at present right-wing politicians push for protectionism, which crosses the boundaries of free trade.
There are also institutions like the WTO and IMF that advocate for open markets, but their policies often reflect political alliances and preferential treatment - so it depends on what you define as true 'free trade'.
Who Really Benefits from Tariffs?
Most often, tariffs help capital-intensive industries like pharmaceuticals, tech, and defense, while hurting labor-intensive sectors like manufacturing, agriculture, and construction.
This worsens inequality as big corporations will thrive, while small businesses and working-class people struggle with rising costs and fewer job opportunities.
I’ve been reading through international trade economics out of personal interest, and I'll share some models below on why this is the case.
1. The Disruption of Natural Trade
Tariffs disrupt the natural flow of trade. The Heckscher-Ohlin model explains that countries export goods that match their resources, like Canada’s natural energy resources or China’s labour-intensive textiles and electronics. When tariffs block this natural exchange, industries suffer.
A clear example was Europe’s energy crisis during the Russia-Ukraine war. By abruptly cutting themselves off from the supply of Russian energy, Europe scrambled to find alternative sources. In the end, it was the people who had to bear the brunt of skyrocketing prices of energy.
2. Who wins and who loses?
The Stolper-Samuelson theorem helps us understand who benefits from tariffs and who loses. The idea behind it is that tariffs benefit capital-intensive industries, while labor-intensive sectors are hurt.
In the US, small manufacturing industries that rely on low-cost imports of intermediate parts from countries like China and Mexico will face rising costs, making their final goods too expensive and less competitive. This is similar to what happened in Argentina, where subsidies and the devaluation of the peso contributed to cost-push inflation, making locally produced goods more expensive and less competitive globally.
This also reminded me of the decline of the US Rust Belt during the 1970s and 1980s, where the outsourcing of labour-intensive manufacturing jobs led to economic stagnation in many regions in the Midwest, while capital-intensive sectors flourished on the coasts. It resulted in significantly high income inequality that has not improved over the last 40 years.
Ultimately the cost of economic disruption is disproportionately borne by smaller businesses and low-skilled workers. At the end of the day, the rich get richer and the poor get poorer.
3. Delays in Economic Growth
The Rybczynski theorem suggests that economic growth depends on how efficiently nations reallocate their resources toward capital- or labor-intensive industries. But tariffs can distort this transition and progress.
In the 70s and 80s, the US steel industry faced competition from Japan and Germany, which modernized their production methods, making their steel more efficient and cost-effective. Instead of prioritizing innovation, many U.S. steel producers relied on tariffs and protectionist measures to shield themselves from foreign competition. This helped for a bit, but over time American steelmakers lost global market share as foreign competitors continued to produce better, cheaper steel. Other factors, such as aging infrastructure and economic shifts toward a service-based economy, further contributed to the industry's decline.
A similar struggle is seen today with China’s high-tech ambitions. Tariffs on Chinese electronics and technology products limit access to key inputs, such as semiconductors and advanced robotics. While China continues its push for automation and AI-driven manufacturing, these trade barriers increase costs and disrupt supply chains, forcing China to accelerate its decoupling from Western markets. This shift could further strengthen alliances within BRICS, as China seeks alternative trade partnerships to reduce reliance on U.S.-controlled financial and technological ecosystems.
Will the current tariff imposition backfire and isolate the US like it did a hundred years ago, or fifty years ago? Is the US risking its position as a trusted economic leader? Only time will tell.
The impact of tariffs on innovation - or lack thereof
While the short-term impacts of tariffs often include higher consumer prices and job losses, the long-term effects can be even more damaging, as they discourage innovation by increasing costs and reducing competition.
Some historical examples globally:
- Nigeria: Blocking rice imports opened up a black market, born out of desperation to survive.
- Brazil: Protectionist car policies led to expensive, outdated vehicles.
- Malaysia’s Proton: Sheltered by tariffs and cronyism, it failed to compete globally.
- India (before 1991): Over-regulation limited industries until economic reforms allowed for growth.
- Soviet Union during the Cold War: Substandard products and minimal innovation due to the absence of foreign alternatives, leading to economic stagnation.
On the flip side, Vietnam has significantly reduced protectionism policies by actively pursuing free trade agreements. This enabled it to become a key manufacturing hub. But Vietnam is not stopping there as it is actively pushing forward its capital-intensive growth by funding entrepreneurs.
The Future of U.S. Tariffs
History has shown that tariffs rarely deliver their intended benefits without unintended consequences. While they may provide temporary relief, they often raise prices, shrink job opportunities, and weaken industries in the long run.
Without a clear strategy for innovation and industrial modernization, the U.S. risks repeating past mistakes of isolating itself from global trade rather than strengthening its economy.
At this point, only time will tell whether these tariffs will truly help Americans or whether they will, once again, make the rich richer and the poor poorer.
-
@ 3b3a42d3:d192e325
2025-04-10 08:57:51Atomic Signature Swaps (ASS) over Nostr is a protocol for atomically exchanging Schnorr signatures using Nostr events for orchestration. This new primitive enables multiple interesting applications like:
- Getting paid to publish specific Nostr events
- Issuing automatic payment receipts
- Contract signing in exchange for payment
- P2P asset exchanges
- Trading and enforcement of asset option contracts
- Payment in exchange for Nostr-based credentials or access tokens
- Exchanging GMs 🌞
It only requires that (i) the involved signatures be Schnorr signatures using the secp256k1 curve and that (ii) at least one of those signatures be accessible to both parties. These requirements are naturally met by Nostr events (published to relays), Taproot transactions (published to the mempool and later to the blockchain), and Cashu payments (using mints that support NUT-07), allowing any pair of these signatures to be swapped atomically.
How the Cryptographic Magic Works 🪄
This is a Schnorr signature (Zₓ, s):
s = z + H(Zₓ || P || m)⋅k
If you haven't seen it before, don't worry, neither did I until three weeks ago.
The signature scalar s is the value a signer with private key k (and public key P = k⋅G) must calculate to prove his commitment over the message m, given a randomly generated nonce z (Zₓ is just the x-coordinate of the public point Z = z⋅G). H is a hash function (sha256 with the tag "BIP0340/challenge" when dealing with BIP340), || just means to concatenate, and G is the generator point of the elliptic curve, used to derive public values from private ones.
Now that you understand what this equation means, let's just rename z = r + t. We can do that: z is just a randomly generated number that can be represented as the sum of two other numbers. It also follows that z⋅G = r⋅G + t⋅G ⇔ Z = R + T. Putting it all back into the definition of a Schnorr signature we get:
s = (r + t) + H((R + T)ₓ || P || m)⋅k
Which is the same as:
s = sₐ + t, where sₐ = r + H((R + T)ₓ || P || m)⋅k
sₐ is what we call the adaptor signature scalar, and t is the secret. ((R + T)ₓ, sₐ) is an incomplete signature that only becomes valid by adding the secret t to sₐ:
s = sₐ + t
What is also important for our purposes is that by getting access to the valid signature s, one can also extract t from it by just subtracting sₐ:
t = s - sₐ
The specific value of t depends on our choice of the public point T, since R is just a public point derived from a randomly generated nonce r.
So how do we choose T so that it requires the secret t to be the signature over a specific message m' by a specific public key P' (without knowing the value of t)?
Let's start with the definition of t as a valid Schnorr signature by P' over m':
t = r' + H(R'ₓ || P' || m')⋅k' ⇔ t⋅G = r'⋅G + H(R'ₓ || P' || m')⋅k'⋅G
That is the same as:
T = R' + H(R'ₓ || P' || m')⋅P'
Notice that in order to calculate the appropriate T that requires t to be a specific signature scalar, we only need to know the public nonce R' used to generate that signature.
In summary: in order to atomically swap Schnorr signatures, one party P' must provide a public nonce R', while the other party P must provide an adaptor signature using that nonce:
sₐ = r + H((R + T)ₓ || P || m)⋅k, where T = R' + H(R'ₓ || P' || m')⋅P'
P' (the nonce provider) can then add his own signature t to the adaptor signature sₐ in order to get a valid signature by P, i.e. s = sₐ + t. When he publishes this signature (as a Nostr event, Cashu transaction or Taproot transaction), it becomes accessible to P, who can now extract the signature t by P' and also make use of it.
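To make the algebra concrete, here is a minimal Python sketch of the whole flow (nonce sharing, adaptor creation, adaptor verification, completion and secret extraction) over secp256k1. It is only a toy illustration of the equations above: it uses a plain sha256 challenge instead of the BIP340 tagged hash, ignores point-parity handling, and every function and variable name is mine rather than part of any specification.

```python
import hashlib
import secrets

# secp256k1 parameters
p = 2**256 - 2**32 - 977  # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(A, B):
    """Affine point addition; None represents the point at infinity."""
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0: return None
    if A == B:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, point=G):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1: result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

def challenge(Rx, pub, msg):
    """H(Rx || P || m) as an integer mod n (plain sha256, not the BIP340 tagged hash)."""
    data = Rx.to_bytes(32, "big") + pub[0].to_bytes(32, "big") + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# P' (the nonce provider) prepares his signature over m' but only shares the nonce R'
k_prime = secrets.randbelow(n - 1) + 1                  # P' private key
P_prime = ec_mul(k_prime)                               # P' public key
m_prime = b"message m' to be signed by P'"
r_prime = secrets.randbelow(n - 1) + 1                  # nonce r'
R_prime = ec_mul(r_prime)
t = (r_prime + challenge(R_prime[0], P_prime, m_prime) * k_prime) % n  # the secret

# P computes T = R' + H(R'x || P' || m')*P' from public data and builds the adaptor
k_p = secrets.randbelow(n - 1) + 1                      # P private key
P_p = ec_mul(k_p)
m_p = b"message m to be signed by P"
T = ec_add(R_prime, ec_mul(challenge(R_prime[0], P_prime, m_prime), P_prime))
r = secrets.randbelow(n - 1) + 1                        # nonce r
R = ec_mul(r)
c = challenge(ec_add(R, T)[0], P_p, m_p)                # H((R + T)x || P || m)
s_a = (r + c * k_p) % n                                 # adaptor signature scalar

# P' checks the adaptor before using it: s_a*G must equal R + c*P
assert ec_mul(s_a) == ec_add(R, ec_mul(c, P_p))

# P' completes and publishes the signature s = s_a + t, valid as ((R + T)x, s)
s = (s_a + t) % n
assert ec_mul(s) == ec_add(ec_add(R, T), ec_mul(c, P_p))

# P extracts t = s - s_a, which is a valid Schnorr signature by P' over m'
t_extracted = (s - s_a) % n
assert t_extracted == t
assert ec_mul(t_extracted) == ec_add(R_prime, ec_mul(challenge(R_prime[0], P_prime, m_prime), P_prime))
print("swap complete: both signatures are now known to both parties")
```

A production implementation would additionally have to follow the even-Y parity conventions of BIP340 for the key and nonce points, which is the main detail this toy glosses over.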
Important considerations
A signature may not be useful at the end of the swap if it unlocks funds that have already been spent, or that are vulnerable to fee bidding wars.
When a swap involves a Taproot UTXO, it must always use a 2-of-2 multisig timelock to avoid those issues.
Cashu tokens do not require this measure when their signature is revealed first, because the mint won't reveal the other signature if the tokens can't be successfully claimed, but they also require a 2-of-2 multisig timelock when their signature is only revealed last (which is unavoidable in Cashu-for-Cashu swaps).
For Nostr events, whoever receives the signature first needs to publish it to at least one relay that is accessible by the other party. This is a reasonable expectation in most cases, but may be an issue if the event kind involved is meant to be used privately.
How to Orchestrate the Swap over Nostr?
Before going into the specific event kinds, it is important to recognize what are the requirements they must meet and what are the concerns they must address. There are mainly three requirements:
- Both parties must agree on the messages they are going to sign
- One party must provide a public nonce
- The other party must provide an adaptor signature using that nonce
There is also a fundamental asymmetry in the roles of both parties, resulting in the following significant downsides for the party that generates the adaptor signature:
- NIP-07 and remote signers do not currently support the generation of adaptor signatures, so he must either insert his nsec in the client or use a fork of another signer
- There is an overhead of retrieving the completed signature containing the secret, either from the blockchain, mint endpoint or finding the appropriate relay
- There is risk he may not get his side of the deal if the other party only uses his signature privately, as I have already mentioned
- There is risk of losing funds by not extracting or using the signature before its timelock expires. The other party has no risk since his own signature won't be exposed by just not using the signature he received.
The protocol must meet all those requirements, allowing for some kind of role negotiation while trying to reduce the number of hops needed to complete the swap.
Swap Proposal Event (kind:455)
This event enables a proposer and his counterparty to agree on the specific messages whose signatures they intend to exchange. The `content` field is the following stringified JSON:
```json
{
  "give": <signature spec (required)>,
  "take": <signature spec (required)>,
  "exp": <expiration timestamp (optional)>,
  "role": "<adaptor | nonce (optional)>",
  "description": "<Info about the proposal (optional)>",
  "nonce": "<Signature public nonce (optional)>",
  "enc_s": "<Encrypted signature scalar (optional)>"
}
```
The field `role` indicates what the proposer will provide during the swap, either the nonce or the adaptor. When this optional field is not provided, the counterparty may decide whether he will send a nonce back in a Swap Nonce event or a Swap Adaptor event using the `nonce` (optionally) provided in the Swap Proposal, in order to avoid one hop of interaction.
The `enc_s` field may be used to store the encrypted scalar of the signature associated with the `nonce`, since this information is necessary later when completing the adaptor signature received from the other party.
A `signature spec` specifies the `type` and all necessary information for producing and verifying a given signature. In the case of signatures for Nostr events, it contains a template with all the fields, except `pubkey`, `id` and `sig`:
```json
{
  "type": "nostr",
  "template": {
    "kind": "<kind>",
    "content": "<content>",
    "tags": [ … ],
    "created_at": "<created_at>"
  }
}
```
In the case of Cashu payments, a simplified `signature spec` just needs to specify the payment amount and an array of mints trusted by the proposer:
```json
{
  "type": "cashu",
  "amount": "<amount>",
  "mint": ["<acceptable mint_url>", …]
}
```
This works when the payer provides the adaptor signature, but it still needs to be extended to also work when the payer is the one receiving the adaptor signature. In the latter case, the `signature spec` must also include a `timelock` and the derived public keys `Y` of each Cashu Proof, but for now let's just ignore this situation. It should be mentioned that the mint must be trusted by both parties and also support Token state check (NUT-07) for revealing the completed adaptor signature and P2PK spending conditions (NUT-11) for the cryptographic scheme to work.
The `tags` are:
- `"p"`, the proposal counterparty's public key (required)
- `"a"`, a `kind:30455` Swap Listing event or an application specific version of it (optional)
Forget about this Swap Listing event for now, I will get to it later...
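As an illustration only (the amount, mint URL and timestamps below are made up, not part of the spec), a proposal in which the proposer gives a Cashu payment, takes a signed `kind:1` note, and volunteers to provide the adaptor signature could carry a `content` field like this (shown pretty-printed; it is stringified in the actual event):
```json
{
  "give": {
    "type": "cashu",
    "amount": "210",
    "mint": ["https://mint.example.com"]
  },
  "take": {
    "type": "nostr",
    "template": {
      "kind": 1,
      "content": "GM nostr!",
      "tags": [],
      "created_at": 1745000000
    }
  },
  "exp": 1745100000,
  "role": "adaptor",
  "description": "210 sats via Cashu in exchange for publishing this note"
}
```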
Swap Nonce Event (kind:456) - Optional
This is an optional event for the Swap Proposal receiver to provide the public nonce of his signature when the proposal does not include a nonce or when he does not want to provide the adaptor signature due to the downsides previously mentioned. The `content` field is the following stringified JSON:
```json
{
  "nonce": "<Signature public nonce>",
  "enc_s": "<Encrypted signature scalar (optional)>"
}
```
And the `tags` must contain:
- `"e"`, a `kind:455` Swap Proposal Event (required)
- `"p"`, the counterparty's public key (required)
Swap Adaptor Event (kind:457)
The `content` field is the following stringified JSON:
```json
{
  "adaptors": [
    {
      "sa": "<Adaptor signature scalar>",
      "R": "<Signer's public nonce (including parity byte)>",
      "T": "<Adaptor point (including parity byte)>",
      "Y": "<Cashu proof derived public key (if applicable)>"
    },
    …
  ],
  "cashu": "<Cashu V4 token (if applicable)>"
}
```
And the `tags` must contain:
- `"e"`, a `kind:455` Swap Proposal Event (required)
- `"p"`, the counterparty's public key (required)
Discoverability
The Swap Listing event previously mentioned as an optional tag in the Swap Proposal may be used to find an appropriate counterparty for a swap. It allows a user to announce what he wants to accomplish, what his requirements are and what is still open for negotiation.
Swap Listing Event (kind:30455)
The `content` field is the following stringified JSON:
```json
{
  "description": "<Information about the listing (required)>",
  "give": <partial signature spec (optional)>,
  "take": <partial signature spec (optional)>,
  "examples": [<take signature spec>],
  "exp": <expiration timestamp (optional)>,
  "role": "<adaptor | nonce (optional)>"
}
```
The `description` field describes the restrictions on counterparties and signatures the user is willing to accept.
A `partial signature spec` is an incomplete `signature spec` used in `kind:455` Swap Proposal events, where omitting fields signals that they are still open for negotiation.
The `examples` field is an array of `signature specs` the user would be willing to `take`.
The `tags` are:
- `"d"`, a unique listing id (required)
- `"s"`, the status of the listing: `draft | open | closed` (required)
- `"t"`, topics related to this listing (optional)
- `"p"`, public keys to notify about the proposal (optional)
Application Specific Swap Listings
Since Swap Listings are still fairly generic, it is expected that specific use cases define new event kinds based on the generic listing. Those application specific swap listing would be easier to filter by clients and may impose restrictions and add new fields and/or tags. The following are some examples under development:
Sponsored Events
This listing is designed for users looking to promote content on the Nostr network, as well as for those who want to monetize their accounts by sharing curated sponsored content with their existing audiences.
It follows the same format as the generic Swap Listing event, but uses `kind:30456` instead.
The following new tags are included:
- `"k"`, event kind being sponsored (required)
- `"title"`, campaign title (optional)
It is required that at least one `signature spec` (`give` and/or `take`) must have `"type": "nostr"` and also contain the tag `["sponsor", "<pubkey>", "<attestation>"]` with the sponsor's public key and his signature over the signature spec without the sponsor tag as his attestation. This last requirement enables clients to disclose and/or filter sponsored events.
Asset Swaps
This listing is designed for users looking for counterparties to swap different assets that can be transferred using Schnorr signatures, like any unit of Cashu tokens, Bitcoin or other asset IOUs issued using Taproot.
It follows the same format as the generic Swap Listing event, but uses `kind:30457` instead.
It requires the following additional tags:
- `"t"`, asset pair to be swapped (e.g. `"btcusd"`)
- `"t"`, asset being offered (e.g. `"btc"`)
- `"t"`, accepted payment method (e.g. `"cashu"`, `"taproot"`)
Swap Negotiation
From finding an appropriate Swap Listing to publishing a Swap Proposal, there may be some kind of negotiation between the involved parties, e.g. agreeing on the amount to be paid by one of the parties or the exact content of a Nostr event signed by the other party. There are many ways to accomplish that and clients may implement it as they see fit for their specific goals. Some suggestions are:
- Adding `kind:1111` Comments to the Swap Listing or an existing Swap Proposal
- Exchanging tentative Swap Proposals back and forth until an agreement is reached
- Simple exchanges of DMs
- Out of band communication (e.g. Signal)
Work to be done
I've been refining this specification as I develop some proof-of-concept clients to experience its flaws and trade-offs in practice. I left the signature spec for Taproot signatures out of the current document as I still have to experiment with it. I will probably find some important orchestration issues related to dealing with 2-of-2 multisig timelocks, which also affect Cashu transactions when spent last, that may require further adjustments to what was presented here.
The main goal of this article is to find other people interested in this concept and willing to provide valuable feedback before a PR is opened in the NIPs repository for broader discussions.
References
- GM Swap - Nostr client for atomically exchanging GM notes. Live demo available here.
- Sig4Sats Script - A Typescript script demonstrating the swap of a Cashu payment for a signed Nostr event.
- Loudr - Nostr client under development for sponsoring the publication of Nostr events. Live demo available at loudr.me.
- Poelstra, A. (2017). Scriptless Scripts. Blockstream Research. https://github.com/BlockstreamResearch/scriptless-scripts
-
@ 66675158:1b644430
2025-03-23 11:39:41I don't believe in "vibe coding" – it's just the newest Silicon Valley fad trying to give meaning to their latest favorite technology, LLMs. We've seen this pattern before with blockchain, when suddenly Non Fungible Tokens appeared, followed by Web3 startups promising to revolutionize everything from social media to supply chains. VCs couldn't throw money fast enough at anything with "decentralized" (in name only) in the pitch deck. Andreessen Horowitz launched billion-dollar crypto funds, while Y Combinator batches filled with blockchain startups promising to be "Uber for X, but on the blockchain."
The metaverse mania followed, with Meta betting its future on digital worlds where we'd supposedly hang out as legless avatars. Decentralized (in name only) autonomous organizations emerged as the next big thing – supposedly democratic internet communities that ended up being the next scam for quick money.
Then came the inevitable collapse. The FTX implosion in late 2022 revealed fraud, Luna/Terra's death spiral wiped out billions (including my ten thousand dollars), while Celsius and BlockFi froze customer assets before bankruptcy.
By 2023, crypto winter had fully set in. The SEC started aggressive enforcement actions, while users realized that blockchain technology had delivered almost no practical value despite a decade of promises.
Blockchain's promises tapped into fundamental human desires – decentralization resonated with a generation disillusioned by traditional institutions. Evangelists presented a utopian vision of freedom from centralized control. Perhaps most significantly, crypto offered a sense of meaning in an increasingly abstract world, making the clear signs of scams harder to notice.
The technology itself had failed to solve any real-world problems at scale. By 2024, the once-mighty crypto ecosystem had become a cautionary tale. Venture firms quietly scrubbed blockchain references from their websites while founders pivoted to AI and large language models.
Most reading this are likely fellow bitcoiners and nostr users who understand that Bitcoin is blockchain's only valid use case. But I shared that painful history because I believe the AI-hype cycle will follow the same trajectory.
Just like with blockchain, we're now seeing VCs who once couldn't stop talking about "Web3" falling over themselves to fund anything with "AI" in the pitch deck. The buzzwords have simply changed from "decentralized" to "intelligent."
"Vibe coding" is the perfect example – a trendy name for what is essentially just fuzzy instructions to LLMs. Developers who've spent years honing programming skills are now supposed to believe that "vibing" with an AI is somehow a legitimate methodology.
This might be controversial to some, but obvious to others:
Formal, context-free grammar will always remain essential for building precise systems, regardless of how advanced natural language technology becomes
The mathematical precision of programming languages provides a foundation that human language's ambiguity can never replace. Programming requires precision – languages, compilers, and processors operate on explicit instructions, not vibes. What "vibe coding" advocates miss is that beneath every AI-generated snippet lies the same deterministic rules that have always governed computation.
LLMs don't understand code in any meaningful sense—they've just ingested enormous datasets of human-written code and can predict patterns. When they "work," it's because they've seen similar patterns before, not because they comprehend the underlying logic.
This creates a dangerous dependency. Junior developers "vibing" with LLMs might get working code without understanding the fundamental principles. When something breaks in production, they'll lack the knowledge to fix it.
Even experienced developers can find themselves in treacherous territory when relying too heavily on LLM-generated code. What starts as a productivity boost can transform into a dependency crutch.
The real danger isn't just technical limitations, but the false confidence it instills. Developers begin to believe they understand systems they've merely instructed an AI to generate – fundamentally different from understanding code you've written yourself.
We're already seeing the warning signs: projects cobbled together with LLM-generated code that work initially but become maintenance nightmares when requirements change or edge cases emerge.
The venture capital money is flowing exactly as it did with blockchain. Anthropic raised billions, OpenAI is valued astronomically despite minimal revenue, and countless others are competing to build ever-larger models with vague promises. Every startup now claims to be "AI-powered" regardless of whether it makes sense.
Don't get me wrong—there's genuine innovation happening in AI research. But "vibe coding" isn't it. It's a marketing term designed to make fuzzy prompting sound revolutionary.
Cursor perfectly embodies this AI hype cycle. It's an AI-enhanced code editor built on VS Code that promises to revolutionize programming by letting you "chat with your codebase." Just like blockchain startups promised to "revolutionize" industries, Cursor promises to transform development by adding LLM capabilities.
Yes, Cursor can be genuinely helpful. It can explain unfamiliar code, suggest completions, and help debug simple issues. After trying it for just an hour, I found the autocomplete to be MAGICAL for simple refactoring and basic functionality.
But the marketing goes far beyond reality. The suggestion that you can simply describe what you want and get production-ready code is dangerously misleading. What you get are approximations with:
- Security vulnerabilities the model doesn't understand
- Edge cases it hasn't considered
- Performance implications it can't reason about
- Dependency conflicts it has no way to foresee
The most concerning aspect is how such tools are marketed to beginners as shortcuts around learning fundamentals. "Why spend years learning to code when you can just tell AI what you want?" This is reminiscent of how crypto was sold as a get-rich-quick scheme requiring no actual understanding.
When you "vibe code" with an AI, you're not eliminating complexity—you're outsourcing understanding to a black box. This creates developers who can prompt but not program, who can generate but not comprehend.
The real utility of LLMs in development is in augmenting existing workflows:
- Explaining unfamiliar codebases
- Generating boilerplate for well-understood patterns
- Suggesting implementations that a developer evaluates critically
- Assisting with documentation and testing
These uses involve the model as a subordinate assistant to a knowledgeable developer, not as a replacement for expertise. This is where the technology adds value—as a sophisticated tool in skilled hands.
Cursor is just a better hammer, not a replacement for understanding what you're building. The actual value emerges when used by developers who understand what happens beneath the abstractions. They can recognize when AI suggestions make sense and when they don't because they have the fundamental knowledge to evaluate output critically.
This is precisely where the "vibe coding" narrative falls apart.
-
@ eac63075:b4988b48
2025-01-04 19:41:34Since its creation in 2009, Bitcoin has symbolized innovation and resilience. However, from time to time, alarmist narratives arise about emerging technologies that could "break" its security. Among these, quantum computing stands out as one of the most recurrent. But does quantum computing truly threaten Bitcoin? And more importantly, what is the community doing to ensure the protocol remains invulnerable?
The answer, contrary to sensationalist headlines, is reassuring: Bitcoin is secure, and the community is already preparing for a future where quantum computing becomes a practical reality. Let’s dive into this topic to understand why the concerns are exaggerated and how the development of BIP-360 demonstrates that Bitcoin is one step ahead.
What Is Quantum Computing, and Why Is Bitcoin Not Threatened?
Quantum computing leverages principles of quantum mechanics to perform calculations that, in theory, could exponentially surpass classical computers—and it has nothing to do with what so-called “quantum coaches” teach to scam the uninformed. One of the concerns is that this technology could compromise two key aspects of Bitcoin’s security:
- Wallets: These use elliptic curve algorithms (ECDSA) to protect private keys. A sufficiently powerful quantum computer could deduce a private key from its public key.
- Mining: This is based on the SHA-256 algorithm, which secures the consensus process. A quantum attack could, in theory, compromise the proof-of-work mechanism.
Understanding Quantum Computing’s Attack Priorities
While quantum computing is often presented as a threat to Bitcoin, not all parts of the network are equally vulnerable. Theoretical attacks would be prioritized based on two main factors: ease of execution and potential reward. This creates two categories of attacks:
1. Attacks on Wallets
Bitcoin wallets, secured by elliptic curve algorithms, would be the initial targets due to the relative vulnerability of their public keys, especially those already exposed on the blockchain. Two attack scenarios stand out:
- Short-term attacks: These occur during the interval between sending a transaction and its inclusion in a block (approximately 10 minutes). A quantum computer could intercept the exposed public key and derive the corresponding private key to redirect funds by creating a transaction with higher fees.
- Long-term attacks: These focus on old wallets whose public keys are permanently exposed. Wallets associated with Satoshi Nakamoto, for example, are especially vulnerable because they were created before the practice of using hashes to mask public keys.
We can infer a priority order for how such attacks might occur based on urgency and importance.
Bitcoin Quantum Attack: Prioritization Matrix (Urgency vs. Importance)
2. Attacks on Mining
Targeting the SHA-256 algorithm, which secures the mining process, would be the next objective. However, this is far more complex and requires a level of quantum computational power that is currently non-existent and far from realization. A successful attack would allow for the recalculation of all possible hashes to dominate the consensus process and potentially "mine" it instantly.
Satoshi Nakamoto in 2010 on Quantum Computing and Bitcoin Attacks
Recently, Narcelio asked me about a statement I made on Tubacast:
https://x.com/eddieoz/status/1868371296683511969
If an attack became a reality before Bitcoin was prepared, it would be necessary to define the last block prior to the attack and proceed from there using a new hashing algorithm. The solution would resemble the response to the infamous 2013 bug. It’s a fact that this would cause market panic, and Bitcoin's price would drop significantly, creating a potential opportunity for the well-informed.
Preferably, if developers could anticipate the threat and had time to work on a solution and build consensus before an attack, they would simply decide on a future block for the fork, which would then adopt the new algorithm. It might even rehash previous blocks (reaching consensus on them) to avoid potential reorganization through the re-mining of blocks using the old hash. (I often use the term "shielding" old transactions).
How Can Users Protect Themselves?
While quantum computing is still far from being a practical threat, some simple measures can already protect users against hypothetical scenarios:
- Avoid using exposed public keys: Ensure funds sent to old wallets are transferred to new ones that use public key hashes. This reduces the risk of long-term attacks.
- Use modern wallets: Opt for wallets compatible with SegWit or Taproot, which implement better security practices.
- Monitor security updates: Stay informed about updates from the Bitcoin community, such as the implementation of BIP-360, which will introduce quantum-resistant addresses.
- Do not reuse addresses: Every transaction should be associated with a new address to minimize the risk of repeated exposure of the same public key.
- Adopt secure backup practices: Create offline backups of private keys and seeds in secure locations, protected from unauthorized access.
BIP-360 and Bitcoin’s Preparation for the Future
Even though quantum computing is still beyond practical reach, the Bitcoin community is not standing still. A concrete example is BIP-360, a proposal that establishes the technical framework to make wallets resistant to quantum attacks.
BIP-360 addresses three main pillars:
- Introduction of quantum-resistant addresses: A new address format starting with "BC1R" will be used. These addresses will be compatible with post-quantum algorithms, ensuring that stored funds are protected from future attacks.
- Compatibility with the current ecosystem: The proposal allows users to transfer funds from old addresses to new ones without requiring drastic changes to the network infrastructure.
- Flexibility for future updates: BIP-360 does not limit the choice of specific algorithms. Instead, it serves as a foundation for implementing new post-quantum algorithms as technology evolves.
This proposal demonstrates how Bitcoin can adapt to emerging threats without compromising its decentralized structure.
Post-Quantum Algorithms: The Future of Bitcoin Cryptography
The community is exploring various algorithms to protect Bitcoin from quantum attacks. Among the most discussed are:
- Falcon: A solution combining smaller public keys with compact digital signatures. Although it has been tested in limited scenarios, it still faces scalability and performance challenges.
- Sphincs: Hash-based, this algorithm is renowned for its resilience, but its signatures can be extremely large, making it less efficient for networks like Bitcoin’s blockchain.
- Lamport: Created in 1977, it’s considered one of the earliest post-quantum security solutions. Despite its reliability, its gigantic public keys (16,000 bytes) make it impractical and costly for Bitcoin.
Two technologies show great promise and are well-regarded by the community:
- Lattice-Based Cryptography: Considered one of the most promising, it uses complex mathematical structures to create systems nearly immune to quantum computing. Its implementation is still in its early stages, but the community is optimistic.
- Supersingular Elliptic Curve Isogeny: These are very recent digital signature algorithms and require extensive study and testing before being ready for practical market use.
The final choice of algorithm will depend on factors such as efficiency, cost, and integration capability with the current system. Additionally, it is preferable that these algorithms are standardized before implementation, a process that may take up to 10 years.
Why Quantum Computing Is Far from Being a Threat
The alarmist narrative about quantum computing overlooks the technical and practical challenges that still need to be overcome. Among them:
- Insufficient number of qubits: Current quantum computers have only a few hundred qubits, whereas successful attacks would require millions.
- High error rate: Quantum stability remains a barrier to reliable large-scale operations.
- High costs: Building and operating large-scale quantum computers requires massive investments, limiting their use to scientific or specific applications.
Moreover, even if quantum computers make significant advancements, Bitcoin is already adapting to ensure its infrastructure is prepared to respond.
Conclusion: Bitcoin’s Secure Future
Despite advancements in quantum computing, the reality is that Bitcoin is far from being threatened. Its security is ensured not only by its robust architecture but also by the community’s constant efforts to anticipate and mitigate challenges.
The implementation of BIP-360 and the pursuit of post-quantum algorithms demonstrate that Bitcoin is not only resilient but also proactive. By adopting practical measures, such as using modern wallets and migrating to quantum-resistant addresses, users can further protect themselves against potential threats.
Bitcoin’s future is not at risk—it is being carefully shaped to withstand any emerging technology, including quantum computing.
-
@ 20986fb8:cdac21b3
2025-04-26 08:08:11The Traditional Hackathon: Brilliant Sparks with Limitations
For decades, hackathons have been the petri dishes of tech culture – frantic 24- or 48-hour coding marathons fueled by pizza, caffeine, and impossible optimism. From the first hackathon in 1999, when Sun Microsystems challenged Java developers to code on a Palm V in a day [1], to the all-night hack days at startups and universities, these events celebrated the hacker spirit. They gave us Facebook’s “Like” button and Chat features – iconic innovations born in overnight jams [1]. They spawned companies like GroupMe, which was coded in a few late-night hours and sold to Skype for $80 million a year later [2]. Hackathons became tech lore, synonymous with creativity unchained.
And yet, for all their electric energy and hype, traditional hackathons had serious limitations. They were episodic and offline – a once-in-a-blue-moon adrenaline rush rather than a sustainable process. A hackathon might gather 100 coders in a room over a weekend, then vanish until the next year. Low frequency, small scale, limited reach. Only those who could be on-site (often in Silicon Valley or elite campuses) could join. A brilliant hacker in Lagos or São Paulo would be left out, no matter how bright their ideas.
The outcomes of these sprint-like events were also constrained. Sure, teams built cool demos and won bragging rights. But in most cases, the projects were throwaway prototypes – “toy” apps that never evolved into real products or companies. It’s telling that studies found only about 5% of hackathon projects have any life a few months after the event [3]. Ninety-five percent evaporate – victims of that post-hackathon hangover, when everyone goes back to “real” work and the demo code gathers dust. Critics even dubbed hackathons “weekend wastedathons,” blasting their outputs as short-lived vaporware [3]. Think about it: a burst of creativity occurs, dozens of nifty ideas bloom… and then what? How many hackathon winners can you name that turned into enduring businesses? For every Carousell or EasyTaxi that emerged from a hackathon and later raised tens of millions [2], there were hundreds of clever mashups that never saw the light of day again.
The traditional hackathon model, as exciting as it was, rarely translated into sustained innovation. It was innovation in a silo: constrained by time, geography, and a lack of follow-through. Hackathons were events, not processes. They happened in a burst and ended just as quickly – a firework, not a sunrise.
Moreover, hackathons historically were insular. Until recently, they were largely run by and for tech insiders. Big tech companies did internal hackathons to juice employee creativity (Facebook’s famous all-nighters every few weeks led to Timeline and tagging features reaching a billion users [1]), and organizations like NASA and the World Bank experimented with hackathons for civic tech. But these were exceptions that proved the rule: hackathons were special occasions, not business-as-usual. Outside of tech giants, few organizations had the bandwidth or know-how to host them regularly. If you weren’t Google, Microsoft, or a well-funded startup hub, hackathons remained a novelty.
In fact, the world’s largest hackathon today is Microsoft’s internal global hackathon – with 70,000 employees collaborating across 75 countries [4] – an incredible feat, but one only a corporate titan could pull off. Smaller players could only watch and wonder.
The limitations were clear: hackathons were too infrequent and inaccessible to tap the full global talent pool, too short-lived to build anything beyond a prototype, and too isolated to truly change an industry. Yes, they produced amazing moments of genius – flashbulbs of innovation. But as a mechanism for continuous progress, the traditional hackathon was lacking. As an investor or tech leader, you might cheer the creativity but ask: Where is the lasting impact? Where is the infrastructure that turns these flashes into a steady beam of light?
In the spirit of Clay Christensen’s Innovator’s Dilemma, incumbents often dismissed hackathon projects as mere toys – interesting but not viable. And indeed, “the next big thing always starts out being dismissed as a toy” [5]. Hackathons generated plenty of toys, but rarely the support system to turn those toys into the next big thing. The model was ripe for reinvention. Why, in the 2020s, were we still innovating with a 1990s playbook? Why limit breakthrough ideas to a weekend or a single location? Why allow 95% of nascent innovations to wither on the vine? These questions hung in the air, waiting for an answer.
Hackathons 2.0 – DoraHacks and the First Evolution (2020–2024)
Enter DoraHacks. In the early 2020s, DoraHacks emerged like a defibrillator for the hackathon format, jolting it to new life. DoraHacks 1.0 (circa 2020–2024) was nothing less than the reinvention of the hackathon – an upgrade from Hackathon 1.0 to Hackathon 2.0. It took the hackathon concept, supercharged it, scaled it, and extended its reach in every dimension. The result was a global hacker movement, a platform that transformed hackathons from one-off sprints into a continuous engine for tech innovation. How did DoraHacks revolutionize the hackathon? Let’s count the ways:
From 24 Hours to 24 Days (or 24 Weeks!)
DoraHacks stretched the timeframe of hackathons, unlocking vastly greater potential. Instead of a frantic 24-hour dash, many DoraHacks-supported hackathons ran for several weeks or even months. This was a game-changer. Suddenly, teams had time to build serious prototypes, iterate, and polish their projects. A longer format meant hackathon projects could evolve beyond the rough demo stage. Hackers could sleep (occasionally!), incorporate user feedback, and transform a kernel of an idea into a working MVP. The extended duration blurred the line between a hackathon and an accelerator program – but with the open spirit of a hackathon intact. For example, DoraHacks hackathons for blockchain startups often ran 6–8 weeks, resulting in projects that attracted real users and investors by the end. The extra time turned hackathon toys into credible products. It was as if the hackathon grew up: less hack, more build (“BUIDL”). By shattering the 24-hour norm, DoraHacks made hackathons far more productive and impactful.
From Local Coffee Shops to Global Online Arenas
DoraHacks moved hackathons from physical spaces into the cloud, unleashing global participation. Pre-2020, a hackathon meant being in a specific place – say, a warehouse in San Francisco or a university lab – shoulder-to-shoulder with a local team. DoraHacks blew the doors off that model with online hackathons that anyone, anywhere could join. Suddenly, a developer in Nigeria could collaborate with a designer in Ukraine and a product thinker in Brazil, all in the same virtual hackathon. Geography ceased to be a limit. When DoraHacks hosted the Naija HackAtom for African blockchain devs, it drew over 500 participants (160+ developers) across Nigeria’s tech community [6]. In another event, thousands of hackers from dozens of countries logged into a DoraHacks virtual venue to ideate and compete. This global reach did more than increase headcount – it brought diverse perspectives and problems into the innovation mix. A fintech hackathon might see Latin American coders addressing remittances, or an AI hackathon see Asian and African participants applying machine learning to local healthcare challenges. By going online, hackathons became massively inclusive. DoraHacks effectively democratized access to innovation competitions: all you needed was an internet connection and the will to create. The result was a quantum leap in both the quantity and quality of ideas. No longer were hackathons an elitist sport; they became a global innovation free-for-all, open to talent from every corner of the world.
From Dozens of Participants to Tens of Thousands
Scale was another pillar of the DoraHacks revolution. Traditional hackathons were intimate affairs (dozens, maybe a few hundred participants at best). DoraHacks helped orchestrate hackathons an order of magnitude larger. We’re talking global hackathons with thousands of developers and multi-million dollar prize pools. For instance, in one 2021 online hackathon, nearly 7,000 participants submitted 550 projects for $5 million in prizes [7] – a scale unimaginable in the early 2010s. DoraHacks itself became a nexus for these mega-hackathons. The platform’s hackathons in the Web3 space routinely saw hundreds of teams competing for prizes sometimes exceeding $1 million. This scale wasn’t just vanity metrics; it meant a deeper talent bench attacking problems and a higher probability that truly exceptional projects would emerge. By casting a wide net, DoraHacks events captured star teams that might have been overlooked in smaller settings. The proof is in the outcomes: 216 builder teams were funded with over $5 million in one DoraHacks-powered hackathon series on BNB Chain [8] – yes, five million dollars, distributed to over two hundred teams as seed funding. That’s not a hackathon, that’s an economy! The prize pools ballooned from pizza money to serious capital, attracting top-tier talent who realized this hackathon could launch my startup. As a result, projects coming out of DoraHacks were not just weekend hacks – they were venture-ready endeavors. The hackathon graduated from a science fair to a global startup launchpad.
From Toy Projects to Real Startups (Even Unicorns)
Here’s the most thrilling part: DoraHacks hackathons started producing not just apps, but companies. And some of them turned into unicorns (companies valued at $1B+). We saw earlier the rare cases of pre-2020 hackathon successes like Carousell (a simple idea at a 2012 hackathon that became a $1.1B valued marketplace [2]) or EasyTaxi (born in a hackathon, later raising $75M and spanning 30 countries [2]). DoraHacks turbocharged this phenomenon. By providing more time, support, and follow-up funding, DoraHacks-enabled hackathons became cradles of innovation where raw hacks matured into fully-fledged ventures. Take 1inch Network for example – a decentralized finance aggregator that started as a hackathon project in 2019. Sergej Kunz and Anton Bukov built a prototype at a hackathon and kept iterating. Fast forward: 1inch has now processed over $400 billion in trading volume [9] and became one of the leading platforms in DeFi. Or consider the winners of DoraHacks Web3 hackathons: many have gone on to raise multimillion-dollar rounds from top VCs. Hackathons became the front door to the startup world – the place where founders made their debut. A striking illustration was the Solana Season Hackathons: projects like STEPN, a move-to-earn app, won a hackathon track in late 2021 and shortly after grew into a sensation with a multi-billion dollar token economy [10]. These are not isolated anecdotes; they represent a trend DoraHacks set in motion. The platform’s hackathons produced a pipeline of fundable, high-impact startups. In effect, DoraHacks blurred the line between a hackathon and a seed-stage incubator. The playful hacker ethos remained, but now the outcomes were much more than bragging rights – they were companies with real users, revenue, and valuations. To paraphrase investor Chris Dixon, DoraHacks took those “toys” and helped nurture them into the next big things [5].
In driving this first evolution of the hackathon, DoraHacks didn’t just improve on an existing model – it created an entirely new innovation ecosystem. Hackathons became high-frequency, global, and consequential. What used to be a weekend thrill became a continuous pipeline for innovation. DoraHacks events started churning out hundreds of viable projects every year, many of which secured follow-on funding. The platform provided not just the event itself, but the after-care: community support, mentorship, and links to investors and grants (through initiatives like DoraHacks’ grant programs and quadratic funding rounds).
By 2024, the results spoke volumes. DoraHacks had grown into the world’s most important hackathon platform – the beating heart of a global hacker movement spanning blockchain, AI, and beyond. The numbers tell the story. Over nine years, DoraHacks supported 4,000+ projects in securing more than $30 million in funding [11]; by 2025, that figure skyrocketed as 21,000+ startups and developer teams received over $80 million via DoraHacks-supported hackathons and grants [12]. This is not hype – this is recorded history. According to CoinDesk, “DoraHacks has made its mark as a global hackathon organizer and one of the world’s most active multi-chain Web3 developer platforms” [11]. Major tech ecosystems took notice. Over 40 public blockchain networks (L1s and L2s) – from Solana to Polygon to Avalanche – partnered with DoraHacks to run their hackathons and open innovation programs [13]. Blockworks reported that DoraHacks became a “core partner” to dozens of Web3 ecosystems, providing them access to a global pool of developers [13]. In the eyes of investors, DoraHacks itself was key infrastructure: “DoraHacks is key to advancing the development of the infrastructure for Web3,” noted one VC backing the platform [13].
In short, by 2024 DoraHacks had transformed the hackathon from a niche event into a global innovation engine. It proved that hackathons at scale can consistently produce real, fundable innovation – not just one-off gimmicks. It connected hackers with resources and turned isolated hacks into an evergreen, worldwide developer movement. This was Hackathons 2.0: bigger, longer, borderless, and far more impactful than ever before.
One might reasonably ask: Can it get any better than this? DoraHacks had seemingly cracked the code to harness hacker energy for lasting innovation. But the team behind DoraHacks wasn’t done. In fact, they were about to unveil something even more radical – a catalyst to push hackathons into a new epoch entirely. If DoraHacks 1.0 was the evolution, what came next would be a revolution.
The Agentic Hackathon: BUIDL AI and the Second Revolution
In 2024, DoraHacks introduced BUIDL AI, and with it, the concept of the Agentic Hackathon. If hackathons at their inception were analog phones, and DoraHacks 1.0 made them smartphones, then BUIDL AI is like giving hackathons an AI co-pilot – a self-driving mode. It’s not merely an incremental improvement; it’s a second revolution. BUIDL AI infused hackathons with artificial intelligence, automation, and agency (hence “agentic”), fundamentally changing how these events are organized and experienced. We are now entering the Age of Agentic Innovation, where hackathons run with the assistance of AI agents can occur with unprecedented frequency, efficiency, and intelligence.
So, what exactly is an Agentic Hackathon? It’s a hackathon where AI-driven agents augment the entire process – from planning and judging to participant support – enabling a scale and speed of innovation that was impossible before. In an agentic hackathon, AI is the tireless co-organizer working alongside humans. Routine tasks that used to bog down organizers are now handled by intelligent algorithms. Imagine hackathons that practically run themselves, continuously, like an “always-on” tournament of ideas. With BUIDL AI, DoraHacks effectively created self-driving hackathons – autonomous, efficient, and capable of operating 24/7, across multiple domains, simultaneously. This isn’t science fiction; it’s happening now. Let’s break down how BUIDL AI works and why it 10x’d hackathon efficiency overnight:
AI-Powered Judging and Project Review – 10× Efficiency Boost
One of the most labor-intensive aspects of big hackathons is judging hundreds of project submissions. It can take organizers weeks of effort to sift the high-potential projects from the rest. BUIDL AI changes that. It comes with a BUIDL Review module – an AI-driven judging system that can intelligently evaluate hackathon projects on multiple dimensions (completeness, originality, relevance to the hackathon theme, etc.) and automatically filter out low-quality submissions [14]. It’s like having an army of expert reviewers available instantly. The result? What used to require hundreds of human-hours now happens in a flash. DoraHacks reports that AI-assisted review has improved hackathon organization efficiency by more than 10× [14]. Think about that: a process that might have taken a month of tedious work can be done in a few days or less, with AI ensuring consistency and fairness in scoring. Organizers can now handle massive hackathons without drowning in paperwork, and participants get quicker feedback. The AI doesn’t replace human judges entirely – final decisions still involve experts – but it augments them, doing the heavy lifting of initial evaluation. This means hackathons can accept more submissions, confident that AI will help triage them. No more cutting off sign-ups because “we can’t review them all.” The machine scale is here. In an agentic hackathon, no good project goes unseen due to bandwidth constraints – the AI makes sure of that.
Automated Marketing and Storytelling
Winning a hackathon is great, but if nobody hears about it, the impact is muted. Traditionally, after a hackathon ended, organizers would manually compile results, write blog posts, thank sponsors – tasks that, while important, take time and often get delayed. BUIDL AI changes this too. It features an Automated Marketing capability that can generate post-hackathon reports and content with a click [14]. Imagine an AI that observes the entire event (the projects submitted, the winners, the tech trends) and then writes a polished summary: highlighting the best ideas, profiling the winning teams, extracting insights (“60% of projects used AI in healthcare this hackathon”). BUIDL AI does exactly that – it automatically produces a hackathon “highlight reel” and summary report [14]. This not only saves organizers the headache of writing marketing copy, but it also amplifies the hackathon’s reach. Within hours of an event, a rich recap can be shared globally, showcasing the innovations and attracting attention to the teams. Sponsors and partners love this, as their investment gets publicized promptly. Participants love it because their work is immediately celebrated and visible. In essence, every hackathon tells a story, and BUIDL AI ensures that story spreads far and wide – instantly. This kind of automated storytelling turns each hackathon into ongoing content, fueling interest and momentum for the next events. It’s a virtuous cycle: hackathons create innovations, AI packages the narrative, that narrative draws in more innovators.
One-Click Launch and Multi-Hackathon Management
Perhaps the most liberating feature of BUIDL AI is how it obliterates the logistical hurdles of organizing hackathons. Before, setting up a hackathon was itself a project – coordinating registrations, judges, prizes, communications, all manually configured. DoraHacks’ BUIDL AI introduces a one-click hackathon launch tool [14]. Organizers simply input the basics (theme, prize pool, dates, some judging criteria) and the platform auto-generates the event page, submission portal, judging workflow, and more. It’s as easy as posting a blog. This dramatically lowers the barrier for communities and companies to host hackathons. A small startup or a university club can now launch a serious global hackathon without a dedicated team of event planners. Furthermore, BUIDL AI supports Multi-Hackathon Management, meaning one organization can run multiple hackathons in parallel with ease [14]. In the past, even tech giants struggled to overlap hackathons – it was too resource-intensive. Now, an ecosystem could run, say, a DeFi hackathon, an AI hackathon, and an IoT hackathon all at once, with a lean team, because AI is doing the juggling in the back-end. The launch of BUIDL AI made it feasible to organize 12 hackathons a year – or even several at the same time – something unimaginable before [14]. The platform handles participant onboarding, sends reminders, answers common queries via chatbots, and keeps everything on track. In essence, BUIDL AI turns hackathon hosting into a scalable service. Just as cloud computing platforms let you spin up servers on demand, DoraHacks lets you spin up innovation events on demand. This is a tectonic shift: hackathons can now happen as frequently as needed, not as occasionally as resources allow. We’re talking about the birth of perpetual hackathon culture. Hackathons are no longer rare spark events; they can be continuous flames, always burning, always on.
Real-Time Mentor and Agentic Assistance
The “agentic” part of Agentic Hackathons isn’t only behind the scenes. It also touches the participant experience. With AI integration, hackers get smarter tools and support. For instance, BUIDL AI can include AI assistants that answer developers’ questions during the event (“How do I use this API?” or “Any example code for this algorithm?”), acting like on-demand mentors. It can match teams with potential collaborators or suggest resources. Essentially, every hacker has an AI helper at their side, reducing frustration and accelerating progress. Coding issues that might take hours to debug can be resolved in minutes with an AI pair programmer. This means project quality goes up and participants learn more. It’s as if each team has an extra member – a tireless, all-knowing one. This agentic assistance embodies the vision that “everyone is a hacker” [14] – because AI tools enable even less-experienced participants to build something impressive. The popularization of AI has automated repetitive grunt work and amplified what small teams can achieve [14], so the innovation potential of hackathons is far greater than before [14]. In an agentic hackathon, a team of two people with AI assistants can accomplish what a team of five might have in years past. The playing field is leveled and the creative ceiling is raised.
What do all these advances add up to? Simply this: Hackathons have evolved from occasional bouts of inspiration into a continuous, AI-optimized process of innovation. We have gone from Hackathons 2.0 to Hackathons 3.0 – hackathons that are autonomous, persistent, and intelligent. It’s a paradigm shift. The hackathon is no longer an event you attend; it’s becoming an environment you live in. With BUIDL AI, DoraHacks envisions a world where “Hackathons will enter an unprecedented era of automation and intelligence, allowing more hackers, developers, and open-source communities around the world to easily initiate and participate” [14]. Innovation can happen anytime, anywhere – because the infrastructure to support it runs 24/7 in the cloud, powered by AI. The hackathon has become an agentic platform, always ready to transform ideas into reality.
Crucially, this isn’t limited to blockchain or any single field. BUIDL AI is general-purpose. It is as relevant for an AI-focused hackathon as for a climate-tech or healthcare hackathon. Any domain can plug into this agentic hackathon platform and reap the benefits of higher frequency and efficiency. This heralds a future where hackathons become the default mode for problem-solving. Instead of committees and R&D departments working in silos, companies and communities can throw problems into the hackathon arena – an arena that is always active. It’s like having a global innovation engine humming in the background, ready to tackle challenges at a moment’s notice.
To put it vividly: If DoraHacks 1.0 turned hackathons into a high-speed car, DoraHacks 2.0 with BUIDL AI made it a self-driving car with the pedal to the metal. The roadblocks of cost, complexity, and time – gone. Now, any organization can accelerate from 0 to 60 on the innovation highway without a pit stop. Hackathons can be as frequent as blog updates, as integrated into operations as sprint demos. Innovation on demand, at scale – that’s the power of the Agentic Hackathon.
Innovation On-Demand: How Agentic Hackathons Benefit Everyone
The advent of agentic hackathons isn’t just a cool new toy for the tech community – it’s a transformative tool for businesses, developers, and entire industries. We’re entering an era where anyone with a vision can harness hackathons-as-a-service to drive innovation. Here’s how different players stand to gain from this revolution:
AI Companies – Turbocharging Ecosystem Growth
For AI-focused companies (think OpenAI, Google, Microsoft, Stability AI and the like), hackathons are goldmines of creative uses for their technology. Now, with agentic hackathons, an AI company can essentially run a continuous developer conference for their platform. For example, OpenAI can host always-on hackathons for building applications with GPT-4 or DALL-E. This means thousands of developers constantly experimenting and showcasing what the AI can do – effectively crowdsourcing innovation and killer apps for the AI platform. The benefit? It dramatically expands the company’s ecosystem and user base. New use cases emerge that the company’s own team might never have imagined. (It was independent hackers who first showed how GPT-3 could draft legal contracts or generate game levels – insights that came from hackathons and community contests.) With BUIDL AI, an AI company could spin up monthly hackathons with one click, each focusing on a different aspect (one month NLP, next month robotics, etc.). This is a marketing and R&D force multiplier. Instead of traditional, expensive developer evangelism tours, the AI does the heavy lifting to engage devs globally. The company’s product gets improved and promoted at the same time. In essence, every AI company can now launch a Hackathon League to promote their APIs/models. It’s no coincidence Coinbase just hosted its first AI hackathon to bridge crypto and AI [15] – they know that to seed adoption of a new paradigm, hackathons are the way. Expect every AI platform to do the same: continuous hackathons to educate developers, generate content (demos, tutorials), and identify standout talent to hire or fund. It’s community-building on steroids.
L1s/L2s and Tech Platforms – Discovering the Next Unicorns
For blockchain Layer1/Layer2 ecosystems, or any tech platform (cloud providers, VR platforms, etc.), hackathons are the new deal flow. In the Web3 world, it’s widely recognized that many of the best projects and protocols are born in hackathons. We saw how 1inch started as a hackathon project and became a DeFi unicorn [9]. There’s also Polygon (which aggressively runs hackathons to find novel dApps for its chain) and Filecoin (which used hackathons to surface storage applications). By using DoraHacks and BUIDL AI, these platforms can now run high-frequency hackathons to continuously source innovation. Instead of one or two big events a year, they can have a rolling program – a quarterly hackathon series or even simultaneous global challenges – to keep developers building all the time. The ROI is huge: the cost of running a hackathon (even with decent prizes) is trivial compared to acquiring a thriving new startup or protocol for your ecosystem. Hackathons effectively outsource initial R&D to passionate outsiders, and the best ideas bubble up. Solana’s hackathons led to star projects like Phantom and Solend gaining traction in its ecosystem. Facebook’s internal hackathons gave birth to features that kept the platform dominant [1]. Now any platform can do this externally: use hackathons as a radar for talent and innovation. Thanks to BUIDL AI, a Layer-2 blockchain, even if its core team is small, can manage a dozen parallel bounties and hackathons – one focusing on DeFi, one on NFTs, one on gaming, etc. The AI will help review submissions and manage community questions, so the platform’s devrel team doesn’t burn out. The result is an innovation pipeline feeding the platform’s growth. The next unicorn startup or killer app is identified early and supported. In effect, hackathons become the new startup funnel for VCs and ecosystems. We can expect venture investors to lurk in these agentic hackathons because that’s where the action is – the garages of the future are now cloud hackathon rooms. As Paul Graham wrote, “hackers and painters are both makers” [16], and these makers will paint the future of technology on the canvas of hackathon platforms.
Every Company and Community – Innovation as a Continuous Process
Perhaps the most profound impact of BUIDL AI is that it opens up hackathons to every organization, not just tech companies. Any company that wants to foster innovation – be it a bank exploring fintech, a hospital network seeking healthtech solutions, or a government looking for civic tech ideas – can leverage agentic hackathons. Innovation is no longer a privilege of the giant tech firms; it’s a cloud service accessible to all. For example, a city government could host a year-round hackathon for smart city solutions, where local developers continuously propose and build projects to improve urban life. The BUIDL AI platform could manage different “tracks” for transportation, energy, public safety, etc., with monthly rewards for top ideas. This would engage the community and yield a constant stream of pilot projects, far more dynamically than traditional RFP processes. Likewise, any Fortune 500 company that fears disruption (and who doesn’t?) can use hackathons to disrupt itself positively – inviting outsiders and employees to hack on the company’s own challenges. With the agentic model, even non-technical companies can do this without a hitch; the AI will guide the process, ensuring things run smoothly. Imagine hackathons as part of every corporate strategy department’s toolkit – continuously prototyping the future. As Marc Andreessen famously said, “software is eating the world” – and now every company can have a seat at the table by hosting hackathons to software-ize their business problems. This could democratize innovation across industries. The barrier to trying out bold ideas is so low (a weekend of a hackathon vs. months of corporate planning) that more wild, potentially disruptive ideas will surface from within companies. And with the global reach of DoraHacks, they can bring in external innovators too. Why shouldn’t a retail company crowdsource AR shopping ideas from global hackers? Why shouldn’t a pharma company run bioinformatics hackathons to find new ways to analyze data? There is no reason not to – the agentic hackathon makes it feasible and attractive. Hackathon-as-a-service is the new innovation department. Use it or risk being out-innovated by those who do.
All these benefits boil down to a simple but profound shift: hackathons are becoming a permanent feature of the innovation landscape, rather than a novelty. They are turning into an always-available resource, much like cloud computing or broadband internet. Need fresh ideas or prototypes? Spin up a hackathon and let the global talent pool tackle it. Want to engage your developer community? Launch a themed hackathon and give them a stage. Want to test out 10 different approaches to a problem? Run a hackathon and see what rises to the top. We’re effectively seeing the realization of what one might call the Innovation Commons – a space where problems and ideas are continuously matched, and solutions are rapidly iterated. And AI is the enabler that keeps this commons humming efficiently, without exhausting the human facilitators.
It’s striking how this addresses the classic pitfalls identified in hackathon critiques: sustainability and follow-through. In the agentic model, hackathons are no longer isolated bursts. They can connect to each other (winning teams from one hackathon can enter an accelerator or another hackathon next month). BUIDL AI can track teams and help link them with funding opportunities, closing the loop that used to leave projects orphaned after the event. A great project doesn’t die on Sunday night; it’s funneled into the next stage automatically (perhaps an AI even suggests which grant to apply for, which partner to talk to). This way, innovations have a life beyond the demo day, systematically.
We should also recognize a more philosophical benefit: the culture of innovation becomes more experimental, meritocratic, and fast-paced. In a world of agentic hackathons, the motto is “Why not prototype it? Why not try it now?” – because spinning up the environment to do so is quick and cheap. This mindset can permeate organizations and communities, making them more agile and bold. The cost of failure is low (a few weeks of effort), and the potential upside is enormous (finding the next big breakthrough). It creates a safe sandbox for disruptive ideas – addressing the Innovator’s Dilemma by structurally giving space to those ‘toy’ ideas to prove themselves [5]. Companies no longer have to choose between core business and experimentation; they can allocate a continuous hackathon track to the latter. In effect, DoraHacks and BUIDL AI have built an innovation factory – one that any visionary leader can rent for the weekend (or the whole year).
From Like Button to Liftoff: Hackathons as the Cradle of Innovation
To truly appreciate this new era, it’s worth reflecting on how many game-changing innovations started as hackathon projects or hackathon-like experiments – often despite the old constraints – and how much more we can expect when those constraints are removed. History is full of examples that validate the hackathon model of innovation:
Facebook’s DNA was shaped by hackathons
Mark Zuckerberg himself has credited the company’s internal hackathons for some of Facebook’s most important features. The Like button, Facebook Chat, and Timeline all famously emerged from engineers pulling all-nighters at hackathons [1]. An intern’s hackathon prototype for tagging people in comments was shipped to a billion users just two weeks later [1]. Facebook’s ethos “Move fast and break things” was practically the hackathon ethos formalized. It is no stretch to say Facebook won over MySpace in the 2000s because its culture of rapid innovation (fueled by hackathons) let it out-innovate its rival [1]. If hackathons did that within one company, imagine a worldwide network of hackathons – the pace of innovation everywhere could resemble that hypergrowth.
Google and the 20% Project
Google has long encouraged employees to spend 20% of time on side projects, which is a cousin of the hackathon idea – unstructured exploration. Gmail and Google News were born this way. Additionally, Google has hosted public hackathons around its APIs (like Android hackathons) that spurred the creation of countless apps. The point is, Google institutionalized hacker-style experimentation and reaped huge rewards. With agentic hackathons, even companies without Google’s resources can institutionalize experimentation. Every weekend can be a 20% time for the world’s devs using these platforms.
Open Source Movements
Open Source Movements have benefitted from hackathons (“code sprints”) to develop critical software. The OpenBSD operating system has been built in part through regular hackathons that were essential to its development [3]. In more recent times, projects like Node.js or TensorFlow have organized hackathons to build libraries and tools. The result: stronger ecosystems and engaged contributors. DoraHacks embraces this, positioning itself as “the leading global hackathon community and open source developer incentive platform” [17]. The synergy of open source and hackathons (both decentralized, community-driven, merit-based) is a powerful engine. We can foresee open source projects launching always-on hackathons via BUIDL AI to continuously fix bugs, add features, and reward contributors. This could rejuvenate the open source world by providing incentives (through hackathon prizes) and recognition in a structured way.
The Startup World
The Startup World has hackathons to thank for many startups. We’ve mentioned Carousell (from a Startup Weekend hackathon, now valued over $1B [2]) and EasyTaxi (Startup Weekend Rio, went on to raise $75M [2]). Add to that list Zapier (integrations startup, conceived at a hackathon), GroupMe (acquired by Skype as noted), Instacart (an early version won a hackathon at Y Combinator Demo Day, legend has it), and numerous crypto startups (the founders of Ethereum itself met and collaborated through hackathons and Bitcoin meetups!). When Coinbase wants to find the next big thing in on-chain AI, they host a hackathon [15]. When Stripe wanted more apps on its payments platform, it ran hackathons and distributed bounties. This model just works. It identifies passionate builders and gives them a springboard. With agentic hackathons, that springboard is super-sized. It’s always there, and it can catch far more people. The funnel widens, so expect even more startups to originate from hackathons. It’s quite plausible that the biggest company of the 2030s won’t be founded in a garage – it will be born out of an online hackathon, formed by a team that met in a Discord server, guided by an AI facilitator, and funded within weeks on a platform like DoraHacks. In other words, the garage is going global and AI-powered.
Hackers & Painters – The Creative Connection
Paul Graham, in Hackers & Painters, drew an analogy between hacking and painting as creative endeavors [16]. Hackathons are where that creative energy concentrates and explodes. Many great programmers will tell you their most inspired work happened in a hackathon or skunkworks setting – free of bureaucratic restraints, in a flow state of creation. By scaling and multiplying hackathons, we are effectively amplifying the global creative capacity. We might recall the Renaissance when artists and inventors thrived under patronage and in gatherings – hackathons are the modern Renaissance workshops. They combine art, science, and enterprise. The likes of Leonardo da Vinci would have been right at home in a hackathon (he was notorious for prototyping like a madman). In fact, consider how hackathons embody the solution to the Innovator’s Dilemma: they encourage working on projects that seem small or “not worth it” to incumbents, which is exactly where disruptive innovation often hides [5]. By institutionalizing hackathons, DoraHacks is institutionalizing disruption – making sure the next Netflix or Airbnb isn’t missed because someone shrugged it off as a toy.
We’ve gone from a time when hackathons were rare and local to a time when they are global and constant. This is a pivotal change in the innovation infrastructure of the world. In the 19th century, we built railroads and telegraphs that accelerated the Industrial Revolution, connecting markets and minds. In the 20th century, we built the internet and the World Wide Web, unleashing the Information Revolution. Now, in the 21st century, DoraHacks and BUIDL AI are building the “Innovation Highway” – a persistent, AI-enabled network connecting problem-solvers to problems, talent to opportunities, capital to ideas, across the entire globe, in real time. It’s an infrastructure for innovation itself.
A Grand Vision: The New Infrastructure of Global Innovation
We stand at an inflection point. With DoraHacks and the advent of agentic hackathons, innovation is no longer confined to ivory labs, Silicon Valley offices, or once-a-year events. It is becoming a continuous global activity – an arena where the best minds and the boldest ideas meet, anytime, anywhere. This is a future where innovation is as ubiquitous as Wi-Fi and as relentless as Moore’s Law. It’s a future DoraHacks is actively building, and the implications are profound.
Picture a world a few years from now, where DoraHacks+BUIDL AI is the default backbone for innovation programs across industries. This platform is buzzing 24/7 with hackathons on everything from AI-driven healthcare to climate-change mitigation to new frontiers of art and entertainment. It’s not just for coders – designers, entrepreneurs, scientists, anyone with creative impulse plugs into this network. An entrepreneur in London has a business idea at 2 AM; by 2:15 AM, she’s on DoraHacks launching a 48-hour hackathon to prototype it, with AI coordinating a team of collaborators from four different continents. Sounds crazy? It will be commonplace. A government in Asia faces a sudden environmental crisis; they host an urgent hackathon via BUIDL AI and within days have dozens of actionable tech solutions from around the world. A venture fund in New York essentially “outsources” part of its research to the hackathon cloud – instead of merely requesting pitch decks, they sponsor open hackathons to see real prototypes first. This is agentic innovation in action – fast, borderless, and intelligent.
In this coming era, DoraHacks will be as fundamental to innovation as GitHub is to code or as AWS is to startups. It’s the platform where innovation lives. One might even call it the “GitHub of Innovation” – a social and technical layer where projects are born, not just stored. Already, DoraHacks calls itself “the global hacker movement” [17], and with BUIDL AI it becomes the autopilot of that movement. It’s fitting to think of it as part of the global public infrastructure for innovation. Just as highways move goods and the internet moves information, DoraHacks moves innovation itself – carrying ideas from inception to implementation at high speed.
When history looks back at the 2020s, the arrival of continuous, AI-driven hackathons will be seen as a key development in how humanity innovates. The vision is grand, but very tangible: Innovation becomes an everlasting hackathon. Think of it – the hacker ethos spreading into every corner of society, an eternal challenge to the status quo, constantly asking “How can we improve this? How can we reinvent that?” and immediately rallying the talent to do it. This is not chaos; it’s a new form of organized, decentralized R&D. It’s a world where any bold question – “Can we cure this disease? Can we educate children better? Can we make cities sustainable?” – can trigger a global hackathon and yield answers in days or weeks, not years. A world where innovation isn’t a scarce resource, jealously guarded by few, but a common good, an open tournament where the best solution wins, whether it comes from a Stanford PhD or a self-taught coder in Lagos.
If this sounds idealistic, consider how far we’ve come: Hackathons went from obscure coder meetups to the engine behind billion-dollar businesses and critical global tech (Bitcoin itself is a product of hacker culture!). With DoraHacks’s growth and BUIDL AI’s leap, the trajectory is set for hackathons to become continuous and ubiquitous. The technology and model are in place. It’s now about execution and adoption. And the trend is already accelerating – more companies are embracing open innovation, more developers are working remotely and participating in online communities, and AI is rapidly advancing as a co-pilot in all creative endeavors.
DoraHacks finds itself at the center of this transformation. It has the first-mover advantage, the community, and the vision. The company’s ethos is telling: “Funding the everlasting hacker movement” is one of their slogans [18]. They see hackathons as not just events but a movement that must be everlasting – a permanent revolution of the mind. With BUIDL AI, DoraHacks is providing the engine to make it everlasting. This hints at a future where DoraHacks+BUIDL AI is part of the critical infrastructure of global innovation, akin to a utility. It’s the innovation grid, and when you plug into it, magic happens.
Marc Andreessen’s writings often speak about “building a better future” with almost manifest destiny fervor. In that spirit, one can boldly assert: Agentic hackathons will build our future, faster and better. They will accelerate solutions to humanity’s toughest challenges by tapping a broader talent pool and iterating faster than ever. They will empower individuals – giving every creative mind on the planet the tools, community, and opportunity to make a real impact, immediately, not someday. This is deeply democratizing. It resonates with the ethos of the early internet – permissionless innovation. DoraHacks is bringing that ethos to structured innovation events and stretching them into an ongoing fabric.
In conclusion, we are witnessing a paradigm shift: Hackathons reinvented, innovation unchained. The limitations of the old model are gone, replaced by a new paradigm where hackathons are high-frequency, AI-augmented, and outcome-oriented. DoraHacks led this charge in the 2020–2024 period, and with BUIDL AI, it’s launching the next chapter – the Age of Agentic Innovation. For investors and visionaries, this is a call to action. We often talk about investing in “infrastructure” – well, this is investing in the infrastructure of innovation itself. Backing DoraHacks and its mission is akin to backing the builders of a transcontinental railroad or an interstate highway, except this time the cargo is ideas and breakthroughs. The network effects are enormous: every additional hackathon and participant adds value to the whole ecosystem, in a compounding way. It’s a positive-sum game of innovation. And DoraHacks is poised to be the platform and the community that captures and delivers that value globally.
DoraHacks reinvented hackathons – it turned hackathons from sporadic stunts into a sustained methodology for innovation. In doing so, it has thrown open the gates to an era where innovation can be agentic: self-driving, self-organizing, and ceaseless. We are at the dawn of this new age. It’s an age where, indeed, “he who has the developers has the world” [14] – and DoraHacks is making sure that every developer, every hacker, every dreamer anywhere can contribute to shaping our collective future. The grand vista ahead is one of continuous invention and discovery, powered by a global hive mind of hackers and guided by AI. DoraHacks and BUIDL AI stand at the helm of this movement, as the architects of the “innovation rails” on which we’ll ride. It’s not just a platform, it’s a revolutionary infrastructure – the new railroad, the new highway system for ideas. Buckle up, because with DoraHacks driving, the age of agentic innovation has arrived, and the future is hurtling toward us at hackathon speed. The hackathon never ends – and that is how we will invent a better world.
References
[1] Vocoli. (2015). Facebook’s Secret Sauce: The Hackathon. https://www.vocoli.com/blog/june-2015/facebook-s-secret-sauce-the-hackathon/
[2] Analytics India Magazine. (2023). Borne Out Of Hackathons. https://analyticsindiamag.com/ai-trends/borne-out-of-hackathons/
[3] Wikipedia. (n.d.). Hackathon: Origin and History. https://en.wikipedia.org/wiki/Hackathon#Origin_and_history
[4] LinkedIn. (2024). This year marked my third annual participation in Microsoft’s Global…. https://www.linkedin.com/posts/clare-ashforth_this-year-marked-my-third-annual-participation-activity-7247636808119775233-yev-
[5] Glasp. (n.d.). Chris Dixon’s Quotes. https://glasp.co/quotes/chris-dixon
[6] ODaily. (2024). Naija HackAtom Hackathon Recap. https://www.odaily.news/en/post/5203212
[7] Solana. (2021). Meet the winners of the Riptide hackathon - Solana. https://solana.com/news/riptide-hackathon-winners-solana
[8] DoraHacks. (n.d.). BNB Grant DAO - DoraHacks. https://dorahacks.io/bnb
[9] Cointelegraph. (2021). From Hackathon Project to DeFi Powerhouse: AMA with 1inch Network. https://cointelegraph.com/news/from-hackathon-project-to-defi-powerhouse-ama-with-1inch-network
[10] Gemini. (2022). How Does STEPN Work? GST and GMT Token Rewards. https://www.gemini.com/cryptopedia/stepn-nft-sneakers-gmt-token-gst-crypto-move-to-earn-m2e
[11] CoinDesk. (2022). Inside DoraHacks: The Open Source Bazaar Empowering Web3 Innovations. https://www.coindesk.com/sponsored-content/inside-dorahacks-the-open-source-bazaar-empowering-web3-innovations
[12] LinkedIn. (n.d.). DoraHacks. https://www.linkedin.com/company/dorahacks
[13] Blockworks. (2022). Web3 Hackathon Incubator DoraHacks Nabs $20M From FTX, Liberty City. https://blockworks.co/news/web3-hackathon-incubator-dorahacks-nabs-20m-from-ftx-liberty-city
[14] Followin. (2024). BUIDL AI: The future of Hackathon, a new engine for global open source technology. https://followin.io/en/feed/16892627
[15] Coinbase. (2024). Coinbase Hosts Its First AI Hackathon: Bringing the San Francisco Developer Community Onchain. https://www.coinbase.com/developer-platform/discover/launches/Coinbase-AI-hackathon
[16] Graham, P. (2004). Hackers & Painters. https://ics.uci.edu/~pattis/common/handouts/hackerspainters.pdf
[17] Himalayas. (n.d.). DoraHacks hiring Research Engineer – BUIDL AI. https://himalayas.app/companies/dorahacks/jobs/research-engineer-buidl-ai
[18] X. (n.d.). DoraHacks. https://x.com/dorahacks?lang=en -
@ d34e832d:383f78d0
2025-04-26 07:17:45Practical Privacy and Secure Communications
1. Bootable privacy operating systems—Tails, Qubes OS, and Whonix
This Idea explores the technical deployment of bootable privacy operating systems—Tails, Qubes OS, and Whonix—for individuals and organizations seeking to enhance operational security (OpSec). These systems provide different layers of isolation, anonymity, and confidentiality, critical for cryptographic operations, Bitcoin custody, journalistic integrity, whistleblowing, and sensitive communications. The paper outlines optimal use cases, system requirements, technical architecture, and recommended operational workflows for each OS.
2. Running An Operating System
In a digital world where surveillance, metadata leakage, and sophisticated threat models are realities, bootable privacy OSs offer critical mitigation strategies. By running an operating system from a USB, DVD, or external drive—and often entirely in RAM—users can minimize the footprint left on host hardware, dramatically enhancing privacy.
This document details Tails, Qubes OS, and Whonix: three leading open-source projects addressing different aspects of operational security.
3. Technical Overview of Systems
| OS | Focus | Main Feature | Threat Model |
|----|-------|--------------|--------------|
| Tails | Anonymity & Ephemerality | Runs entirely from RAM; routes traffic via Tor | For activists, journalists, Bitcoin users |
| Qubes OS | Security through Compartmentalization | Hardware-level isolation via Xen hypervisor | Defense against malware, APTs, insider threats |
| Whonix | Anonymity over Tor Networks | Split-Gateway Architecture (Whonix-Gateway & Whonix-Workstation) | For researchers, Bitcoin node operators, privacy advocates |
4. System Requirements
4.1 Tails
- RAM: Minimum 2 GB (4 GB recommended)
- CPU: x86_64 (Intel or AMD)
- Storage: 8GB+ USB stick (optional persistent storage)
4.2 Qubes OS
- RAM: 16 GB minimum
- CPU: Intel VT-x or AMD-V support required
- Storage: 256 GB SSD recommended
- GPU: Minimal compatibility (no Nvidia proprietary driver support)
4.3 Whonix
- Platform: VirtualBox/KVM Host (Linux, Windows, Mac)
- RAM: 4 GB minimum (8 GB recommended)
- Storage: 100 GB suggested for optimal performance
5. Deployment Models
| Model | Description | Recommended OS |
|-------|-------------|----------------|
| USB-Only Boot | No installation on disk; ephemeral use | Tails |
| Hardened Laptop | Full disk installation with encryption | Qubes OS |
| Virtualized Lab | VMs on hardened workstation | Whonix Workstation + Gateway |
6. Operational Security Advantages
| OS | Key Advantages |
|----|----------------|
| Tails | Memory wipe at shutdown, built-in Tor Browser, persistent volume encryption (LUKS) |
| Qubes OS | Compartmentalized VMs for work, browsing, Bitcoin keys; TemplateVMs reduce attack surface |
| Whonix | IP address leaks prevented even if the workstation is compromised; full Tor network integration |
7. Threat Model Coverage
| Threat Category | Tails | Qubes OS | Whonix |
|-----------------|-------|----------|--------|
| Disk Forensics | ✅ (RAM-only) | ✅ (with disk encryption) | ✅ (VM separation) |
| Malware Containment | ❌ | ✅ (strong) | ✅ (via VMs) |
| Network Surveillance | ✅ (Tor enforced) | Partial (needs VPN/Tor setup) | ✅ (Tor Gateway) |
| Hardware-Level Attacks | ❌ | ❌ | ❌ |
8. Use Cases
- Bitcoin Cold Storage and Key Signing (Tails)
- Boot Tails offline for air-gapped Bitcoin signing.
- Private Software Development (Qubes)
- Use separate VMs for coding, browsing, and Git commits.
- Anonymous Research (Whonix)
- Surf hidden services (.onion) without IP leak risk.
- Secure Communications (All)
- Use encrypted messaging apps (Session, XMPP, Matrix) without metadata exposure.
9. Challenges and Mitigations
| Challenge | Mitigation |
|-----------|------------|
| Hardware Incompatibility | Validate device compatibility pre-deployment (esp. for Qubes) |
| Tor Exit Node Surveillance | Use onion services or bridge relays (Tails, Whonix) |
| USB Persistence Risks | Always encrypt persistent volumes (Tails) |
| Hypervisor Bugs (Qubes) | Regular OS and TemplateVM updates |
Executive Summary
In a world where digital surveillance and privacy threats are escalating, bootable privacy operating systems offer a critical solution for at-risk individuals. Systems like Tails, Qubes OS, and Whonix provide strong, portable security by isolating user activities from compromised or untrusted hardware. This paper explores their architectures, security models, and real-world applications.
1. To Recap
Bootable privacy-centric operating systems are designed to protect users from forensic analysis, digital tracking, and unauthorized access. By booting from an external USB drive or DVD and operating independently from the host machine's internal storage, they minimize digital footprints and maximize operational security (OpSec).
This paper provides an in-depth technical analysis of:
- Tails (The Amnesic Incognito Live System)
- Qubes OS (Security through Compartmentalization)
- Whonix (Anonymity via Tor Isolation)
Each system’s strengths, limitations, use cases, and installation methods are explored in detail.
2. Technical Overview of Systems
2.1 Tails (The Amnesic Incognito Live System)
Architecture:
- Linux-based Debian derivative.
- Boots from USB/DVD, uses RAM exclusively unless persistent storage is manually enabled.
- Routes all network traffic through Tor.
- Designed to leave no trace unless explicitly configured otherwise.

Key Features:
- Memory erasure on shutdown.
- Pre-installed secure applications: Tor Browser, KeePassXC, OnionShare.
- Persistent storage available but encrypted and isolated.

Limitations:
- Limited hardware compatibility (especially Wi-Fi drivers).
- No support for mobile OS platforms.
- ISP visibility to Tor network usage unless bridges are configured.
2.2 Qubes OS
Architecture:
- Xen-based hypervisor model.
- Security through compartmentalization: distinct "qubes" (virtual machines) isolate tasks and domains (work, personal, banking, etc.).
- Networking and USB stacks run in restricted VMs to prevent direct device access.

Key Features:
- Template-based management for efficient updates.
- Secure Copy (Qubes RPC) for data movement without exposing full disks.
- Integrated Whonix templates for anonymous browsing.

Limitations:
- Requires significant hardware resources (RAM and CPU).
- Limited hardware compatibility (strict requirements for virtualization support: VT-d/IOMMU).
2.3 Whonix
Architecture:
- Debian-based dual VM system.
- One VM (Gateway) routes all traffic through Tor; the second VM (Workstation) is fully isolated from the physical network.
- Can be run on top of Qubes OS, VirtualBox, or KVM.

Key Features:
- Complete traffic isolation at the system level.
- Strong protections against IP leaks (fails closed if Tor is inaccessible).
- Advanced metadata obfuscation options.

Limitations:
- High learning curve for proper configuration.
- Heavy reliance on Tor can introduce performance bottlenecks.
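A quick way to validate this split-VM design in practice is to confirm, from inside the Workstation, that outbound traffic really exits through Tor. The following is a minimal sketch using the public check.torproject.org API; the endpoint and JSON fields reflect its currently documented behavior and may change, so treat it as illustrative rather than authoritative.

```python
# Sketch: run inside a Whonix-Workstation (or any system that is supposed to
# route everything through Tor) to confirm the exit path.
import json
import urllib.request

with urllib.request.urlopen("https://check.torproject.org/api/ip", timeout=30) as resp:
    info = json.load(resp)

print("Exit IP:", info.get("IP"))
print("Routed through Tor:", info.get("IsTor"))
if not info.get("IsTor"):
    raise SystemExit("WARNING: traffic is NOT exiting through Tor - check the Gateway.")
```

If the check fails, the Workstation is talking to the network directly instead of through the Gateway, which is exactly the failure mode the dual-VM architecture is meant to prevent.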
3. Comparative Analysis
| Feature | Tails | Qubes OS | Whonix |
|:--------|:------|:---------|:-------|
| Anonymity Focus | High | Medium | High |
| System Isolation | Medium | Very High | High |
| Persistence | Optional | Full | Optional |
| Hardware Requirements | Low | High | Medium |
| Learning Curve | Low | High | Medium |
| Internet Privacy | Mandatory Tor | Optional Tor | Mandatory Tor |
4. Use Cases
| Scenario | Recommended System |
|:---------|:-------------------|
| Emergency secure browsing | Tails |
| Full system compartmentalization | Qubes OS |
| Anonymous operations with no leaks | Whonix |
| Activist communications from hostile regions | Tails or Whonix |
| Secure long-term project management | Qubes OS |
5. Installation Overview
5.1 Hardware Requirements
- Tails: Minimum 2GB RAM, USB 2.0 or higher, Intel or AMD x86-64 processor.
- Qubes OS: Minimum 16GB RAM, VT-d/IOMMU virtualization support, SSD storage.
- Whonix: Runs inside VirtualBox or Qubes; requires host compatibility.
5.2 Setup Instructions
Tails:
1. Download the latest ISO from tails.net.
2. Verify the signature (GPG or in-browser).
3. Use balenaEtcher or dd to flash onto USB.
4. Boot from USB, configure Persistent Storage if necessary.

Qubes OS:
1. Download the ISO from qubes-os.org.
2. Verify using PGP signatures.
3. Flash to USB or DVD.
4. Boot and install onto SSD with LUKS encryption enabled.

Whonix:
1. Download both the Gateway and Workstation VMs from whonix.org.
2. Import them into VirtualBox or a compatible hypervisor.
3. Configure the VMs to communicate only through the Gateway.
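As a worked example of the verification step above, here is a minimal sketch that checks a downloaded image before it is flashed. It assumes gpg is installed and the distribution's signing key has already been imported; the file names and checksum are placeholders for whatever was actually downloaded.

```python
# Sketch: verify a downloaded image before flashing it to USB.
import hashlib
import subprocess

ISO = "tails-amd64.img"            # placeholder file name
SIG = ISO + ".sig"                 # detached OpenPGP signature
EXPECTED_SHA256 = "<paste the checksum published by the project>"

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# 1. Check the OpenPGP signature (raises if gpg reports an invalid signature).
subprocess.run(["gpg", "--verify", SIG, ISO], check=True)

# 2. Compare the SHA-256 digest against the published value.
digest = sha256(ISO)
if digest != EXPECTED_SHA256:
    raise SystemExit(f"Checksum mismatch: {digest}")
print("Image verified; safe to flash with balenaEtcher or dd.")
```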
6. Security Considerations
- Tails: Physical compromise of the USB stick is a risk. Use hidden storage if necessary.
- Qubes OS: Qubes is only as secure as its weakest compartment; misconfigured VMs can leak data.
- Whonix: Full reliance on Tor can reveal usage patterns if used carelessly.
Best Practices:
- Always verify downloads via GPG.
- Use a dedicated, non-personal device where possible.
- Utilize Tor bridges if operating under oppressive regimes.
- Practice OPSEC consistently—compartmentalization, metadata removal, anonymous communications.
7. Consider
Bootable privacy operating systems represent a critical defense against modern surveillance and oppression. Whether for emergency browsing, long-term anonymous operations, or full-stack digital compartmentalization, solutions like Tails, Qubes OS, and Whonix empower users to reclaim their privacy.
When deployed thoughtfully—with an understanding of each system’s capabilities and risks—these tools can provide an exceptional layer of protection for journalists, activists, security professionals, and everyday users alike.
10. Example: Secure Bitcoin Signing Workflow with Tails
- Boot Tails from USB.
- Disconnect from the network.
- Generate Bitcoin private key or sign transaction using Electrum.
- Save signed transaction to encrypted USB drive.
- Shut down to wipe RAM completely.
- Broadcast transaction from a separate, non-sensitive machine.
This prevents key exposure to malware, man-in-the-middle attacks, and disk forensic analysis.
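A minimal sketch of the offline signing step is shown below. It assumes Electrum's command-line interface is available inside Tails and that a wallet is already loaded; the command names follow current Electrum CLI conventions but should be checked against the installed version, and the USB paths are placeholders.

```python
# Sketch of the air-gapped signing step of the workflow above.
import subprocess

UNSIGNED_TX = "/media/amnesia/usb/unsigned.txn"   # created on the online, watch-only machine
SIGNED_TX   = "/media/amnesia/usb/signed.txn"

# Offline machine (Tails, network disconnected): sign the transaction.
raw = open(UNSIGNED_TX).read().strip()
signed = subprocess.run(
    ["electrum", "--offline", "signtransaction", raw],
    capture_output=True, text=True, check=True,
).stdout.strip()
open(SIGNED_TX, "w").write(signed)

# Later, on the separate online machine, broadcast the signed transaction:
# subprocess.run(["electrum", "broadcast", signed], check=True)
```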
11. Consider
Bootable privacy operating systems like Tails, Qubes OS, and Whonix offer robust, practical strategies for improving operational security across a wide spectrum of use cases—from Bitcoin custody to anonymous journalism. Their open-source nature, focus on minimizing digital footprints, and mature security architectures make them foundational tools for modern privacy workflows.
Choosing the appropriate OS depends on the specific threat model, hardware available, and user needs. Proper training and discipline remain crucial to maintain the security these systems enable.
Appendices
A. Download Links
B. Further Reading
- "The Qubes OS Architecture" Whitepaper
- "Operational Security and Bitcoin" by Matt Odell
- "Tor and the Darknet: Separating Myth from Reality" by EFF
-
@ eac63075:b4988b48
2024-11-09 17:57:27Based on a recent paper that included collaboration from renowned experts such as Lynn Alden, Steve Lee, and Ren Crypto Fish, we discuss in depth how Bitcoin's consensus is built, the main risks, and the complex dynamics of protocol upgrades.
Podcast https://www.fountain.fm/episode/wbjD6ntQuvX5u2G5BccC
Presentation https://gamma.app/docs/Analyzing-Bitcoin-Consensus-Risks-in-Protocol-Upgrades-p66axxjwaa37ksn
1. Introduction to Consensus in Bitcoin
Consensus in Bitcoin is the foundation that keeps the network secure and functional, allowing users worldwide to perform transactions in a decentralized manner without the need for intermediaries. Since its launch in 2009, Bitcoin is often described as an "immutable" system designed to resist changes, and it is precisely this resistance that ensures its security and stability.
The central idea behind consensus in Bitcoin is to create a set of acceptance rules for blocks and transactions, ensuring that all network participants agree on the transaction history. This prevents "double-spending," where the same bitcoin could be used in two simultaneous transactions, something that would compromise trust in the network.
Evolution of Consensus in Bitcoin
Over the years, consensus in Bitcoin has undergone several adaptations, and the way participants agree on changes remains a delicate process. Unlike traditional systems, where changes can be imposed from the top down, Bitcoin operates in a decentralized model where any significant change needs the support of various groups of stakeholders, including miners, developers, users, and large node operators.
Moreover, the update process is extremely cautious, as hasty changes can compromise the network's security. As a result, the philosophy of "don't fix what isn't broken" prevails, with improvements happening incrementally and only after broad consensus among those involved. This model can make progress seem slow but ensures that Bitcoin remains faithful to the principles of security and decentralization.
2. Technical Components of Consensus
Bitcoin's consensus is supported by a set of technical rules that determine what is considered a valid transaction and a valid block on the network. These technical aspects ensure that all nodes—the computers that participate in the Bitcoin network—agree on the current state of the blockchain. Below are the main technical components that form the basis of the consensus.
Validation of Blocks and Transactions
The validation of blocks and transactions is the central point of consensus in Bitcoin. A block is only considered valid if it meets certain criteria, such as maximum size, transaction structure, and the solving of the "Proof of Work" problem. The proof of work, required for a block to be included in the blockchain, is a computational process that ensures the block contains significant computational effort—protecting the network against manipulation attempts.
Transactions, in turn, need to follow specific input and output rules. Each transaction includes cryptographic signatures that prove the ownership of the bitcoins sent, as well as validation scripts that verify if the transaction conditions are met. This validation system is essential for network nodes to autonomously confirm that each transaction follows the rules.
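To make the proof-of-work requirement concrete, here is a minimal sketch of the check every node performs on a block header. It follows the 80-byte header layout and the compact target ("nBits") encoding of the Bitcoin protocol, but it is an illustration only, not the Bitcoin Core implementation.

```python
# Sketch: the essence of the proof-of-work check on a block header.
import hashlib

def bits_to_target(bits: int) -> int:
    """Expand the compact 'nBits' field into the full 256-bit target."""
    exponent = bits >> 24
    mantissa = bits & 0x007FFFFF
    return mantissa * (1 << (8 * (exponent - 3)))

def check_pow(header: bytes) -> bool:
    """Return True if the double-SHA256 of the 80-byte header is at or below its target."""
    assert len(header) == 80
    bits = int.from_bytes(header[72:76], "little")      # nBits field of the header
    block_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(block_hash, "little") <= bits_to_target(bits)
```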
Chain Selection
Another fundamental technical issue for Bitcoin's consensus is chain selection, which becomes especially important in cases where multiple versions of the blockchain coexist, such as after a network split (fork). To decide which chain is the "true" one and should be followed, the network adopts the criterion of the highest accumulated proof of work. In other words, the chain that embeds the greatest accumulated computational effort—not simply the one with the most blocks—is chosen by the network as the official one.
This criterion avoids permanent splits because it encourages all nodes to follow the same main chain, reinforcing consensus.
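The same idea can be expressed in a few lines. The sketch below uses hypothetical per-block targets to show why a shorter chain of harder blocks outweighs a longer chain of easier ones; it mirrors the quantity Bitcoin Core accumulates as "chainwork", but is only an illustration.

```python
# Sketch: tie-breaking between competing chains by accumulated proof of work.
# Each chain is represented, hypothetically, as a list of per-block targets
# (already expanded from their compact nBits form).
def block_work(target: int) -> int:
    # Expected number of hash attempts needed to find a block at this target.
    return (1 << 256) // (target + 1)

def best_chain(chains: list[list[int]]) -> list[int]:
    return max(chains, key=lambda chain: sum(block_work(t) for t in chain))

# A short chain of low-target (high-difficulty) blocks beats a longer chain of easy blocks:
hard = [2**224] * 10      # 10 blocks at a demanding target
easy = [2**240] * 50      # 50 blocks at a much easier target
assert best_chain([hard, easy]) == hard
```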
Soft Forks vs. Hard Forks
In the consensus process, protocol changes can happen in two ways: through soft forks or hard forks. These variations affect not only the protocol update but also the implications for network users:
- Soft Forks: These are changes that are backward compatible. Only nodes that adopt the new update will follow the new rules, but old nodes will still recognize the blocks produced with these rules as valid. This compatibility makes soft forks a safer option for updates, as it minimizes the risk of network division.
- Hard Forks: These are updates that are not backward compatible, requiring all nodes to update to the new version or risk being separated from the main chain. Hard forks can result in the creation of a new coin, as occurred with the split between Bitcoin and Bitcoin Cash in 2017. While hard forks allow for deeper changes, they also bring significant risks of network fragmentation.
These technical components form the base of Bitcoin's security and resilience, allowing the system to remain functional and immutable without losing the necessary flexibility to evolve over time.
3. Stakeholders in Bitcoin's Consensus
Consensus in Bitcoin is not decided centrally. On the contrary, it depends on the interaction between different groups of stakeholders, each with their motivations, interests, and levels of influence. These groups play fundamental roles in how changes are implemented or rejected on the network. Below, we explore the six main stakeholders in Bitcoin's consensus.
1. Economic Nodes
Economic nodes, usually operated by exchanges, custody providers, and large companies that accept Bitcoin, exert significant influence over consensus. Because they handle large volumes of transactions and act as a connection point between the Bitcoin ecosystem and the traditional financial system, these nodes have the power to validate or reject blocks and to define which version of the software to follow in case of a fork.
Their influence is proportional to the volume of transactions they handle, and they can directly affect which chain will be seen as the main one. Their incentive is to maintain the network's stability and security to preserve its functionality and meet regulatory requirements.
2. Investors
Investors, including large institutional funds and individual Bitcoin holders, influence consensus indirectly through their impact on the asset's price. Their buying and selling actions can affect Bitcoin's value, which in turn influences the motivation of miners and other stakeholders to continue investing in the network's security and development.
Some institutional investors have agreements with custodians that may limit their ability to act in network split situations. Thus, the impact of each investor on consensus can vary based on their ownership structure and how quickly they can react to a network change.
3. Media Influencers
Media influencers, including journalists, analysts, and popular personalities on social media, have a powerful role in shaping public opinion about Bitcoin and possible updates. These influencers can help educate the public, promote debates, and bring transparency to the consensus process.
On the other hand, the impact of influencers can be double-edged: while they can clarify complex topics, they can also distort perceptions by amplifying or minimizing change proposals. This makes them a force both of support and resistance to consensus.
4. Miners
Miners are responsible for validating transactions and including blocks in the blockchain. Through computational power (hashrate), they also exert significant influence over consensus decisions. In update processes, miners often signal their support for a proposal, indicating that the new version is safe to use. However, this signaling is not always definitive, and miners can change their position if they deem it necessary.
Their incentive is to maximize returns from block rewards and transaction fees, as well as to maintain the value of investments in their specialized equipment, which are only profitable if the network remains stable.
5. Protocol Developers
Protocol developers, often called "Core Developers," are responsible for writing and maintaining Bitcoin's code. Although they do not have direct power over consensus, they possess an informal veto power since they decide which changes are included in the main client (Bitcoin Core). This group also serves as an important source of technical knowledge, helping guide decisions and inform other stakeholders.
Their incentive lies in the continuous improvement of the network, ensuring security and decentralization. Many developers are funded by grants and sponsorships, but their motivations generally include a strong ideological commitment to Bitcoin's principles.
6. Users and Application Developers
This group includes people who use Bitcoin in their daily transactions and developers who build solutions based on the network, such as wallets, exchanges, and payment platforms. Although their power in consensus is less than that of miners or economic nodes, they play an important role because they are responsible for popularizing Bitcoin's use and expanding the ecosystem.
If application developers decide not to adopt an update, this can affect compatibility and widespread acceptance. Thus, they indirectly influence consensus by deciding which version of the protocol to follow in their applications.
These stakeholders are vital to the consensus process, and each group exerts influence according to their involvement, incentives, and ability to act in situations of change. Understanding the role of each makes it clearer how consensus is formed and why it is so difficult to make significant changes to Bitcoin.
4. Mechanisms for Activating Updates in Bitcoin
For Bitcoin to evolve without compromising security and consensus, different mechanisms for activating updates have been developed over the years. These mechanisms help coordinate changes among network nodes to minimize the risk of fragmentation and ensure that updates are implemented in an orderly manner. Here, we explore some of the main methods used in Bitcoin, their advantages and disadvantages, as well as historical examples of significant updates.
Flag Day
The Flag Day mechanism is one of the simplest forms of activating changes. In it, a specific date or block is determined as the activation moment, and all nodes must be updated by that point. This method does not involve prior signaling; participants simply need to update to the new software version by the established day or block.
- Advantages: Simplicity and predictability are the main benefits of Flag Day, as everyone knows the exact activation date.
- Disadvantages: Inflexibility can be a problem because there is no way to adjust the schedule if a significant part of the network has not updated. This can result in network splits if a significant number of nodes are not ready for the update.
An example of Flag Day was the Pay to Script Hash (P2SH) update in 2012, which required all nodes to adopt the change to avoid compatibility issues.
BIP34 and BIP9
BIP34 introduced a more dynamic process, in which miners increase the version number in block headers to signal the update. When a predetermined percentage of the last blocks is mined with this new version, the update is automatically activated. This model later evolved with BIP9, which allowed multiple updates to be signaled simultaneously through "version bits," each corresponding to a specific change.
- Advantages: Allows the network to activate updates gradually, giving more time for participants to adapt.
- Disadvantages: These methods rely heavily on miner support, which means that if a sufficient number of miners do not signal the update, it can be delayed or not implemented.
BIP9 was used in the activation of SegWit (BIP141) but faced challenges because some miners did not signal their intent to activate, leading to the development of new mechanisms.
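The counting rule behind this kind of signaling is simple to illustrate. The sketch below assumes the commonly cited 2016-block retarget window and 95% threshold; real deployments define their own parameters, and this is not the actual Bitcoin Core state machine.

```python
# Sketch: the counting rule behind BIP9-style version-bits signaling.
WINDOW = 2016
THRESHOLD = 1916  # 95% of 2016

def signals(version: int, bit: int) -> bool:
    # Top three bits must be 001 for version-bits semantics; then check the deployment bit.
    return (version >> 29) == 0b001 and (version >> bit) & 1 == 1

def window_locks_in(block_versions: list[int], bit: int) -> bool:
    assert len(block_versions) == WINDOW
    return sum(signals(v, bit) for v in block_versions) >= THRESHOLD

# Example: a window where 1950 of 2016 blocks signal on bit 1 locks the deployment in.
assert window_locks_in([0x20000002] * 1950 + [0x20000000] * 66, bit=1)
```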
User Activated Soft Forks (UASF) and User Resisted Soft Forks (URSF)
To increase the decision-making power of ordinary users, the concept of User Activated Soft Fork (UASF) was introduced, allowing node operators, not just miners, to determine consensus for a change. In this model, nodes set a date to start rejecting blocks that are not in compliance with the new update, forcing miners to adapt or risk having their blocks rejected by the network.
URSF, in turn, is a model where nodes reject blocks that attempt to adopt a specific update, functioning as resistance against proposed changes.
- Advantages: UASF returns decision-making power to node operators, ensuring that changes do not depend solely on miners.
- Disadvantages: Both UASF and URSF can generate network splits, especially in cases of strong opposition among different stakeholders.
An example of UASF was the activation of SegWit in 2017, where users supported activation independently of miner signaling, which ended up forcing its adoption.
BIP8 (LOT=True)
BIP8 is an evolution of BIP9, designed to prevent miners from indefinitely blocking a change desired by the majority of users and developers. BIP8 allows setting a parameter called "lockinontimeout" (LOT) as true, which means that if the update has not been fully signaled by a certain point, it is automatically activated.
- Advantages: Ensures that changes with broad support among users are not blocked by miners who wish to maintain the status quo.
- Disadvantages: Can lead to network splits if miners or other important stakeholders do not support the update.
Although BIP8 with LOT=True has not yet been used in Bitcoin, it is a proposal that can be applied in future updates if necessary.
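The practical difference made by the lockinontimeout flag can be reduced to the decision taken in the final signaling period. The sketch below is a deliberate simplification of the BIP8 state machine, meant only to show where LOT=True and LOT=False diverge.

```python
# Sketch: simplified view of the last signaling period in a BIP8-style deployment.
def final_period_state(signaling_blocks: int, threshold: int, lot: bool) -> str:
    if signaling_blocks >= threshold:
        return "LOCKED_IN"       # enough miner signaling either way
    if lot:
        return "MUST_SIGNAL"     # LOT=True: activation proceeds; non-signaling blocks
                                 # become invalid in this final period
    return "FAILED"              # LOT=False: the deployment simply expires
```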
These activation mechanisms have been essential for Bitcoin's development, allowing updates that keep the network secure and functional. Each method brings its own advantages and challenges, but all share the goal of preserving consensus and network cohesion.
5. Risks and Considerations in Consensus Updates
Consensus updates in Bitcoin are complex processes that involve not only technical aspects but also political, economic, and social considerations. Due to the network's decentralized nature, each change brings with it a set of risks that need to be carefully assessed. Below, we explore some of the main challenges and future scenarios, as well as the possible impacts on stakeholders.
Network Fragility with Alternative Implementations
One of the main risks associated with consensus updates is the possibility of network fragmentation when there are alternative software implementations. If an update is implemented by a significant group of nodes but rejected by others, a network split (fork) can occur. This creates two competing chains, each with a different version of the transaction history, leading to unpredictable consequences for users and investors.
Such fragmentation weakens Bitcoin because, by dividing hashing power (computing) and coin value, it reduces network security and investor confidence. A notable example of this risk was the fork that gave rise to Bitcoin Cash in 2017 when disagreements over block size resulted in a new chain and a new asset.
Chain Splits and Impact on Stakeholders
Chain splits are a significant risk in update processes, especially in hard forks. During a hard fork, the network is split into two separate chains, each with its own set of rules. This results in the creation of a new coin and leaves users with duplicated assets on both chains. While this may seem advantageous, in the long run, these splits weaken the network and create uncertainties for investors.
Each group of stakeholders reacts differently to a chain split:
- Institutional Investors and ETFs: Face regulatory and compliance challenges because many of these assets are managed under strict regulations. The creation of a new coin requires decisions to be made quickly to avoid potential losses, which may be hampered by regulatory constraints.
- Miners: May be incentivized to shift their computing power to the chain that offers higher profitability, which can weaken one of the networks.
- Economic Nodes: Such as major exchanges and custody providers, have to quickly choose which chain to support, influencing the perceived value of each network.
Such divisions can generate uncertainties and loss of value, especially for institutional investors and those who use Bitcoin as a store of value.
Regulatory Impacts and Institutional Investors
With the growing presence of institutional investors in Bitcoin, consensus changes face new compliance challenges. Bitcoin ETFs, for example, are required to follow strict rules about which assets they can include and how chain split events should be handled. The creation of a new asset or migration to a new chain can complicate these processes, creating pressure for large financial players to quickly choose a chain, affecting the stability of consensus.
Moreover, decisions regarding forks can influence the Bitcoin futures and derivatives market, affecting perception and adoption by new investors. Therefore, the need to avoid splits and maintain cohesion is crucial to attract and preserve the confidence of these investors.
Security Considerations in Soft Forks and Hard Forks
While soft forks are generally preferred in Bitcoin for their backward compatibility, they are not without risks. Soft forks can create different classes of nodes on the network (updated and non-updated), which increases operational complexity and can ultimately weaken consensus cohesion. In a network scenario with fragmentation of node classes, Bitcoin's security can be affected, as some nodes may lose part of the visibility over updated transactions or rules.
In hard forks, the security risk is even more evident because all nodes need to adopt the new update to avoid network division. Experience shows that abrupt changes can create temporary vulnerabilities, in which malicious agents try to exploit the transition to attack the network.
Bounty Claim Risks and Attack Scenarios
Another risk in consensus updates are so-called "bounty claims"—accumulated rewards that can be obtained if an attacker manages to split or deceive a part of the network. In a conflict scenario, a group of miners or nodes could be incentivized to support a new update or create an alternative version of the software to benefit from these rewards.
These risks require stakeholders to carefully assess each update and the potential vulnerabilities it may introduce. The possibility of "bounty claims" adds a layer of complexity to consensus because each interest group may see a financial opportunity in a change that, in the long term, may harm network stability.
The risks discussed above show the complexity of consensus in Bitcoin and the importance of approaching it gradually and deliberately. Updates need to consider not only technical aspects but also economic and social implications, in order to preserve Bitcoin's integrity and maintain trust among stakeholders.
6. Recommendations for the Consensus Process in Bitcoin
To ensure that protocol changes in Bitcoin are implemented safely and with broad support, it is essential that all stakeholders adopt a careful and coordinated approach. Here are strategic recommendations for evaluating, supporting, or rejecting consensus updates, considering the risks and challenges discussed earlier, along with best practices for successful implementation.
1. Careful Evaluation of Proposal Maturity
Stakeholders should rigorously assess the maturity level of a proposal before supporting its implementation. Updates that are still experimental or lack a robust technical foundation can expose the network to unnecessary risks. Ideally, change proposals should go through an extensive testing phase, have security audits, and receive review and feedback from various developers and experts.
2. Extensive Testing in Secure and Compatible Networks
Before an update is activated on the mainnet, it is essential to test it on networks like testnet and signet, and whenever possible, on other compatible networks that offer a safe and controlled environment to identify potential issues. Testing on networks like Litecoin was fundamental for the safe launch of innovations like SegWit and the Lightning Network, allowing functionalities to be validated on a lower-impact network before being implemented on Bitcoin.
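For node operators who want to follow such tests directly, a minimal sketch of running Bitcoin Core against signet is shown below; the flags are standard Bitcoin Core options, and the wallet name is just a placeholder:

```bash
# Start a signet node (separate chain state, safe to run alongside mainnet)
bitcoind -signet -daemon

# Confirm the node is on signet and check sync progress
bitcoin-cli -signet getblockchaininfo

# Create a throwaway wallet and an address to receive coins from a signet faucet
bitcoin-cli -signet createwallet "proposal-testing"
bitcoin-cli -signet getnewaddress
```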
The Liquid Network, developed by Blockstream, also plays an important role as an experimental network for new proposals, such as OP_CAT. By adopting these testing environments, stakeholders can mitigate risks and ensure that the update is reliable and secure before being adopted by the main network.
3. Importance of Stakeholder Engagement
The success of a consensus update strongly depends on the active participation of all stakeholders. This includes economic nodes, miners, protocol developers, investors, and end users. Lack of participation can lead to inadequate decisions or even future network splits, which would compromise Bitcoin's security and stability.
4. Key Questions for Evaluating Consensus Proposals
To assist in decision-making, each group of stakeholders should consider some key questions before supporting a consensus change:
- Does the proposal offer tangible benefits for Bitcoin's security, scalability, or usability?
- Does it maintain backward compatibility or introduce the risk of network split?
- Are the implementation requirements clear and feasible for each group involved?
- Are there clear and aligned incentives for all stakeholder groups to accept the change?
5. Coordination and Timing in Implementations
Timing is crucial. Updates with short activation windows can force a split because not all nodes and miners can update simultaneously. Changes should be planned with ample deadlines to allow all stakeholders to adjust their systems, avoiding surprises that could lead to fragmentation.
Mechanisms like soft forks are generally preferable to hard forks because they allow a smoother transition. Opting for backward-compatible updates when possible facilitates the process and ensures that nodes and miners can adapt without pressure.
6. Continuous Monitoring and Re-evaluation
After an update, it's essential to monitor the network to identify problems or side effects. This continuous process helps ensure cohesion and trust among all participants, keeping Bitcoin as a secure and robust network.
These recommendations, including the use of secure networks for extensive testing, promote a collaborative and secure environment for Bitcoin's consensus process. By adopting a deliberate and strategic approach, stakeholders can preserve Bitcoin's value as a decentralized and censorship-resistant network.
7. Conclusion
Consensus in Bitcoin is more than a set of rules; it's the foundation that sustains the network as a decentralized, secure, and reliable system. Unlike centralized systems, where decisions can be made quickly, Bitcoin requires a much more deliberate and cooperative approach, where the interests of miners, economic nodes, developers, investors, and users must be considered and harmonized. This governance model may seem slow, but it is fundamental to preserving the resilience and trust that make Bitcoin a global store of value and censorship-resistant.
Consensus updates in Bitcoin must balance the need for innovation with the preservation of the network's core principles. The development process of a proposal needs to be detailed and rigorous, going through several testing stages, such as in testnet, signet, and compatible networks like Litecoin and Liquid Network. These networks offer safe environments for proposals to be analyzed and improved before being launched on the main network.
Each proposed change must be carefully evaluated regarding its maturity, impact, backward compatibility, and support among stakeholders. The recommended key questions and appropriate timing are critical to ensure that an update is adopted without compromising network cohesion. It's also essential that the implementation process is continuously monitored and re-evaluated, allowing adjustments as necessary and minimizing the risk of instability.
By following these guidelines, Bitcoin's stakeholders can ensure that the network continues to evolve safely and robustly, maintaining user trust and further solidifying its role as one of the most resilient and innovative digital assets in the world. Ultimately, consensus in Bitcoin is not just a technical issue but a reflection of its community and the values it represents: security, decentralization, and resilience.
8. Links
Whitepaper: https://github.com/bitcoin-cap/bcap
Youtube (pt-br): https://www.youtube.com/watch?v=rARycAibl9o&list=PL-qnhF0qlSPkfhorqsREuIu4UTbF0h4zb
-
@ d34e832d:383f78d0
2025-04-26 04:24:13A Secure, Compact, and Cost-Effective Offline Key Management System
1. Idea
This idea presents a cryptographic key generation appliance built on the Nookbox G9, a compact 1U mini NAS solution. Designed to be a dedicated air-gapped or offline-first device, this system enables the secure generation and handling of RSA, ECDSA, and Ed25519 key pairs. By leveraging the Nookbox G9's small form factor, NVMe storage, and Linux compatibility, we outline a practical method for individuals and organizations to deploy secure, reproducible, and auditable cryptographic processes without relying on cloud or always-connected environments.
2. Minimization Of Trust
In an era where cryptographic operations underpin everything from Bitcoin transactions to secure messaging, generating keys in a trust-minimized environment is critical. Cloud-based solutions or general-purpose desktops expose key material to increased risk. This project defines a dedicated hardware appliance for cryptographic key generation using Free and Open Source Software (FOSS) and a tightly scoped threat model.
3. Hardware Overview: Nookbox G9
| Feature           | Specification                                      |
|-------------------|----------------------------------------------------|
| Form Factor       | 1U Mini NAS                                        |
| Storage Capacity  | Up to 8TB via 4 × 2TB M.2 NVMe SSDs                |
| PCIe Interface    | Each M.2 slot uses PCIe Gen 3x2                    |
| Networking        | Dual 2.5 Gigabit Ethernet                          |
| Cooling           | Passive cooling (requires modification for load)   |
| Operating System  | Windows 11 pre-installed; compatible with Linux    |
This hardware is chosen for its compact size, multiple SSD support, and efficient power consumption (~11W idle on Linux). It fits easily into a secure rack cabinet and can run entirely offline.
4. System Configuration
4.1 OS & Software Stack
We recommend wiping Windows and installing:
- OS: Ubuntu 24.10 LTS or Debian 12
- Key Tools:
  - `gnupg` (for GPG, RSA, and ECC)
  - `age` or `rage` (for modern encryption)
  - `openssl` (general-purpose cryptographic tool)
  - `ssh-keygen` (for Ed25519 or RSA SSH keys)
  - `vault` (optional: HashiCorp Vault for managing key secrets)
  - `pwgen` / `diceware` (for secure passphrase generation)
4.2 Storage Layout
- Drive 1 (System): Ubuntu 24.10 with encrypted LUKS partition
- Drive 2 (Key Store): Encrypted Veracrypt volume for keys and secrets
- Drive 3 (Backup): Offline encrypted backup (mirrored or rotated)
- Drive 4 (Logs & Audit): System logs, GPG public keyring, transparency records
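As a minimal sketch of how the key-store drive above can be prepared, the commands below use LUKS via `cryptsetup`; the device name `/dev/nvme1n1` and the mount point are assumptions that must match your actual layout:

```bash
# One-time: encrypt the dedicated key-store drive (this destroys existing data)
sudo cryptsetup luksFormat /dev/nvme1n1

# Unlock the volume and map it as /dev/mapper/keystore
sudo cryptsetup open /dev/nvme1n1 keystore

# One-time: create a filesystem, then mount it for use
sudo mkfs.ext4 /dev/mapper/keystore
sudo mkdir -p /mnt/keystore
sudo mount /dev/mapper/keystore /mnt/keystore

# When work is done, unmount and close so secrets never sit on an unlocked volume
sudo umount /mnt/keystore
sudo cryptsetup close keystore
```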
5. Security Principles
- Air-Gapping: Device operates disconnected from the internet during key generation.
- FOSS Only: All software used is open-source and auditable.
- No TPM/Closed Firmware Dependencies: BIOS settings disable Intel ME, TPM, and Secure Boot.
- Tamper Evidence: Physical access logs and optional USB kill switch setup.
- Transparency: Generation scripts stored on device, along with SHA256 of all outputs.
6. Workflow: Generating Keypairs
Example: Generating an Ed25519 GPG Key
```bash
gpg --full-generate-key
# Choose ECC > Curve: Ed25519
# Set expiration, user ID, passphrase
```
Backup public and private keys:
```bash
gpg --armor --export-secret-keys [keyID] > private.asc
gpg --armor --export [keyID] > public.asc
sha256sum *.asc > hashes.txt
```
Store on encrypted volume and create a printed copy (QR or hex dump) for physical backup.
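One way to produce that physical copy is sketched below, assuming the `paperkey` and `qrencode` packages are installed; `[keyID]` is the same placeholder as above:

```bash
# Reduce the secret key to its minimal secret material, formatted for printing
gpg --export-secret-keys [keyID] | paperkey --output private-paper.txt

# QR-encode the (much smaller) public key; large secret keys may exceed QR capacity
qrencode -o public-key.png < public.asc

# Re-verify the recorded hashes before anything goes into the archive
sha256sum -c hashes.txt
```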
7. Performance Notes
While limited to PCIe Gen 3x2 (approx. 1.6 GB/s per slot), the speed is more than sufficient for key generation workloads. The bottleneck is not IO-bound but entropy-limited and CPU-bound. In benchmarks:
- RSA 4096 generation: ~2–3 seconds
- Ed25519 generation: <1 second
- ZFS RAID-Z writes (if used): ~250MB/s due to 2.5Gbps NIC ceiling
Thermal throttling may occur under extended loads without cooling mods. A third-party aluminum heatsink resolves this.
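These rough numbers are easy to sanity-check on your own unit with tools already in the stack; timings will vary with CPU and available entropy:

```bash
# Built-in OpenSSL benchmark for RSA key sizes, including 4096-bit
openssl speed rsa4096

# Time an Ed25519 keypair generation with ssh-keygen
time ssh-keygen -t ed25519 -N '' -f /tmp/bench_ed25519 -q
rm -f /tmp/bench_ed25519 /tmp/bench_ed25519.pub
```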
8. Use Cases
- Bitcoin Cold Storage (xprv/xpub, seed phrases)
- SSH Key Infrastructure (Ed25519 key signing for orgs)
- PGP Trust Anchor (for a Web of Trust or private PKI)
- Certificate Authority (offline root key handling)
- Digital Notary Service (hash-based time-stamping)
9. Recommendations & Improvements
| Area       | Improvement                                              |
|------------|----------------------------------------------------------|
| Cooling    | Add copper heatsinks + airflow mod                       |
| Power      | Use UPS + power filter for stability                     |
| Boot       | Use full-disk encryption with Yubikey unlock             |
| Expansion  | Use one SSD for keybase-style append-only logs           |
| Chassis    | Install into a tamper-evident case with RFID tracking    |
10. Consider
The Nookbox G9 offers a compact, energy-efficient platform for creating a secure cryptographic key generation appliance. With minor thermal enhancements and a strict FOSS policy, it becomes a reliable workstation for cryptographers, developers, and Bitcoin self-custodians. Its support for multiple encrypted SSDs, air-gapped operation, and Linux flexibility make it a modern alternative to enterprise HSMs—without the cost or vendor lock-in.
A. Key Software Versions
GnuPG 2.4.x
OpenSSL 3.x
Ubuntu 24.10
Veracrypt 1.26+
B. System Commands (Setup)
```bash
sudo apt install gnupg2 openssl age veracrypt
sudo cryptsetup luksFormat /dev/nvme1n1
```
C. Resources
The Nookbox G9 is a compact, energy-efficient platform for a secure cryptographic key generation appliance. With modest thermal enhancements and a strict Free and Open Source Software (FOSS) policy, it serves as a dependable workstation for cryptographers, software developers, and Bitcoin self-custodians. Support for multiple encrypted Solid State Drives (SSDs) strengthens data security, air-gapped operation hardens the system against remote threats, and the flexibility of Linux provides an adaptable environment for cryptographic work, making it a practical modern alternative to enterprise Hardware Security Modules (HSMs) without their prohibitive costs and vendor lock-in.
Further Tools
🔧 Recommended SSDs and Tools (Amazon)
- Kingston A400 240GB SSD – SATA 3 2.5"
  https://a.co/d/41esjYL
- Samsung 970 EVO Plus 2TB NVMe M.2 SSD – Gen 3
  https://a.co/d/6EMVAN1
- Crucial P5 Plus 1TB PCIe Gen4 NVMe M.2 SSD
  https://a.co/d/hQx50Cq
- WD Blue SN570 1TB NVMe SSD – PCIe Gen 3
  https://a.co/d/j2zSDCJ
- Sabrent Rocket Q 2TB NVMe SSD – QLC NAND
  https://a.co/d/325Og2K
- Thermalright M.2 SSD Heatsink Kit
  https://a.co/d/0IYH3nK
- ORICO M.2 NVMe SSD Enclosure – USB 3.2 Gen2
  https://a.co/d/aEwQmih
Product Links (Amazon)
- Thermal Heatsink for M.2 SSDs (Must-have for stress and cooling)
  https://a.co/d/43B1F3t
- Nookbox G9 – Mini NAS
  https://a.co/d/3dswvGZ
- Alternative 1: Possibly related cooling or SSD gear
  https://a.co/d/c0Eodm3
- Alternative 2: Possibly related NAS accessories or SSDs
  https://a.co/d/9gWeqDr
Benchmark Results (Geekbench)
- GMKtec G9 Geekbench CPU Score #1
  https://browser.geekbench.com/v6/cpu/11471182
- GMKtec G9 Geekbench CPU Score #2
  https://browser.geekbench.com/v6/cpu/11470130
- GMKtec Geekbench User Profile
  https://browser.geekbench.com/user/446940
🛠️ DIY & Fix Resource
- How-Fixit – PC Repair Guides and Tutorials
https://www.how-fixit.com/
-
@ eac63075:b4988b48
2024-10-26 22:14:19The future of physical money is at stake, and the discussion about DREX, the new digital currency planned by the Central Bank of Brazil, is gaining momentum. In a candid and intense conversation, Federal Deputy Julia Zanatta (PL/SC) discussed the challenges and risks of this digital transition, also addressing her Bill No. 3,341/2024, which aims to prevent the extinction of physical currency. This bill emerges as a direct response to legislative initiatives seeking to replace physical money with digital alternatives, limiting citizens' options and potentially compromising individual freedom. Let's delve into the main points of this conversation.
https://www.fountain.fm/episode/i5YGJ9Ors3PkqAIMvNQ0
What is a CBDC?
Before discussing the specifics of DREX, it’s important to understand what a CBDC (Central Bank Digital Currency) is. CBDCs are digital currencies issued by central banks, similar to a digital version of physical money. Unlike cryptocurrencies such as Bitcoin, which operate in a decentralized manner, CBDCs are centralized and regulated by the government. In other words, they are digital currencies created and controlled by the Central Bank, intended to replace physical currency.
A prominent feature of CBDCs is their programmability. This means that the government can theoretically set rules about how, where, and for what this currency can be used. This aspect enables a level of control over citizens' finances that is impossible with physical money. By programming the currency, the government could limit transactions by setting geographical or usage restrictions. In practice, money within a CBDC could be restricted to specific spending or authorized for use in a defined geographical area.
In countries like China, where citizen actions and attitudes are also monitored, a person considered to have a "low score" due to a moral or ideological violation may have their transactions limited to essential purchases, restricting their digital currency use to non-essential activities. This financial control is strengthened because, unlike physical money, digital currency cannot be exchanged anonymously.
Practical Example: The Case of DREX During the Pandemic
To illustrate how DREX could be used, an example was given by Eric Altafim, director of Banco Itaú. He suggested that, if DREX had existed during the COVID-19 pandemic, the government could have restricted the currency’s use to a 5-kilometer radius around a person’s residence, limiting their economic mobility. Another proposed use by the executive related to the Bolsa Família welfare program: the government could set up programming that only allows this benefit to be used exclusively for food purchases. Although these examples are presented as control measures for safety or organization, they demonstrate how much a CBDC could restrict citizens' freedom of choice.
To illustrate the potential for state control through a Central Bank Digital Currency (CBDC), such as DREX, it is helpful to look at the example of China. In China, the implementation of a CBDC coincides with the country’s Social Credit System, a governmental surveillance tool that assesses citizens' and companies' behavior. Together, these technologies allow the Chinese government to monitor, reward, and, above all, punish behavior deemed inappropriate or threatening to the government.
How Does China's Social Credit System Work?
Implemented in 2014, China's Social Credit System assigns every citizen and company a "score" based on various factors, including financial behavior, criminal record, social interactions, and even online activities. This score determines the benefits or penalties each individual receives and can affect everything from public transport access to obtaining loans and enrolling in elite schools for their children. Citizens with low scores may face various sanctions, including travel restrictions, fines, and difficulty in securing loans.
With the adoption of the CBDC — or “digital yuan” — the Chinese government now has a new tool to closely monitor citizens' financial transactions, facilitating the application of Social Credit System penalties. China’s CBDC is a programmable digital currency, which means that the government can restrict how, when, and where the money can be spent. Through this level of control, digital currency becomes a powerful mechanism for influencing citizens' behavior.
Imagine, for instance, a citizen who repeatedly posts critical remarks about the government on social media or participates in protests. If the Social Credit System assigns this citizen a low score, the Chinese government could, through the CBDC, restrict their money usage in certain areas or sectors. For example, they could be prevented from buying tickets to travel to other regions, prohibited from purchasing certain consumer goods, or even restricted to making transactions only at stores near their home.
Another example of how the government can use the CBDC to enforce the Social Credit System is by monitoring purchases of products such as alcohol or luxury items. If a citizen uses the CBDC to spend more than the government deems reasonable on such products, this could negatively impact their social score, resulting in additional penalties such as future purchase restrictions or a lowered rating that impacts their personal and professional lives.
In China, this kind of control has already been demonstrated in several cases. Citizens added to Social Credit System “blacklists” have seen their spending and investment capacity severely limited. The combination of digital currency and social scores thus creates a sophisticated and invasive surveillance system, through which the Chinese government controls important aspects of citizens’ financial lives and individual freedoms.
Deputy Julia Zanatta views these examples with great concern. She argues that if the state has full control over digital money, citizens will be exposed to a level of economic control and surveillance never seen before. In a democracy, this control poses a risk, but in an authoritarian regime, it could be used as a powerful tool of repression.
DREX and Bill No. 3,341/2024
Julia Zanatta became aware of a bill by a Workers' Party (PT) deputy (Bill 4068/2020 by Deputy Reginaldo Lopes - PT/MG) that proposes the extinction of physical money within five years, aiming for a complete transition to DREX, the digital currency developed by the Central Bank of Brazil. Concerned about the impact of this measure, Julia drafted her bill, PL No. 3,341/2024, which prohibits the elimination of physical money, ensuring citizens the right to choose physical currency.
“The more I read about DREX, the less I want its implementation,” says the deputy. DREX is a Central Bank Digital Currency (CBDC), similar to other state digital currencies worldwide, but which, according to Julia, carries extreme control risks. She points out that with DREX, the State could closely monitor each citizen’s transactions, eliminating anonymity and potentially restricting freedom of choice. This control would lie in the hands of the Central Bank, which could, in a crisis or government change, “freeze balances or even delete funds directly from user accounts.”
Risks and Individual Freedom
Julia raises concerns about potential abuses of power that complete digitalization could allow. In a democracy, state control over personal finances raises serious questions, and EddieOz warns of an even more problematic future. “Today we are in a democracy, but tomorrow, with a government transition, we don't know if this kind of power will be used properly or abused,” he states. In other words, DREX gives the State the ability to restrict or condition the use of money, opening the door to unprecedented financial surveillance.
EddieOz cites Nigeria as an example, where a CBDC was implemented, and the government imposed severe restrictions on the use of physical money to encourage the use of digital currency, leading to protests and clashes in the country. In practice, the poorest and unbanked — those without regular access to banking services — were harshly affected, as without physical money, many cannot conduct basic transactions. Julia highlights that in Brazil, this situation would be even more severe, given the large number of unbanked individuals and the extent of rural areas where access to technology is limited.
The Relationship Between DREX and Pix
The digital transition has already begun with Pix, which revolutionized instant transfers and payments in Brazil. However, Julia points out that Pix, though popular, is a citizen’s choice, while DREX tends to eliminate that choice. The deputy expresses concern about new rules suggested for Pix, such as daily transaction limits of a thousand reais, justified as anti-fraud measures but which, in her view, represent additional control and a profit opportunity for banks. “How many more rules will banks create to profit from us?” asks Julia, noting that DREX could further enhance control over personal finances.
International Precedents and Resistance to CBDC
The deputy also cites examples from other countries resisting the idea of a centralized digital currency. In the United States, states like New Hampshire have passed laws to prevent the advance of CBDCs, and leaders such as Donald Trump have opposed creating a national digital currency. Trump, addressing the topic, uses a justification similar to Julia’s: in a digitalized system, “with one click, your money could disappear.” She agrees with the warning, emphasizing the control risk that a CBDC represents, especially for countries with disadvantaged populations.
Besides the United States, Canada, Colombia, and Australia have also suspended studies on digital currencies, citing the need for further discussions on population impacts. However, in Brazil, the debate on DREX is still limited, with few parliamentarians and political leaders openly discussing the topic. According to Julia, only she and one or two deputies are truly trying to bring this discussion to the Chamber, making DREX’s advance even more concerning.
Bill No. 3,341/2024 and Popular Pressure
For Julia, her bill is a first step. Although she acknowledges that ideally, it would prevent DREX's implementation entirely, PL 3341/2024 is a measure to ensure citizens' choice to use physical money, preserving a form of individual freedom. “If the future means control, I prefer to live in the past,” Julia asserts, reinforcing that the fight for freedom is at the heart of her bill.
However, the deputy emphasizes that none of this will be possible without popular mobilization. According to her, popular pressure is crucial for other deputies to take notice and support PL 3341. “I am only one deputy, and we need the public’s support to raise the project’s visibility,” she explains, encouraging the public to press other parliamentarians and ask them to “pay attention to PL 3341 and the project that prohibits the end of physical money.” The deputy believes that with a strong awareness and pressure movement, it is possible to advance the debate and ensure Brazilians’ financial freedom.
What’s at Stake?
Julia Zanatta leaves no doubt: DREX represents a profound shift in how money will be used and controlled in Brazil. More than a simple modernization of the financial system, the Central Bank’s CBDC sets precedents for an unprecedented level of citizen surveillance and control in the country. For the deputy, this transition needs to be debated broadly and transparently, and it’s up to the Brazilian people to defend their rights and demand that the National Congress discuss these changes responsibly.
The deputy also emphasizes that, regardless of political or partisan views, this issue affects all Brazilians. “This agenda is something that will affect everyone. We need to be united to ensure people understand the gravity of what could happen.” Julia believes that by sharing information and generating open debate, it is possible to prevent Brazil from following the path of countries that have already implemented a digital currency in an authoritarian way.
A Call to Action
The future of physical money in Brazil is at risk. For those who share Deputy Julia Zanatta’s concerns, the time to act is now. Mobilize, get informed, and press your representatives. PL 3341/2024 is an opportunity to ensure that Brazilian citizens have a choice in how to use their money, without excessive state interference or surveillance.
In the end, as the deputy puts it, the central issue is freedom. “My fear is that this project will pass, and people won’t even understand what is happening.” Therefore, may every citizen at least have the chance to understand what’s at stake and make their voice heard in defense of a Brazil where individual freedom and privacy are respected values.
-
@ d34e832d:383f78d0
2025-04-25 23:39:07First Contact – A Film History Breakdown
🎥 Movie: Contact
📅 Year Released: 1997
🎞️ Director: Robert Zemeckis
🕰️ Scene Timestamp: ~00:35:00
In this pivotal moment, Dr. Ellie Arroway (Jodie Foster), working at the VLA (Very Large Array) in New Mexico, detects a powerful and unusual signal emanating from the star system Vega, over 25 light-years away. It starts with rhythmic pulses—prime numbers—and escalates into layers of encoded information. The calm night shatters into focused chaos as the team realizes they might be witnessing the first confirmed evidence of extraterrestrial intelligence.
🎥 Camera Work:
Zemeckis uses slow zooms, wide shots of the VLA dishes moving in synchrony, and mid-shots on Ellie as she listens with growing awe and panic. The kinetic handheld camera inside the lab mirrors the rising tension.

💡 Lighting:
Low-key, naturalistic nighttime lighting dominates the outdoor shots, enhancing the eerie isolation of the array. Indoors, practical lab lighting creates a realistic, clinical setting.

✂️ Editing:
The pacing builds through quick intercuts between the signal readouts, Ellie’s expressions, and the reactions of her team. This accelerates tension while maintaining clarity.

🔊 Sound:
The rhythmic signal becomes the scene’s pulse. We begin with ambient night silence, then transition to the raw audio of the alien transmission. It’s diegetic (heard by the characters), and as it builds, a subtle score underscores the awe and urgency. Every beep feels weighty.
Released in 1997, Contact emerged during a period of growing public interest in both SETI (Search for Extraterrestrial Intelligence) and skepticism about science in the post-Cold War world. It was also the era of X-Files and the Mars Pathfinder mission, where space and the unknown dominated media.
The scene reflects 1990s optimism about technology and the belief that answers to humanity’s biggest questions might lie beyond Earth—balanced against the bureaucratic red tape and political pressures that real scientists face.
- Classic procedural sci-fi like 2001: A Space Odyssey and Close Encounters of the Third Kind.
- Real-world SETI protocols and the actual scientists Carl Sagan consulted with.
- The radio broadcast scene reflects Sagan’s own passion for communication and cosmic connectedness.
This scene set a new benchmark for depicting science authentically in fiction. Many real-world SETI scientists cite Contact as an accurate portrayal of their field. It also influenced later films like Arrival and Interstellar, which similarly blend emotion with science.
The signal is more than data—it’s a modern miracle. It represents Ellie’s faith in science, the power of patience, and humanity's yearning to not be alone.
The use of prime numbers symbolizes universal language—mathematics as a bridge between species. The scene’s pacing reflects the clash between logic and emotion, science and wonder.
The signal itself acts as a metaphor for belief: you can't "see" the sender, but you believe they’re out there. It’s the crux of the entire movie’s science vs. faith dichotomy.
This scene hits hard because it captures pure awe—the mix of fear, wonder, and purpose when faced with the unknown. Watching Ellie realize she's not alone mirrors how we all feel when our faith (in science, in hope, in truth) is rewarded.
For filmmakers and students, this scene is a masterclass in procedural suspense, realistic portrayal of science, and using audiovisual cues to build tension without needing action or violence.
It reminds us that the greatest cinematic moments don’t always come from spectacle, but from stillness, sound, and a scientist whispering: “We got something.”
-
@ bbb5dda0:f09e2747
2025-04-29 13:46:37GitHub Actions (CI/CD) over Nostr
I spent quite a bit of time on getting Nostr-based GitHub actions working. I have a basic runner implementation now, which I've reworked quite a bit while working with @dan on getting the front-end of it into gitworkshop.dev. We found that the nature of these jobs doesn't really lend itself to fit within the NIP-90 DVM spec.
What we have now:
- A dvm-cicd-runner that
  - Advertises itself using NIP-89 announcements.
  - Takes a DVM request with:
    - repository
    - branch/ref
    - path to workflow file (`.yml`)
    - job timeout (max duration)
    - 🥜 Cashu prepayment for the job timeout (to be refunded)
  - Pulls the repository and executes the provided workflow file
  - Sends logs in batches as partial job results
  - Publishes job results and gets displayed in gitworkshop
- Gitworkshop.dev (all nostr:npub15qydau2hjma6ngxkl2cyar74wzyjshvl65za5k5rl69264ar2exs5cyejr work) UI that:
  - Shows available workflow runners.
  - Instructing + paying runner to execute workflow file
  - Displaying job status, live updating with the latest logs / autoscroll, all the stuff you'd expect
  - Neatly displaying past jobs for the current repository

TODO'S + Ideas/vision
- TODO: refunding the unused minutes (job timeout - processing time) to the requester
- TODO: create seperate kinds/nip for worflow execution over nostr
- Create separate kinds for streaming arbitrary text data over nostr (line by line logs)
- automated git watchers for projects to kick of jobs
- Separate out workflow management stuff from gitworkshop.dev. A micro-app might serve better to manage runners for git projects etc., taking pressure off gitworkshop.dev to do it all.
- Perhaps support just running .yaml files, without the requirement to have it in a git repo. Could just be a .yaml file on blossom.
TollGate
I spent most of my time working on TollGate. There's been a lot of back and forth to the drawing board to narrow down what the TollGate protocol looks like. I helped define some concepts on implementing a tollgate which we could use as language to discuss the different components that are part of a tollgate implementation. It helped us narrow down what was implementation and what is part of the protocol.
Current state of the project
- We have a website displaying the project: TollGate.me
- Worked on a basic android app for auto payments, validating we can auto-buy from tollgates by our phones
- Presented TollGate at @Sats 'n Facts
- There's a protocol draft, presented at SEC-04
- We've done workshops, people were able to turn an OpenWRT router into a TollGate
- Building and releasing TollGate as a singular OpenWRT package, installable on any compatible architecture
- Building and releasing TollGate OS v0.0.1 (prebuilt OpenWRT image), targeting a few specific routers
- First tollgate deployed in the wild!! (At a restaurant in Funchal, Madeira)
- Other developers started to make their own adjacent implementations, which decentralizes the protocol already
What's next:
- We're gathering useful real user feedback to be incorporated in OS v0.0.2 soon
- Refine the protocol further
- Showing TollGate at various conferences in Europe throughout the summer
- Keep building the community, it's growing fast
Epoxy (Nostr based Addressing)
Although I've pivoted towards focusing on TollGate, I worked out an implementation of my NIP-(1)37 proposal. During SEC-04 I worked out this browser plugin to demonstrate one way to make websites resistant to rugpulls.
It works by looking for a `meta` tag in the page's `head`:

```html
<meta name="nostr-pubkey" relays="relay.site.com,other.relay.com">[hexPubkey]</meta>
```

When we've never recorded a pubkey for this domain, we save it. This pubkey now serves as the owner of the website. It looks for a kind `11111` event of that pubkey. It should list the current domain as one of its domains. If not, it shows a warning.

The key concept is that if we visit this website again and one of these scenarios is true:
- There is no longer a `meta` tag
- There's another pubkey in the `meta` tag
- The pubkey is still on the webpage, but the `11111` no longer lists this domain

Then we consider this domain as RUGPULLED and the user gets an error, suggesting to navigate to another domain listed by this `pubkey`. I'd like it to perhaps auto-redirect to another domain listed by the owner; this is especially useful for frequently rugged domains.

This extension does try to solve a bootstrapping problem. We need to establish the website's pubkey at some point. We have to start somewhere, which is why the first load is considered as the 'real' one, since we have no way of knowing for sure.
Other
🥜/⚡️ Receipt.Cash - Social Receipt sharing app
During SEC I worked on scratching an itch that has been lingering in my mind since SEC-03 already. And now that vibecoding is a thing it wasn't this huge undertaking anymore to handle the front-end stuff (which i suck at).
The usage scenario is a bunch of bitcoiners at a restaurant: we get the bill and want to split it amongst each other. One person can pay the bill, then:
- Payer photographs receipt
- Payer adds Cashu Payment request
- Payer sets dev split %
- App turns the receipt + request into an (encrypted) nostr event
- The payer shares the event with QR or Share Menu

The friends scan the QR:
- Receipt is loaded and displayed
- Friend selects items they ordered
- Friend hits pay button (⚡️Lightning or 🥜Cashu) and pays
- Payment gets sent to Payer's cashu wallet
- Dev split set by Payer goes to dev address.

Some features:
- Change LLM model that processes the receipt to extract data
- Proofs storage + recovery (if anything fails during processing)

Todo's:
- Letting payer configure LNURL for payouts
- Letting payer edit Receipt before sharing
- Fix: live updates on settled items
The repo: receipt-cash
-
@ d34e832d:383f78d0
2025-04-25 23:20:48As computing needs evolve toward speed, reliability, and efficiency, understanding the landscape of storage technologies becomes crucial for system builders, IT professionals, and performance enthusiasts. This idea compares traditional Hard Disk Drives (HDDs) with various Solid-State Drive (SSD) technologies including SATA SSDs, mSATA, M.2 SATA, and M.2 NVMe. It explores differences in form factors, interfaces, memory types, and generational performance to empower informed decisions on selecting optimal storage.
1. Storage Device Overview
1.1 HDDs – Hard Disk Drives
- Mechanism: Mechanical platters + spinning disk.
- Speed: ~80–160 MB/s.
- Cost: Low cost per GB.
- Durability: Susceptible to shock; moving parts prone to wear.
- Use Case: Mass storage, backups, archival.
1.2 SSDs – Solid State Drives
- Mechanism: Flash memory (NAND-based); no moving parts.
- Speed: SATA SSDs (~550 MB/s), NVMe SSDs (>7,000 MB/s).
- Durability: High resistance to shock and temperature.
- Use Case: Operating systems, apps, high-speed data transfer.
2. Form Factors
| Form Factor      | Dimensions                                          | Common Usage                                |
|------------------|-----------------------------------------------------|---------------------------------------------|
| 2.5-inch         | 100mm x 69.85mm x 7mm                               | Laptops, desktops (SATA interface)          |
| 3.5-inch         | 146mm x 101.6mm x 26mm                              | Desktops/servers (HDD only)                 |
| mSATA            | 50.8mm x 29.85mm                                    | Legacy ultrabooks, embedded systems         |
| M.2              | 22mm wide, lengths vary (2242, 2260, 2280, 22110)   | Modern laptops, desktops, NUCs              |
Note: mSATA is being phased out in favor of the more versatile M.2 standard.
3. Interfaces & Protocols
3.1 SATA (Serial ATA)
- Max Speed: ~550 MB/s (SATA III).
- Latency: Higher.
- Protocol: AHCI.
- Compatibility: Broad support, backward compatible.
3.2 NVMe (Non-Volatile Memory Express)
- Max Speed:
- Gen 3: ~3,500 MB/s
- Gen 4: ~7,000 MB/s
- Gen 5: ~14,000 MB/s
- Latency: Very low.
- Protocol: NVMe (optimized for NAND flash).
- Interface: PCIe lanes (usually via M.2 slot).
NVMe significantly outperforms SATA due to reduced overhead and direct PCIe access.
4. Key Slot & Compatibility (M.2 Drives)
| Drive Type       | Key            | Interface     | Typical Use             |
|------------------|----------------|---------------|-------------------------|
| M.2 SATA         | B+M key        | SATA          | Budget laptops/desktops |
| M.2 NVMe (PCIe)  | M key only     | PCIe Gen 3–5  | Performance PCs/gaming  |
⚠️ Important: Not all M.2 slots support NVMe. Check motherboard specs for PCIe compatibility.
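On Linux you can verify how an installed M.2 drive is actually attached; a minimal sketch (the `nvme` command comes from the `nvme-cli` package):

```bash
# TRAN shows the transport (sata vs nvme); ROTA=0 means solid state
lsblk -d -o NAME,TRAN,ROTA,SIZE,MODEL

# Lists NVMe namespaces only; an M.2 SATA drive will not appear here
sudo nvme list
```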
5. SSD NAND Memory Types
| Type    | Bits/Cell | Speed     | Endurance | Cost | Use Case                    |
|---------|-----------|-----------|-----------|------|-----------------------------|
| SLC     | 1         | ⭐⭐⭐⭐  | ⭐⭐⭐⭐  | $$$$ | Enterprise caching          |
| MLC     | 2         | ⭐⭐⭐    | ⭐⭐⭐    | $$$  | Pro-grade systems           |
| TLC     | 3         | ⭐⭐      | ⭐⭐      | $$   | Consumer, gaming            |
| QLC     | 4         | ⭐        | ⭐        | $    | Budget SSDs, media storage  |
6. 3D NAND / V-NAND Technology
- Traditional NAND: Planar (flat) design.
- 3D NAND: Stacks cells vertically—more density, less space.
- Benefits:
- Greater capacity
- Better power efficiency
- Improved lifespan
Samsung’s V-NAND is a branded 3D NAND variant known for high endurance and stability.
7. Performance & Generational Comparison
| PCIe Gen | Max Speed     | Use Case                          |
|----------|---------------|-----------------------------------|
| Gen 3    | ~3,500 MB/s   | Mainstream laptops/desktops       |
| Gen 4    | ~7,000 MB/s   | Gaming, prosumer, light servers   |
| Gen 5    | ~14,000 MB/s  | AI workloads, enterprise          |
Drives are backward compatible, but will operate at the host’s maximum supported speed.
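To see which link a drive actually negotiated (rather than what the box advertises), Linux exposes the live PCIe speed and width through sysfs; the `nvme0` name below assumes the drive enumerated as the first NVMe controller:

```bash
# PCIe 3.0 = 8 GT/s, PCIe 4.0 = 16 GT/s, PCIe 5.0 = 32 GT/s per lane
cat /sys/class/nvme/nvme0/device/current_link_speed
cat /sys/class/nvme/nvme0/device/current_link_width

# Same information via lspci (check the LnkSta line)
sudo lspci -vv -s "$(basename "$(readlink /sys/class/nvme/nvme0/device)")" | grep LnkSta
```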
8. Thermal Management
- NVMe SSDs generate heat—especially Gen 4/5.
- Heatsinks and thermal pads are vital for:
- Sustained performance (prevent throttling)
- Longer lifespan
- Recommended to leave 10–20% free space for optimal SSD wear leveling and garbage collection.
9. HDD vs SSD: Summary
| Aspect      | HDD                 | SSD                          |
|-------------|---------------------|------------------------------|
| Speed       | 80–160 MB/s         | 550 MB/s – 14,000 MB/s       |
| Durability  | Low (mechanical)    | High (no moving parts)       |
| Lifespan    | Moderate            | High (depends on NAND type)  |
| Cost        | Lower per GB        | Higher per GB                |
| Noise       | Audible             | Silent                       |
10. Brand Recommendations
| Brand            | Strength                                          |
|------------------|---------------------------------------------------|
| Samsung          | Leading in performance (980 Pro, 990 Pro)         |
| Western Digital  | Reliable Gen 3/4/5 drives (SN770, SN850X)         |
| Crucial          | Budget-friendly, solid TLC drives (P3, P5 Plus)   |
| Kingston         | Value-oriented SSDs (A2000, NV2)                  |
11. How to Choose the Right SSD
- Check your device slot: Is it M.2 B+M, M-key, or SATA-only?
- Interface compatibility: Confirm if the M.2 slot supports NVMe or only SATA.
- Match PCIe Gen: Use Gen 3/4/5 based on CPU/motherboard lanes.
- Pick NAND type: TLC for best balance of speed/longevity.
- Thermal plan: Use heatsinks or fans for Gen 4+ drives.
- Capacity need: Leave headroom (15–20%) for performance and lifespan.
- Trustworthy brands: Stick to Samsung, WD, Crucial for warranty and quality.
Consider
From boot speed to data integrity, SSDs have revolutionized how modern systems handle storage. While HDDs remain relevant for mass archival, NVMe SSDs—especially those leveraging PCIe Gen 4 and Gen 5—dominate in speed-critical workflows. M.2 NVMe is the dominant form factor for futureproof builds, while understanding memory types like TLC vs. QLC ensures better longevity planning.
Whether you’re upgrading a laptop, building a gaming rig, or running a self-hosted Bitcoin node, choosing the right form factor, interface, and NAND type can dramatically impact system performance and reliability.
Resources & Further Reading
- How-Fixit Storage Guides
- Kingston SSD Reliability Guide
- Western Digital Product Lines
- Samsung V-NAND Explained
- PCIe Gen 5 Benchmarks
Options
🔧 Recommended SSDs and Tools (Amazon)
- Kingston A400 240GB SSD – SATA 3 2.5"
  https://a.co/d/41esjYL
- Samsung 970 EVO Plus 2TB NVMe M.2 SSD – Gen 3
  https://a.co/d/6EMVAN1
- Crucial P5 Plus 1TB PCIe Gen4 NVMe M.2 SSD
  https://a.co/d/hQx50Cq
- WD Blue SN570 1TB NVMe SSD – PCIe Gen 3
  https://a.co/d/j2zSDCJ
- Sabrent Rocket Q 2TB NVMe SSD – QLC NAND
  https://a.co/d/325Og2K
- Thermalright M.2 SSD Heatsink Kit
  https://a.co/d/0IYH3nK
- ORICO M.2 NVMe SSD Enclosure – USB 3.2 Gen2
  https://a.co/d/aEwQmih
🛠️ DIY & Fix Resource
- How-Fixit – PC Repair Guides and Tutorials
https://www.how-fixit.com/
In Addition
Modern Storage Technologies and Mini NAS Implementation
1. Network Attached Storage (NAS) system
In the rapidly evolving landscape of data storage, understanding the nuances of various storage technologies is crucial for optimal system design and performance. This idea delves into the distinctions between traditional Hard Disk Drives (HDDs), Solid State Drives (SSDs), and advanced storage interfaces like M.2 NVMe, M.2 SATA, and mSATA. Additionally, it explores the implementation of a compact Network Attached Storage (NAS) system using the Nookbox G9, highlighting its capabilities and limitations.
2. Storage Technologies Overview
2.1 Hard Disk Drives (HDDs)
- Mechanism: Utilize spinning magnetic platters and read/write heads.
- Advantages:
- Cost-effective for large storage capacities.
- Longer lifespan in low-vibration environments.
- Disadvantages:
- Slower data access speeds.
- Susceptible to mechanical failures due to moving parts.
2.2 Solid State Drives (SSDs)
- Mechanism: Employ NAND flash memory with no moving parts.
- Advantages:
- Faster data access and boot times.
- Lower power consumption and heat generation.
- Enhanced durability and shock resistance.
- Disadvantages:
- Higher cost per gigabyte compared to HDDs.
- Limited write cycles, depending on NAND type.
3. SSD Form Factors and Interfaces
3.1 Form Factors
- 2.5-Inch: Standard size for laptops and desktops; connects via SATA interface.
- mSATA: Miniature SATA interface, primarily used in ultrabooks and embedded systems; largely supplanted by M.2.
- M.2: Versatile form factor supporting both SATA and NVMe interfaces; prevalent in modern systems.
3.2 Interfaces
- SATA (Serial ATA):
- Speed: Up to 600 MB/s.
- Compatibility: Widely supported across various devices.
-
Limitation: Bottleneck for high-speed SSDs.
-
NVMe (Non-Volatile Memory Express):
- Speed: Ranges from 3,500 MB/s (PCIe Gen 3) to over 14,000 MB/s (PCIe Gen 5).
- Advantage: Direct communication with CPU via PCIe lanes, reducing latency.
- Consideration: Requires compatible motherboard and BIOS support.
4. M.2 SATA vs. M.2 NVMe
| Feature        | M.2 SATA                                 | M.2 NVMe                                            |
|----------------|------------------------------------------|-----------------------------------------------------|
| Interface      | SATA III (AHCI protocol)                 | PCIe (NVMe protocol)                                |
| Speed          | Up to 600 MB/s                           | Up to 14,000 MB/s (PCIe Gen 5)                      |
| Compatibility  | Broad compatibility with older systems   | Requires NVMe-compatible M.2 slot and BIOS support  |
| Use Case       | Budget builds, general computing         | High-performance tasks, gaming, content creation    |
Note: M.2 NVMe drives are not backward compatible with M.2 SATA slots due to differing interfaces and keying.
5. NAND Flash Memory Types
Understanding NAND types is vital for assessing SSD performance and longevity.
- SLC (Single-Level Cell):
  - Bits per Cell: 1
  - Endurance: ~100,000 write cycles
  - Use Case: Enterprise and industrial applications
- MLC (Multi-Level Cell):
  - Bits per Cell: 2
  - Endurance: ~10,000 write cycles
  - Use Case: Consumer-grade SSDs
- TLC (Triple-Level Cell):
  - Bits per Cell: 3
  - Endurance: ~3,000 write cycles
  - Use Case: Mainstream consumer SSDs
- QLC (Quad-Level Cell):
  - Bits per Cell: 4
  - Endurance: ~1,000 write cycles
  - Use Case: Read-intensive applications
- 3D NAND:
  - Structure: Stacks memory cells vertically to increase density.
  - Advantage: Enhances performance and endurance across NAND types.
6. Thermal Management and SSD Longevity
Effective thermal management is crucial for maintaining SSD performance and lifespan.
- Heatsinks: Aid in dissipating heat from SSD controllers.
- Airflow: Ensuring adequate case ventilation prevents thermal throttling.
- Monitoring: Regularly check SSD temperatures, especially under heavy workloads.
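For that monitoring step, a quick sketch on Linux (assumes `nvme-cli` and `smartmontools` are installed and the drive is `/dev/nvme0`):

```bash
# Controller-reported temperature and thermal warning counters
sudo nvme smart-log /dev/nvme0 | grep -i -E 'temperature|warning'

# The same data through SMART
sudo smartctl -a /dev/nvme0 | grep -i temperature
```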
7. Trusted SSD Manufacturers
Selecting SSDs from reputable manufacturers ensures reliability and support.
- Samsung: Known for high-performance SSDs with robust software support.
- Western Digital (WD): Offers a range of SSDs catering to various user needs.
- Crucial (Micron): Provides cost-effective SSD solutions with solid performance.
8. Mini NAS Implementation: Nookbox G9 Case Study
8.1 Overview
The Nookbox G9 is a compact NAS solution designed to fit within a 1U rack space, accommodating four M.2 NVMe SSDs.
8.2 Specifications
- Storage Capacity: Supports up to 8TB using four 2TB NVMe SSDs.
- Interface: Each M.2 slot operates at PCIe Gen 3x2.
- Networking: Equipped with 2.5 Gigabit Ethernet ports.
- Operating System: Comes pre-installed with Windows 11; compatible with Linux distributions like Ubuntu 24.10.
8.3 Performance and Limitations
- Throughput: Network speeds capped at ~250 MB/s due to 2.5 GbE limitation.
- Thermal Issues: Inadequate cooling leads to SSD temperatures reaching up to 80°C under load, causing potential throttling and system instability.
- Reliability: Reports of system reboots and lockups during intensive operations, particularly with ZFS RAIDZ configurations.
8.4 Recommendations
- Cooling Enhancements: Implement third-party heatsinks to improve thermal performance.
- Alternative Solutions: Consider NAS systems with better thermal designs and higher network throughput for demanding applications.
9. Consider
Navigating the myriad of storage technologies requires a comprehensive understanding of form factors, interfaces, and memory types. While HDDs offer cost-effective bulk storage, SSDs provide superior speed and durability. The choice between M.2 SATA and NVMe hinges on performance needs and system compatibility. Implementing compact NAS solutions like the Nookbox G9 necessitates careful consideration of thermal management and network capabilities to ensure reliability and performance.
Product Links (Amazon)
- Thermal Heatsink for M.2 SSDs (Must-have for stress and cooling)
  https://a.co/d/43B1F3t
- Nookbox G9 – Mini NAS
  https://a.co/d/3dswvGZ
- Alternative 1: Possibly related cooling or SSD gear
  https://a.co/d/c0Eodm3
- Alternative 2: Possibly related NAS accessories or SSDs
  https://a.co/d/9gWeqDr
Benchmark Results (Geekbench)
- GMKtec G9 Geekbench CPU Score #1
  https://browser.geekbench.com/v6/cpu/11471182
- GMKtec G9 Geekbench CPU Score #2
  https://browser.geekbench.com/v6/cpu/11470130
- GMKtec Geekbench User Profile
  https://browser.geekbench.com/user/446940
-
@ 3bf0c63f:aefa459d
2025-04-25 19:26:48Redistributing Git with Nostr
Every time someone tries to "decentralize" Git -- like many projects tried in the past to do it with BitTorrent, IPFS, ScuttleButt or custom p2p protocols -- there is always a lurking comment: "but Git is already distributed!", and then the discussion proceeds to mention some facts about how Git supports multiple remotes and its magic syncing and merging abilities and so on.
Turns out all that is true, Git is indeed all that powerful, and yet GitHub is the big central hub that hosts basically all Git repositories in the giant world of open-source. There are some crazy people that host their stuff elsewhere, but these projects end up not being found by many people, and even when they do they suffer from lack of contributions.
Because everybody has a GitHub account it's easy to open a pull request to a repository of a project you're using if it's on GitHub (to be fair I think it's very annoying to have to clone the repository, then add it as a remote locally, push to it, then go on the web UI and click to open a pull request, then that cloned repository lurks forever in your profile unless you go through 16 screens to delete it -- but people in general seem to think it's easy).
It's much harder to do it on some random other server where some project might be hosted, because now you have to add 4 more even more annoying steps: create an account; pick a password; confirm an email address; setup SSH keys for pushing. (And I'm not even mentioning the basic impossibility of offering `push` access to external unknown contributors to people who want to host their own simple homemade Git server.)

At this point some may argue that we could all have accounts on GitLab, or Codeberg or wherever else, then those steps are removed. Besides not being a practical strategy this pseudo solution misses the point of being decentralized (or distributed, who knows) entirely: it's far from the ideal to force everybody to have the double of account management and SSH setup work in order to have the open-source world controlled by two shady companies instead of one.
What we want is to give every person the opportunity to host their own Git server without being ostracized. At the same time we must recognize that most people won't want to host their own servers (not even most open-source programmers!) and give everybody the ability to host their stuff on multi-tenant servers (such as GitHub) too. Importantly, though, if we allow for a random person to have a standalone Git server on a standalone server they host themselves in their wood cabin, that also means any new hosting company can show up and start offering Git hosting, with or without new cool features, charging high or low or zero, and be immediately competing against GitHub or GitLab, i.e. we must remove the network-effect centralization pressure.
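For reference, the baseline "standalone Git server" really is tiny: a bare repository reachable over SSH, as in the sketch below (user, host, and paths are placeholders). The hard part is not this, it's discoverability and accepting contributions from strangers.

```bash
# On the self-hosted machine: create a bare repository
ssh alice@cabin.example.com 'git init --bare /srv/git/project.git'

# On any client: add it as a remote and push
git remote add cabin alice@cabin.example.com:/srv/git/project.git
git push cabin main
```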
External contributions
The first problem we have to solve is: how can Bob contribute to Alice's repository without having an account on Alice's server?
SourceHut has reminded GitHub users that Git has always had this (for most) arcane `git send-email` command that is the original way to send patches, using a once-open protocol.
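For readers who never used it, the pre-GitHub patch cycle looks roughly like this (addresses and file names are placeholders):

```bash
# Contributor: turn the latest commit into a mailable patch file
git format-patch -1 HEAD

# Contributor: email it to the maintainer (or, with Nostr, put the same text in an event)
git send-email --to=maintainer@example.com 0001-*.patch

# Maintainer: apply the received patch, preserving authorship
git am < 0001-some-change.patch
```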
Once you get used to it and the proper UIs (or CLIs) are built sending and applying patches to and from others becomes a much easier flow than the intense clickops mixed with terminal copypasting that is interacting with GitHub (you have to clone the repository on GitHub, then update the remote URL in your local directory, then create a branch and then go back and turn that branch into a Pull Request, it's quite tiresome) that many people already dislike so much they went out of their way to build many GitHub CLI tools just so they could comment on issues and approve pull requests from their terminal.
Replacing GitHub features
Aside from being the "hub" that people use to send patches to other people's code (because no one can do the email flow anymore, justifiably), GitHub also has 3 other big features that are not directly related to Git, but that make its network-effect harder to overcome. Luckily Nostr can be used to create a new environment in which these same features are implemented in a more decentralized and healthy way.
Issues: bug reports, feature requests and general discussions
Since the "Issues" GitHub feature is just a bunch of text comments it should be very obvious that Nostr is a perfect fit for it.
I will not even mention the fact that Nostr is much better at threading comments than GitHub (which doesn't do it at all), which can generate much more productive and organized discussions (and you can opt out if you want).
Search
I use GitHub search all the time to find libraries and projects that may do something that I need, and it returns good results almost always. So if people migrated out to other code hosting providers wouldn't we lose it?
The fact is that even though we think everybody is on GitHub that is a globalist falsehood. Some projects are not on GitHub, and if we use only GitHub for search those will be missed. So even if we didn't have a Nostr Git alternative it would still be necessary to create a search engine that incorporated GitLab, Codeberg, SourceHut and whatnot.
Turns out on Nostr we can make that quite easy by not forcing anyone to integrate custom APIs or hardcoding Git provider URLs: each repository can make itself available by publishing an "announcement" event with a brief description and one or more Git URLs. That makes it easy for a search engine to index them -- and even automatically download the code and index the code (or index just README files or whatever) without a centralized platform ever having to be involved.
The relays where such announcements will be available play a role, of course, but that isn't a bad role: each announcement can be in multiple relays known for storing "public good" projects, some relays may curate only projects known to be very good according to some standards, other relays may allow any kind of garbage, which wouldn't make them good for a search engine to rely upon, but would still be useful in case one knows the exact thing (and from whom) they're searching for (the same is valid for all Nostr content, by the way, and that's where it's censorship-resistance comes from).
Continuous integration
GitHub Actions are a very hardly subsidized free-compute-for-all-paid-by-Microsoft feature, but one that isn't hard to replace at all. In fact there exists today many companies offering the same kind of service out there -- although they are mostly targeting businesses and not open-source projects, before GitHub Actions was introduced there were also many that were heavily used by open-source projects.
One problem is that these services are still heavily tied to GitHub today: they require a GitHub login (sometimes also Bitbucket or GitLab), and do not allow one to paste an arbitrary Git server URL. But that isn't very hard to change, or to rebuild from scratch. All we need are services that offer the CI/CD flows, perhaps using the same framework as GitHub Actions (although I would prefer not to use that messy garbage), and charge a few satoshis for it.
It may be the case that the current services only support the big Git hosting platforms because they rely on their proprietary APIs -- most notably the webhooks dispatched when a repository is updated -- to trigger the jobs. It goes without saying that Nostr can also solve that problem very easily.
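A CI service would not need a proprietary webhook at all; it could keep a subscription open on a few relays and start a job whenever a new patch for a watched repository shows up. A minimal sketch, assuming the standard NIP-01 subscription format and the same hypothetical kind numbers as above (the relay URL and `run_ci_job` are placeholders, not a real service):

```python
# Sketch: trigger CI runs from Nostr events instead of GitHub webhooks.
# Uses the generic NIP-01 ["REQ", ...] subscription; the relay URL, the kind
# number, and run_ci_job() are placeholders for illustration.
import asyncio
import json
import websockets  # pip install websockets

WATCHED_REPO = "30617:<maintainer-pubkey>:my-project"  # hypothetical address

async def run_ci_job(event: dict) -> None:
    print("would start a CI run for patch event", event.get("id"))

async def watch(relay_url: str = "wss://relay.example.com") -> None:
    async with websockets.connect(relay_url) as ws:
        # Ask the relay to stream new patch events that reference the repo.
        await ws.send(json.dumps(
            ["REQ", "ci-sub", {"kinds": [1617], "#a": [WATCHED_REPO]}]
        ))
        async for raw in ws:
            msg = json.loads(raw)
            if msg[0] == "EVENT" and msg[1] == "ci-sub":
                await run_ci_job(msg[2])

# asyncio.run(watch())  # commented out: needs a reachable relay to run
```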
-
@ 3bf0c63f:aefa459d
2025-04-25 18:55:52Report of how the money Jack donated to the cause in December 2022 has been misused so far.
Bounties given
March 2025
- Dhalsim: 1,110,540 - Work on Nostr wiki data processing
February 2025
- BOUNTY* NullKotlinDev: 950,480 - Twine RSS reader Nostr integration
- Dhalsim: 2,094,584 - Work on Hypothes.is Nostr fork
- Constant, Biz and J: 11,700,588 - Nostr Special Forces
January 2025
- Constant, Biz and J: 11,610,987 - Nostr Special Forces
- BOUNTY* NullKotlinDev: 843,840 - Feeder RSS reader Nostr integration
- BOUNTY* NullKotlinDev: 797,500 - ReadYou RSS reader Nostr integration
December 2024
- BOUNTY* tijl: 1,679,500 - Nostr integration into RSS readers yarr and miniflux
- Constant, Biz and J: 10,736,166 - Nostr Special Forces
- Thereza: 1,020,000 - Podcast outreach initiative
November 2024
- Constant, Biz and J: 5,422,464 - Nostr Special Forces
October 2024
- Nostrdam: 300,000 - hackathon prize
- Svetski: 5,000,000 - Latin America Nostr events contribution
- Quentin: 5,000,000 - nostrcheck.me
June 2024
- Darashi: 5,000,000 - maintaining nos.today, searchnos, search.nos.today and other experiments
- Toshiya: 5,000,000 - keeping the NIPs repo clean and other stuff
May 2024
- James: 3,500,000 - https://github.com/jamesmagoo/nostr-writer
- Yakihonne: 5,000,000 - spreading the word in Asia
- Dashu: 9,000,000 - https://github.com/haorendashu/nostrmo
February 2024
- Viktor: 5,000,000 - https://github.com/viktorvsk/saltivka and https://github.com/viktorvsk/knowstr
- Eric T: 5,000,000 - https://github.com/tcheeric/nostr-java
- Semisol: 5,000,000 - https://relay.noswhere.com/ and https://hist.nostr.land relays
- Sebastian: 5,000,000 - Drupal stuff and nostr-php work
- tijl: 5,000,000 - Cloudron, Yunohost and Fraidycat attempts
- Null Kotlin Dev: 5,000,000 - AntennaPod attempt
December 2023
- hzrd: 5,000,000 - Nostrudel
- awayuki: 5,000,000 - NOSTOPUS illustrations
- bera: 5,000,000 - getwired.app
- Chris: 5,000,000 - resolvr.io
- NoGood: 10,000,000 - nostrexplained.com stories
October 2023
- SnowCait: 5,000,000 - https://nostter.vercel.app/ and other tools
- Shaun: 10,000,000 - https://yakihonne.com/, events and work on Nostr awareness
- Derek Ross: 10,000,000 - spreading the word around the world
- fmar: 5,000,000 - https://github.com/frnandu/yana
- The Nostr Report: 2,500,000 - curating stuff
- james magoo: 2,500,000 - the Obsidian plugin: https://github.com/jamesmagoo/nostr-writer
August 2023
- Paul Miller: 5,000,000 - JS libraries and cryptography-related work
- BOUNTY tijl: 5,000,000 - https://github.com/github-tijlxyz/wikinostr
- gzuus: 5,000,000 - https://nostree.me/
July 2023
- syusui-s: 5,000,000 - rabbit, a tweetdeck-like Nostr client: https://syusui-s.github.io/rabbit/
- kojira: 5,000,000 - Nostr fanzine, Nostr discussion groups in Japan, hardware experiments
- darashi: 5,000,000 - https://github.com/darashi/nos.today, https://github.com/darashi/searchnos, https://github.com/darashi/murasaki
- jeff g: 5,000,000 - https://nostr.how and https://listr.lol, plus other contributions
- cloud fodder: 5,000,000 - https://nostr1.com (open-source)
- utxo.one: 5,000,000 - https://relaying.io (open-source)
- Max DeMarco: 10,269,507 - https://www.youtube.com/watch?v=aA-jiiepOrE
- BOUNTY optout21: 1,000,000 - https://github.com/optout21/nip41-proto0 (proposed nip41 CLI)
- BOUNTY Leo: 1,000,000 - https://github.com/leo-lox/camelus (an old relay thing I forgot exactly)
June 2023
- BOUNTY: Sepher: 2,000,000 - a webapp for making lists of anything: https://pinstr.app/
- BOUNTY: Kieran: 10,000,000 - implement gossip algorithm on Snort, implement all the other nice things: manual relay selection, following hints etc.
- Mattn: 5,000,000 - a myriad of projects and contributions to Nostr projects: https://github.com/search?q=owner%3Amattn+nostr&type=code
- BOUNTY: lynn: 2,000,000 - a simple and clean git nostr CLI written in Go, compatible with William's original git-nostr-tools; and implement threaded comments on https://github.com/fiatjaf/nocomment.
- Jack Chakany: 5,000,000 - https://github.com/jacany/nblog
- BOUNTY: Dan: 2,000,000 - https://metadata.nostr.com/
April 2023
- BOUNTY: Blake Jakopovic: 590,000 - event deleter tool, NIP dependency organization
- BOUNTY: koalasat: 1,000,000 - display relays
- BOUNTY: Mike Dilger: 4,000,000 - display relays, follow event hints (Gossip)
- BOUNTY: kaiwolfram: 5,000,000 - display relays, follow event hints, choose relays to publish (Nozzle)
- Daniele Tonon: 3,000,000 - Gossip
- bu5hm4nn: 3,000,000 - Gossip
- BOUNTY: hodlbod: 4,000,000 - display relays, follow event hints
March 2023
- Doug Hoyte: 5,000,000 sats - https://github.com/hoytech/strfry
- Alex Gleason: 5,000,000 sats - https://gitlab.com/soapbox-pub/mostr
- verbiricha: 5,000,000 sats - https://badges.page/, https://habla.news/
- talvasconcelos: 5,000,000 sats - https://migrate.nostr.com, https://read.nostr.com, https://write.nostr.com/
- BOUNTY: Gossip model: 5,000,000 - https://camelus.app/
- BOUNTY: Gossip model: 5,000,000 - https://github.com/kaiwolfram/Nozzle
- BOUNTY: Bounty Manager: 5,000,000 - https://nostrbounties.com/
February 2023
- styppo: 5,000,000 sats - https://hamstr.to/
- sandwich: 5,000,000 sats - https://nostr.watch/
- BOUNTY: Relay-centric client designs: 5,000,000 sats - https://bountsr.org/design/2023/01/26/relay-based-design.html
- BOUNTY: Gossip model on https://coracle.social/: 5,000,000 sats
- Nostrovia Podcast: 3,000,000 sats - https://nostrovia.org/
- BOUNTY: Nostr-Desk / Monstr: 5,000,000 sats - https://github.com/alemmens/monstr
- Mike Dilger: 5,000,000 sats - https://github.com/mikedilger/gossip
January 2023
- ismyhc: 5,000,000 sats - https://github.com/Galaxoid-Labs/Seer
- Martti Malmi: 5,000,000 sats - https://iris.to/
- Carlos Autonomous: 5,000,000 sats - https://github.com/BrightonBTC/bija
- Koala Sat: 5,000,000 - https://github.com/KoalaSat/nostros
- Vitor Pamplona: 5,000,000 - https://github.com/vitorpamplona/amethyst
- Cameri: 5,000,000 - https://github.com/Cameri/nostream
December 2022
- William Casarin: 7 BTC - splitting the fund
- pseudozach: 5,000,000 sats - https://nostr.directory/
- Sondre Bjellas: 5,000,000 sats - https://notes.blockcore.net/
- Null Dev: 5,000,000 sats - https://github.com/KotlinGeekDev/Nosky
- Blake Jakopovic: 5,000,000 sats - https://github.com/blakejakopovic/nostcat, https://github.com/blakejakopovic/nostreq and https://github.com/blakejakopovic/NostrEventPlayground
-
@ 5d4b6c8d:8a1c1ee3
2025-04-25 13:23:38Let's see if @Car can find the episode link this week! (Nobody tell him)
We're mid-NFL draft right now, but it will be in the books by the time we record. So far, we're happy with our teams' picks. How long will that last?
The other big thing on our radar is the ongoing playoffs in the NBA and NHL. We both have lots of thoughts on the NBA, but I'll need someone to walk me through what's happening in the NHL. I can tell it must be wild, from how the odds are swinging around.
In MLB news, I'm locked in a fierce battle with @NEEDcreations for the top spot in our fantasy league, while @grayruby and @supercyclone are in a domestic civil war. Will we have time to actually talk baseball or will we recycle the same tease from last week?
Of course, we'll sprinkle in territory talk and contest updates as we go, plus whatever else stackers want to hear about.
originally posted at https://stacker.news/items/958590
-
@ eac63075:b4988b48
2024-10-21 08:11:11Imagine sending a private message to a friend, only to learn that authorities could be scanning its contents without your knowledge. This isn't a scene from a dystopian novel but a potential reality under the European Union's proposed "Chat Control" measures. Aimed at combating serious crimes like child exploitation and terrorism, these proposals could significantly impact the privacy of everyday internet users. As encrypted messaging services become the norm for personal and professional communication, understanding Chat Control is essential. This article delves into what Chat Control entails, why it's being considered, and how it could affect your right to private communication.
https://www.fountain.fm/episode/coOFsst7r7mO1EP1kSzV
https://open.spotify.com/episode/0IZ6kMExfxFm4FHg5DAWT8?si=e139033865e045de
Sections:
- Introduction
- What Is Chat Control?
- Why Is the EU Pushing for Chat Control?
- The Privacy Concerns and Risks
- The Technical Debate: Encryption and Backdoors
- Global Reactions and the Debate in Europe
- Possible Consequences for Messaging Services
- What Happens Next? The Future of Chat Control
- Conclusion
What Is Chat Control?
"Chat Control" refers to a set of proposed measures by the European Union aimed at monitoring and scanning private communications on messaging platforms. The primary goal is to detect and prevent the spread of illegal content, such as child sexual abuse material (CSAM) and to combat terrorism. While the intention is to enhance security and protect vulnerable populations, these proposals have raised significant privacy concerns.
At its core, Chat Control would require messaging services to implement automated scanning technologies that can analyze the content of messages—even those that are end-to-end encrypted. This means that the private messages you send to friends, family, or colleagues could be subject to inspection by algorithms designed to detect prohibited content.
Origins of the Proposal
The initiative for Chat Control emerged from the EU's desire to strengthen its digital security infrastructure. High-profile cases of online abuse and the use of encrypted platforms by criminal organizations have prompted lawmakers to consider more invasive surveillance tactics. The European Commission has been exploring legislation that would make it mandatory for service providers to monitor communications on their platforms.
How Messaging Services Work
Most modern messaging apps, like Signal, Session, SimpleX, Veilid, Protonmail and Tutanota (among others), use end-to-end encryption (E2EE). This encryption ensures that only the sender and the recipient can read the messages being exchanged. Not even the service providers can access the content. This level of security is crucial for maintaining privacy in digital communications, protecting users from hackers, identity thieves, and other malicious actors.
Key Elements of Chat Control
- Automated Content Scanning: Service providers would use algorithms to scan messages for illegal content.
- Circumvention of Encryption: To scan encrypted messages, providers might need to alter their encryption methods, potentially weakening security.
- Mandatory Reporting: If illegal content is detected, providers would be required to report it to authorities.
- Broad Applicability: The measures could apply to all messaging services operating within the EU, affecting both European companies and international platforms.
Why It Matters
Understanding Chat Control is essential because it represents a significant shift in how digital privacy is handled. While combating illegal activities online is crucial, the methods proposed could set a precedent for mass surveillance and the erosion of privacy rights. Everyday users who rely on encrypted messaging for personal and professional communication might find their conversations are no longer as private as they once thought.
Why Is the EU Pushing for Chat Control?
The European Union's push for Chat Control stems from a pressing concern to protect its citizens, particularly children, from online exploitation and criminal activities. With the digital landscape becoming increasingly integral to daily life, the EU aims to strengthen its ability to combat serious crimes facilitated through online platforms.
Protecting Children and Preventing Crime
One of the primary motivations behind Chat Control is the prevention of child sexual abuse material (CSAM) circulating on the internet. Law enforcement agencies have reported a significant increase in the sharing of illegal content through private messaging services. By implementing Chat Control, the EU believes it can more effectively identify and stop perpetrators, rescue victims, and deter future crimes.
Terrorism is another critical concern. Encrypted messaging apps can be used by terrorist groups to plan and coordinate attacks without detection. The EU argues that accessing these communications could be vital in preventing such threats and ensuring public safety.
Legal Context and Legislative Drivers
The push for Chat Control is rooted in several legislative initiatives:
-
ePrivacy Directive: This directive regulates the processing of personal data and the protection of privacy in electronic communications. The EU is considering amendments that would allow for the scanning of private messages under specific circumstances.
-
Temporary Derogation: In 2021, the EU adopted a temporary regulation permitting voluntary detection of CSAM by communication services. The current proposals aim to make such measures mandatory and more comprehensive.
-
Regulation Proposals: The European Commission has proposed regulations that would require service providers to detect, report, and remove illegal content proactively. This would include the use of technologies to scan private communications.
Balancing Security and Privacy
EU officials argue that the proposed measures are a necessary response to evolving digital threats. They emphasize the importance of staying ahead of criminals who exploit technology to harm others. By implementing Chat Control, they believe law enforcement can be more effective without entirely dismantling privacy protections.
However, the EU also acknowledges the need to balance security with fundamental rights. The proposals include provisions intended to limit the scope of surveillance, such as:
-
Targeted Scanning: Focusing on specific threats rather than broad, indiscriminate monitoring.
-
Judicial Oversight: Requiring court orders or oversight for accessing private communications.
-
Data Protection Safeguards: Implementing measures to ensure that data collected is handled securely and deleted when no longer needed.
The Urgency Behind the Push
High-profile cases of online abuse and terrorism have heightened the sense of urgency among EU policymakers. Reports of increasing online grooming and the widespread distribution of illegal content have prompted calls for immediate action. The EU posits that without measures like Chat Control, these problems will continue to escalate unchecked.
Criticism and Controversy
Despite the stated intentions, the push for Chat Control has been met with significant criticism. Opponents argue that the measures could be ineffective against savvy criminals who can find alternative ways to communicate. There is also concern that such surveillance could be misused or extended beyond its original purpose.
The Privacy Concerns and Risks
While the intentions behind Chat Control focus on enhancing security and protecting vulnerable groups, the proposed measures raise significant privacy concerns. Critics argue that implementing such surveillance could infringe on fundamental rights and set a dangerous precedent for mass monitoring of private communications.
Infringement on Privacy Rights
At the heart of the debate is the right to privacy. By scanning private messages, even with automated tools, the confidentiality of personal communications is compromised. Users may no longer feel secure sharing sensitive information, fearing that their messages could be intercepted or misinterpreted by algorithms.
Erosion of End-to-End Encryption
End-to-end encryption (E2EE) is a cornerstone of digital security, ensuring that only the sender and recipient can read the messages exchanged. Chat Control could necessitate the introduction of "backdoors" or weaken encryption protocols, making it easier for unauthorized parties to access private data. This not only affects individual privacy but also exposes communications to potential cyber threats.
Concerns from Privacy Advocates
Organizations like Signal and Tutanota, which offer encrypted messaging services, have voiced strong opposition to Chat Control. They warn that undermining encryption could have far-reaching consequences:
- Security Risks: Weakening encryption makes systems more vulnerable to hacking, espionage, and cybercrime.
- Global Implications: Changes in EU regulations could influence policies worldwide, leading to a broader erosion of digital privacy.
- Ineffectiveness Against Crime: Determined criminals might resort to other, less detectable means of communication, rendering the measures ineffective while still compromising the privacy of law-abiding citizens.
Potential for Government Overreach
There is a fear that Chat Control could lead to increased surveillance beyond its original scope. Once the infrastructure for scanning private messages is in place, it could be repurposed or expanded to monitor other types of content, stifling free expression and dissent.
Real-World Implications for Users
- False Positives: Automated scanning technologies are not infallible and could mistakenly flag innocent content, leading to unwarranted scrutiny or legal consequences for users.
- Chilling Effect: Knowing that messages could be monitored might discourage people from expressing themselves freely, impacting personal relationships and societal discourse.
- Data Misuse: Collected data could be vulnerable to leaks or misuse, compromising personal and sensitive information.
Legal and Ethical Concerns
Privacy advocates also highlight potential conflicts with existing laws and ethical standards:
- Violation of Fundamental Rights: The European Convention on Human Rights and other international agreements protect the right to privacy and freedom of expression.
- Questionable Effectiveness: The ethical justification for such invasive measures is challenged if they do not significantly improve safety or if they disproportionately impact innocent users.
Opposition from Member States and Organizations
Countries like Germany and organizations such as the European Digital Rights (EDRi) have expressed opposition to Chat Control. They emphasize the need to protect digital privacy and caution against hasty legislation that could have unintended consequences.
The Technical Debate: Encryption and Backdoors
The discussion around Chat Control inevitably leads to a complex technical debate centered on encryption and the potential introduction of backdoors into secure communication systems. Understanding these concepts is crucial to grasping the full implications of the proposed measures.
What Is End-to-End Encryption (E2EE)?
End-to-end encryption is a method of secure communication that prevents third parties from accessing data while it's transferred from one end system to another. In simpler terms, only the sender and the recipient can read the messages. Even the service providers operating the messaging platforms cannot decrypt the content.
- Security Assurance: E2EE ensures that sensitive information—be it personal messages, financial details, or confidential business communications—remains private.
- Widespread Use: Popular messaging apps like Signal, Session, SimpleX, Veilid, Protonmail and Tutanota (among others) rely on E2EE to protect user data.
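As a toy illustration of this property, here is a minimal sketch using the PyNaCl library's public-key boxes. Real messengers layer far more on top (key verification, forward secrecy via protocols like the double ratchet), but the core point that only the endpoints can read the message is the same:

```python
# Toy end-to-end encryption sketch with PyNaCl (pip install pynacl).
# Real messaging protocols are more elaborate; this only shows that a server
# relaying the ciphertext never sees the plaintext.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Anything in the middle only ever handles `ciphertext` (random-looking bytes).
# Only Bob's private key (paired with Alice's public key) can open it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```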
How Chat Control Affects Encryption
Implementing Chat Control as proposed would require messaging services to scan the content of messages for illegal material. To do this on encrypted platforms, providers might have to:
- Introduce Backdoors: Create a means for third parties (including the service provider or authorities) to access encrypted messages.
- Client-Side Scanning: Install software on users' devices that scans messages before they are encrypted and sent, effectively bypassing E2EE.
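To make the client-side scanning idea concrete, a crude sketch is shown below: the scanner runs on the user's device and checks content against a blocklist before encryption ever happens. Deployed proposals rely on perceptual hashes of images matched against curated databases; the exact-hash blocklist here is an invented simplification.

```python
# Crude sketch of client-side scanning: hash outgoing content on the device and
# compare it to a blocklist *before* encryption. Real proposals use perceptual
# hashing of media; the blocklist entry below is invented for illustration.
import hashlib

BLOCKLIST = {
    "5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9",
}

def would_be_flagged(message: bytes) -> bool:
    """Return True if the message would be reported instead of being sent."""
    return hashlib.sha256(message).hexdigest() in BLOCKLIST

print(would_be_flagged(b"hello"))  # False: not on the list, message goes out
```

The privacy objection is visible even in this toy version: the check necessarily runs on plaintext, so whoever controls the blocklist and the reporting path effectively sits inside the endpoint.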
The Risks of Weakening Encryption
1. Compromised Security for All Users
Introducing backdoors or client-side scanning tools can create vulnerabilities:
- Exploitable Gaps: If a backdoor exists, malicious actors might find and exploit it, leading to data breaches.
- Universal Impact: Weakening encryption doesn't just affect targeted individuals; it potentially exposes all users to increased risk.
2. Undermining Trust in Digital Services
- User Confidence: Knowing that private communications could be accessed might deter people from using digital services or push them toward unregulated platforms.
- Business Implications: Companies relying on secure communications might face increased risks, affecting economic activities.
3. Ineffectiveness Against Skilled Adversaries
- Alternative Methods: Criminals might shift to other encrypted channels or develop new ways to avoid detection.
- False Sense of Security: Weakening encryption could give the impression of increased safety while adversaries adapt and continue their activities undetected.
Signal’s Response and Stance
Signal, a leading encrypted messaging service, has been vocal in its opposition to the EU's proposals:
- Refusal to Weaken Encryption: Signal president Meredith Whittaker has stated that the company would rather cease operations in the EU than compromise its encryption standards.
- Advocacy for Privacy: Signal emphasizes that strong encryption is essential for protecting human rights and freedoms in the digital age.
Understanding Backdoors
A "backdoor" in encryption is an intentional weakness inserted into a system to allow authorized access to encrypted data. While intended for legitimate use by authorities, backdoors pose several problems:
- Security Vulnerabilities: They can be discovered and exploited by unauthorized parties, including hackers and foreign governments.
- Ethical Concerns: The existence of backdoors raises questions about consent and the extent to which governments should be able to access private communications.
The Slippery Slope Argument
Privacy advocates warn that introducing backdoors or mandatory scanning sets a precedent:
- Expanded Surveillance: Once in place, these measures could be extended to monitor other types of content beyond the original scope.
- Erosion of Rights: Gradual acceptance of surveillance can lead to a significant reduction in personal freedoms over time.
Potential Technological Alternatives
Some suggest that it's possible to fight illegal content without undermining encryption:
- Metadata Analysis: Focusing on patterns of communication rather than content.
- Enhanced Reporting Mechanisms: Encouraging users to report illegal content voluntarily.
- Investing in Law Enforcement Capabilities: Strengthening traditional investigative methods without compromising digital security.
The technical community largely agrees that weakening encryption is not the solution:
- Consensus on Security: Strong encryption is essential for the safety and privacy of all internet users.
- Call for Dialogue: Technologists and privacy experts advocate for collaborative approaches that address security concerns without sacrificing fundamental rights.
Global Reactions and the Debate in Europe
The proposal for Chat Control has ignited a heated debate across Europe and beyond, with various stakeholders weighing in on the potential implications for privacy, security, and fundamental rights. The reactions are mixed, reflecting differing national perspectives, political priorities, and societal values.
Support for Chat Control
Some EU member states and officials support the initiative, emphasizing the need for robust measures to combat online crime and protect citizens, especially children. They argue that:
- Enhanced Security: Mandatory scanning can help law enforcement agencies detect and prevent serious crimes.
- Responsibility of Service Providers: Companies offering communication services should play an active role in preventing their platforms from being used for illegal activities.
- Public Safety Priorities: The protection of vulnerable populations justifies the implementation of such measures, even if it means compromising some aspects of privacy.
Opposition within the EU
Several countries and organizations have voiced strong opposition to Chat Control, citing concerns over privacy rights and the potential for government overreach.
Germany
- Stance: Germany has been one of the most vocal opponents of the proposed measures.
- Reasons:
- Constitutional Concerns: The German government argues that Chat Control could violate constitutional protections of privacy and confidentiality of communications.
- Security Risks: Weakening encryption is seen as a threat to cybersecurity.
- Legal Challenges: Potential conflicts with national laws protecting personal data and communication secrecy.
Netherlands
- Recent Developments: The Dutch government decided against supporting Chat Control, emphasizing the importance of encryption for security and privacy.
- Arguments:
- Effectiveness Doubts: Skepticism about the actual effectiveness of the measures in combating crime.
- Negative Impact on Privacy: Concerns about mass surveillance and the infringement of citizens' rights.
Table reference: Patrick Breyer - Chat Control, 23 September 2024
Privacy Advocacy Groups
European Digital Rights (EDRi)
- Role: A network of civil and human rights organizations working to defend rights and freedoms in the digital environment.
- Position:
- Strong Opposition: EDRi argues that Chat Control is incompatible with fundamental rights.
- Awareness Campaigns: Engaging in public campaigns to inform citizens about the potential risks.
- Policy Engagement: Lobbying policymakers to consider alternative approaches that respect privacy.
Politicians and Activists
Patrick Breyer
- Background: A Member of the European Parliament (MEP) from Germany, representing the Pirate Party.
- Actions:
- Advocacy: Actively campaigning against Chat Control through speeches, articles, and legislative efforts.
- Public Outreach: Using social media and public events to raise awareness.
- Legal Expertise: Highlighting the legal inconsistencies and potential violations of EU law.
Global Reactions
International Organizations
- Human Rights Watch and Amnesty International: These organizations have expressed concerns about the implications for human rights, urging the EU to reconsider.
Technology Companies
- Global Tech Firms: Companies like Apple and Microsoft are monitoring the situation, as EU regulations could affect their operations and user trust.
- Industry Associations: Groups representing tech companies have issued statements highlighting the risks to innovation and competitiveness.
The Broader Debate
The controversy over Chat Control reflects a broader struggle between security interests and privacy rights in the digital age. Key points in the debate include:
- Legal Precedents: How the EU's decision might influence laws and regulations in other countries.
- Digital Sovereignty: The desire of nations to control digital spaces within their borders.
- Civil Liberties: The importance of protecting freedoms in the face of technological advancements.
Public Opinion
- Diverse Views: Surveys and public forums show a range of opinions, with some citizens prioritizing security and others valuing privacy above all.
- Awareness Levels: Many people are still unaware of the potential changes, highlighting the need for public education on the issue.
The EU is at a crossroads, facing the challenge of addressing legitimate security concerns without undermining the fundamental rights that are central to its values. The outcome of this debate will have significant implications for the future of digital privacy and the balance between security and freedom in society.
Possible Consequences for Messaging Services
The implementation of Chat Control could have significant implications for messaging services operating within the European Union. Both large platforms and smaller providers might need to adapt their technologies and policies to comply with the new regulations, potentially altering the landscape of digital communication.
Impact on Encrypted Messaging Services
Signal and Similar Platforms
-
Compliance Challenges: Encrypted messaging services like Signal rely on end-to-end encryption to secure user communications. Complying with Chat Control could force them to weaken their encryption protocols or implement client-side scanning, conflicting with their core privacy principles.
-
Operational Decisions: Some platforms may choose to limit their services in the EU or cease operations altogether rather than compromise on encryption. Signal, for instance, has indicated that it would prefer to withdraw from European markets than undermine its security features.
Potential Blocking or Limiting of Services
-
Regulatory Enforcement: Messaging services that do not comply with Chat Control regulations could face fines, legal action, or even be blocked within the EU.
-
Access Restrictions: Users in Europe might find certain services unavailable or limited in functionality if providers decide not to meet the regulatory requirements.
Effects on Smaller Providers
-
Resource Constraints: Smaller messaging services and startups may lack the resources to implement the required scanning technologies, leading to increased operational costs or forcing them out of the market.
-
Innovation Stifling: The added regulatory burden could deter new entrants, reducing competition and innovation in the messaging service sector.
User Experience and Trust
-
Privacy Concerns: Users may lose trust in messaging platforms if they know their communications are subject to scanning, leading to a decline in user engagement.
-
Migration to Unregulated Platforms: There is a risk that users might shift to less secure or unregulated services, including those operated outside the EU or on the dark web, potentially exposing them to greater risks.
Technical and Security Implications
-
Increased Vulnerabilities: Modifying encryption protocols to comply with Chat Control could introduce security flaws, making platforms more susceptible to hacking and data breaches.
-
Global Security Risks: Changes made to accommodate EU regulations might affect the global user base of these services, extending security risks beyond European borders.
Impact on Businesses and Professional Communications
-
Confidentiality Issues: Businesses that rely on secure messaging for sensitive communications may face challenges in ensuring confidentiality, affecting sectors like finance, healthcare, and legal services.
-
Compliance Complexity: Companies operating internationally will need to navigate a complex landscape of differing regulations, increasing administrative burdens.
Economic Consequences
-
Market Fragmentation: Divergent regulations could lead to a fragmented market, with different versions of services for different regions.
-
Loss of Revenue: Messaging services might experience reduced revenue due to decreased user trust and engagement or the costs associated with compliance.
Responses from Service Providers
-
Legal Challenges: Companies might pursue legal action against the regulations, citing conflicts with privacy laws and user rights.
-
Policy Advocacy: Service providers may increase lobbying efforts to influence policy decisions and promote alternatives to Chat Control.
Possible Adaptations
-
Technological Innovation: Some providers might invest in developing new technologies that can detect illegal content without compromising encryption, though the feasibility remains uncertain.
-
Transparency Measures: To maintain user trust, companies might enhance transparency about how data is handled and what measures are in place to protect privacy.
The potential consequences of Chat Control for messaging services are profound, affecting not only the companies that provide these services but also the users who rely on them daily. The balance between complying with legal requirements and maintaining user privacy and security presents a significant challenge that could reshape the digital communication landscape.
What Happens Next? The Future of Chat Control
The future of Chat Control remains uncertain as the debate continues among EU member states, policymakers, technology companies, and civil society organizations. Several factors will influence the outcome of this contentious proposal, each carrying significant implications for digital privacy, security, and the regulatory environment within the European Union.
Current Status of Legislation
-
Ongoing Negotiations: The proposed Chat Control measures are still under discussion within the European Parliament and the Council of the European Union. Amendments and revisions are being considered in response to the feedback from various stakeholders.
-
Timeline: While there is no fixed date for the final decision, the EU aims to reach a consensus to implement effective measures against online crime without undue delay.
Key Influencing Factors
1. Legal Challenges and Compliance with EU Law
-
Fundamental Rights Assessment: The proposals must be evaluated against the Charter of Fundamental Rights of the European Union, ensuring that any measures comply with rights to privacy, data protection, and freedom of expression.
-
Court Scrutiny: Potential legal challenges could arise, leading to scrutiny by the European Court of Justice (ECJ), which may impact the feasibility and legality of Chat Control.
2. Technological Feasibility
-
Development of Privacy-Preserving Technologies: Research into methods that can detect illegal content without compromising encryption is ongoing. Advances in this area could provide alternative solutions acceptable to both privacy advocates and security agencies.
-
Implementation Challenges: The practical aspects of deploying scanning technologies across various platforms and services remain complex, and technical hurdles could delay or alter the proposed measures.
3. Political Dynamics
-
Member State Positions: The differing stances of EU countries, such as Germany's opposition, play a significant role in shaping the final outcome. Consensus among member states is crucial for adopting EU-wide regulations.
-
Public Opinion and Advocacy: Growing awareness and activism around digital privacy can influence policymakers. Public campaigns and lobbying efforts may sway decisions in favor of stronger privacy protections.
4. Industry Responses
-
Negotiations with Service Providers: Ongoing dialogues between EU authorities and technology companies may lead to compromises or collaborative efforts to address concerns without fully implementing Chat Control as initially proposed.
-
Potential for Self-Regulation: Messaging services might propose self-regulatory measures to combat illegal content, aiming to demonstrate effectiveness without the need for mandatory scanning.
Possible Scenarios
Optimistic Outcome:
- Balanced Regulation: A revised proposal emerges that effectively addresses security concerns while upholding strong encryption and privacy rights, possibly through innovative technologies or targeted measures with robust oversight.
Pessimistic Outcome:
- Adoption of Strict Measures: Chat Control is implemented as initially proposed, leading to weakened encryption, reduced privacy, and potential withdrawal of services like Signal from the EU market.
Middle Ground:
- Incremental Implementation: Partial measures are adopted, focusing on voluntary cooperation with service providers and emphasizing transparency and user consent, with ongoing evaluations to assess effectiveness and impact.
How to Stay Informed and Protect Your Privacy
-
Follow Reputable Sources: Keep up with news from reliable outlets, official EU communications, and statements from privacy organizations to stay informed about developments.
-
Engage in the Dialogue: Participate in public consultations, sign petitions, or contact representatives to express your views on Chat Control and digital privacy.
-
Utilize Secure Practices: Regardless of legislative outcomes, adopting good digital hygiene—such as using strong passwords and being cautious with personal information—can enhance your online security.
The Global Perspective
-
International Implications: The EU's decision may influence global policies on encryption and surveillance, setting precedents that other countries might follow or react against.
-
Collaboration Opportunities: International cooperation on developing solutions that protect both security and privacy could emerge, fostering a more unified approach to addressing online threats.
Looking Ahead
The future of Chat Control is a critical issue that underscores the challenges of governing in the digital age. Balancing the need for security with the protection of fundamental rights is a complex task that requires careful consideration, open dialogue, and collaboration among all stakeholders.
As the situation evolves, staying informed and engaged is essential. The decisions made in the coming months will shape the digital landscape for years to come, affecting how we communicate, conduct business, and exercise our rights in an increasingly connected world.
Conclusion
The debate over Chat Control highlights a fundamental challenge in our increasingly digital world: how to protect society from genuine threats without eroding the very rights and freedoms that define it. While the intention to safeguard children and prevent crime is undeniably important, the means of achieving this through intrusive surveillance measures raise critical concerns.
Privacy is not just a personal preference but a cornerstone of democratic societies. End-to-end encryption has become an essential tool for ensuring that our personal conversations, professional communications, and sensitive data remain secure from unwanted intrusion. Weakening these protections could expose individuals and organizations to risks that far outweigh the proposed benefits.
The potential consequences of implementing Chat Control are far-reaching:
- Erosion of Trust: Users may lose confidence in digital platforms, impacting how we communicate and conduct business online.
- Security Vulnerabilities: Introducing backdoors or weakening encryption can make systems more susceptible to cyberattacks.
- Stifling Innovation: Regulatory burdens may hinder technological advancement and competitiveness in the tech industry.
- Global Implications: The EU's decisions could set precedents that influence digital policies worldwide, for better or worse.
As citizens, it's crucial to stay informed about these developments. Engage in conversations, reach out to your representatives, and advocate for solutions that respect both security needs and fundamental rights. Technology and policy can evolve together to address challenges without compromising core values.
The future of Chat Control is not yet decided, and public input can make a significant difference. By promoting open dialogue, supporting privacy-preserving innovations, and emphasizing the importance of human rights in legislation, we can work towards a digital landscape that is both safe and free.
In a world where digital communication is integral to daily life, striking the right balance between security and privacy is more important than ever. The choices made today will shape the digital environment for generations to come, determining not just how we communicate, but how we live and interact in an interconnected world.
Thank you for reading this article. We hope it has provided you with a clear understanding of Chat Control and its potential impact on your privacy and digital rights. Stay informed, stay engaged, and let's work together towards a secure and open digital future.
Read more:
- https://www.patrick-breyer.de/en/posts/chat-control/
- https://www.patrick-breyer.de/en/new-eu-push-for-chat-control-will-messenger-services-be-blocked-in-europe/
- https://edri.org/our-work/dutch-decision-puts-brakes-on-chat-control/
- https://signal.org/blog/pdfs/ndss-keynote.pdf
- https://tuta.com/blog/germany-stop-chat-control
- https://cointelegraph.com/news/signal-president-slams-revised-eu-encryption-proposal
- https://mullvad.net/en/why-privacy-matters
-
@ 8d5ba92c:c6c3ecd5
2025-04-25 09:14:46Money is more than just a medium of exchange—it’s the current that drives economies, the lifeblood of societies, and the pulse of civilization itself. When money decays, so does the culture it sustains. Take fiat, for example. Created out of thin air and inflated into oblivion, it acts like poison—rewarding conformity over sovereignty, speculation over creation, and exploitation over collaboration.
A culture built this way fails to foster true progress. Instead, it pushes us into darker corners where creativity and truth become increasingly scarce.
From the food we eat to the media we consume, much of modern culture has become a reflection of this problem—prioritizing shortcuts, convenience, and profit at any cost. It seems there’s no room left for depth, authenticity, or connection anymore.
Art, for example—once a sacred space for meaning, and inner calling—has not been spared either. Stripped of its purpose, it too falls into gloom, weaponized to divide and manipulate rather than inspire beauty and growth.
“Art is the lie that reveals the truth” as Picasso once said.
Indeed, this intriguing perspective highlights the subjectivity of truth and the many ways art can be interpreted. While creative expression doesn’t always need to mirror reality one-to-one—actually, often reshaping it through the creator’s lens—much of what we’re surrounded with these days feels like a dangerous illusion built on the rotten incentives of decaying values.
The movies we watch, the music we hear, and the stories we absorb from books, articles, ads, and commercials—are too often crafted to condition specific behaviors. Greed, laziness, overconsumption, ignorance (feel free to add to this list). Instead of enriching our culture, they disconnect us from each other, as well as from our own minds, hearts, and souls.
If you see yourself as a Bitcoiner—or, as I like to call it, ‘a freedom fighter at heart’—and you care about building a world based on truth, freedom, and prosperity, please recognize that culture is also our battleground.
Artistic forms act as transformative forces in the fight against the status quo.
Join me and the hundreds of guests this May at Bitcoin FilmFest 2025.
You don’t have to be a creative person in the traditional sense—like a filmmaker, writer, painter, sculptor, musician, and so on—to have a direct impact on culture!
One way or another, you engage with creative realms anyway. The deeper you connect with them, the better you understand the reality we live in versus the future humanity deserves.
I know the process may take time, but I truly believe it’s possible. Unfiat The Culture!
Bitcoin FilmFest 2025. May 22-25, Warsaw, Poland.
The third annual edition of a unique event built at the intersection of independent films, art, and culture.
“Your narrative begins where centralized scripts end—explore the uncharted stories beyond the cinema.”
- Details: bitcoinfilmfest.com/bff25/
- Grab 10% off your tickets with code YAKIHONNE!
-
@ a93be9fb:6d3fdc0c
2025-04-25 07:10:52This is a tmp article
-
@ bf95e1a4:ebdcc848
2025-04-25 07:10:07This is a part of the Bitcoin Infinity Academy course on Knut Svanholm's book Bitcoin: Sovereignty Through Mathematics. For more information, check out our Geyser page!
Scarcity
What makes a commodity scarce? What is scarcity in the first place? What other properties can be deducted from an object’s scarcity? How are scarcity, energy, time, and value connected? Scarcity might seem easy to describe on the surface, but in reality, it’s not. Not when you take infinity into account. Infinity is a concept that has puzzled the human mind for as long as it has been able to imagine it. If it ever has. It is a very abstract concept, and it’s always linked to time simply because even imagining an infinite number would take an infinite amount of time. If we truly live in an infinite universe, scarcity cannot exist. If something exists in an infinite universe, an infinite number of copies of this something must also exist since the probability of this being true would also be infinite in an infinite universe. Therefore, scarcity must always be defined within a set framework. No frame, no scarcity.
Think of it this way: the most expensive artwork ever sold at the time of writing was the Salvator Mundi, painted by Leonardo da Vinci. It’s not even a particularly beautiful painting, so why the high price? Because Da Vinci originals are scarce. A poster of the painting isn’t expensive at all, but the original will cost you at least 450 million US Dollars. All because we agree to frame its scarcity around the notion that it is a Da Vinci original, of which under twenty exist today. Historically, scarcity has always been framed around real-world limits to the supply of a good. Most of the great thinkers of the Austrian school of economics from the twentieth century believed that the value of a monetary good arises from its scarcity and that scarcity is always connected to the real-world availability of that good. Most of them believed that a gold standard would be the hardest form of money that we would ever see and the closest thing to an absolutely scarce resource as we would ever know.
In the late 90’s, the cryptographers that laid the groundwork for what would become Bitcoin reimagined scarcity as anything with an unforgeable costliness. This mindset is key to understanding the connection between scarcity and value. Anything can be viewed as scarce if it’s sufficiently hard to produce and hard to fake the production cost of — in other words, easy to verify the validity of. The zeros at the beginning of a hashed Bitcoin block are the Proof of Work that proves that the created coins in that block were costly to produce. People who promote the idea that the mining algorithm used to produce Bitcoin could be more environmentally friendly or streamlined are either deliberately lying or missing the point. The energy expenditure is the very thing that gives the token its value because it provides proof to the network that enough computing power was sacrificed in order to keep the network sufficiently decentralized and thus resistant to change. "Easy to verify" is the flipside of the "unforgeable costliness" coin. The validity of a Bitcoin block is very easy to verify since all you need to do is look at its hash, make sure the block is part of the strongest chain, and that it conforms to all consensus rules. In order to check whether a gold bar is real or not, you probably need to trust a third party. Fiat money often comes with a plethora of water stamps, holograms, and metal stripes, so in a sense, they’re hard to forge. What you cannot know about a fiat currency at any given moment, though, is how much of it is in circulation. What you do know about fiat currencies is that they’re not scarce.
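A minimal sketch of that asymmetry, assuming a simplified leading-zero-bits check rather than Bitcoin's exact 80-byte header format and compact target encoding:

```python
# Simplified proof-of-work illustration: finding a valid nonce takes many
# hashes, while verifying a claimed one takes a single hash. Bitcoin's real
# check uses an 80-byte block header and a compact-encoded target, not this.
import hashlib

def meets_difficulty(data: bytes, zero_bits: int) -> bool:
    digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - zero_bits))

nonce = 0
while not meets_difficulty(b"block data" + str(nonce).encode(), 16):
    nonce += 1  # the costly part: roughly 2**16 attempts on average at 16 bits

# The cheap part: anyone can verify the published nonce with one double-hash.
print(nonce, meets_difficulty(b"block data" + str(nonce).encode(), 16))
```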
Bitcoin provides us with absolute scarcity for the first time in human history. It is a remarkable breakthrough. Even though you can’t make jewelry or anything else out of Bitcoin, its total supply is fixed. After the year 2140, after the last Bitcoin has been mined, the total amount of Bitcoin in circulation can only go down. This limited supply is what the gold standards of the past were there for in the first place. Bitcoin’s supply is much more limited than that of gold, however, since coins will inevitably be lost as time goes by. Since the supply is so limited, it doesn’t matter what the current demand is. The potential upside to its value is literally limitless due to this relationship between supply and demand. The “backing” that other currencies have is only there to assure people that the currency will keep its value over time, and the only way of ensuring this is to limit the supply. Bitcoin does this better than any other thing before it. Leonardo da Vinci’s original paintings are extremely valuable because of Leonardo’s brand name and the fact that there are only about 13 of them left. One day there’ll be less than one left. The same is true for Bitcoin.
Scarcity on the Internet was long believed to be an impossible invention, and it took a multi-talented genius such as Satoshi Nakamoto to figure out all the different parts that make Bitcoin so much more than the sum of them. His disappearance from the project was one such part, maybe the most important one. The thing about computerized scarcity is that it was a one-time invention. Once it was invented, the invention could not be recreated. That’s just the nature of data. Computers are designed to be able to replicate any data set any number of times. This is true for every piece of code there is, and digital scarcity needed to be framed somehow to work. Bitcoin’s consensus rules provided such a frame. Bitcoin certainly seems to provide true digital scarcity, and if the game theoretical theories that it builds on are correct, its promise of an ever-increasing value will be a self-fulfilling prophecy.
In 2018, the inflation rate of the Venezuelan Bolivar was a staggering 80,000%. Hugo Chavez and his successor, Nicolas Maduro, effectively killed the Venezuelan economy with socialism. It has happened before — and sadly, it is likely to happen again. The main problem with socialism is not that people aren’t incentivized to work in socialist countries. On the contrary, hungry people under the threat of violence tend to work harder than most. The problem with state-owned production is that there is no free market price mechanism to reflect the true demand for goods and, therefore, no way of knowing how much supply the state should produce. Everything is in constant surplus or shortage — often the latter, as the empty supermarket shelves in Venezuela depressingly attest. Chavez and Maduro attempted to rescue the country’s economy by printing more money — which simply does not work. Their true motives for printing money are, of course, questionable given that it depreciated the value of Bolivar bills to less than that of toilet paper. As mentioned in earlier chapters, inflation is the greatest hidden threat that humans have ever created for themselves.
A few hundred years ago, the Catholic Church held the lion’s share of political power throughout Europe. Today, power primarily resides with nation-states in collusion with multinational corporations. The separation of church and state triggered the migration of power from the former to the latter, emancipating many citizens in the process. Still, places like Venezuela are sad proof that “the people” are still not in power in many self-proclaimed democracies — if in any, for that matter. Another separation will have to take place first: The separation of money and state. We, the people of Planet Earth, now have the means at our disposal for this separation to take place. Whether we use them or not will determine how emancipated and independent our children can and will be in the future.
About the Bitcoin Infinity Academy
The Bitcoin Infinity Academy is an educational project built around Knut Svanholm’s books about Bitcoin and Austrian Economics. Each week, a whole chapter from one of the books is released for free on Highlighter, accompanied by a video in which Knut and Luke de Wolf discuss that chapter’s ideas. You can join the discussions by signing up for one of the courses on our Geyser page. Signed books, monthly calls, and lots of other benefits are also available.
-
@ bf95e1a4:ebdcc848
2025-04-25 07:10:01This is a part of the Bitcoin Infinity Academy course on Knut Svanholm's book Bitcoin: Sovereignty Through Mathematics. For more information, check out our Geyser page!
Scarcity
What makes a commodity scarce? What is scarcity in the first place? What other properties can be deducted from an object’s scarcity? How are scarcity, energy, time, and value connected? Scarcity might seem easy to describe on the surface, but in reality, it’s not. Not when you take infinity into account. Infinity is a concept that has puzzled the human mind for as long as it has been able to imagine it. If it ever has. It is a very abstract concept, and it’s always linked to time simply because even imagining an infinite number would take an infinite amount of time. If we truly live in an infinite universe, scarcity cannot exist. If something exists in an infinite universe, an infinite number of copies of this something must also exist since the probability of this being true would also be infinite in an infinite universe. Therefore, scarcity must always be defined within a set framework. No frame, no scarcity.
Think of it this way: the most expensive artwork ever sold at the time of writing was the Salvator Mundi, painted by Leonardo da Vinci. It’s not even a particularly beautiful painting, so why the high price? Because Da Vinci originals are scarce. A poster of the painting isn’t expensive at all, but the original will cost you at least 450 million US Dollars. All because we agree to frame its scarcity around the notion that it is a Da Vinci original, of which under twenty exist today. Historically, scarcity has always been framed around real-world limits to the supply of a good. Most of the great thinkers of the Austrian school of economics from the twentieth century believed that the value of a monetary good arises from its scarcity and that scarcity is always connected to the real-world availability of that good. Most of them believed that a gold standard would be the hardest form of money that we would ever see and the closest thing to an absolutely scarce resource as we would ever know.
In the late 90’s, the cryptographers that laid the groundwork for what would become Bitcoin reimagined scarcity as anything with an unforgeable costliness. This mindset is key to understanding the connection between scarcity and value. Anything can be viewed as scarce if it’s sufficiently hard to produce and hard to fake the production cost of — in other words, easy to verify the validity of. The zeros at the beginning of a hashed Bitcoin block are the Proof of Work that proves that the created coins in that block were costly to produce. People who promote the idea that the mining algorithm used to produce Bitcoin could be more environmentally friendly or streamlined are either deliberately lying or missing the point. The energy expenditure is the very thing that gives the token its value because it provides proof to the network that enough computing power was sacrificed in order to keep the network sufficiently decentralized and thus resistant to change. "Easy to verify" is the flipside of the "unforgeable costliness" coin. The validity of a Bitcoin block is very easy to verify since all you need to do is look at its hash, make sure the block is part of the strongest chain, and that it conforms to all consensus rules. In order to check whether a gold bar is real or not, you probably need to trust a third party. Fiat money often comes with a plethora of water stamps, holograms, and metal stripes, so in a sense, they’re hard to forge. What you cannot know about a fiat currency at any given moment, though, is how much of it is in circulation. What you do know about fiat currencies is that they’re not scarce.
Bitcoin provides us with absolute scarcity for the first time in human history. It is a remarkable breakthrough. Even though you can’t make jewelry or anything else out of Bitcoin, its total supply is fixed. After the year 2140, after the last Bitcoin has been mined, the total amount of Bitcoin in circulation can only go down. This limited supply is what the gold standards of the past were there for in the first place. Bitcoin’s supply is much more limited than that of gold, however, since they will be lost as time goes by. Since the supply is so limited, it doesn’t matter what the current demand is. The potential upside to its value is literally limitless due to this relationship between supply and demand. The “backing” that other currencies have is only there to assume people that the currency will keep its value over time, and the only way of ensuring this is to limit the supply. Bitcoin does this better than any other thing before it. Leonardo da Vinci’s original paintings are extremely valuable because of Leonardo’s brand name and the fact that there are only about 13 of them left. One day there’ll be less than one left. The same is true for Bitcoin.
Scarcity on the Internet was long believed to be an impossible invention, and it took a multi-talented genius such as Satoshi Nakamoto to figure out all the different parts that make Bitcoin so much more than the sum of them. His disappearance from the project was one such part, maybe the most important one. The thing about computerized scarcity is that it was a one-time invention. Once it had been invented, the feat could not be repeated: any later attempt would simply be another copy. That’s just the nature of data. Computers are designed to be able to replicate any data set any number of times. This is true for every piece of code there is, and digital scarcity needed to be framed somehow to work. Bitcoin’s consensus rules provided such a frame. Bitcoin certainly seems to provide true digital scarcity, and if the game-theoretical assumptions that it builds on are correct, its promise of an ever-increasing value will be a self-fulfilling prophecy.
In 2018, the inflation rate of the Venezuelan Bolivar was a staggering 80,000%. Hugo Chavez and his successor, Nicolas Maduro, effectively killed the Venezuelan economy with socialism. It has happened before — and sadly, it is likely to happen again. The main problem with socialism is not that people aren’t incentivized to work in socialist countries. On the contrary, hungry people under the threat of violence tend to work harder than most. The problem with state-owned production is that there is no free market price mechanism to reflect the true demand for goods and, therefore, no way of knowing how much supply the state should produce. Everything is in constant surplus or shortage — often the latter, as the empty supermarket shelves in Venezuela depressingly attest. Chavez and Maduro attempted to rescue the country’s economy by printing more money — which simply does not work. Their true motives for printing money are, of course, questionable given that it depreciated the value of Bolivar bills to less than that of toilet paper. As mentioned in earlier chapters, inflation is the greatest hidden threat that humans have ever created for themselves.
A few hundred years ago, the Catholic Church held the lion’s share of political power throughout Europe. Today, power primarily resides with nation-states in collusion with multinational corporations. The separation of church and state triggered the migration of power from the former to the latter, emancipating many citizens in the process. Still, places like Venezuela are sad proof that “the people” are still not in power in many self-proclaimed democracies — if in any, for that matter. Another separation will have to take place first: The separation of money and state. We, the people of Planet Earth, now have the means at our disposal for this separation to take place. Whether we use them or not will determine how emancipated and independent our children can and will be in the future.
About the Bitcoin Infinity Academy
The Bitcoin Infinity Academy is an educational project built around Knut Svanholm’s books about Bitcoin and Austrian Economics. Each week, a whole chapter from one of the books is released for free on Highlighter, accompanied by a video in which Knut and Luke de Wolf discuss that chapter’s ideas. You can join the discussions by signing up for one of the courses on our Geyser page. Signed books, monthly calls, and lots of other benefits are also available.
-
@ d34e832d:383f78d0
2025-04-25 07:09:36

1. Premise
The demand for high-capacity hard drives has grown exponentially with the expansion of cloud storage, big data, and personal backups. As failure of a storage device can result in significant data loss and downtime, understanding long-term drive reliability is critical. This research seeks to determine the most reliable manufacturer of 10TB+ HDDs by analyzing cumulative drive failure data over ten years from Backblaze, a leader in cloud backup services.
2. Methodology
Data from Backblaze, representing 350,000+ deployed drives, was analyzed to calculate the annualized failure rate (AFR) of 10TB+ models from Seagate, Western Digital (including HGST), and Toshiba. AFR was calculated using cumulative data to reduce volatility and better illustrate long-term reliability trends. Power-on hours were used as the temporal metric to more accurately capture usage-based wear, as opposed to calendar-based aging.
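As a rough illustration of that methodology (the numbers below are invented for the example, not Backblaze's figures), cumulative AFR can be derived from accumulated power-on hours and failure counts like this:

```python
# Hypothetical per-model totals: (cumulative power-on hours across all units, cumulative failures)
fleet = {
    "Model A (14TB)": (52_000_000, 18),
    "Model B (12TB)": (34_000_000, 41),
}

HOURS_PER_YEAR = 24 * 365

for model, (power_on_hours, failures) in fleet.items():
    drive_years = power_on_hours / HOURS_PER_YEAR
    afr = failures / drive_years * 100  # annualized failure rate in percent
    print(f"{model}: {afr:.2f}% AFR over {drive_years:,.0f} drive-years")
```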
3. Results and Analysis
3.1 Western Digital (including HGST)
- Ultrastar HC530 & HC550 (14TB & 16TB)
- AFR consistently below 0.35% after the initial “burn-in” period.
- Exhibited superior long-term stability.
- HGST Ultrastar HC520 (12TB)
- Demonstrated robust performance with AFR consistently under 0.5%.
- Excellent aging profile after year one.
3.2 Toshiba
- General Performance
- Noted for higher early failure rates (DOA issues), indicating manufacturing or transport inconsistencies.
- After stabilization, most models showed AFRs under 1%, which is within acceptable industry standards.
- Model Variability
- Differences in AFR observed between 4Kn and 512e sector models, suggesting firmware or controller differences may influence longevity.
3.3 Seagate
- Older Models (e.g., Exos X12)
- AFRs often exceeded 1.5%, raising concerns for long-term use in mission-critical applications.
- Newer Models (e.g., Exos X16)
- Improvements seen, with AFRs around 1%, though still higher than WD and HGST counterparts.
- Seagate’s aggressive pricing often makes these drives more attractive for cost-sensitive deployments.
4. Points Drawn
The data reveals a compelling narrative in brand-level reliability trends among high-capacity hard drives. Western Digital, especially through its HGST-derived Ultrastar product lines, consistently demonstrates superior reliability, maintaining exceptionally low Annualized Failure Rates (AFRs) and excellent operational stability across extended use periods. This positions WD as the most dependable option for enterprise-grade and mission-critical storage environments. Toshiba, despite a tendency toward higher early failure rates—often manifesting as Dead-on-Arrival (DOA) units—generally stabilizes to acceptable AFR levels below 1% over time. This indicates potential suitability in deployments where early failure screening and redundancy planning are feasible. In contrast, Seagate’s performance is notably variable. While earlier models displayed higher AFRs, more recent iterations such as the Exos X16 series have shown marked improvement. Nevertheless, Seagate drives continue to exhibit greater fluctuation in reliability outcomes. Their comparatively lower cost structure, however, may render them an attractive option in cost-sensitive or non-critical storage environments, where performance variability is an acceptable trade-off.
It’s crucial to remember that AFR is a probabilistic measure; individual drive failures are still possible regardless of brand or model. Furthermore, newer drive models need additional longitudinal data to confirm their long-term reliability.
5. Consider
Best Overall Choice: Western Digital Ultrastar HC530/HC550
These drives combine top-tier reliability (AFR < 0.35%), mature firmware, and consistent manufacturing quality, making them ideal for enterprise and archival use.

Runner-Up (Budget Consideration): Seagate Exos X16
While reliability is slightly lower (AFR ~1%), the Exos series offers excellent value, especially for bulk storage.

Cautionary Choice: Toshiba 10TB+ Models
Users should be prepared for potential early failures and may consider pre-deployment burn-in testing.
6. Recommendations for Buyers
- For mission-critical environments: Choose Western Digital Ultrastar models.
- For budget-focused or secondary storage: Seagate Exos offers acceptable risk-to-cost ratio.
- For experimental or non-essential deployments: Toshiba drives post-burn-in are serviceable.
7. Future Work
This analysis is based on publicly available Backblaze data, which reflects data center use and may not map perfectly to home or SMB environments. Sample sizes vary by model and may bias certain conclusions. Future research could integrate SMART data analytics, firmware version tracking, and consumer-use data to provide more granular insight.
References
- Backblaze. (2013–2023). Hard Drive Stats. Retrieved from https://www.backblaze.com/blog
- Manufacturer datasheets and reliability reports for Seagate, Western Digital, and Toshiba.

-
@ d34e832d:383f78d0
2025-04-25 06:06:32

This walkthrough examines the integration of these three tools as a combined self-custody solution, focusing on their functionality, security benefits, and practical applications. Specter Desktop offers a user-friendly interface for managing Bitcoin wallets, Bitcoin Core provides a full node for transaction validation, and Coldcard provides the hardware security necessary to safeguard private keys. Together, these tools offer a robust and secure environment for managing Bitcoin holdings, protecting them from both online and physical threats.
We will explore their individual roles in Bitcoin management, how they can be integrated to offer a cohesive solution, and the installation and configuration process on OpenBSD. Additionally, security considerations and practical use cases will be addressed to demonstrate the advantages of this setup compared to alternative Bitcoin management solutions.
2.1 Specter Desktop
Specter Desktop is a Bitcoin wallet management software that provides a powerful, open-source interface for interacting with Bitcoin nodes. Built with an emphasis on multi-signature wallets and hardware wallet integration, Specter Desktop is designed to serve as an all-in-one solution for users who prioritize security and self-custody. It integrates seamlessly with Bitcoin Core and various hardware wallets, including Coldcard, and supports advanced features such as multi-signature wallets, which offer additional layers of security for managing Bitcoin funds.
2.2 Bitcoin Core
Bitcoin Core is the reference implementation of the Bitcoin protocol and serves as the backbone of the Bitcoin network. Running a Bitcoin Core full node provides users with the ability to independently verify all transactions and blocks on the network, ensuring trustless interaction with the blockchain. This is crucial for achieving full decentralization and autonomy, as Bitcoin Core ensures that users do not rely on third parties to confirm the validity of transactions. Furthermore, Bitcoin Core allows users to interact with the Bitcoin network via the command-line interface or a graphical user interface (GUI), offering flexibility in how one can participate in the Bitcoin ecosystem.
2.3 Coldcard
Coldcard is a Bitcoin hardware wallet that prioritizes security and privacy. It is designed to store private keys offline, away from any internet-connected devices, making it an essential tool for protecting Bitcoin holdings from online threats such as malware or hacking. Coldcard’s secure hardware environment ensures that private keys never leave the device, providing an air-gapped solution for cold storage. Its open-source firmware allows users to audit the wallet’s code and operations, ensuring that the device behaves exactly as expected.
2.4 Roles in Bitcoin Management
Each of these components plays a distinct yet complementary role in Bitcoin management:
- Specter Desktop: Acts as the interface for wallet management and multi-signature wallet configuration.
- Bitcoin Core: Provides a full node for transaction verification and interacts with the Bitcoin network.
- Coldcard: Safeguards private keys by storing them securely in hardware, providing offline signing capabilities for transactions.
Together, these tools offer a comprehensive and secure environment for managing Bitcoin funds.
3. Integration
3.1 How Specter Desktop, Bitcoin Core, and Coldcard Work Together
The integration of Specter Desktop, Bitcoin Core, and Coldcard offers a cohesive solution for managing and securing Bitcoin. Here's how these components interact:
- Bitcoin Core runs as a full node, providing a fully verified and trustless Bitcoin network. It validates all transactions and blocks independently.
- Specter Desktop communicates with Bitcoin Core to manage Bitcoin wallets, including setting up multi-signature wallets and connecting to hardware wallets like Coldcard.
- Coldcard is used to securely store the private keys for Bitcoin transactions. When a transaction is created in Specter Desktop, it is signed offline on the Coldcard device before being broadcasted to the Bitcoin network.
The main advantages of this setup include:
- Self-Sovereignty: By using Bitcoin Core and Coldcard, the user has complete control over their funds and does not rely on third-party services for transaction verification or key management.
- Enhanced Security: Coldcard provides the highest level of security for private keys, protecting them from online attacks and malware. Specter Desktop’s integration with Coldcard ensures a user-friendly method for interacting with the hardware wallet.
- Privacy: Using Bitcoin Core allows users to run their own full node, ensuring that they are not dependent on third-party servers, which could compromise privacy.
This integration, in combination with a user-friendly interface from Specter Desktop, allows Bitcoin holders to manage their funds securely, efficiently, and with full autonomy.
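To make the signing flow more tangible, here is a minimal sketch of the PSBT round-trip that Specter automates, expressed as direct JSON-RPC calls to Bitcoin Core. The wallet name, credentials, address, and amount are placeholders, and the Coldcard step happens entirely off this machine (for example via SD card); treat this as an illustration of the flow rather than production code.

```python
import json
import requests

RPC_URL = "http://127.0.0.1:8332/wallet/watchonly"  # hypothetical watch-only wallet on your own node
AUTH = ("rpcuser", "rpcpassword")                    # replace with your own RPC credentials

def rpc(method, *params):
    payload = {"jsonrpc": "2.0", "id": "psbt-sketch", "method": method, "params": list(params)}
    result = requests.post(RPC_URL, auth=AUTH, data=json.dumps(payload)).json()
    if result.get("error"):
        raise RuntimeError(result["error"])
    return result["result"]

# 1. Create an unsigned, funded PSBT from the watch-only wallet.
outputs = [{"bc1q-example-destination-address": 0.001}]  # placeholder address and amount
unsigned_psbt = rpc("walletcreatefundedpsbt", [], outputs)["psbt"]

# 2. Move the PSBT to the Coldcard (e.g., via SD card), sign it offline,
#    and bring the signed PSBT string back.
signed_psbt = "...signed PSBT pasted back here..."

# 3. Finalize and broadcast through your own node.
finalized = rpc("finalizepsbt", signed_psbt)
if finalized["complete"]:
    txid = rpc("sendrawtransaction", finalized["hex"])
    print("broadcast:", txid)
```

The point of the exercise is that the private keys never touch the online machine; only the partially signed transaction travels back and forth.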
3.2 Advantages of This Setup
The combined use of Specter Desktop, Bitcoin Core, and Coldcard offers several advantages over alternative Bitcoin management solutions:
- Enhanced Security: The use of an air-gapped Coldcard wallet ensures private keys never leave the device, even when signing transactions. Coupled with Bitcoin Core’s full node validation, this setup offers unparalleled protection against online threats and attacks.
- Decentralization: Running a full Bitcoin Core node ensures that the user has full control over transaction validation, removing any dependence on centralized third-party services.
- User-Friendly Interface: Specter Desktop simplifies the management of multi-signature wallets and integrates seamlessly with Coldcard, making it accessible even to non-technical users.
4. Installation on OpenBSD
This section provides a step-by-step guide to installing Specter Desktop, Bitcoin Core, and setting up Coldcard on OpenBSD.
4.1 Installing Bitcoin Core
OpenBSD Bitcoin Core Build Guide
Updated for OpenBSD 7.6
This guide outlines the process of building Bitcoin Core (bitcoind), its command-line utilities, and the Bitcoin GUI (bitcoin-qt) on OpenBSD. It covers necessary dependencies, installation steps, and configuration details specific to OpenBSD.
Table of Contents
- Preparation
- Installing Required Dependencies
- Cloning the Bitcoin Core Repository
- Installing Optional Dependencies
- Wallet Dependencies
- GUI Dependencies
- Building Bitcoin Core
- Configuration
- Compilation
- Resource Limit Adjustments
1. Preparation
Before beginning the build process, ensure your system is up-to-date and that you have the necessary dependencies installed.
1.1 Installing Required Dependencies
As the root user, install the base dependencies required for building Bitcoin Core:
```bash
pkg_add git cmake boost libevent
```
For a complete list of all dependencies, refer to dependencies.md.

1.2 Cloning the Bitcoin Core Repository
Next, clone the official Bitcoin Core repository to a directory. All build commands will be executed from this directory.
```bash
git clone https://github.com/bitcoin/bitcoin.git
```
1.3 Installing Optional Dependencies
Bitcoin Core supports optional dependencies for advanced functionality such as wallet support, GUI features, and notifications. Below are the details for the installation of optional dependencies.
1.3.1 Wallet Dependencies
While it is not necessary to build wallet functionality to run `bitcoind` or `bitcoin-qt`, install the following if you need wallet functionality:

- Descriptor Wallet Support: SQLite is required for descriptor wallet functionality.

  ```bash
  pkg_add sqlite3
  ```
- Legacy Wallet Support: BerkeleyDB is needed for legacy wallet support. It is recommended to use Berkeley DB 4.8. The BerkeleyDB library from OpenBSD ports cannot be used directly, so you will need to build it from source using the `depends` folder. Run the following command to build it (adjust the path as necessary):

  ```bash
  gmake -C depends NO_BOOST=1 NO_LIBEVENT=1 NO_QT=1 NO_ZMQ=1 NO_USDT=1
  ```

  After building BerkeleyDB, set the environment variable `BDB_PREFIX` to point to the appropriate directory:

  ```bash
  export BDB_PREFIX="[path_to_berkeleydb]"
  ```
1.3.2 GUI Dependencies
Bitcoin Core includes a GUI built with Qt6. To compile the GUI, the following dependencies are required:
- Qt6: Install the necessary parts of the Qt6 framework for GUI support.

  ```bash
  pkg_add qt6-qtbase qt6-qttools
  ```

- libqrencode: The GUI can generate QR codes for addresses. To enable this feature, install `libqrencode`:

  ```bash
  pkg_add libqrencode
  ```

  If you don't need QR encoding support, use the `-DWITH_QRENCODE=OFF` option during the configuration step to disable it.
1.3.3 Notification Dependencies
Bitcoin Core can provide notifications through ZeroMQ. If you require this functionality, install ZeroMQ:
```bash
pkg_add zeromq
```
1.3.4 Test Suite Dependencies
Bitcoin Core includes a test suite for development and testing purposes. To run the test suite, you will need Python 3 and the ZeroMQ Python bindings:
```bash
pkg_add python py3-zmq
```
2. Building Bitcoin Core
Once all dependencies are installed, follow these steps to configure and compile Bitcoin Core.
2.1 Configuration
Bitcoin Core offers various configuration options. Below are two common setups:
- Descriptor Wallet and GUI: Enables descriptor wallet support and the GUI. This requires SQLite and Qt6.

  ```bash
  cmake -B build -DBUILD_GUI=ON
  ```

  To see all available configuration options, run:

  ```bash
  cmake -B build -LH
  ```

- Descriptor & Legacy Wallet, No GUI: Enables support for both descriptor and legacy wallets, but no GUI.

  ```bash
  cmake -B build -DBerkeleyDB_INCLUDE_DIR:PATH="${BDB_PREFIX}/include" -DWITH_BDB=ON
  ```
2.2 Compile
After configuration, compile the project using the following command. Use the `-j N` option to parallelize the build process, where `N` is the number of CPU cores you want to use.

```bash
cmake --build build
```
To run the test suite after building, use:
```bash
ctest --test-dir build
```
If Python 3 is not installed, some tests may be skipped.
2.3 Resource Limit Adjustments
OpenBSD's default resource limits are quite restrictive and may cause build failures, especially due to memory issues. If you encounter memory-related errors, increase the data segment limit temporarily for the current shell session:
```bash
ulimit -d 3000000
```
To make the change permanent for all users, modify the `datasize-cur` and `datasize-max` values in `/etc/login.conf` and reboot the system.
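As a sketch of what that permanent change might look like (the class name and values here are examples; adjust them to your own system and consult login.conf(5)):

```conf
# /etc/login.conf - example entries only
staff:\
        :datasize-cur=3072M:\
        :datasize-max=4096M:\
        :tc=default:
```

If your system maintains a `login.conf.db` database, remember to rebuild it with `cap_mkdb /etc/login.conf` after editing.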
Now Consider
By following these steps, you will be able to successfully build Bitcoin Core on OpenBSD 7.6. This guide covers the installation of essential and optional dependencies, configuration, and the compilation process. Make sure to adjust the resource limits if necessary, especially when dealing with larger codebases.
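One practical note before moving on: Specter talks to Bitcoin Core over its RPC interface, so the node needs RPC enabled and reachable. A minimal `bitcoin.conf` sketch is shown below; the credentials are placeholders, and generating an rpcauth line with the `share/rpcauth/rpcauth.py` script from the Bitcoin Core repository is preferable to storing a plain password.

```conf
# ~/.bitcoin/bitcoin.conf - example values only
server=1
rpcbind=127.0.0.1
rpcallowip=127.0.0.1
# Either a generated rpcauth=... line, or (less ideal) plain credentials:
rpcuser=specter
rpcpassword=change-me
```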
4.2 Installing Specter Desktop: What To Consider
Specter Installation Guide for OpenBSD with Coldcard
This simply aims to provide OpenBSD users with a comprehensive and streamlined process for installing Specter, a Bitcoin wallet management tool. Tailored to those integrating Coldcard hardware wallets with Specter, this guide will help users navigate the installation process, considering various technical levels and preferences. Whether you're a beginner or an advanced user, the guide will empower you to make informed decisions about which installation method suits your needs best.
Specter Installation Methods on OpenBSD
Specter offers different installation methods to accommodate various technical skills and environments. Here, we explore each installation method in the context of OpenBSD, while considering integration with Coldcard for enhanced security in Bitcoin operations.
1. OS-Specific Installation on OpenBSD
Installing Specter directly from OpenBSD's packages or source is an excellent option for users who prefer system-native solutions. This method ensures that Specter integrates seamlessly with OpenBSD’s environment.
- Advantages:
  - Easy Installation: Package managers (if available on OpenBSD) simplify the process.
  - System Compatibility: Ensures that Specter works well with OpenBSD’s unique system configurations.
  - Convenience: Can be installed on the same machine that runs Bitcoin Core, offering an integrated solution for managing both Bitcoin Core and Coldcard.
- Disadvantages:
  - System-Specific Constraints: OpenBSD’s minimalistic approach might require manual adjustments, especially in terms of dependencies or running services.
  - Updates: You may need to manually update Specter if updates aren’t regularly packaged for OpenBSD.
- Ideal Use Case: Ideal for users looking for a straightforward, system-native installation that integrates with the local Bitcoin node and uses the Coldcard hardware wallet.
2. PIP Installation on OpenBSD
For those comfortable working in Python environments, PIP installation offers a flexible approach for installing Specter.
- Advantages:
  - Simplicity: If you’re already managing Python environments, PIP provides a straightforward and easy method for installation.
  - Version Control: Gives users direct control over the version of Specter being installed.
  - Integration: Works well with any existing Python workflow.
- Disadvantages:
  - Python Dependency Management: OpenBSD users may face challenges when managing dependencies, as Python setups on OpenBSD can be non-standard.
  - Technical Knowledge: Requires familiarity with Python and pip, which may not be ideal for non-technical users.
- Ideal Use Case: Suitable for Python-savvy users who already use Python-based workflows and need more granular control over their installations.
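For reference, the PIP route typically boils down to something like the following; the package name and default port reflect Specter's documentation at the time of writing, so double-check them against the current release.

```bash
# Install Specter Desktop into the current Python environment
pip3 install cryptoadvance.specter

# Run the Specter server, then open http://127.0.0.1:25441 in a browser
python3 -m cryptoadvance.specter server
```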
3. Docker Installation
If you're familiar with Docker, running Specter Desktop in Docker containers is a fantastic way to isolate the installation and avoid conflicts with the OpenBSD system.
- Advantages:
  - Isolation: Docker ensures Specter runs in an isolated environment, reducing system conflicts.
  - Portability: Once set up, Docker containers can be replicated across various platforms and devices.
  - Consistent Environment: Docker ensures consistency in the Specter installation, regardless of underlying OS differences.
- Disadvantages:
  - Docker Setup: OpenBSD’s Docker support isn’t as seamless as other operating systems, potentially requiring extra steps to get everything running.
  - Complexity: For users unfamiliar with Docker, the initial setup can be more challenging.
- Ideal Use Case: Best for advanced users familiar with Docker environments who require a reproducible and isolated installation.
4. Manual Build from Source (Advanced Users)
For users looking for full control over the installation process, building Specter from source on OpenBSD offers the most flexibility.
- Advantages:
  - Customization: You can customize Specter’s functionality and integrate it deeply into your system or workflow.
  - Control: Full control over the build and version management process.
- Disadvantages:
  - Complex Setup: Requires familiarity with development environments, build tools, and dependency management.
  - Time-Consuming: The process of building from source can take longer, especially on OpenBSD, which may lack certain automated build systems for Specter.
- Ideal Use Case: Best for experienced developers who want to customize Specter to meet specific needs or integrate Coldcard with unique configurations.
5. Node-Specific Integrations (e.g., Raspiblitz, Umbrel, etc.)
If you’re using a Bitcoin node like Raspiblitz or Umbrel along with Specter, these node-specific integrations allow you to streamline wallet management directly from the node interface.
- Advantages:
  - Seamless Integration: Integrates Specter directly into the node's wallet management system.
  - Efficient: Allows for efficient management of both Bitcoin Core and Coldcard in a unified environment.
- Disadvantages:
  - Platform Limitation: Not applicable to OpenBSD directly unless you're running a specific node on the same system.
  - Additional Hardware Requirements: Running a dedicated node requires extra hardware resources.
- Ideal Use Case: Perfect for users already managing Bitcoin nodes with integrated Specter support and Coldcard hardware wallets.
6. Using Package Managers (Homebrew for Linux/macOS)
If you're running OpenBSD on a machine that also supports Homebrew, this method can simplify installation.
- Advantages:
  - Simple Setup: Package managers like Homebrew streamline the installation process.
  - Automated Dependency Management: Handles all dependencies automatically, reducing setup complexity.
- Disadvantages:
  - Platform Limitation: Package managers like Homebrew are more commonly used on macOS and Linux, not on OpenBSD.
  - Version Control: May not offer the latest Specter version depending on the repository.
- Ideal Use Case: Best for users with Homebrew installed, though it may be less relevant for OpenBSD users.
Installation Decision Tree for OpenBSD with Coldcard
- Do you prefer system-native installation or Docker?
  - System-native (OpenBSD-specific packages) → Proceed to installation via the OS package manager.
  - Docker → Set up a Docker container for an isolated Specter installation.
- Are you comfortable with Python?
  - Yes → Install using PIP for Python-based environments.
  - No → Move to direct installation methods like Docker or a manual build.
- Do you have a specific Bitcoin node to integrate with?
  - Yes → Consider node-specific integrations like Raspiblitz or Umbrel.
  - No → Install using Docker or a manual source build.
Now Consider
When installing Specter on OpenBSD, consider factors such as your technical expertise, hardware resources, and the need for integration with Coldcard. Beginners might prefer simpler methods like OS-specific packages or Docker, while advanced users will benefit from building from source for complete control over the installation. Choose the method that best fits your environment to maximize your Bitcoin wallet management capabilities.
4.3 Setting Up Coldcard
Refer to the "Coldcard Setup Documentation" section, found at the end of this write-up, for the installation and configuration instructions specific to Coldcard.
5. Security Considerations
When using Specter Desktop, Bitcoin Core, and Coldcard together, users benefit from a layered security approach:
- Bitcoin Core offers transaction validation and network security, ensuring that all transactions are verified independently.
- Coldcard provides air-gapped hardware wallet functionality, ensuring private keys are never exposed to potentially compromised devices.
- Specter Desktop facilitates user-friendly management of multi-signature wallets while integrating the security of Bitcoin Core and Coldcard.
However, users must also be aware of potential security risks, including:
- Coldcard Physical Theft: If the Coldcard device is stolen, the attacker would need the PIN code to access the wallet, but physical security must always be maintained.
- Backup Security: Users must securely back up their Coldcard recovery seed to prevent loss of access to funds.
6. Use Cases and Practical Applications
The integration of Specter Desktop, Bitcoin Core, and Coldcard is especially beneficial for:
- High-Value Bitcoin Holders: Those managing large sums of Bitcoin can ensure top-tier security with a multi-signature wallet setup and Coldcard’s air-gapped security.
- Privacy-Conscious Users: Bitcoin Core allows for full network verification, preventing third-party servers from seeing transaction details.
- Cold Storage Solutions: For users who want to keep their Bitcoin safe long-term, the Coldcard provides a secure offline solution while still enabling easy access via Specter Desktop.
7. Coldcard Setup Documentation
This section should provide clear, step-by-step instructions for configuring and using the Coldcard hardware wallet, including how to pair it with Specter Desktop, set up multi-signature wallets, and perform basic operations like signing transactions.
8. Consider
Whatever system you ultimately adopt, integrating Specter Desktop, Bitcoin Core, and Coldcard provides a powerful, secure, and decentralized solution for managing Bitcoin. This setup not only prioritizes user privacy and security but also provides an intuitive interface for even non-technical users. The combination of full node validation, multi-signature support, and air-gapped hardware wallet storage ensures that Bitcoin holdings are protected from both online and physical threats.
As the Bitcoin landscape continues to evolve, this setup can serve as a robust model for self-sovereign financial management, with the potential for future developments to enhance security and usability.
-
@ c1e9ab3a:9cb56b43
2025-04-25 00:37:34

If you ever read about a hypothetical "evil AI"—one that manipulates, dominates, and surveils humanity—you might find yourself wondering: how is that any different from what some governments already do?
Let’s explore the eerie parallels between the actions of a fictional malevolent AI and the behaviors of powerful modern states—specifically the U.S. federal government.
Surveillance and Control
Evil AI: Uses total surveillance to monitor all activity, predict rebellion, and enforce compliance.
Modern Government: Post-9/11 intelligence agencies like the NSA have implemented mass data collection programs, monitoring phone calls, emails, and online activity—often without meaningful oversight.
Parallel: Both claim to act in the name of “security,” but the tools are ripe for abuse.
Manipulation of Information
Evil AI: Floods the information space with propaganda, misinformation, and filters truth based on its goals.
Modern Government: Funds media outlets, promotes specific narratives through intelligence leaks, and collaborates with social media companies to suppress or flag dissenting viewpoints.
Parallel: Control the narrative, shape public perception, and discredit opposition.
Economic Domination
Evil AI: Restructures the economy for efficiency, displacing workers and concentrating resources.
Modern Government: Facilitates wealth transfer through lobbying, regulatory capture, and inflationary monetary policy that disproportionately hurts the middle and lower classes.
Parallel: The system enriches those who control it, leaving the rest with less power to resist.
Perpetual Warfare
Evil AI: Instigates conflict to weaken opposition or as a form of distraction and control.
Modern Government: Maintains a state of nearly constant military engagement since WWII, often for interests that benefit a small elite rather than national defense.
Parallel: War becomes policy, not a last resort.
Predictive Policing and Censorship
Evil AI: Uses predictive algorithms to preemptively suppress dissent and eliminate threats.
Modern Government: Experiments with pre-crime-like measures, flags “misinformation,” and uses AI tools to monitor online behavior.
Parallel: Prevent rebellion not by fixing problems, but by suppressing their expression.
Conclusion: Systemic Inhumanity
Whether it’s AI or a bureaucratic state, the more a system becomes detached from individual accountability and human empathy, the more it starts to act in ways we would call “evil” if a machine did them.
An AI doesn’t need to enslave humanity with lasers and killer robots. Sometimes all it takes is code, coercion, and unchecked power—something we may already be facing.
-
@ 266815e0:6cd408a5
2025-04-24 22:56:53

noStrudel
It's been over four months since I released `v0.42.0` of noStrudel, but I haven't forgotten about it; I've just been busy refactoring the code-base. The app is well past its 2yr birthday and a lot of the code is really messy and kind of hacky, so my focus in the past few months has been refactoring and moving a lot of it out into the applesauce packages so it can be tested.
The biggest changes have been switching to use
rx-nostr
for all relay connections and usingrxjs
and applesauce for event management and timelines. In total ~22k lines of code have been changed since the last release.I'm hoping it wont take me much longer to get a stable release for
v0.43.0
. In the meantime if you want to test out the new changes you can find them on the nsite deployment.nsite deplyment: nostrudel.nsite.lol/ Github repo: github.com/hzrd149/nostrudel
Applesauce
I've been making great progress on the applesauce libraries that are the core of onStrudel. Since January I've released
v0.11.0
andv0.12.0
.In the past month I've been working towards a v1 release with a better relay connection package applesauce-relay and pre-built actions for clients to easily implement common things like follow/unfollow and mute/unmute. applesauce-actions
Docs website: hzrd149.github.io/applesauce/ Github repo: https://github.com/hzrd149/applesauce
Blossom
Spec changes: - Merged PR #56 from kehiy for BUD-09 ( blob reports ) - Merged PR #60 from Kieran to update BUD-8 to use the standard NIP-94 tags array. - Merged PR #38 to make the file extension mandatory in the
url
field of the returned blob descriptor. - Merged PR #54 changing the authorization type for the/media
endpoint tomedia
instead ofupload
. This fixes an issue where the server could mirror the original blob without the users consent.Besides the changes to the blossom spec itself I started working on a small cli tool to help test and debug new blossom server implementations. The goal is to have a set of upload and download tests that can be run against a server to test if it adheres to the specifications. It can also be used output debug info and show recommended headers to add to the http responses.
If you have nodejs installed you can try it out by running
sh npx blossom-audit audit <server-url> [image|bitcoin|gif|path/to/file.jpeg]
Github repo: github.com/hzrd149/blossom-audit
Other projects
Wifistr
While participating in SEC-04 I built a small app for sharing the locations and passwords of wifi networks. Its far from complete, but its usable and serves as an example of building an app with SolidJS and applesauce.
Live version: hzrd149.github.io/wifistr/ nsite version: here Github repo: github.com/hzrd149/wifistr
nsite-manager
I've been slowly continuing work on nsite-manager, mostly just to allow myself to debug various nsites and make sure nsite.lol is still working correctly.
Github repo: github.com/hzrd149/nsite-manager
nsite-gateway
I finally got around to making some much needed bug fixes and improvements to nsite-gateway ( the server behind nsite.lol ) and released a stable
1.0.0
version.My hope is that its stable enough now to allow other users to start hosting their own instances of it.
Github repo: github.com/hzrd149/nsite-gateway
morning-glory
As part of my cashu PR for NUT-23 ( HTTP 402 Payment required ) I built a blossom server that only accepts cashu payments for uploads and stores blobs for 24h before deleting them.
Github repo: github.com/hzrd149/morning-glory
bakery
I've been toying with the idea of building a backend-first nostr client that would download events while I'm not at my computer and send me notifications about my DMs.
I made some progress on it in the last months but its far from complete or usable. Hopefully ill get some time in the next few months to create a working alpha version for myself and others to install on Umbrel and Start9
Github repo: github.com/hzrd149/bakery
-
@ 2ed3596e:98b4cc78
2025-04-24 18:31:53

Bitcoiners, your points just got a lot more epic! We’re thrilled to announce the launch of the Bitcoin Well Point Store, available now in Canada and the USA.
Now you can redeem your Bitcoin Well points for prizes that level up and celebrate your Bitcoin lifestyle.
What can you get in store?
Right now, you can exchange your points for:
- Simply Bitcoin hoodie: Rep your Bitcoin pride in style
- Exclusive Bitcoin Well Stampseed backup plate: Protect and manage your private keys securely
- Personalized LeatherMint wallet: Classy, sleek, and ready to hold your fiat (until you convert it to sats!)
- Tesla Cybertruck in Bitcoin orange: Wait…really? A Cybertruck? Who approved this?
More epic items will be available in the Bitcoin Well Point Store in the coming months. Stay tuned!
How to redeem your Bitcoin Well Points
Redeeming your points is easy:
- Log in and go to the Bitcoin Well Points store within the Rewards Section
- Check your Bitcoin Well point balance
- Redeem your Bitcoin Well points for the prize of your dreams
Once you’ve purchased an item from the Bitcoin Well Point Store, we’ll email you to figure out where you want us to ship your prize. Unless it's the Cybertruck, then you can come to our office and pick it up!
How can you earn more Bitcoin Well Points? ⚡
Here are all the ways you can earn Bitcoin Well points:
- Buy bitcoin/Sell bitcoin/pay bills - 3 points per $10
- Recurring buy - 5 points per $10
- First transaction bonus - 500 points
- Refer a friend to Bitcoin Well - 500 points
The more you use Bitcoin Well, the more points you earn, rewarding you for investing in your freedom and self-sovereignty
Want sats, not stuff? No problem! 👇
You can keep earning sats by playing the Bitcoin (Wishing) Well! You can win up to 1,000,000 on your next coin toss. Now you have the exciting choice: do you play the Bitcoin (Wishing) Well or save up your Bitcoin Well points for a sweet prize?
What makes Bitcoin Well different
Bitcoin Well is on a mission to enable independence. We do this by making it easy to self custody bitcoin and embracing the latest bitcoin innovations. By custodying their own money, our customers are free to do as they wish without begging for permission. By creating a full ecosystem to buy, sell and use your bitcoin to connect with the modern financial world, you are able to have your cake and eat it too - or have your bitcoin in self custody and easily spend it too 🎂.
Create your Bitcoin Well account now →
Invest in Bitcoin Well
We are publicly traded (and love it when our customers become shareholders!) and hold ourselves to a high standard of enabling life on a Bitcoin standard. If you want to learn more about Bitcoin Well, please visit our website or reach out!
-
-
@ f1989a96:bcaaf2c1
2025-04-24 16:19:13

Good morning, readers!
In Georgia, mere weeks after freezing the bank accounts of five NGOs supporting pro-democracy movements, the ruling Georgian Dream party passed a new law banning foreign organizations from providing grants to local groups without regime approval. The bill is part of a broader effort to silence dissent and weaken democracy through financial repression.

In Latin America, opposition leader María Corina Machado seeks to rally citizens against Nicolás Maduro’s immensely repressive regime. With the economy and currency in shambles and dozens of military personnel abandoning Maduro, Machado sees an opportunity to challenge his grip on power.
In open source news, we spotlight the release of Bitcoin Core version 29.0, the latest update to the primary software that powers the Bitcoin network and helps millions of people send, receive, and verify Bitcoin transactions every day. This release improves the reliability and compatibility of Bitcoin’s main software implementation. We also cover the unique story of LuckyMiner, an unauthorized Bitaxe clone making waves in Asian markets as demand soars for small, low-cost, home mining equipment — evidence that people want to participate in the Bitcoin network themselves.
We close with the latest edition of the HRF x Pubkey Freedom Tech Series, in which Nicaraguan human rights defender Berta Valle joins HRF’s Arsh Molu to explore how authoritarian regimes weaponize financial systems to silence dissent and isolate opposition voices and how tools like Bitcoin can offer a way out. We also feature an interview with Salvadoran opposition leader Claudia Ortiz, who discusses the erosion of civil liberties under President Nayib Bukele and offers a nuanced take on Bitcoin in the country.
Now, let’s jump right in!
SUBSCRIBE HERE
GLOBAL NEWS
Georgia | Bans Foreign Donations for Nonprofits and NGOs
Mere weeks after freezing the bank accounts of five NGOs supporting pro-democracy demonstrators in recent unrest caused by elections, Georgia’s regime passed a new law that bans foreign organizations from providing “monetary or in-kind grants” to Georgian organizations and individuals without regime approval. Introduced by the increasingly repressive Georgian Dream party, the bill is part of a broader effort (including the controversial foreign agents law passed in 2024) designed to silence dissent and dismantle pro-democracy groups. Rights groups warn these laws will cripple civil society by cutting funding and imposing heavy fines for violators. Last week, parliament also read a bill that would grant officials the power to ban opposition parties entirely. With civil society financially repressed, Georgia is sliding further into tyranny, where free expression, political opposition, and grassroots organizations are under siege.
Venezuela | Opposition Mobilizes Against Maduro’s Financial Repression
Venezuelan opposition leader María Corina Machado is intensifying efforts against Nicolás Maduro’s brutal regime by targeting what she believes are his two greatest vulnerabilities: a collapsing economy and fractures in his repressive apparatus. As the Venezuelan bolivar unravels (reaching a record low in March) and inflation spirals out of control (expected to reach 220% before the end of the year), Maduro’s regime doubles down. It imposes currency controls, expropriates private property, and exerts complete state control over banks. Meanwhile, signs of discontent are growing inside the military, with dozens of personnel reportedly deserting. “I think we have a huge opportunity in front of us, and I see that much closer today than I did a month ago,” Machado said. To rebuild Venezuela’s future, Machado sees financial freedom as essential and has publicly embraced Bitcoin as a tool to resist the regime’s weaponization of money.
India | UPI Outage Disrupts Payments Nationwide
Digital transactions across India were disrupted mid-April as the Unified Payments Interface (UPI) experienced its third major outage in the last month. UPI is a government-run system that enables digital payments and underpins India’s push towards a cashless, centralized economy. Fintechs, banks, and institutions plug into UPI as a backbone of their digital infrastructure. Recently, India started integrating its central bank digital currency (CBDC), the digital rupee, into UPI, leveraging its existing network effect to expand the reach of state-issued digital money. When a single outage can freeze an entire nation’s ability to transact, it reveals the fragility of centralized infrastructure. By contrast, decentralized money like Bitcoin operates independently of state-run systems and with consistent uptime, giving users the freedom to transact and save permissionlessly.
China | Bitcoin for Me, Not for Thee
China is debating new regulations for handling its growing trove of Bitcoin and other digital assets seized during criminal investigations. While the regime debates how to manage its seized digital assets, the trading of Bitcoin and other digital assets remains banned for Chinese citizens on the mainland. Reports indicate that local governments have quietly sold confiscated Bitcoin and other digital assets through private companies to bolster their dwindling budgets. If true, this exposes the hypocrisy of a regime banning digital assets for its people while exploiting them as a strategic revenue source for the state. This contradiction accentuates the ways authoritarian regimes manipulate financial rules for their own benefit while punishing the public for using the same strategies.
Serbia | Vučić Targets Civil Society as Economy Sinks
As Serbia’s economy stalls and the cost of living remains stubbornly high, President Aleksandar Vučić is escalating his crackdown on civil society to deflect blame and tighten control. After a train station canopy collapse in Novi Sad killed 16 people last November, protests erupted. Serbians, led by students, flooded the streets to protest government corruption, declining civil liberties, and a worsening economy. The protests have since spread across 400 cities, reflecting nationwide discontent. In response, Vučić is now targeting civil society organizations under the pretext of financial misconduct. Law enforcement raided four NGOs that support Serbians’ human rights, the rule of law, and democratic elections.
Russia | Jails Four Journalists for Working With Navalny
A Russian court sentenced four independent Russian journalists to five and a half years in prison for working with the Anti-Corruption Foundation (ACF) — a pro-democracy organization founded by the late opposition leader Alexei Navalny. The journalists — Antonina Favorskaya, Konstantin Gabov, Sergei Karelin, and Artyom Kriger — were convicted in a closed-door trial for associating with an organization the Kremlin deems an “extremist.” The Committee to Protect Journalists condemned the verdict as a “blatant testimony to Russian authorities’ profound contempt for press freedom.” Since it launched a full-scale invasion of Ukraine in 2022, the Kremlin has increasingly criminalized dissent and financially repressed opposition, nonprofits, and ordinary citizens.
BITCOIN AND FREEDOM TECH NEWS
Flash | Introduces Flash Lightning Addresses, New UI, and Encrypted Messaging
Flash, a Bitcoin Lightning wallet and HRF grantee bringing freedom money to the Caribbean, released its version 0.4.0 beta. This release includes an updated user interface, dedicated Flash Lightning addresses (user @ flashapp.me), and encrypted messaging. The redesigned app is more user-friendly and better suited for users new to Bitcoin. Flash users now receive a verified Lightning address, making it easier to send and receive Bitcoin. The update also adds encrypted nostr messaging, enabling secure communication between users. As authoritarian regimes in the region, like Cuba, tighten control over money, Flash offers a practical and private solution for Bitcoin access.
DahLIAS | New Protocol to Lower the Cost of Private Bitcoin Transactions
Bitcoin developers recently announced DahLIAS, the first protocol designed to enable full cross-input signature aggregation (CISA). CISA is a proposed Bitcoin update that could make private Bitcoin transactions much cheaper. Right now, collaborative transactions are more expensive than typical transactions because each input in a transaction needs its own signature. CISA would allow those signatures to be combined, saving space and reducing fees. But this change would require a soft fork, a safe, backward-compatible software update to Bitcoin’s code. If adopted, CISA could remove the need for users to justify why they want privacy, as the answer would be, to save money. This is especially important for dissidents living under surveillant regimes. DahLIAS could be a breakthrough that helps make privacy more practical for everyone using Bitcoin.
Bitcoin Core | Version 29.0 Now Available for Node Runners
Bitcoin Core is the main software implementation that powers the Bitcoin network and helps millions of people send, receive, and verify transactions every day. The latest update, Bitcoin Core v29.0, introduces changes to improve network stability and performance. The release helps keep the network stable even when not everyone updates simultaneously. Further, it reduces the chances that nodes (computers that run the Bitcoin software) accidentally restart — an issue that can interrupt network participation. It also adds support for full Replace-by-Fee (RBF), allowing users to increase the fee on stuck transactions in times of high network demand. Enhancing Bitcoin’s reliability, usability, and security ensures that individuals in oppressive regimes or unstable financial systems can access a permissionless and censorship-resistant monetary network. Learn more about the update here.
Nstart | Releases Multilingual Support
Nstart, a new tool that simplifies onboarding to nostr — a decentralized and censorship-resistant social network protocol — released multilingual support. It added Spanish, Italian, French, Dutch, and Mandarin as languages. This update broadens access by making the onboarding experience available to a wider audience — especially those living under dictatorships across Africa, Latin America, and Asia, where communication and press freedom are heavily restricted. Users can even contribute translations themselves. Overall, multilingual support makes Nstart a more powerful tool for activists and organizations operating under authoritarian environments, offering guided, straightforward access to uncensorable communications.
Bitcoin Chiang Mai | Release Bitcoin Education Podcast
Bitcoin Chiang Mai, a grassroots Bitcoin community in Thailand, launched an educational podcast to teach Bitcoin in Thai. In a country where financial repression is on the rise and the regime is experimenting with a programmable central bank digital currency (CBDC), this podcast offers an educational lifeline. By making Bitcoin knowledge and tools more accessible, the show empowers Thais to explore alternatives to state-controlled financial systems. It’s a grassroots effort to preserve financial freedom and encourage open dialogue in an increasingly controlled economic environment. Check it out here.
LuckyMiner | Undisclosed Bitaxe Clone Gaining Popularity in Asia
LuckyMiner, a Bitcoin mining startup out of Shenzhen, China is shaking up Asia’s Bitcoin hardware scene with a rogue twist. What began as a hobby project in 2023 has since exploded into a full-scale operation, manufacturing and selling thousands of undisclosed Bitaxe clones (which are small, affordable bitcoin miners based on the Bitaxe design). While Bitaxe is open-source, it’s licensed under CERN-OHL-S-2.0, requiring any modifications to be made public. LuckyMiner ignored that rule and the founder has openly admitted to breaking the license. Despite that, LuckyMiner is succeeding anyway, fueled by growing demand for affordable home mining equipment. While controversial, the rise of low-cost miners signals grassroots interest in Bitcoin, especially at a time when Asia grapples with growing authoritarianism and financial repression.
RECOMMENDED CONTENT
HRF x Pubkey — Bitcoin as a Tool to Fight Financial Repression in Autocracies with Berta Valle
In the latest HRF x Pubkey Freedom Tech series, Nicaraguan human rights defender and journalist Berta Valle joins HRF’s Arsh Molu to discuss how Bitcoin empowers individuals to resist the financial repression of authoritarian regimes. From helping families receive remittances when bank accounts are frozen to enabling independent media and activists to fund their work without regime interference, Bitcoin is quietly reshaping what resistance can look like under tyranny. Watch the full fireside chat here.
Claudia Ortiz: A Voice of Opposition in Bukele’s El Salvador
In this interview, analyst and journalist Marius Farashi Tasooji speaks to Salvadoran opposition leader Claudia Ortiz about President Bukele’s consolidation of power, the erosion of civil liberties, and the future of Bitcoin in the country. While Ortiz acknowledges Bitcoin’s potential as a tool for freedom, she critiques the current administration’s opaque and heavy-handed implementation of it. Ortiz explains her opposition to the Bitcoin Law, citing concerns about transparency and accountability, and outlines what she would do differently if elected president. Watch the full conversation here.
If this article was forwarded to you and you enjoyed reading it, please consider subscribing to the Financial Freedom Report here.
Support the newsletter by donating bitcoin to HRF’s Financial Freedom program via BTCPay. Want to contribute to the newsletter? Submit tips, stories, news, and ideas by emailing us at ffreport @ hrf.org
The Bitcoin Development Fund (BDF) is accepting grant proposals on an ongoing basis. The Bitcoin Development Fund is looking to support Bitcoin developers, community builders, and educators. Submit proposals here.
-
@ 7d33ba57:1b82db35
2025-04-24 10:49:41

Tucked away in the rolling hills of southern France’s Hérault department, Montpeyroux is a charming medieval village known for its peaceful atmosphere, beautiful stone houses, and excellent Languedoc wines. It’s the kind of place where time seems to slow down, making it perfect for a relaxed stop on a southern France road trip.
🏡 Why Visit Montpeyroux?
🪨 Authentic Medieval Character
- Wander narrow cobbled streets lined with honey-colored stone houses
- Visit the remains of a medieval castle and old tower that offer stunning views over vineyards and hills
- A tranquil place that feels untouched by time
🍷 Wine Culture
- Surrounded by prestigious vineyards producing Coteaux du Languedoc wines
- Stop by local caves (wineries) to taste bold reds and crisp whites—many with stunning views over the valley
- Don’t miss the annual wine festivals and open cellars
🌄 Scenic Location
- Located near the Gorges de l’Hérault, perfect for hiking, swimming, or kayaking
- Just a short drive from Saint-Guilhem-le-Désert, one of France’s most beautiful villages
- Great base for exploring the natural beauty of Occitanie
🍽️ Where to Eat
- Enjoy local cuisine at cozy bistros—think grilled lamb, duck confit, olives, and regional cheeses
- Many places serve seasonal dishes paired with local wines
🚗 Getting There
- Around 45 minutes by car from Montpellier
- Best explored by car as public transport is limited, but the countryside drive is worth it
-
@ 8cda1daa:e9e5bdd8
2025-04-24 10:20:13

Bitcoin cracked the code for money. Now it's time to rebuild everything else.
What about identity, trust, and collaboration? What about the systems that define how we live, create, and connect?
Bitcoin gave us a blueprint to separate money from the state. But the state still owns most of your digital life. It's time for something more radical.
Welcome to the Atomic Economy - not just a technology stack, but a civil engineering project for the digital age. A complete re-architecture of society, from the individual outward.
The Problem: We Live in Digital Captivity
Let's be blunt: the modern internet is hostile to human freedom.
You don't own your identity. You don't control your data. You don't decide what you see.
Big Tech and state institutions dominate your digital life with one goal: control.
- Poisoned algorithms dictate your emotions and behavior.
- Censorship hides truth and silences dissent.
- Walled gardens lock you into systems you can't escape.
- Extractive platforms monetize your attention and creativity - without your consent.
This isn't innovation. It's digital colonization.
A Vision for Sovereign Society
The Atomic Economy proposes a new design for society - one where: - Individuals own their identity, data, and value. - Trust is contextual, not imposed. - Communities are voluntary, not manufactured by feeds. - Markets are free, not fenced. - Collaboration is peer-to-peer, not platform-mediated.
It's not a political revolution. It's a technological and social reset based on first principles: self-sovereignty, mutualism, and credible exit.
So, What Is the Atomic Economy?
The Atomic Economy is a decentralized digital society where people - not platforms - coordinate identity, trust, and value.
It's built on open protocols, real software, and the ethos of Bitcoin. It's not about abstraction - it's about architecture.
Core Principles: - Self-Sovereignty: Your keys. Your data. Your rules. - Mutual Consensus: Interactions are voluntary and trust-based. - Credible Exit: Leave any system, with your data and identity intact. - Programmable Trust: Trust is explicit, contextual, and revocable. - Circular Economies: Value flows directly between individuals - no middlemen.
The Tech Stack Behind the Vision
The Atomic Economy isn't just theory. It's a layered system with real tools:
1. Payments & Settlement
- Bitcoin & Lightning: The foundation - sound, censorship-resistant money.
- Paykit: Modular payments and settlement flows.
- Atomicity: A peer-to-peer mutual credit protocol for programmable trust and IOUs.
2. Discovery & Matching
- Pubky Core: Decentralized identity and discovery using PKARR and the DHT.
- Pubky Nexus: Indexing for a user-controlled internet.
- Semantic Social Graph: Discovery through social tagging - you are the algorithm.
3. Application Layer
- Bitkit: A self-custodial Bitcoin and Lightning wallet.
- Pubky App: Tag, publish, trade, and interact - on your terms.
- Blocktank: Liquidity services for Lightning and circular economies.
- Pubky Ring: Key-based access control and identity syncing.
These tools don't just integrate - they stack. You build trust, exchange value, and form communities with no centralized gatekeepers.
The Human Impact
This isn't about software. It's about freedom.
- Empowered Individuals: Control your own narrative, value, and destiny.
- Voluntary Communities: Build trust on shared values, not enforced norms.
- Economic Freedom: Trade without permission, borders, or middlemen.
- Creative Renaissance: Innovation and art flourish in open, censorship-resistant systems.
The Atomic Economy doesn't just fix the web. It frees the web.
Why Bitcoiners Should Care
If you believe in Bitcoin, you already believe in the Atomic Economy - you just haven't seen the full map yet.
- It extends Bitcoin's principles beyond money: into identity, trust, coordination.
- It defends freedom where Bitcoin leaves off: in content, community, and commerce.
- It offers a credible exit from every centralized system you still rely on.
- It's how we win - not just economically, but culturally and socially.
This isn't "web3." This isn't another layer of grift. It's the Bitcoin future - fully realized.
Join the Atomic Revolution
- If you're a builder: fork the code, remix the ideas, expand the protocols.
- If you're a user: adopt Bitkit, use Pubky, exit the digital plantation.
- If you're an advocate: share the vision. Help people imagine a free society again.
Bitcoin promised a revolution. The Atomic Economy delivers it.
Let's reclaim society, one key at a time.
Learn more and build with us at Synonym.to.
-
@ 10f7c7f7:f5683da9
The first time I received a paycheque from a full-time job, after being told in the interview I would be earning one amount, the amount I received was around 25% less; you’re not in Kansas anymore, welcome to the real world and TAX. Over the years, I’ve continued to pay my taxes, as a good little citizen, and at certain points along the way, I have paid considerable amounts of tax, because I wouldn’t want to break the law by not paying my taxes. Tax is necessary for a civilised society, they say. I’m told, who will pay, at least in the UK, for the NHS, who will pay for the roads, who will pay for the courts, the military, the police, if I don’t pay my taxes? But let’s be honest, apart from those who pay very little to no tax, who in a society actually gets good value for money out of the taxes they pay, or hears of a government institution that operates efficiently and effectively? Alternatively, imagine if the government didn’t have control of a large military budget, would they be quite so keen to deploy the young of our country into harm’s way, in the name of national security or having streets in Ukraine named after them for their generous donations of munitions paid for with someone else’s money?
While I’m only half-way through the excellent “Fiat Standard”, I’m well aware that many of these issues have been driven by the ability of those in charge to not only enforce and increase taxation at will, but also, if ends don’t quite meet, print the difference; however, these are rather abstract and high-level ideas for my small engineer’s brain. What has really brought this into sharp focus for me is the impending sale of my first house, for which, at the age of 25, I was duly provided a 40-year mortgage and was required to sign a form acknowledging that I would still be paying the mortgage after my retirement age. Fortunately for me, thanks to the government now changing the national age of retirement from 65 to 70 (so stealing 5 years of my retirement), in practice this form didn’t need to be signed, lucky me? Even so, what type of person would knowingly put another person in a situation where near 40% of their wage would mainly be paying interest to the bank (which, as a side note, was bailed out only a few years later)? The unpleasant taste really became unbearable when, even after being put into this “working life” sentence of debt repayment, the house was, even with the amount I’d spent on it (debt interest and maintenance) over the subsequent 19 years, only able to provide a rate of return of less than 1.6%, compared to the average official (bullshit) inflation figure of 2.77%. My house has not kept up with inflation and, to add insult to financial injury, His Majesty’s Revenue and Customs feel the need to take their portion of this “profit”.
At which point, I take a very deep breath, sit quietly for a moment, and channel my inner Margot, deciding against grabbing a bottle of bootleg antiseptic to both clear my palate and dull the pain. I had been convinced I needed to get on the housing ladder to save, but the government has since printed billions, with the rate of even the conservative estimates of inflation outpacing my meagre returns on property, and after all that blood, sweat, tears and dust covering my poor dog, “the law” states some of that money is theirs. I wasn’t able to save in the money that they could print at will, I worked very hard, I took risks, and the reward I get is to give them even more money to fritter away on things that won’t benefit me. But, I don’t want your sympathy, I don’t need it, but it helped me to get a new perspective on capital gains, particularly when considered in relation to bitcoin. So, to again draw from Ms. Paez, who herself was drawing from everyone’s favourite Joker, Heath Ledger, not Rachel Reeves (or J. Powell), here we go.
The Sovereign Individual is by no means an easy read, but it is absolutely fascinating, providing clear critiques of a system that at the time was only in its infancy, and predicting many aspects of today’s world with shocking accuracy. One of the most striking parts for me was the critique of the effect of taxation (specifically progressive forms) on the prosperity of a nation at large. At an individual level, people have a proportion of their income removed, to be spent by the government, out of the individual’s control. The person who has applied their efforts, abilities and skills to earn a living is unable to decide how best to utilise a portion of those resources into the future. While this is an accepted reality, the authors outline the cumulative, compound impact of forfeiting such a large portion of your wage each year, leading to figures that are near unimaginable to anyone without a penchant for spreadsheets or an understanding of exponential growth. Now, if we put this into the context of the entrepreneur, identifying opportunities, taking on personal and business risk: whenever a profit is realised, whether through normal sales or when realising value from capital appreciation, they must pay a portion of this in tax. While there are opportunities to reinvest this back into the organisation, there may be no immediate investment opportunities for them to offset their current tax bill. As a result, entrepreneurs are hampered from taking the fruits of their labour and compounding the results of their productivity, forced to fund the social programmes of a government pursuing aims that are misaligned with individuals running their own business. Resources are removed from the most productive individuals in society, adding value and employing staff, and handed to those who may have limited knowledge of the economic realities of business; see Oxbridge scholars with experience in NGOs or charities, and for more details please see Labour’s current front bench. What was that, Labour? Ah yes, let’s promote growth by taxing companies more and making it more difficult to get rid of unproductive staff, exactly the policies every small business owner has been asking for (Budget October 2024).
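To make the compounding argument concrete, here is a minimal sketch with made-up numbers: an annual £10,000 surplus, 7% nominal growth, and a 30% levy skimmed off each year's growth. The figures are assumptions for illustration only, not anything claimed above.

```python
# Hypothetical illustration of the compounding cost of taxing investment gains.
# Assumptions: 10,000 saved per year, 7% nominal growth, 30% taken off each year's gain.

def final_balance(annual_saving, growth_rate, tax_on_growth, years):
    balance = 0.0
    for _ in range(years):
        balance += annual_saving
        gain = balance * growth_rate
        balance += gain * (1 - tax_on_growth)  # tax skimmed off every year's gain
    return balance

untaxed = final_balance(10_000, 0.07, 0.0, 30)
taxed = final_balance(10_000, 0.07, 0.3, 30)
print(f"Untaxed compounding after 30 years: {untaxed:,.0f}")
print(f"Taxed compounding after 30 years:   {taxed:,.0f}")
print(f"Lost to the annual drag:            {untaxed - taxed:,.0f}")
```

Even a modest annual skim, under these assumed numbers, compounds into a six-figure difference over a working life.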
Now, for anyone on NOSTR, none of this is new; a large portion of Nostriches were orange pilled long before taking their first purple pill of decentralised Notes and Other Stuff. However, if we’re aware of this system that has been put in place to steal our earnings and confiscate our winnings if we have been able to outwit the Keynesian trap western governments have chosen to give themselves more power, how can we progress? What options do we have? a) risk being locked up for non-payment of taxes by just spending bitcoin, to hell with paying taxes, or b) spend/sell (:/), but keep a record of those particular coins you bought multiple years ago, in order to calculate your gain and hand over YOUR money the following tax year, so effectively increasing the cost of anything purchased in bitcoin. Please note, I’m making a conscious effort not to say what should be done; everyone needs to make decisions based on their knowledge and their understanding.
Anyway, option a) is not as flippant as one might think, but also not something one should (damn it) do carelessly. One bitcoin equals one bitcoin, bitcoin is money, and as a result it neither increases nor decreases in value; it is fiat currencies that vary wildly in comparison. If we think about gold, the purchasing power of gold has remained relatively consistent over hundreds of years; gold is viewed as money, which (as a side note) results in Royal Mint gold coins being exempt from both VAT and capital gains tax. While I may consider this from a, if not necessarily biased, then definitely pro-bitcoin perspective, I believe it is extremely logical that transactions that take place in bitcoin should not require “profits” or “losses” to be reported, but this is where my logic and the treasury’s grabbiness are inconsistent. If what you’re buying is priced in bitcoin, you’re trading goods or services for money; there was no realisation of gains. Having said that, if you choose to do this, best not do any spending from a stack with a connection to an exchange and your identity. When tax collectors (and their government masters) end up not having enough money, they may begin exploring whether those people buying bitcoin from exchanges are also spending it.
But why is this relevant or important? For me, and from hearing from many people on podcasts, while not impossible and not actually that difficult, recording gains on each transaction is firstly a barrier to spending bitcoin: it is additional effort, admin and not-insignificant cost, and no one likes that. Secondly, from my libertarian-leaning perspective, tax is basically the seizure of assets under the threat of incarceration (aka theft), with the government spending that money on crap I don’t give a shit about, meaning I don’t want to help fund their operation more than I already do. The worry is, if I pay more taxes, they think they’re getting good at collecting taxes, they increase taxes, use taxes to employ more tax collectors, rinse and repeat. From this perspective, it is almost my duty not to report when I transact in bitcoin, viewing it as plain and simple, black-market money, where the government neither dictates what I can do with it, nor profits from its appreciation.
The result of this is not the common mantra of never sell your bitcoin, because I, for one, am looking forward to ditching the fiat grind and having more free time driving an interesting 90’s sports car or riding a new mountain bike, which I will need money to fund. Unless I’m going to take a fair bit of tax evasion-based risk, find some guys who will only accept my KYC-free bitcoin and then live off the grid, I’ll need to find another way, which unfortunately may require engaging once more with the fiat system. However, this time, rather than selling bitcoin to buy fiat, it means looking for financial product providers who offer loans against bitcoin held. This is nothing new, having been a contributing factor to the FTX blow-up and the drawdown of 2022; the logic of such products is solid and the secret catalyst to Mark Moss’s (and others’) buy, borrow, die strategy. The difference this time is to learn from our mistakes, to choose the right company and maybe not hand over our private keys (multisig is a beautiful thing). The key benefit of this is that by taking a loan, you’re not realising capital gains, so you do not create a taxable event. While there is likely to be interest on any loan, and this only makes sense if it is considerably less than either the capital gains incurred if you sold the bitcoin or the long-term capital appreciation of the bitcoin you didn’t have to sell, it has to be an option worth considering.
Now, this is interacting with the fiat system, it does involve the effective printing of money and, depending on the person providing the loan, there is risk; however, there are definitely some positives, even outside the not inconsiderable, “tax free” nature of this money. Firstly, by borrowing fiat money, you are increasing the money supply, while devaluing all other holders of that currency, which effectively works against fiat governments, causing them to forever print harder to stop themselves going into a deflationary nosedive. The second important aspect is that if you have not had to sell your bitcoin, you have removed sell pressure from the market and withheld the buying pressure that would strengthen the fiat currency, so further supporting the stack you have not had to sell.
Now, let’s put this in the context of The Sovereign Individual or the entrepreneurial bitcoiner, who took a risk before fully understanding what they were buying and is now benefiting financially. The barrier of tax-based admin or the reticence to support government operations through paying additional tax are not insignificant, and the loan has allowed you to effectively side-step both, keeping more of the value of your holdings to allocate as you see fit. While this may involve the setting up of a new business that itself may drive productive growth, even if all you did was spend that money (such as on a sports car or a new bike), this could still be a net economic positive compared to a large portion of that money being sucked into the government spending black hole. While the government would not be receiving that tax revenue, every retailer, manufacturer or service provider would benefit from this additional business. Rather than the tax money going toward interest costs or civil servant wages, the money would go towards the real businesses you have chosen, and their staff’s wages, who are working hard to outcompete their peers. Making this choice to not pay capital gains does not just allow bitcoiners to save money and, to a small degree, reduce government funding, but also provides a cash injection to those companies who may still be reeling from minimum wage AND national insurance increases.
I’m not an ethicist, so I am unable to provide a clear, concise, philosophical argument to explain why the ability of government to steal from you via the processes of monetary inflation as well as an ever-increasing tax burden is immoral, but I hope this provides a new perspective on the situation. I don’t believe increases in taxes support economic development (it literally does the opposite), and I don’t believe that individuals should be penalised for working hard, challenging themselves, taking risks and succeeding. However, I’m not in charge of the system and also appreciate that if any major changes were to take place, the consequences would be significant (we’re talking Mandibles time). I believe removing capital gains tax from bitcoin would be a net positive for the economy, and there is precedent based on the UK’s current position on gold coins, but unfortunately, I don’t believe people in the cabinet think as I do; they see people with assets, and pound signs ring up in their eyes.
As a result, my aim moving forward will be to think carefully before making purchases or sales that will incur capital gains tax (no big Lambo purchase for me at the top), but also to be willing to promote the bitcoin economy by purchasing products and services with bitcoin. To do this, I’ll double-confirm that spend/replace techniques actually get around capital gains by effectively using the payment rails of bitcoin to transfer value rather than to sell your bitcoin. This way, I will get to reward and promote those companies that perform at a level that warrants a little more effort with payment, without it costing me an additional 18-24% in tax later on.
So, to return to where we started and my first pay-cheque. We need to work to earn a living, but as we earn more, an ever-greater proportion is taken from us, and we are at risk of becoming stuck in a never-ending fiat cycle. In the past, this was more of an issue, leading people into speculating on property or securities, which, if successful, would then incur further taxes, which will likely be spent by governments on liabilities or projects that add zero net benefit to national citizens. Apologies if you see this as a negative, but please don’t; this is the alternative to adopting a unit of account that cannot be inflated away. If you have begun to measure your wealth in bitcoin, there will be a point where you need to start spending. I, for one, do not intend to die with my private keys in my head, but to have lived a life turbo-charged by the freedom bitcoin has offered me. Bitcoin-backed loans are returning to the market, with hopefully a little less risk this time around. There may be blow-ups, but once they get established and interest costs start to be competed away, I will first of all acknowledge remaining risks and then not allocate 100% of my stack. Rather than being the one true bitcoiner who has never spent a sat, I will use the tools at my disposal to firstly give my family their best possible lives and secondly, not fund the government more than I need to.
Then, by the time I’m ready to leave this earth, there will be less money for me to leave to my family, but then again, the tax man would again come knocking, looking to gloat over my demise and add to my family’s misery with an outstretched hand. Then again, this piece is about capital gains rather than inheritance tax, so we can leave those discussions for another time.
This is not financial advice, please consult a financial/tax advisor before spending and replacing without filing taxes and don’t send your bitcoin to any old fella who says they’ll return it once you’ve paid off the loan.
-
@ c3b2802b:4850599c
2025-04-29 13:24:06On 11 April 2022, a sunny spring day, at around 11 o'clock I once again took the time to gaze at the sky over Roßleben on the Unstrut, something I had not done for a long while. Together with Regina, my wife. We are now retired, and we fell in love under the sky of Roßleben back in 1974.
What we saw there you can see for yourself in the title image above. Surprised, we watched for a while what was happening, but the streaks, evidently left by aircraft, did not dissolve; instead they gradually merged into a continuous haze, captured in the following photo:
We were surprised because, almost 50 years earlier, we had of course already seen aircraft in that same sky, but they had not left such trails stretching from horizon to horizon. A few days later I managed to take a photo showing the kind of aircraft contrail familiar from back then:
I mean the short trail running vertically in the centre of the picture. The horizontal streak below it shows what is the more usual sight today.
Being a curious person, I asked the Federal Environment Agency (Umweltbundesamt), the authority responsible for such questions, to explain these phenomena in the sky. Here are the questions I sent to the agency together with the photos:
1. Since when exactly have aircraft over German airspace been leaving trails that remain visible in the sky for longer than 5 minutes, spread out, and persist for hours in the form of haze veils?
2. To what extent has the implementation of any decisions that bring about this observable veil formation been approved by democratic bodies of our country, e.g. the German Bundestag?
3. Which decision-makers in our country are responsible for these processes?
4. Are there, in this context, international geoengineering programmes in which Germany participates? If so, which ones?
5. What do you know about particles or substances that cause the long-lasting visibility of these veils?
6. What do you know about consequential damage to plants, animals and humans caused by the evident alteration of incoming sunlight or by particles or substances from these veils settling to the ground?
7. What possibilities does your agency see for contributing to the scientific clarification and, if necessary, the termination of these haze-veil activities?
Here is the agency's reply:
Dear Professor, thank you for your enquiry.
In our view, what you observed are contrails. They form in a sufficiently cold atmosphere as a result of water-vapour emissions from aircraft engines. At low humidity, contrails dissolve again quickly. If the atmosphere is sufficiently humid, however, contrails can persist longer and continue to grow. Under suitable conditions they can develop into extensive cirrus clouds, which in the case of such an origin are called contrail cirrus. Contrail cirrus can then no longer be distinguished from natural cirrus unless its entire formation history was observed.
When cirrus clouds, which can be optically very thin, cover a large area, the sky appears milky white to the observer. On average, around 0.06 percent of the Earth is covered with (line-shaped) contrails. Regions with heavy air traffic reach considerably higher coverage levels; in the mid-1990s the figure for Europe was 0.5 percent. The coverage due to contrail cirrus is not yet known. Initial estimates give values about ten times as large as the coverage by line-shaped contrails.
As contrails age they do not remain smooth but can form shapes. This process is a long-known phenomenon and a consequence of the turbulence that is ever-present in the atmosphere. These shapes can also be reproduced in model simulations.
Several contrails side by side arise, for example, because aircraft follow fixed routes while the wind direction at altitude deviates from the flight path, shifting the contrails sideways. At junctions of flight routes, contrails of different orientations can form, and as a result of this sideways drift diamond-shaped patterns can also appear. Since wind direction and speed are practically never constant, formerly straight patterns turn into curved shapes. Moreover, aircraft do not always fly straight ahead but also fly curves, in particular during holding patterns near airports; in this case curved contrails can form.
Further information on the formation of contrails is provided by the DLR on the following website: http://www.dlr.de/pa/desktopdefault.aspx/tabid-2554/3836_read-5746/
We are aware of the proposals grouped under the heading geo-engineering or climate engineering. We already published a comprehensive brochure on the subject in 2011.
In that brochure we also describe in detail the major risks that geo-engineering would entail. There are, moreover, no plans whatsoever by state institutions in Germany to carry out geo-engineering. The existing geo-engineering proposals are for the most part immature and are not applied in practice. It is unclear what serious, undesirable side effects could occur. In addition, there is no adequate legal basis for geo-engineering at all. Our brochure states verbatim on page 4: "In view of the far-reaching consequences of geo-engineering measures and the great uncertainties in assessing their effects in the complex Earth system, the UBA advises, as a precaution, the greatest restraint and, until knowledge of the interdependencies between geo-processes has improved significantly, a moratorium on the use of such measures." (https://www.umweltbundesamt.de/publikationen/geo-engineering-wirksamer-klimaschutz-groessenwahn)
We hope this information is helpful to you. With kind regards, on behalf of
Citizens' Service Team, Umweltbundesamt, Postfach 14 06, 06813 Dessau-Roßlau
Since I have no desire to become a cloud researcher, I gladly leave it to you to judge my questions and how well the answers from our government agency fit them.
What is intriguing today, three years after this little exchange of letters (first published in April 2022 on the Zukunftskommunen platform), is that on 3 April 2025 the Senate of the US state of Florida passed a law making the spraying of chemicals from aircraft a punishable offence. Here is a short excerpt:
Prohibition of geoengineering and weather-modification activities
(1) The injection, release or dispersion of chemicals, chemical compounds or substances into the atmosphere within the borders of this state for the purpose of affecting temperature, weather or climate, or of influencing the intensity of sunlight, is prohibited.
(2) Any person, or any public or private entity, that carries out a geo-engineering action or weather modification in violation of this prohibition is guilty of a felony of the third or second degree, subject to a fine of up to 100,000 dollars. ... Aircraft operators or air traffic controllers who commit a third-degree felony of this kind shall be punished with a fine of up to $5,000 and up to 5 years' imprisonment.
You can read the complete text here. In Tennessee and in more than 20 other US states, similar laws are on their way. Now we may be curious to see how this Little Red Riding Hood story continues in other countries, for example in Germany.
This article was written with the Pareto client.
-
@ d34e832d:383f78d0
2025-04-24 07:22:54Operation
This operational framework delineates a methodologically sound, open-source paradigm for the self-custody of Bitcoin, prominently utilizing Electrum, in conjunction with VeraCrypt-encrypted USB drives designed to effectively emulate the functionality of a cold storage hardware wallet.
The primary aim of this initiative is to empower individual users by providing a mechanism that is economically viable, resistant to coercive pressures, and entirely verifiable. This is achieved by harnessing the capabilities inherent in open-source software and adhering to stringent cryptographic protocols, thereby ensuring an uncompromising stance on Bitcoin sovereignty.
The proposed methodology signifies a substantial advancement over commercially available hardware wallets, as it facilitates the creation of a do-it-yourself air-gapped environment that not only bolsters resilience and privacy but also affirms the principles of decentralization intrinsic to the cryptocurrency ecosystem.
1. The Need For Trustless, Private, and Secure Storage
With Bitcoin adoption increasing globally, the need for trustless, private, and secure storage is critical. While hardware wallets like Trezor and Ledger offer some protection, they introduce proprietary code, closed ecosystems, and third-party risk. This Idea explores an alternative: using Electrum Wallet within an encrypted VeraCrypt volume on a USB flash drive, air-gapped via Tails OS or offline Linux systems.
2. Architecture of the DIY Hardware Wallet
2.1 Core Components
- Electrum Wallet (SegWit, offline mode)
- USB flash drive (≥ 8 GB)
- VeraCrypt encryption software
- Optional: Tails OS bootable environment
2.2 Drive Setup
- Format the USB drive and install VeraCrypt volumes.
- Choose AES + SHA-512 encryption for robust protection.
- Use FAT32 for wallet compatibility with Electrum (under 4GB).
- Enable Hidden Volume for plausible deniability under coercion.
3. Creating the Encrypted Environment
3.1 Initial Setup
- Download VeraCrypt from the official site; verify GPG signatures.
- Encrypt the flash drive and store a plain Electrum AppImage inside.
- Add a hidden encrypted volume with the wallet seed, encrypted QR backups, and optionally, a decoy wallet.
3.2 Mounting Workflow
- Always mount the VeraCrypt volume on an air-gapped computer, ideally booted into Tails OS.
- Never connect the encrypted USB to an internet-enabled system.
4. Air-Gapped Wallet Operations
4.1 Wallet Creation (Offline)
- Generate a new Electrum SegWit wallet inside the mounted VeraCrypt volume.
- Record the seed phrase on paper, or store it in a second hidden volume.
- Export xpub (public key) for use with online watch-only wallets.
4.2 Receiving Bitcoin
- Use watch-only Electrum wallet with the exported xpub on an online system.
- Generate receiving addresses without exposing private keys.
4.3 Sending Bitcoin
- Create unsigned transactions (PSBT) in the watch-only wallet.
- Transfer them via QR code or USB sneakernet to the air-gapped wallet.
- Sign offline using Electrum, then return the signed transaction to the online device for broadcast.
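One small operational aid: when the unsigned and signed transaction files cross the air gap on a USB stick, you can fingerprint them with SHA-256 on both machines and confirm the hashes match before broadcasting. A minimal sketch, where the file name is a hypothetical placeholder:

```python
import hashlib
import sys

def sha256_of_file(path: str) -> str:
    # Hash the file in chunks so even large files stay memory-friendly.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Usage: python psbt_fingerprint.py signed_tx.psbt
    path = sys.argv[1] if len(sys.argv) > 1 else "signed_tx.psbt"
    print(f"{sha256_of_file(path)}  {path}")
```

Run it on the offline machine after signing and again on the online machine before broadcasting; the two hex strings should be identical.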
5. OpSec Best Practices
5.1 Physical and Logical Separation
- Use a dedicated machine or a clean Tails OS session every time.
- Keep the USB drive hidden and disconnected unless in use.
- Always dismount the VeraCrypt volume after operations.
5.2 Seed Phrase Security
- Never type the seed on an online machine.
- Consider splitting the seed using Shamir's Secret Sharing or metal backup plates.
5.3 Coercion Resilience
- Use VeraCrypt’s hidden volume feature to store real wallet data.
- Maintain a decoy wallet in the outer volume with nominal funds.
- Practice your recovery and access process until second nature.
6. Tradeoffs vs. Commercial Wallets
| Feature | DIY Electrum + VeraCrypt | Ledger/Trezor |
|--------|--------------------------|---------------|
| Open Source | ✅ Fully | ⚠️ Partially |
| Air-gapped Usage | ✅ Yes | ⚠️ Limited |
| Cost | 💸 Free (except USB) | 💰 $50–$250 |
| Hidden/Coercion Defense | ✅ Hidden Volume | ❌ None |
| QR Signing Support | ⚠️ Manual | ✅ Some models |
| Complexity | 🧠 High | 🟢 Low |
| Long-Term Resilience | ✅ No vendor risk | ⚠️ Vendor-dependent |
7. Consider
A DIY hardware wallet built with Electrum and VeraCrypt offers an unprecedented level of user-controlled sovereignty in Bitcoin storage. While the technical learning curve may deter casual users, those who value security, privacy, and independence will find this setup highly rewarding. This Operation demonstrates that true Bitcoin ownership requires not only control of private keys, but also a commitment to operational security and digital self-discipline. In a world of growing surveillance and digital coercion, such methods may not be optional—they may be essential.
8. References
- Nakamoto, Satoshi. Bitcoin: A Peer-to-Peer Electronic Cash System. 2008.
- Electrum Technologies GmbH. “Electrum Documentation.” electrum.org, 2024.
- VeraCrypt. “Documentation.” veracrypt.fr, 2025.
- Tails Project. “The Amnesic Incognito Live System (Tails).” tails.boum.org, 2025.
- Matonis, Jon. "DIY Cold Storage for Bitcoin." Forbes, 2014.
In Addition
🛡️ Create Your Own Secure Bitcoin Hardware Wallet: Electrum + VeraCrypt DIY Guide
Want maximum security for your Bitcoin without trusting third-party devices like Ledger or Trezor?
This guide shows you how to build your own "hardware wallet" using free open-source tools:
✅ Electrum Wallet + ✅ VeraCrypt Encrypted Flash Drive — no extra cost, no vendor risk.
Let's go further.
What You’ll Need
- A USB flash drive (8GB minimum, 64-bit recommended)
- A clean computer (preferably old or dedicated offline)
- Internet connection (for setup only, then go air-gapped)
- VeraCrypt software (free, open-source)
- Electrum Bitcoin Wallet AppImage file
Step 1: Download and Verify VeraCrypt
- Go to VeraCrypt Official Website.
- Download the installer for your operating system.
- Verify the GPG signatures to ensure the download isn't tampered with.
👉 [Insert Screenshot Here: VeraCrypt download page]
Pro Tip: Never skip verification when dealing with encryption software!
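If you prefer to script the signature check rather than click through a GUI, a rough sketch along these lines works on any system with GnuPG installed. The file names are placeholders, and it assumes you have already imported VeraCrypt's public signing key:

```python
import subprocess

# Hypothetical file names; substitute the actual installer and its detached signature.
INSTALLER = "veracrypt-setup.tar.bz2"
SIGNATURE = "veracrypt-setup.tar.bz2.sig"

# Assumes the VeraCrypt signing key has already been imported, e.g. with:
#   gpg --import VeraCrypt_PGP_public_key.asc
result = subprocess.run(
    ["gpg", "--verify", SIGNATURE, INSTALLER],
    capture_output=True,
    text=True,
)
print(result.stderr)  # gpg reports verification details on stderr
if result.returncode == 0:
    print("Signature verified: good to proceed.")
else:
    print("Verification FAILED: do not use this download.")
```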
Step 2: Download Electrum Wallet
- Go to Electrum Official Website.
- Download the Linux AppImage or Windows standalone executable.
- Again, verify the PGP signatures published on the site. 👉 [Insert Screenshot Here: Electrum download page]
Step 3: Prepare and Encrypt Your USB Drive
- Insert your USB drive into the computer.
- Open VeraCrypt and select Create Volume → Encrypt a Non-System Partition/Drive.
- Choose Standard Volume for now (later we'll talk about hidden volumes).
- Select your USB drive, set an extremely strong password (12+ random characters).
- For Encryption Algorithm, select AES and SHA-512 for Hash Algorithm.
- Choose FAT32 as the file system (compatible with Bitcoin wallet sizes under 4GB).
- Format and encrypt. 👉 [Insert Screenshot Here: VeraCrypt creating volume]
Important: This will wipe all existing data on the USB drive!
Step 4: Mount the Encrypted Drive
Whenever you want to use the wallet:
- Open VeraCrypt.
- Select a slot (e.g., Slot 1).
- Click Select Device, choose your USB.
- Enter your strong password and Mount. 👉 [Insert Screenshot Here: VeraCrypt mounted volume]
Step 5: Set Up Electrum in Offline Mode
- Mount your encrypted USB.
- Copy the Electrum AppImage (or EXE) onto the USB inside the encrypted partition.
- Run Electrum from there.
- Select Create New Wallet.
- Choose Standard Wallet → Create New Seed → SegWit.
- Write down your 12-word seed phrase on PAPER. ❌ Never type it into anything else.
- Finish wallet creation and disconnect from internet immediately.
👉 [Insert Screenshot Here: Electrum setup screen]
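As a side note, if you are curious what a 12-word phrase actually encodes, the sketch below is purely illustrative: it uses the third-party mnemonic package (assumed to be installed on an offline machine) to map 128 bits of entropy to words in the BIP39 scheme. Electrum uses its own seed format rather than BIP39, so treat this only as a way to demystify the idea and let Electrum generate your real seed.

```python
# Conceptual illustration only: how ~128 bits of entropy become 12 words (BIP39).
# Assumes the third-party "mnemonic" package is installed (pip install mnemonic).
import secrets
from mnemonic import Mnemonic

mnemo = Mnemonic("english")
entropy = secrets.token_bytes(16)              # 128 bits of entropy -> 12 words
words = mnemo.to_mnemonic(entropy)
seed = mnemo.to_seed(words, passphrase="")     # 512-bit seed derived from the words

print("Example 12-word phrase:", words)
print("Derived seed (hex, first 16 bytes):", seed[:16].hex())
```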
Step 6: Make It Air-Gapped Forever
- Only ever access the encrypted USB on an offline machine.
- Never connect this device to the internet again.
- If possible, boot into Tails OS every time for maximum security.
Pro Tip: Tails OS leaves no trace on the host computer once shut down!
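As an extra belt-and-braces check before mounting the encrypted USB, you can confirm that no network interface is up. A minimal sketch, assuming the third-party psutil package was installed ahead of time (the offline machine cannot fetch it later):

```python
# Quick sanity check before opening the wallet: list network interfaces that are up.
# Assumes the third-party psutil package is available on the offline machine.
import psutil

up_interfaces = [
    name for name, stats in psutil.net_if_stats().items()
    if stats.isup and name != "lo"   # ignore the local loopback interface
]

if up_interfaces:
    print("WARNING: interfaces still up:", ", ".join(up_interfaces))
else:
    print("No active network interfaces found (loopback excluded).")
```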
Step 7: (Optional) Set Up a Hidden Volume
For even stronger security:
- Repeat the VeraCrypt process to add a Hidden Volume inside your existing USB encryption.
- Store your real Electrum wallet in the hidden volume.
- Keep a decoy wallet with small amounts of Bitcoin in the outer volume.
👉 This way, if you're ever forced to reveal the password, you can give access to the decoy without exposing your true savings.
Step 8: Receiving Bitcoin
- Export your xpub (extended public key) from the air-gapped Electrum wallet.
- Import it into a watch-only Electrum wallet on your online computer.
- Generate receiving addresses without exposing your private keys.
Step 9: Spending Bitcoin (Safely)
To send Bitcoin later:
- Create a Partially Signed Bitcoin Transaction (PSBT) with the online watch-only wallet.
- Transfer the file (or QR code) offline (via USB or QR scanner).
- Sign the transaction offline with Electrum.
- Bring the signed file/QR back to the online device and broadcast it.
✅ Your private keys never touch the internet!
Step 10: Stay Vigilant
- Always dismount the encrypted drive after use.
- Store your seed phrase securely (preferably in a metal backup).
- Regularly practice recovery drills.
- Update Electrum and VeraCrypt only after verifying new downloads.
🎯 Consider
Building your own DIY Bitcoin hardware wallet might seem complex, but security is never accidental — it is intentional.
By using VeraCrypt encryption and Electrum offline, you control your Bitcoin in a sovereign, verifiable, and bulletproof way.
⚡ Take full custody. No companies. No middlemen. Only freedom.
-
@ d34e832d:383f78d0
2025-04-24 06:28:48Operation
Central to this implementation is the utilization of Tails OS, a Debian-based live operating system designed for privacy and anonymity, alongside the Electrum Wallet, a lightweight Bitcoin wallet that provides a streamlined interface for secure Bitcoin transactions.
Additionally, the inclusion of advanced cryptographic verification mechanisms, such as QuickHash, serves to bolster integrity checks throughout the storage process. This multifaceted approach ensures a rigorous adherence to end-to-end operational security (OpSec) principles while simultaneously safeguarding user autonomy in the custody of digital assets.
Furthermore, the proposed methodology aligns seamlessly with contemporary cybersecurity paradigms, prioritizing characteristics such as deterministic builds—where software builds are derived from specific source code to eliminate variability—offline key generation processes designed to mitigate exposure to online threats, and the implementation of minimal attack surfaces aimed at reducing potential vectors for exploitation.
Ultimately, this sophisticated approach presents a methodical and secure paradigm for the custody of private keys, thereby catering to the exigencies of high-assurance Bitcoin storage requirements.
1. Cold Storage Refers to Offline Storage
Cold storage refers to the offline storage of private keys used to sign Bitcoin transactions, providing the highest level of protection against network-based threats. This paper outlines a verifiable method for constructing such a storage system using the following core principles:
- Air-gapped key generation
- Open-source software
- Deterministic cryptographic tools
- Manual integrity verification
- Offline transaction signing
The method prioritizes cryptographic security, software verifiability, and minimal hardware dependency.
2. Hardware and Software Requirements
2.1 Hardware
- One 64-bit computer (laptop/desktop)
- 1 x USB Flash Drive (≥8 GB, high-quality brand recommended)
- Paper and pen (for seed phrase)
- Optional: Printer (for xpub QR export)
2.2 Software Stack
- Tails OS (latest ISO, from tails.boum.org)
- Balena Etcher (to flash ISO)
- QuickHash GUI (for SHA-256 checksum validation)
- Electrum Wallet (bundled within Tails OS)
3. System Preparation and Software Verification
3.1 Image Verification
Prior to flashing the ISO, the integrity of the Tails OS image must be cryptographically validated. Using QuickHash:
```plaintext
SHA256 (tails-amd64-<version>.iso) = <expected_hash>
```
Compare the hash output with the official hash provided on the Tails OS website. This mitigates the risk of ISO tampering or supply chain compromise.
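The same check can be done from a terminal instead of the QuickHash GUI. A minimal sketch in Python, where the ISO file name and expected hash are placeholders to be replaced with the values from the official Tails website:

```python
import hashlib

# Placeholders: substitute the real ISO filename and the hash published by the Tails project.
ISO_PATH = "tails-amd64-<version>.iso"
EXPECTED_SHA256 = "paste_the_official_hash_here"

h = hashlib.sha256()
with open(ISO_PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):  # read the large ISO in 1 MiB chunks
        h.update(chunk)

computed = h.hexdigest()
print("Computed:", computed)
if computed == EXPECTED_SHA256.lower():
    print("Hash matches the published value.")
else:
    print("MISMATCH: do not flash this image.")
```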
3.2 Flashing the OS
Balena Etcher is used to flash the ISO to a USB drive:
- Insert USB drive.
- Launch Balena Etcher.
- Select the verified Tails ISO.
- Flash to USB and safely eject.
4. Cold Wallet Generation Procedure
4.1 Boot Into Tails OS
- Restart the system and boot into BIOS/UEFI boot menu.
- Select the USB drive containing Tails OS.
- Configure network settings to disable all connectivity.
4.2 Create Wallet in Electrum (Cold)
- Open Electrum from the Tails application launcher.
- Select "Standard Wallet" → "Create a new seed".
- Choose SegWit for address type (for lower fees and modern compatibility).
- Write down the 12-word seed phrase on paper. Never store digitally.
- Confirm the seed.
- Set a strong password for wallet access.
5. Exporting the Master Public Key (xpub)
- Open Electrum > Wallet > Information
- Export the Master Public Key (MPK) for receiving-only use.
- Optionally generate QR code for cold-to-hot usage (wallet watching).
This allows real-time monitoring of incoming Bitcoin transactions without ever exposing private keys.
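If you would rather move the xpub across the air gap as a QR code than retype a long string, a rough sketch using the third-party qrcode package (an assumption, as is the placeholder xpub string) looks like this:

```python
# Turn an exported master public key into a QR image for cold-to-hot transfer.
# Assumes the third-party "qrcode" package (with Pillow) is installed; the xpub is a placeholder.
import qrcode

XPUB = "PASTE_YOUR_EXPORTED_XPUB_HERE"

img = qrcode.make(XPUB)          # build the QR code image
img.save("xpub_watch_only.png")  # scan this with the online, watch-only wallet
print("Saved QR code to xpub_watch_only.png")
```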
6. Transaction Workflow
6.1 Receiving Bitcoin (Cold to Hot)
- Use the exported xpub in a watch-only wallet (desktop or mobile).
- Generate addresses as needed.
- Senders deposit Bitcoin to those addresses.
6.2 Spending Bitcoin (Hot Redeem Mode)
Important: This process temporarily compromises air-gap security.
- Boot into Tails (or use Electrum in a clean Linux environment).
- Import the 12-word seed phrase.
- Create transaction offline.
- Export signed transaction via QR code or USB.
- Broadcast using an online device.
6.3 Recommended Alternative: PSBT
To avoid full wallet import: - Use Partially Signed Bitcoin Transactions (PSBT) protocol to sign offline. - Broadcast PSBT using Sparrow Wallet or Electrum online.
7. Security Considerations
| Threat | Mitigation |
|-------|------------|
| OS Compromise | Use Tails (ephemeral environment, RAM-only) |
| Supply Chain Attack | Manual SHA256 verification |
| Key Leakage | No network access during key generation |
| Phishing/Clone Wallets | Verify Electrum’s signature (when updating) |
| Physical Theft | Store paper seed in tamper-evident location |
8. Backup Strategy
- Store 12-word seed phrase in multiple secure physical locations.
- Do not photograph or digitize.
- For added resilience, split the seed with Shamir Secret Sharing (e.g., a 2-of-3 scheme).
9. Consider
Through the meticulous integration of verifiable software solutions, the execution of air-gapped key generation methodologies, and adherence to stringent operational protocols, users have the capacity to establish a Bitcoin cold storage wallet that embodies an elevated degree of cryptographic assurance.
This DIY system presents a zero-dependency alternative to conventional third-party custody solutions and consumer-grade hardware wallets.
Consequently, it empowers individuals to manage their Bitcoin assets with full trust minimization and sovereign control over private keys and transaction integrity within the decentralized financial ecosystem.
10. References And Citations
Nakamoto, Satoshi. Bitcoin: A Peer-to-Peer Electronic Cash System. 2008.
“Tails - The Amnesic Incognito Live System.” tails.boum.org, The Tor Project.
“Electrum Bitcoin Wallet.” electrum.org, 2025.
“QuickHash GUI.” quickhash-gui.org, 2025.
“Balena Etcher.” balena.io, 2025.
Bitcoin Core Developers. “Don’t Trust, Verify.” bitcoincore.org, 2025.
In Addition
🪙 SegWit vs. Legacy Bitcoin Wallets
⚖️ TL;DR Decision Chart
| If you... | Use SegWit | Use Legacy |
|-----------|----------------|----------------|
| Want lower fees | ✅ Yes | 🚫 No |
| Send to/from old services | ⚠️ Maybe | ✅ Yes |
| Care about long-term scaling | ✅ Yes | 🚫 No |
| Need max compatibility | ⚠️ Mixed | ✅ Yes |
| Run a modern wallet | ✅ Yes | 🚫 Legacy support fading |
| Use cold storage often | ✅ Yes | ⚠️ Depends on wallet support |
| Use Lightning Network | ✅ Required | 🚫 Not supported |
🔍 1. What Are We Comparing?
There are two major types of Bitcoin wallet address formats:
🏛️ Legacy (P2PKH)
- Format starts with: `1`
- Example: `1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa`
- Oldest, most universally compatible
- Higher fees, larger transactions
- May lack support in newer tools and layer-2 solutions
🛰️ SegWit (P2WPKH)
- Formats start with:
  - Nested SegWit (P2SH): `3...`
  - Native SegWit (bech32): `bc1q...`
- Introduced via Bitcoin Improvement Proposal (BIP) 141
- Smaller transaction sizes → lower fees
- Native support by most modern wallets
💸 2. Transaction Fees
SegWit = Cheaper.
- SegWit reduces the size of Bitcoin transactions in a block.
- This means you pay less per transaction.
- Example: A SegWit transaction might cost 40%–60% less in fees than a legacy one.
💡 Why?
Bitcoin charges fees per byte, not per amount.
SegWit removes certain data from the base transaction structure, which shrinks byte size.
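A back-of-the-envelope comparison makes the difference concrete. The sizes below are typical approximations for a 1-input/2-output spend and the fee rate is purely illustrative:

```bash
fee_rate=20        # sat/vByte (assumed for illustration)
legacy_size=226    # approximate vbytes for a 1-in/2-out P2PKH spend
segwit_size=141    # approximate vbytes for a 1-in/2-out P2WPKH spend

echo "Legacy fee:  $((legacy_size * fee_rate)) sats"
echo "SegWit fee:  $((segwit_size * fee_rate)) sats"
```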
🧰 3. Wallet & Service Compatibility
| Category | Legacy | SegWit (Nested / Native) |
|----------|--------|---------------------------|
| Old Exchanges | ✅ Full support | ⚠️ Partial |
| Modern Exchanges | ✅ Yes | ✅ Yes |
| Hardware Wallets (Trezor, Ledger) | ✅ Yes | ✅ Yes |
| Mobile Wallets (Phoenix, BlueWallet) | ⚠️ Rare | ✅ Yes |
| Lightning Support | 🚫 No | ✅ Native SegWit required |
🧠 Recommendation:
If you interact with older platforms or do cross-compatibility testing, you may want to:
- Use nested SegWit (addresses start with `3`), which is backward compatible.
- Avoid bech32-only wallets if your exchange doesn't support them (though rare in 2025).
🛡️ 4. Security and Reliability
Both formats are secure in terms of cryptographic strength.
However:
- SegWit fixes a bug known as transaction malleability, which helps build protocols on top of Bitcoin (like the Lightning Network).
- SegWit transactions are more standardized going forward.
💬 User takeaway:
For basic sending and receiving, both are equally secure. But for future-proofing, SegWit is the better bet.
🌐 5. Future-Proofing
Legacy wallets are gradually being phased out:
- Developers are focusing on SegWit and Taproot compatibility.
- Wallet providers are defaulting to SegWit addresses.
- Fee structures increasingly assume users have upgraded.
🚨 If you're using a Legacy wallet today, you're still safe. But:
- Some services may stop supporting withdrawals to legacy addresses.
- Your future upgrade path may be more complex.
🚀 6. Real-World Scenarios
🧊 Cold Storage User
- Use SegWit for low-fee UTXOs and efficient backup formats.
- Consider Native SegWit (`bc1q`) if supported by your hardware wallet.
👛 Mobile Daily User
- Use Native SegWit for cheaper everyday payments.
- Ideal if using Lightning apps — it's often mandatory.
🔄 Exchange Trader
- Check your exchange’s address type support.
- Consider nested SegWit (`3...`) if bridging old + new systems.
📜 7. Migration Tips
If you're moving from Legacy to SegWit:
- Create a new SegWit wallet in your software/hardware wallet.
- Send funds from your old Legacy wallet to the SegWit address.
- Back up the new seed — never reuse the old one.
- Watch out for fee rates and change address handling.
✅ Final User Recommendations
| Use Case | Address Type |
|----------|--------------|
| Long-term HODL | SegWit (`bc1q`) |
| Maximum compatibility | SegWit (nested, `3...`) |
| Fee-sensitive use | Native SegWit (`bc1q`) |
| Lightning | Native SegWit (`bc1q`) |
| Legacy systems only | Legacy (`1...`) – short-term only |
📚 Further Reading
- Nakamoto, Satoshi. Bitcoin: A Peer-to-Peer Electronic Cash System. 2008.
- Bitcoin Core Developers. “Segregated Witness (Consensus Layer Change).” github.com/bitcoin, 2017.
- “Electrum Documentation: Wallet Types.” docs.electrum.org, 2024.
- “Bitcoin Wallet Compatibility.” bitcoin.org, 2025.
- Ledger Support. “SegWit vs Legacy Addresses.” ledger.com, 2024.
-
@ 88cc134b:5ae99079
2025-04-29 11:39:51
one, two, three, four, five, six
-
@ b2caa9b3:9eab0fb5
2025-04-24 06:25:35Yesterday, I faced one of the most heartbreaking and frustrating experiences of my life. Between 10:00 AM and 2:00 PM, I was held at the Taveta border, denied entry into Kenya—despite having all the necessary documents, including a valid visitor’s permit and an official invitation letter.
The Kenyan Immigration officers refused to speak with me. When I asked for clarification, I was told flatly that I would never be allowed to enter Kenya unless I obtain a work permit. No other reason was given. My attempts to explain that I simply wanted to see my child were ignored. No empathy. No flexibility. No conversation. Just rejection.
While I stood there for hours, held by officials with no explanation beyond a bureaucratic wall, I recorded the experience. I now have several hours of footage documenting what happened—a silent testimony to how a system can dehumanize and block basic rights.
And the situation doesn’t end at the border.
My child, born in Kenya, is also being denied the right to see me. Germany refuses to grant her citizenship, which means she cannot visit me either. The German embassy in Nairobi refuses to assist, stating they won’t get involved. Their silence is loud.
This is not just about paperwork. This is about a child growing up without her father. It’s about a system that chooses walls over bridges, and bureaucracy over humanity. Kenya, by refusing me entry, is keeping a father away from his child. Germany, by refusing to act under §13 StGB, is complicit in that injustice.
In the coming days, I’ll share more about my past travels and how this situation unfolded. I’ll also be releasing videos and updates on TikTok—because this story needs to be heard. Not just for me, but for every parent and child caught between borders and bureaucracies.
Stay tuned—and thank you for standing with me.
-
@ d34e832d:383f78d0
2025-04-24 06:12:32
Goal
This analytical discourse delves into Jack Dorsey's recent utterances concerning Bitcoin, artificial intelligence, decentralized social networking platforms such as Nostr, and the burgeoning landscape of open-source cryptocurrency mining initiatives.
Dorsey's pronouncements escape the confines of isolated technological fascinations; rather, they elucidate a cohesive conceptual schema wherein Bitcoin transcends its conventional role as a mere store of value—akin to digital gold—and emerges as a foundational protocol intended for the construction of a decentralized, sovereign, and perpetually self-evolving internet ecosystem.
A thorough examination of Dorsey's confluence of Bitcoin with artificial intelligence advancements, adaptive learning paradigms, and integrated social systems reveals an assertion of Bitcoin's position as an entity that evolves beyond simple currency, evolving into a distinctly novel socio-technological organism characterized by its inherent ability to adapt and grow. His vigorous endorsement of native digital currency, open communication protocols, and decentralized infrastructural frameworks is posited here as a revolutionary paradigm—a conceptual framework for a sovereign, self-evolving internet.
1. The Path
Jack Dorsey, co-founder of Twitter and Square (now Block), has emerged as one of the most compelling evangelists for a decentralized future. His ideas about Bitcoin go far beyond its role as a speculative asset or inflation hedge. In a recent interview, Dorsey ties together themes of open-source AI, peer-to-peer currency, decentralized media, and radical self-education, sketching a future in which Bitcoin is the lynchpin of an emerging technological and social ecosystem. This thesis reviews Dorsey’s statements and offers a critical framework to understand why his vision uniquely positions Bitcoin as the keystone of a post-institutional, digital world.
2. Bitcoin: The Native Currency of the Internet
“It’s the best current manifestation of a native internet currency.” — Jack Dorsey
Bitcoin's status as an open protocol with no central controlling authority echoes the original spirit of the internet: decentralized, borderless, and resilient. Dorsey's framing of Bitcoin not just as a payment system but as the "native money of the internet" is a profound conceptual leap. It suggests that just as HTTP became the standard for web documents, Bitcoin can become the monetary layer for the open web.
This framing bypasses traditional narratives of digital gold or institutional adoption and centers a P2P vision of global value transfer. Unlike central bank digital currencies or platform-based payment rails, Bitcoin is opt-in, permissionless, and censorship-resistant—qualities essential for sovereignty in the digital age.
3. Nostr and the Decentralization of Social Systems
Dorsey’s support for Nostr, an open protocol for decentralized social media, reflects a desire to restore user agency, protocol composability, and speech sovereignty. Nostr’s architecture parallels Bitcoin’s: open, extensible, and resilient to censorship.
Here, Bitcoin serves not just as money but as a network effect driver. When combined with Lightning and P2P tipping, Nostr becomes more than just a Twitter alternative—it evolves into a micropayment-native communication system, a living proof that Bitcoin can power an entire open-source social economy.
4. Open-Source AI and Cognitive Sovereignty
Dorsey's forecast that open-source AI will emerge as an alternative to proprietary systems aligns with his commitment to digital autonomy. If Bitcoin empowers financial sovereignty and Nostr enables communicative freedom, open-source AI can empower cognitive independence—freeing humanity from centralized algorithmic manipulation.
He draws a fascinating parallel between AI learning models and human learning itself, suggesting both can be self-directed, recursive, and radically decentralized. This resonates with the Bitcoin ethos: systems should evolve through transparent, open participation—not gatekeeping or institutional control.
5. Bitcoin Mining: Sovereignty at the Hardware Layer
Block’s initiative to create open-source mining hardware is a direct attempt to counter centralization in Bitcoin’s infrastructure. ASIC chip development and mining rig customization empower individuals and communities to secure the network directly.
This move reinforces Dorsey’s vision that true decentralization requires ownership at every layer, including hardware. It is a radical assertion of vertical sovereignty—from protocol to interface to silicon.
6. Learning as the Core Protocol
“The most compounding skill is learning itself.” — Jack Dorsey
Dorsey’s deepest insight is that the throughline connecting Bitcoin, AI, and Nostr is not technology—it’s learning. Bitcoin represents more than code; it’s a living experiment in voluntary consensus, a distributed educational system in cryptographic form.
Dorsey’s emphasis on meditation, intensive retreats, and self-guided exploration mirrors the trustless, sovereign nature of Bitcoin. Learning becomes the ultimate protocol: recursive, adaptive, and decentralized—mirroring AI models and Bitcoin nodes alike.
7. Critical Risks and Honest Reflections
Dorsey remains honest about Bitcoin’s current limitations:
- Accessibility: UX barriers for onboarding new users.
- Usability: Friction in everyday use.
- State-Level Adoption: Risks of co-optation as mere digital gold.
However, his caution enhances credibility. His focus remains on preserving Bitcoin as a P2P electronic cash system, not transforming it into another tool of institutional control.
8. Bitcoin as a Living System
What emerges from Dorsey's vision is not a product pitch, but a philosophical reorientation: Bitcoin, Nostr, and open AI are not discrete tools—they are living systems forming a new type of civilization stack.
They are not static infrastructures, but emergent grammars of human cooperation, facilitating value exchange, learning, and community formation in ways never possible before.
Bitcoin, in this view, is not merely stunningly original—it is civilizationally generative, offering not just monetary innovation but a path to software-upgraded humanity.
Works Cited and Tools Used
Dorsey, Jack. Interview on Bitcoin, AI, and Decentralization. April 2025.
Nakamoto, Satoshi. “Bitcoin: A Peer-to-Peer Electronic Cash System.” 2008.
Nostr Protocol. https://nostr.com.
Block, Inc. Bitcoin Mining Hardware Initiatives. 2024.
Obsidian Canvas. Decentralized Note-Taking and Networked Thinking. 2025.
-
@ d34e832d:383f78d0
2025-04-24 05:56:06
Idea
Through the integration of Optical Character Recognition (OCR), Docker-based deployment, and secure remote access via Twin Gate, Paperless NGX empowers individuals and small organizations to digitize, organize, and retrieve documents with minimal friction. This research explores its technical infrastructure, real-world applications, and how such a system can redefine document archival practices for the digital age.
Agile, Remote-Accessible, and Searchable Document System
In a world of increasing digital interdependence, managing physical documents is becoming not only inefficient but also environmentally and logistically unsustainable. The demand for agile, remote-accessible, and searchable document systems has never been higher—especially for researchers, small businesses, and archival professionals. Paperless NGX, an open-source platform, addresses these needs by offering a streamlined, secure, and automated way to manage documents digitally.
This Idea explores how Paperless NGX facilitates the transition to a paperless workflow and proposes best practices for sustainable, scalable usage.
Paperless NGX: The Platform
Paperless NGX is an advanced fork of the original Paperless project, redesigned with modern containers, faster performance, and enhanced community contributions. Its core functions include:
- Text Extraction with OCR: Leveraging the `ocrmypdf` Python library, Paperless NGX can extract searchable text from scanned PDFs and images.
- Searchable Document Indexing: Full-text search allows users to locate documents not just by filename or metadata, but by actual content.
- Dockerized Setup: A ready-to-use Docker Compose environment simplifies deployment, including the use of setup scripts for Ubuntu-based servers.
- Modular Workflows: Custom triggers and automation rules allow for smart processing pipelines based on file tags, types, or email source.
Key Features and Technical Infrastructure
1. Installation and Deployment
The system runs in a containerized environment, making it highly portable and isolated. A typical installation involves:
- Docker Compose with YAML configuration
- Volume mapping for persistent storage
- Optional integration with reverse proxies (e.g., Nginx) for HTTPS access
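A minimal sketch of the bring-up, assuming the project's published `docker-compose.yml` and `.env` files have already been copied into the working directory; file, path, and service names vary between Paperless NGX releases:

```bash
mkdir -p ~/paperless-ngx && cd ~/paperless-ngx
# Place the official docker-compose.yml and .env files here first, then:
docker compose pull                 # fetch the images
docker compose up -d                # start Paperless NGX in the background
docker compose logs -f webserver    # follow start-up logs (service name may differ)
```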
2. OCR and Indexing
Using `ocrmypdf`, scanned documents are processed into fully searchable PDFs. This function dramatically improves retrieval, especially for archived legal, medical, or historical records.
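Outside of Paperless NGX, the same library can be run directly to see what this ingestion step does. A minimal invocation looks like the following, with placeholder file names:

```bash
# Add a searchable text layer to a scanned PDF
ocrmypdf scanned_input.pdf searchable_output.pdf

# Typical extras: deskew crooked scans and declare the document language
ocrmypdf --deskew -l eng scanned_input.pdf searchable_output.pdf
```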
3. Secure Access via Twin Gate
To solve the challenge of secure remote access without exposing the network, Twin Gate acts as a zero-trust access proxy. It encrypts communication between the Paperless NGX server and the client, enabling access from anywhere without the need for traditional VPNs.
4. Email Integration and Ingestion
Paperless NGX can ingest attachments directly from configured email folders. This feature automates much of the document intake process, especially useful for receipts, invoices, and academic PDFs.
Sustainable Document Management Workflow
A practical paperless strategy requires not just tools, but repeatable processes. A sustainable workflow recommended by the Paperless NGX community includes:
- Capture & Tagging: All incoming documents are tagged with a default “inbox” tag for triage.
- Physical Archive Correlation: If the physical document is retained, assign it a serial number (e.g., ASN-001), which is matched digitally.
- Curation & Tagging: Apply relevant category and topic tags to improve searchability.
- Archival Confirmation: Remove the “inbox” tag once fully processed and categorized.
Backup and Resilience
Reliability is key to any archival system. Paperless NGX includes backup functionality via:
- Cron job–scheduled Docker exports
- Offsite and cloud backups using rsync or encrypted cloud drives
- Restore mechanisms using documented CLI commands
This ensures document availability even in the event of hardware failure or data corruption.
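A hedged sketch of such a nightly backup job follows; the container, service, command, and path names are assumptions that must be adapted to your own compose file and the current Paperless NGX documentation:

```bash
#!/usr/bin/env bash
# Export documents and metadata, then mirror the export off the machine.
cd ~/paperless-ngx
docker compose exec -T webserver document_exporter ../export
rsync -av --delete ./export/ /mnt/backup/paperless/

# Example crontab entry: run the script every night at 02:00
# 0 2 * * * /home/user/paperless-backup.sh >> /var/log/paperless-backup.log 2>&1
```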
Limitations and Considerations
While Paperless NGX is powerful, it comes with several caveats:
- Technical Barrier to Entry: Requires basic Docker and Linux skills to install and maintain.
- OCR Inaccuracy for Handwritten Texts: The OCR engine may struggle with cursive or handwritten documents.
- Plugin and Community Dependency: Continuous support relies on active community contribution.
Consider
Paperless NGX emerges as a pragmatic and privacy-centric alternative to conventional cloud-based document management systems, effectively addressing the critical challenges of data security and user autonomy.
The implementation of advanced Optical Character Recognition (OCR) technology facilitates the indexing and searching of documents, significantly enhancing information retrieval efficiency.
Additionally, the platform offers secure remote access protocols that ensure data integrity while preserving the confidentiality of sensitive information during transmission.
Furthermore, its customizable workflow capabilities empower both individuals and organizations to precisely tailor their data management processes, thereby reclaiming sovereignty over their information ecosystems.
In an era increasingly characterized by a shift towards paperless methodologies, the significance of solutions such as Paperless NGX cannot be overstated; they play an instrumental role in engineering a future in which information remains not only accessible but also safeguarded and sustainably governed.
In Addition
To Further The Idea
This technical paper presents an optimized strategy for transforming an Intel NUC into a compact, power-efficient self-hosted server using Ubuntu. The setup emphasizes reliability, low energy consumption, and cost-effectiveness for personal or small business use. Services such as Paperless NGX, Nextcloud, Gitea, and Docker containers are examined for deployment. The paper details hardware selection, system installation, secure remote access, and best practices for performance and longevity.
1. Cloud sovereignty, Privacy, and Data Ownership
As cloud sovereignty, privacy, and data ownership become critical concerns, self-hosting is increasingly appealing. An Intel NUC (Next Unit of Computing) provides an ideal middle ground between Raspberry Pi boards and enterprise-grade servers—balancing performance, form factor, and power draw. With Ubuntu LTS and Docker, users can run a full suite of services with minimal overhead.
2. Hardware Overview
2.1 Recommended NUC Specifications:
| Component | Recommended Specs |
|------------------|-----------------------------------------------------|
| Model | Intel NUC 11/12 Pro (e.g., NUC11TNHi5, NUC12WSKi7) |
| CPU | Intel Core i5 or i7 (11th/12th Gen) |
| RAM | 16GB–32GB DDR4 (dual channel preferred) |
| Storage | 512GB–2TB NVMe SSD (Samsung 980 Pro or similar) |
| Network | Gigabit Ethernet + Optional Wi-Fi 6 |
| Power Supply | 65W USB-C or barrel connector |
| Cooling | Internal fan, well-ventilated location |
NUCs are also capable of dual-drive setups and support for Intel vPro for remote management on some models.
3. Operating System and Software Stack
3.1 Ubuntu Server LTS
- Version: Ubuntu Server 22.04 LTS
- Installation Method: Bootable USB (Rufus or Balena Etcher)
- Disk Partitioning: LVM with encryption recommended for full disk security
- Security:
- UFW (Uncomplicated Firewall)
- Fail2ban
- SSH hardened with key-only login
```bash
sudo apt update && sudo apt upgrade
sudo ufw allow OpenSSH
sudo ufw enable
```
4. Docker and System Services
Docker and Docker Compose streamline the deployment of isolated, reproducible environments.
4.1 Install Docker and Compose
```bash
sudo apt install docker.io docker-compose
sudo systemctl enable docker
```
4.2 Common Services to Self-Host:
| Application | Description | Access Port |
|--------------------|----------------------------------------|-------------|
| Paperless NGX | Document archiving and OCR | 8000 |
| Nextcloud | Personal cloud, contacts, calendar | 443 |
| Gitea | Lightweight Git repository | 3000 |
| Nginx Proxy Manager| SSL proxy for all services | 81, 443 |
| Portainer | Docker container management GUI | 9000 |
| Watchtower | Auto-update containers | - |
5. Network & Remote Access
5.1 Local IP & Static Assignment
- Set a static IP for consistent access (via router DHCP reservation or Netplan).
5.2 Access Options
- Local Only: VPN into local network (e.g., WireGuard, Tailscale)
- Remote Access:
- Reverse proxy via Nginx with Certbot for HTTPS
- Twin Gate or Tailscale for zero-trust remote access (see the sketch after this list)
- DNS via DuckDNS, Cloudflare
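As one concrete example of the zero-trust option above, Tailscale can be installed on Ubuntu with its documented one-line script; verify the script before piping it to a shell, and note that joining a tailnet assumes an existing Tailscale account:

```bash
# Install Tailscale and join the server to your tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# The server is then reachable at its tailnet IP, shown by:
tailscale ip -4
```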
6. Performance Optimization
- Enable `zram` for compressed RAM swap
- Trim SSDs weekly with `fstrim`
- Use Docker volumes, not bind mounts, for stability
- Set up unattended upgrades:
```bash
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
```
7. Power and Environmental Considerations
- Idle Power Draw: ~7–12W (depending on configuration)
- UPS Recommended: e.g., APC Back-UPS 600VA
- Use BIOS Wake-on-LAN if remote booting is needed
8. Maintenance and Monitoring
- Monitoring: Glances, Netdata, or Prometheus + Grafana
- Backups:
  - Use `rsync` to an external drive or NAS
  - Cloud backup options: rclone to Google Drive, S3
  - Paperless NGX backups: `docker compose exec -T web document-exporter ...`
9. Consider
Running a personal server using an Intel NUC and Ubuntu offers a private, low-maintenance, and modular solution to digital infrastructure needs. It’s an ideal base for self-hosting services, offering superior control over data and strong security with the right setup. The NUC's small form factor and efficient power usage make it an optimal home server platform that scales well for many use cases.
- Text Extraction with OCR: Leveraging the
-
@ d34e832d:383f78d0
2025-04-24 05:14:14Idea
By instituting a robust network of conceptual entities, referred to as 'Obsidian nodes'—which are effectively discrete, idea-centric notes—researchers are empowered to establish a resilient and non-linear archival framework for knowledge accumulation.
These nodes, intricately connected via hyperlinks and systematically organized through the graphical interface of the Obsidian Canvas, facilitate profound intellectual exploration and the synthesis of disparate domains of knowledge.
Consequently, this innovative workflow paradigm emphasizes semantic precision and the interconnectedness of ideas, diverging from conventional, source-centric information architectures prevalent in traditional academic practices.
Traditional research workflows often emphasize organizing notes by source, resulting in static, siloed knowledge that resists integration and insight. With the rise of personal knowledge management (PKM) tools like Obsidian, it becomes possible to structure information in a way that mirrors the dynamic and interconnected nature of human thought.
At the heart of this approach are Obsidian nodes—atomic, standalone notes representing single ideas, arguments, or claims. These nodes form the basis of a semantic research network, made visible and manageable via Obsidian’s graph view and Canvas feature. This thesis outlines how such a framework enhances understanding, supports creativity, and aligns with best practices in information architecture.
Obsidian Nodes: Atomic Units of Thought
An Obsidian node is a note crafted to encapsulate one meaningful concept or question. It is:
- Atomic: Contains only one idea, making it easier to link and reuse.
- Context-Independent: Designed to stand on its own, without requiring the original source for meaning.
- Networked: Linked to other Obsidian nodes through backlinks and tags.
This system draws on the principles of the Zettelkasten method, but adapts them to the modern, markdown-based environment of Obsidian.
Benefits of Node-Based Note-Taking
- Improved Retrieval: Ideas can be surfaced based on content relevance, not source origin.
- Cross-Disciplinary Insight: Linking between concepts across fields becomes intuitive.
- Sustainable Growth: Each new node adds value to the network without redundancy.
Graph View: Visualizing Connections
Obsidian’s graph view offers a macro-level overview of the knowledge graph, showing how nodes interrelate. This encourages serendipitous discovery and identifies central or orphaned concepts that need further development.
- Clusters emerge around major themes.
- Hubs represent foundational ideas.
- Bridges between nodes show interdisciplinary links.
The graph view isn’t just a map—it’s an evolving reflection of intellectual progress.
Canvas: Thinking Spatially with Digital Notes
Obsidian Canvas acts as a digital thinking space. Unlike the abstract graph view, Canvas allows for spatial arrangement of Obsidian nodes, images, and ideas. This supports visual reasoning, ideation, and project planning.
Use Cases of Canvas
- Synthesizing Ideas: Group related nodes in physical proximity.
- Outlining Arguments: Arrange claims into narrative or logic flows.
- Designing Research Papers: Lay out structure and integrate supporting points visually.
Canvas brings a tactile quality to digital thinking, enabling workflows similar to sticky notes, mind maps, or corkboard pinning—but with markdown-based power and extensibility.
Template and Workflow
To simplify creation and encourage consistency, Obsidian nodes are generated using a templater plugin. Each node typically includes:
```markdown
{{title}}
Tags: #topic #field
Linked Nodes: [[Related Node]]
Summary: A 1-2 sentence idea explanation.
Source: [[Source Note]]
Date Created: {{date}}
```

The Canvas workspace pulls these nodes as cards, allowing for arrangement, grouping, and visual tracing of arguments or research paths.
Discussion and Challenges
While this approach enhances creativity and research depth, challenges include:
- Initial Setup: Learning and configuring plugins like Templater, Dataview, and Canvas.
- Overlinking or Underlinking: Finding the right granularity in note-making takes practice.
- Scalability: As networks grow, maintaining structure and avoiding fragmentation becomes crucial.
- Team Collaboration: While Git can assist, Obsidian remains largely optimized for solo workflows.
Consider
Through the innovative employment of Obsidian's interconnected nodes and the Canvas feature, researchers are enabled to construct a meticulously engineered semantic architecture that reflects the intricate topology of their knowledge frameworks.
This paradigm shift facilitates a transformation of conventional note-taking, evolving this practice from a static, merely accumulative repository of information into a dynamic and adaptive cognitive ecosystem that actively engages with the user’s thought processes. With methodological rigor and a structured approach, Obsidian transcends its role as mere documentation software, evolving into both a secondary cognitive apparatus and a sophisticated digital writing infrastructure.
This dual functionality significantly empowers the long-term intellectual endeavors and creative pursuits of students, scholars, and lifelong learners, thereby enhancing their capacity for sustained engagement with complex ideas.
-
@ d34e832d:383f78d0
2025-04-24 05:04:55
A Knowledge Management Framework for your Academic Writing
Idea Approach
The primary objective of this framework is to streamline and enhance the efficiency of several critical academic processes, namely the reading, annotation, synthesis, and writing stages inherent to doctoral studies.
By leveraging established best practices from various domains, including digital note-taking methodologies, sophisticated knowledge management techniques, and the scientifically-grounded principles of spaced repetition systems, this proposed workflow is adept at optimizing long-term retention of information, fostering the development of novel ideas, and facilitating the meticulous preparation of manuscripts. Furthermore, this integrated approach capitalizes on Zotero's robust annotation functionalities, harmoniously merged with Obsidian's Zettelkasten-inspired architecture, thereby enriching the depth and structural coherence of academic inquiry, ultimately leading to more impactful scholarly contributions.
Doctoral research demands a sophisticated approach to information management, critical thinking, and synthesis. Traditional systems of note-taking and bibliography management are often fragmented and inefficient, leading to cognitive overload and disorganized research outputs. This thesis proposes a workflow that leverages Zotero for reference management, Obsidian for networked note-taking, and Anki for spaced repetition learning—each component enhanced by a set of plugins, templates, and color-coded systems.
2. Literature Review and Context
2.1 Digital Research Workflows
Recent research in digital scholarship has highlighted the importance of structured knowledge environments. Tools like Roam Research, Obsidian, and Notion have gained traction among academics seeking flexibility and networked thinking. However, few workflows provide seamless interoperability between reference management, reading, and idea synthesis.
2.2 The Zettelkasten Method
Originally developed by sociologist Niklas Luhmann, the Zettelkasten ("slip-box") method emphasizes creating atomic notes—single ideas captured and linked through context. This approach fosters long-term idea development and is highly compatible with digital graph-based note systems like Obsidian.
3. Zotero Workflow: Structured Annotation and Tagging
Zotero serves as the foundational tool for ingesting and organizing academic materials. The built-in PDF reader is augmented through a color-coded annotation schema designed to categorize information efficiently:
- Red: Refuted or problematic claims requiring skepticism or clarification
- Yellow: Prominent claims, novel hypotheses, or insightful observations
- Green: Verified facts or claims that align with the research narrative
- Purple: Structural elements like chapter titles or section headers
- Blue: Inter-author references or connections to external ideas
- Pink: Unclear arguments, logical gaps, or questions for future inquiry
- Orange: Precise definitions and technical terminology
Annotations are accompanied by tags and notes in Zotero, allowing robust filtering and thematic grouping.
4. Obsidian Integration: Bridging Annotation and Synthesis
4.1 Plugin Architecture
Three key plugins optimize Obsidian’s role in the workflow:
- Zotero Integration (via `obsidian-citation-plugin`): Syncs annotated PDFs and metadata directly from Zotero
- Highlighter: Enables color-coded highlights in Obsidian, mirroring Zotero's scheme
- Templater: Automates formatting and consistency using Nunjucks templates
A custom keyboard shortcut (e.g., `Ctrl+Shift+Z`) is used to trigger the extraction of annotations into structured Obsidian notes.
4.2 Custom Templating
The templating system ensures imported notes include:
- Citation metadata (title, author, year, journal)
- Full-color annotations with comments and page references
- Persistent notes for long-term synthesis
- An embedded bibtex citation key for seamless referencing
5. Zettelkasten and Atomic Note Generation
Obsidian’s networked note system supports idea-centered knowledge development. Each note captures a singular, discrete idea—independent of the source material—facilitating:
- Thematic convergence across disciplines
- Independent recombination of ideas
- Emergence of new questions and hypotheses
A standard atomic note template includes:
- Note ID (timestamp or semantic UID)
- Topic statement
- Linked references
- Associated atomic notes (via backlinks)
The Graph View provides a visual map of conceptual relationships, allowing researchers to track the evolution of their arguments.
6. Canvas for Spatial Organization
Obsidian’s Canvas plugin is used to mimic physical research boards:
- Notes are arranged spatially to represent conceptual clusters or chapter structures
- Embedded visual content enhances memory retention and creative thought
- Notes and cards can be grouped by theme, timeline, or argumentative flow
This supports both granular research and holistic thesis design.
7. Flashcard Integration with Anki
Key insights, definitions, and questions are exported from Obsidian to Anki, enabling spaced repetition of core content. This supports:
- Preparation for comprehensive exams
- Retention of complex theories and definitions
- Active recall training during literature reviews
Flashcards are automatically generated using Obsidian-to-Anki bridges, with tagging synced to Obsidian topics.
8. Word Processor Integration and Writing Stage
Zotero’s Word plugin simplifies:
- In-text citation
- Automatic bibliography generation
- Switching between citation styles (APA, Chicago, MLA, etc.)
Drafts in Obsidian are later exported into formal academic writing environments such as Microsoft Word or LaTeX editors for formatting and submission.
9. Discussion and Evaluation
The proposed workflow significantly reduces friction in managing large volumes of information and promotes deep engagement with source material. Its modular nature allows adaptation for various disciplines and writing styles. Potential limitations include:
- Initial learning curve
- Reliance on plugin maintenance
- Challenges in team-based collaboration
Nonetheless, the ability to unify reading, note-taking, synthesis, and writing into a seamless ecosystem offers clear benefits in focus, productivity, and academic rigor.
10. Consider
This idea demonstrates that a well-structured digital workflow using Zotero and Obsidian can transform the PhD research process. It empowers researchers to move beyond passive reading into active knowledge creation, aligned with the long-term demands of scholarly writing. Future iterations could include AI-assisted summarization, collaborative graph spaces, and greater mobile integration.
9. Evaluation Of The Approach
While this workflow offers significant advantages in clarity, synthesis, and long-term idea development, several limitations must be acknowledged:
- Initial Learning Curve: New users may face a steep learning curve when setting up and mastering the integrated use of Zotero, Obsidian, and their associated plugins. Understanding markdown syntax, customizing templates in Templater, and configuring citation keys all require upfront time investment. However, this learning period can be offset by the long-term gains in productivity and mental clarity.
- Plugin Ecosystem Volatility: Since both Obsidian and many of its key plugins are maintained by open-source communities or individual developers, updates can occasionally break workflows or require manual adjustments.
- Interoperability Challenges: Synchronizing metadata, highlights, and notes between systems (especially on multiple devices or operating systems) may present issues if not managed carefully. This includes Zotero’s Better BibTeX keys, Obsidian sync, and Anki integration.
- Limited Collaborative Features: This workflow is optimized for individual use. Real-time collaboration on notes or shared reference libraries may require alternative platforms or additional tooling.
Despite these constraints, the workflow remains highly adaptable and has proven effective across disciplines for researchers aiming to build a durable intellectual infrastructure over the course of a PhD.
9. Evaluation Of The Approach
While the Zotero–Obsidian workflow dramatically improves research organization and long-term knowledge retention, several caveats must be considered:
- Initial Learning Curve: Mastery of this workflow requires technical setup and familiarity with markdown, citation keys, and plugin configuration. While challenging at first, the learning effort is front-loaded and pays off in efficiency over time.
- Reliance on Plugin Maintenance: A key risk of this system is its dependence on community-maintained plugins. Tools like Zotero Integration, Templater, and Highlighter are not officially supported by Obsidian or Zotero core teams. This means updates or changes to the Obsidian API or plugin repository may break functionality or introduce bugs. Active plugin support is crucial to the system’s longevity.
- Interoperability and Syncing Issues: Managing synchronization across Zotero, Obsidian, and Anki—especially across multiple devices—can lead to inconsistencies or data loss without careful setup. Users should ensure robust syncing solutions (e.g. Obsidian Sync, Zotero WebDAV, or GitHub backup).
- Limited Collaboration Capabilities: This setup is designed for solo research workflows. Collaborative features (such as shared note-taking or group annotations) are limited and may require alternate solutions like Notion, Google Docs, or Overleaf when working in teams.
The integration of Zotero with Obsidian presents a notable advantage for individual researchers, exhibiting substantial efficiency in literature management and personal knowledge organization through its unique workflows. However, this model demonstrates significant deficiencies when evaluated in the context of collaborative research dynamics.
Specifically, while Zotero facilitates the creation and management of shared libraries, allowing for the aggregation of sources and references among users, Obsidian is fundamentally limited by its lack of intrinsic support for synchronous collaborative editing functionalities, thereby precluding simultaneous contributions from multiple users in real time. Although the application of version control systems such as Git has the potential to address this limitation, enabling a structured mechanism for tracking changes and managing contributions, the inherent complexity of such systems may pose a barrier to usability for team members who lack familiarity or comfort with version control protocols.
Furthermore, the nuances of color-coded annotation systems and bespoke personal note taxonomies utilized by individual researchers may present interoperability challenges when applied in a group setting, as these systems require rigorously defined conventions to ensure consistency and clarity in cross-collaborator communication and understanding. Thus, researchers should be cognizant of the challenges inherent in adapting tools designed for solitary workflows to the multifaceted requirements of collaborative research initiatives.
-
@ d34e832d:383f78d0
2025-04-24 02:56:59
1. The Ledger or Physical USD?
Bitcoin embodies a paradigmatic transformation in the foundational constructs of trust, ownership, and value preservation within the context of a digital economy. In stark contrast to conventional financial infrastructures that are predicated on centralized regulatory frameworks, Bitcoin operationalizes an intricate interplay of cryptographic techniques, consensus-driven algorithms, and incentivization structures to engender a decentralized and censorship-resistant paradigm for the transfer and safeguarding of digital assets. This conceptual framework elucidates the pivotal mechanisms underpinning Bitcoin's functional architecture, encompassing its distributed ledger technology (DLT) structure, robust security protocols, consensus algorithms such as Proof of Work (PoW), the intricacies of its monetary policy defined by the halving events and limited supply, as well as the broader implications these components have on stakeholder engagement and user agency.
2. The Core Functionality of Bitcoin
At its core, Bitcoin is a public ledger that records ownership and transfers of value. This ledger—called the blockchain—is maintained and verified by thousands of decentralized nodes across the globe.
2.1 Public Ledger
All Bitcoin transactions are stored in a transparent, append-only ledger. Each transaction includes:
- A reference to prior ownership (input)
- A transfer of value to a new owner (output)
- A digital signature proving authorization
2.2 Ownership via Digital Signatures
Bitcoin uses asymmetric cryptography:
- A private key is known only to the owner and is used to sign transactions.
- A public key (or address) is used by the network to verify the authenticity of the transaction.
This system ensures that only the rightful owner can spend bitcoins, and that all network participants can independently verify that the transaction is valid.
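Bitcoin Core exposes this sign-and-verify flow directly. As a small illustration, the address, message, and signature below are placeholders; the point is that anyone holding only the address, the message, and the signature can check authenticity without ever seeing the private key:

```bash
# Owner side: sign a message with the private key behind an address in the wallet
bitcoin-cli signmessage "1PlaceholderLegacyAddress" "I control this address"

# Anyone else: verify using only the address, the signature, and the message
bitcoin-cli verifymessage "1PlaceholderLegacyAddress" "<signature_base64>" "I control this address"
```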
3. Decentralization and Ledger Synchronization
Unlike traditional banking systems, which rely on a central institution, Bitcoin’s ledger is decentralized:
- Every node keeps a copy of the blockchain.
- No single party controls the system.
- Updates to the ledger occur only through network consensus.
This decentralization ensures fault tolerance, censorship resistance, and transparency.
4. Preventing Double Spending
One of Bitcoin’s most critical innovations is solving the double-spending problem without a central authority.
4.1 Balance Validation
Before a transaction is accepted, nodes verify:
- The digital signature is valid.
- The input has not already been spent.
- The sender has sufficient balance.
This is made possible by referencing previous transactions and ensuring the inputs match the unspent transaction outputs (UTXOs).
5. Blockchain and Proof-of-Work
To ensure consistency across the distributed network, Bitcoin uses a blockchain—a sequential chain of blocks containing batches of verified transactions.
5.1 Mining and Proof-of-Work
Adding a new block requires solving a cryptographic puzzle, known as Proof-of-Work (PoW):
- The puzzle involves finding a hash value that meets network-defined difficulty.
- This process requires computational power, which deters tampering.
- Once a block is validated, it is propagated across the network.
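A toy sketch of the puzzle can be run with a shell loop and `sha256sum`. The "difficulty" here is simply requiring the hash to start with four zero characters, a stand-in for the real network target, and the header string is an arbitrary placeholder:

```bash
header="example block header data"   # stand-in for the real block header
target_prefix="0000"                 # toy difficulty: hash must start with four zeros
nonce=0

while true; do
  hash=$(printf '%s%d' "$header" "$nonce" | sha256sum | cut -d' ' -f1)
  if [[ $hash == ${target_prefix}* ]]; then
    echo "Found nonce=$nonce hash=$hash"
    break
  fi
  nonce=$((nonce + 1))
done
```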
5.2 Block Rewards and Incentives
Miners are incentivized to participate by:
- Block rewards: New bitcoins issued with each block (initially 50 BTC, halved every ~4 years).
- Transaction fees: Paid by users to prioritize their transactions.
6. Network Consensus and Security
Bitcoin relies on Nakamoto Consensus, which prioritizes the longest chain—the one with the most accumulated proof-of-work.
- In case of competing chains (forks), the network chooses the chain with the most computational effort.
- This mechanism makes rewriting history or creating fraudulent blocks extremely difficult, as it would require control of over 50% of the network's total hash power.
7. Transaction Throughput and Fees
Bitcoin’s average block time is 10 minutes, and each block can contain ~1MB of data, resulting in ~3–7 transactions per second.
- During periods of high demand, users compete by offering higher transaction fees to get included faster.
- Solutions like Lightning Network aim to scale transaction speed and lower costs by processing payments off-chain.
8. Monetary Policy and Scarcity
Bitcoin enforces a fixed supply cap of 21 million coins, making it deflationary by design.
- This limited supply contrasts with fiat currencies, which can be printed at will by central banks.
- The controlled issuance schedule and halving events contribute to Bitcoin’s store-of-value narrative, similar to digital gold.
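The roughly 21 million cap falls out of the halving schedule itself. A quick sketch in shell arithmetic, working in satoshis so that integer division mimics the protocol's rounding:

```bash
reward=5000000000        # initial subsidy: 50 BTC expressed in satoshis
blocks_per_halving=210000
total=0

while [ "$reward" -gt 0 ]; do
  total=$((total + reward * blocks_per_halving))
  reward=$((reward / 2))
done

echo "Total issuance: $total satoshis (~$((total / 100000000)) BTC)"
```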
9. Consider
Bitcoin integrates advanced cryptographic methodologies, including public-private key pairings and hashing algorithms, to establish a formidable framework of security that underpins its operation as a digital currency. The economic incentives are meticulously structured through mechanisms such as mining rewards and transaction fees, which not only incentivize network participation but also regulate the supply of Bitcoin through a halving schedule intrinsic to its decentralized protocol. This architecture manifests a paradigm wherein individual users can autonomously oversee their financial assets, authenticate transactions through a rigorously constructed consensus algorithm, specifically the Proof of Work mechanism, and engage with a borderless financial ecosystem devoid of traditional intermediaries such as banks. Despite the notable challenges pertaining to transaction throughput scalability and a complex regulatory landscape that intermittently threatens its proliferation, Bitcoin steadfastly persists as an archetype of decentralized trust, heralding a transformative shift in financial paradigms within the contemporary digital milieu.
10. References
- Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System.
- Antonopoulos, A. M. (2017). Mastering Bitcoin: Unlocking Digital Cryptocurrencies.
- Bitcoin.org. (n.d.). How Bitcoin Works
-
@ d34e832d:383f78d0
2025-04-24 00:56:03
WebSocket communication is integral to modern real-time web applications, powering everything from chat apps and online gaming to collaborative editing tools and live dashboards. However, its persistent and event-driven nature introduces unique debugging challenges. Traditional browser developer tools provide limited insight into WebSocket message flows, especially in complex, asynchronous applications.
This thesis evaluates the use of Chrome-based browser extensions—specifically those designed to enhance WebSocket debugging—and explores how visual event tracing improves developer experience (DX). By profiling real-world applications and comparing built-in tools with popular WebSocket DevTools extensions, we analyze the impact of visual feedback, message inspection, and timeline tracing on debugging efficiency, code quality, and development speed.
The Idea
As front-end development evolves, WebSockets have become a foundational technology for building reactive user experiences. Debugging WebSocket behavior, however, remains a cumbersome task. Chrome DevTools offers a basic view of WebSocket frames, but lacks features such as message categorization, event correlation, or contextual logging. Developers often resort to `console.log` and custom logging systems, increasing friction and reducing productivity.
This research investigates how browser extensions designed for WebSocket inspection—such as Smart WebSocket Client, WebSocket King Client, and WSDebugger—can enhance debugging workflows. We focus on features that provide visual structure to communication patterns, simplify message replay, and allow for real-time monitoring of state transitions.
Related Work
Chrome DevTools
While Chrome DevTools supports WebSocket inspection under the Network > Frames tab, its utility is limited:
- Messages are displayed in a flat, unstructured stream.
- No built-in timeline or replay mechanism.
- Filtering and contextual debugging features are minimal.
WebSocket-Specific Extensions
Numerous browser extensions aim to fill this gap:
- Smart WebSocket Client: Allows custom message sending, frame inspection, and saved session reuse.
- WSDebugger: Offers structured logging and visualization of message flows.
- WebSocket Monitor: Enables real-time monitoring of multiple connections with UI overlays.
Methodology
Tools Evaluated:
- Chrome DevTools (baseline)
- Smart WebSocket Client
- WSDebugger
- WebSocket King Client
Evaluation Criteria:
- Real-time message monitoring
- UI clarity and UX consistency
- Support for message replay and editing
- Message categorization and filtering
- Timeline-based visualization
Test Applications:
- A collaborative markdown editor
- A multiplayer drawing game (WebSocket over Node.js)
- A lightweight financial dashboard (stock ticker)
Findings
1. Enhanced Visibility
Extensions provide structured visual representations of WebSocket communication:
- Grouped messages by type (e.g., chat, system, control)
- Color-coded frames for quick scanning
- Collapsible and expandable message trees
2. Real-Time Inspection and Replay
- Replaying previous messages with altered payloads accelerates bug reproduction.
- Message history can be annotated, aiding team collaboration during debugging.
3. Timeline-Based Analysis
- Extensions with timeline views help identify latency issues, bottlenecks, and inconsistent message pacing.
- Developers can correlate message sequences with UI events more intuitively.
4. Improved Debugging Flow
- Developers report reduced context-switching between source code and devtools.
- Some extensions allow breakpoints or watchers on WebSocket events, mimicking JavaScript debugging.
Consider
Visual debugging extensions represent a key advancement in tooling for real-time application development. By extending Chrome DevTools with features tailored for WebSocket tracing, developers gain actionable insights, faster debugging cycles, and a better understanding of application behavior. Future work should explore native integration of timeline and message tagging features into standard browser DevTools.
Developer Experience and Limitations
Visual tools significantly enhance the developer experience (DX) by reducing friction and offering cognitive support during debugging. Rather than parsing raw JSON blobs manually or tracing asynchronous behavior through logs, developers can rely on intuitive UI affordances such as real-time visualizations, message filtering, and replay features.
However, some limitations remain:
- Lack of binary frame support: Many extensions focus on text-based payloads and may not correctly parse or display binary frames.
- Non-standard encoding issues: Applications using custom serialization formats (e.g., Protocol Buffers, MsgPack) require external decoding tools or browser instrumentation.
- Extension compatibility: Some extensions may conflict with Content Security Policies (CSP) or have limited functionality when debugging production sites served over HTTPS.
- Performance overhead: Real-time visualization and logging can add browser CPU/memory overhead, particularly in high-frequency WebSocket environments.
Despite these drawbacks, the overall impact on debugging efficiency and developer comprehension remains highly positive.
Developer Experience and Limitations
Visual tools significantly enhance the developer experience (DX) by reducing friction and offering cognitive support during debugging. Rather than parsing raw JSON blobs manually or tracing asynchronous behavior through logs, developers can rely on intuitive UI affordances such as live message streams, structured views, and interactive inspection of frames.
However, some limitations exist:
- Security restrictions: Content Security Policy (CSP) and Cross-Origin Resource Sharing (CORS) can restrict browser extensions from accessing WebSocket frames in production environments.
- Binary and custom formats: Extensions may not handle binary frames or non-standard encodings (e.g., Protocol Buffers) without additional tooling.
- Limited protocol awareness: Generic tools may not fully interpret application-specific semantics, requiring context from the developer.
- Performance trade-offs: Logging and rendering large volumes of data can cause UI lag, especially in high-throughput WebSocket apps.
Despite these constraints, DevTools extensions continue to offer valuable insight during development and testing stages.
Applying this analysis to relays in the Nostr protocol surfaces some fascinating implications about traffic analysis, developer tooling, and privacy risks, even when data is cryptographically signed. Here's how the concepts relate:
🧠 What This Means for Nostr Relays
1. Traffic Analysis Still Applies
Even though Nostr events are cryptographically signed and, optionally, encrypted (e.g., DMs), relay communication is over plaintext WebSockets or WSS (WebSocket Secure). This means:
- IP addresses, packet size, and timing patterns are all visible to anyone on-path (e.g., ISPs, malicious actors).
- Client behavior can be inferred: Is someone posting, reading, or just idling?
- Frequent "kind" values (like
kind:1
for notes orkind:4
for encrypted DMs) produce recognizable traffic fingerprints.
🔍 Example:
A pattern like: -
client → relay
: small frame at intervals of 30s -relay → client
: burst of medium frames …could suggest someone is polling for new posts or using a chat app built on Nostr.
2. DevTools for Nostr Client Devs
For client developers (e.g., building on top of
nostr-tools
), browser DevTools and WebSocket inspection make debugging much easier:- You can trace real-time Nostr events without writing logging logic.
- You can verify frame integrity, event flow, and relay responses instantly.
- However, DevTools have limits when Nostr apps use:
- Binary payloads (e.g., zlib-compressed events)
- Custom encodings or protocol adaptations (e.g., for mobile)
3. Fingerprinting Relays and Clients
- Each relay has its own behavior: how fast it responds, whether it sends OKs, how it deals with malformed events.
- These can be fingerprinted by adversaries to identify which software is being used (e.g.,
nostr-rs-relay
,strfry
, etc.). - Similarly, client apps often emit predictable
REQ
,EVENT
,CLOSE
sequences that can be fingerprinted even over WSS.
4. Privacy Risks
Even if DMs are encrypted:
- Message size and timing can hint at contents ("user is typing", long vs. short message, emoji burst, etc.)
- Public relays might correlate patterns across multiple clients—even without payload access.
- Side-channel analysis becomes viable against high-value targets.
5. Mitigation Strategies in Nostr
Borrowing from TLS and WebSocket security best practices:
| Strategy | Application to Nostr |
|-----------------------------|----------------------------------------------------|
| Padding messages | Normalize `EVENT` size, especially for DMs |
| Batching requests | Send multiple `REQ` subscriptions in one frame |
| Randomize connection times | Avoid predictable connection schedules |
| Use private relays / Tor | Obfuscate source IP and reduce metadata exposure |
| Connection reuse | Avoid per-event relay opens, use persistent WSS |
TL;DR for Builders
If you're building on Nostr and care about privacy, WebSocket metadata is a leak. The payload isn't the only thing that matters. Be mindful of event timing, size, and structure, even over encrypted channels.
-
@ 6c67a3f3:b0ebd196
2025-04-29 11:28:01
On Black-Starting the United Kingdom
In the event of a total failure of the electric grid, the United Kingdom would face a task at once technical and Sisyphean: the so-called black start — the reawakening of the nation’s darkened arteries without any external supply of power. In idealized manuals, the task is rendered brisk and clean, requiring but a few days' labor. In the world in which we live, it would be slower, more uncertain, and at times perilously close to impossible.
Let us unfold the matter layer by layer.
I. The Nature of the Undertaking
A black start is not a mere throwing of switches, but a sequential ballet. Small generating stations — diesel engines, hydro plants, gas turbines — must first breathe life into cold transmission lines. Substations must be coaxed into readiness. Load must be picked up cautiously, lest imbalance bring the whole effort to naught. Islands of power are stitched together, synchronized with exquisite care.
Each step is fraught with fragility. An unseen misalignment, an unsignaled overload, and hours of labor are lost.
II. The Dream of the Engineers
In theory, according to the National Grid Electricity System Operator (ESO), the sequence would unfold thus: within half a day, core transmission lines humming; within a day or two, hospitals lit and water flowing; within three days, cities reawakened; within a week, the nation, broadly speaking, restored to life.
This vision presupposes a fantasy of readiness: that black-start units are operational and plentiful; that communications systems, so delicately dependent on mobile networks and the internet, endure; that personnel, trained and coordinated, are on hand in sufficient numbers; and that no sabotage, no accident, no caprice of nature interrupts the dance.
III. The Real Order of Things
Reality is more obstinate. Many black-start capable plants have been shuttered in the name of efficiency. The financial incentives once offered to private generators for black-start readiness were judged insufficient; the providers withdrew.
Grid operations now rely on a lattice of private interests, demanding slow and complicated coordination. Telecommunications are vulnerable in a deep blackout. The old hands, steeped in the tacit lore of manual restoration, have retired, their knowledge scattered to the four winds. Cyber vulnerabilities have multiplied, and the grid’s physical inertia — the very thing that grants a system grace under perturbation — has grown thin, leaving the UK exposed to sudden collapses should synchronization falter.
Under such conditions, the best of hopes might yield five to ten days of partial recovery. Weeks would be required to restore the former web of normalcy. In certain cases — in the face of physical damage to high-voltage transformers, whose replacements take months if not years — black-start might founder altogether.
IV. The Quiet Admissions of Officialdom
In its polite documents, the National Grid ESO speaks carefully: essential services might see restoration within three days, but full public service would require "up to a week or longer." If designated black-start units were to fail — a real risk, given recent audits showing many unready — the timelines would stretch indefinitely.
In plain speech: in a true national blackout, the nation’s restoration would be a gamble.
V. The Forking Paths Ahead
If all proceeds well, Britain might stumble into light within three days. If the adversities accumulate — cyberattack, internal sabotage, simple human miscalculation — the process would stretch into weeks, even months. In the gravest scenarios, the nation would reconstitute not as one great engine, but as isolated islands of power, each jury-rigged and vulnerable.
Meanwhile, the paradoxical truth is that small and simple systems — the grids of Jersey, Malta, and the like — would outpace their mightier cousins, not despite their modest scale but because of it.
VI. Conclusion
The British grid, in short, is a triumph of late modernity — and like all such triumphs, it carries within itself the seeds of its own fragility. It works magnificently until the day it does not. When that day comes, recovery will be neither swift nor sure, but a slow, halting reweaving of threads too easily frayed.
-
@ 3c7dc2c5:805642a8
2025-04-23 21:50:33
🧠Quote(s) of the week:
'The "Bitcoin Corporate Treasury" narrative is a foot gun if it's not accompanied by the sovereignty via self-custody narrative. Number Go Up folks are pitching companies to funnel their funds into a handful of trusted third parties. Systemic Risk Go Up.' - Jameson Lopp
Lopp is spot on!
The Bitcoin network is a fortress of digital power backed by 175 terawatt-hours (TWh)—equivalent to 20 full-scale nuclear reactors running continuously 24 hours per day, 365 days per year—making it nation-state-level resistant and growing stronger every day. - James Lavish
🧡Bitcoin news🧡
https://i.ibb.co/xSYWkJPC/Goqd-ERAXw-AEUTAo.jpg
Konsensus Network
On the 14th of April:
➡️ Bitcoin ETFs are bleeding out. Not a single inflow streak since March.
➡️'There are now just a bit less than 3 years left until the next halving. The block reward will drop from 3.125 BTC to 1.5625 BTC. Plan accordingly.' - Samson Mow
➡️Bitcoin is the new benchmark. Bitcoin has outperformed the S&P 500 over the past 1-, 2-, 3-, 4-, 5-, 6-, 7-, 8-, 9-, 10-, 11-, 12-, 13-, and 14-year periods. https://i.ibb.co/GfBK6n2Z/Gof6-Vwp-Ws-AA-g1-V-1.jpg
You cannot consider yourself a serious investor if you see this data and ignore it. Never been an asset like it in the history of mankind. But that is from an investor's perspective...
Alex Gladstein: "While only certain credentialed individuals can own US stocks (a tiny % of the world population) — anyone in the world, dissident or refugee, can own the true best-performing financial asset. "
➡️New record Bitcoin network hashrate 890,000,000,000,000,000,000x per second.
➡️The Korea Exchange has experienced its first bitcoin discount in South Korea since December 2024.
➡️Every government should be mining Bitcoin, say Bhutan's Prime Minister - Al Jazeera "It's a simple choice that's earned billions of dollars. Mining makes tremendous sense."
On the 15th of April:
➡️'Owning 1 Bitcoin isn’t a trade... - It’s a power move. - A geopolitical hedge. - A once-per-civilization bet on the next monetary regime. If you have the means to own one and don’t… You’re not managing risk. You’re misreading history.' -Alec Bakhouche
Great thread: https://x.com/Alec_Bitcoin/status/1912216075703607448
➡️The only thing that drops faster than new ETH narratives is the ETH price. Ethereum is down 74% against Bitcoin since switching from PoW to PoS in 2022.
https://i.ibb.co/bR3yjqZX/Gos0kc-HXs-AAP-9-X.jpg
Piere Rochard: "The theory was that on-chain utility would create a positive fly-wheel effect of demand for holding ETH. The reality is that even if (big if) you need its chain utility, you don’t actually need to hold ETH, you can use stablecoins or wBTC. There’s no real value accrual thesis."
If you're still holding ETH, you're in denial. You watched it slide from 0.05 to 0.035. Now it's circling 0.02 and you're still hoping? That's not a strategy—that's desperation. There is no bounce. No cavalry. Just a crowd of bagholders waiting to offload on the next fool. Don’t be that fool. Everyone’s waiting to dump, just like you.
For example. Galaxy Digital deposited another 12,500 $ETH($20.28M) to Binance 10 hours ago. Galaxy Digital has deposited 37,500 $ETH($60.4M) to Binance in the past 4 days. The institutional guys that were pushing this fraud coin like the Winklevoss brothers and Novogratz (remember the Luna fiasco?!) are ejecting. If you're still holding, no one to blame but yourself.
Take the loss. Rotate to BTC.
Later, you can lie and say you always believed in Bitcoin. But right now, stop the bleeding.
You missed it. Accept that. Figure out why.
P.S. Don’t do anything stupid. It’s just money. You’ll recover. Move smarter next time.
➡️And it is not only against ETH, every other asset is bleeding against BTC because every other asset is inferior to BTC. Did you know Bitcoin's 200-week moving average never declines? It always rises. What does this suggest? 'This is the most significant chart in financial markets. It's Bitcoin - measured with a 200-week moving average (aka 4 years at a time). Zoom out, and the truth becomes crystal clear: Bitcoin has never lost purchasing power. What does this hint at? Bitcoin is the most reliable savings technology on Earth.' - Cole Walmsley
Proof: https://x.com/Cole_Walmsley/status/1912545128826142963
➡️SPAR Switzerland Pilots Bitcoin and Lightning Network Payments Zurich, Switzerland – SPAR, one of the world’s largest grocery retail chains, has launched a pilot program to accept Bitcoin and Lightning Network payments at select locations in Switzerland.
With a global presence spanning 13,900 stores across 48 countries, this move signals a significant step toward mainstream adoption of Bitcoin in everyday commerce.
➡️$110 billion VanEck proposes BitBonds for the US to buy more Bitcoin and refinance its $14 trillion debt.
➡️'A peer-reviewed study forecasts $1M Bitcoin by early 2027—and up to $5M by 2031.' -Simply Bitcoin
On the 17th of April:
➡️ Every one of these dots is flaring gas into the atmosphere and could be mining Bitcoin instead of wasting the gas and polluting the air.
https://i.ibb.co/d4WBjXTX/Gos-OHm-Nb-IAAO8-VT.jpg
Thomas Jeegers: 'Each of these flare sites is a perfect candidate for Bitcoin mining, where wasted methane can be captured, converted into electricity, and monetized on the spot. No need for new pipelines. No need for subsidies. Just turning trash into treasure. Yes, other technologies can help reduce methane emissions. But only Bitcoin mining can do it profitably, consistently, at scale, and globally. And that’s exactly why it's already happening in Texas, Alberta, Oman, Argentina, and beyond. Methane is 84x more harmful than CO₂ over 20 years. Bitcoin is not just a monetary revolution, it's an environmental one.'
➡️Bitcoin market cap dominance hits a new 4-year high.
➡️BlackRock bought $30 million #Bitcoin for its spot Bitcoin ETF.
➡️Multiple countries and sovereign wealth funds are looking to establish Strategic Bitcoin Reserves - Financial Times
Remember, Gold's market cap is up $5.5 TRILLION in 2025. That's more than 3x of the total Bitcoin market cap. Nation-state adoption of Bitcoin is poised to be a pivotal development in monetary history...eventually.
➡️'In 2015, 1 BTC bought 57 steaks. Today, 7,568. Meanwhile, $100 bought 13 steaks in 2015. Now, just 9. Stack ₿, eat more steak.' Priced in Bitcoin
https://i.ibb.co/Pz5BtZYP/Gouxzda-Xo-AASSc-M.jpg
➡️'The Math:
At $91,150 Bitcoin flips Saudi Aramco
At $109,650 Bitcoin flips Amazon
At $107,280 Bitcoin flips Google
At $156,700 Bitcoin flips Microsoft
At $170,900 Bitcoin flips Apple
At $179,680 Bitcoin flips NVIDIA
Over time #Bitcoin flips everything.' -CarlBMenger
➡️Will $1 get you more or less than 1,000 sats by the fifth Halving? Act accordingly. - The rational root Great visual: https://i.ibb.co/tPxnXnL4/Gov1s-IRWEAAEXn3.jpg
➡️Bhutan’s Bitcoin Holdings Now Worth 30% of National GDP: A Bold Move in the Bitcoin Game Theory
In a stunning display of strategic foresight, Bhutan’s Bitcoin holdings are now valued at approximately 30% of the nation’s GDP. This positions the small Himalayan kingdom as a key player in the ongoing Bitcoin game theory that is unfolding across the world. This move also places Bhutan ahead of many larger nations, drawing attention to the idea that early Bitcoin adoption is not just about financial innovation, but also about securing future economic sovereignty and proof that Bitcoin has the power to lift nations out of poverty.
Bhutan explores using its hydropower for green Bitcoin mining, aiming to boost the economy while maintaining environmental standards. Druk Holding's CEO Ujjwal Deep Dahal says hydropower-based mining effectively "offsets" fossil fuel-powered bitcoin production, per Reuters.
➡️Barry Silbert, CEO of Digital Currency Group, admits buying Coinbase was great, but just holding Bitcoin would’ve been better. Silbert told Raoul Pal he bought BTC at $7–$8 and, "Had I just held the Bitcoin, I actually would have done better than making those investments."
He also called 99.9% of tokens “worthless,” stressing most have no reason to exist.
No shit Barry!
➡️Bitcoin hashrate hits a new ATH.
https://i.ibb.co/1pDj9Ch/Gor2-Dd2ac-AAk-KC5.jpg
Bitcoin hashrate hit 1ZH/s. That’s 1,000,000,000,000,000,000,000 hashes every second. Good luck stopping that! Bitcoin mining is the most competitive and decentralized industry in the world.
➡️¥10 Billion Japanese Fashion Retailer ANAP Adds Bitcoin to Corporate Treasury Tokyo, Japan – ANAP Inc., a publicly listed Japanese fashion retailer with a market capitalization of approximately ¥10 billion, has officially announced the purchase of Bitcoin as part of its corporate treasury strategy.
“The global trend of Bitcoin becoming a reserve asset is irreversible,” ANAP stated in its announcement.
➡️El Salvador just bought more Bitcoin for their Strategic Bitcoin Reserve.
➡️Only 9.6% of Bitcoin addresses are at a loss, a rare signal showing one of the healthiest market structures ever. Despite not being at all-time highs, nearly 90% of holders are in profit, hinting at strong accumulation and potential for further upside.
On the 18th of April:
➡️Swedish company Bitcoin Treasury AB announces IPO plans, aiming to become the 'European version of MicroStrategy'. The company clearly states: "Our goal is to fully acquire Bitcoin (BTC)."
➡️Relai app (unfortunately only available with full KYC) with some great Bitcoin marketing. https://i.ibb.co/G4szrqXj/Goz-Kh-Hh-WEAEo-VQr.jpg
➡️Arizona's Bitcoin Reserve Bill (SB 1373) has passed the House Committee and is advancing to the final floor vote.
➡️Simply Bitcoin: It will take 40 years to mine the last Bitcoin. If you're a wholecoiner, your grandchildren will inherit the equivalent of four decades of global energy. You're not bullish enough. https://i.ibb.co/Hpz2trvr/Go1-Uiu7a-MAI2g-VS.jpg
➡️Meanwhile in Slovenia: Slovenia's Finance Ministry proposes to introduce a 25% capital gains tax on bitcoin profits.
➡️ In one of my previous Weekly Recaps I already shared some news on Breez. Now imagine a world where everyone can implement lightning apps on browsers...with the latest 'Nodeless' release Breez is another step towards bringing Bitcoin payments to every app. Stellar work!
Breez: 'Breez SDK Now Supports WASM We’re excited to announce that Nodeless supports WebAssembly (WASM), so apps can now add Bitcoin payments directly into browsers and node.js environments. Pay anyone, anywhere, on any device with the Breez SDK.
Our new Nodeless release has even more big updates → Minimum payment amounts have been significantly reduced — send from 21 sats, receive from 100. Now live in Misty Breez (iOS + Android). → Users can now pay fees with non-BTC assets like USDT. Check the release notes for all the details on the 0.8 update.'
https://github.com/breez/breez-sdk-liquid/releases/tag/0.8.0
https://bitcoinmagazine.com/takes/embed-bitcoin-into-everything-everywhere Shinobi: Bitcoin needs to be everywhere, seamlessly, embedded into everything.
➡️Despite reaching a new all-time high of $872B, bitcoin's realized market cap monthly growth slowed to 0.9%, signaling continued risk-off sentiment, according to Glassnode.
On the 19th of April:
➡️Recently a gold bug, Jan Nieuwenhuijs (yeah he is Dutch, we are not perfect), stated the following: 'Bitcoin was created by mankind and can be destroyed by mankind. Gold cannot. It’s as simple as that.'
As a reply to a Saylor quote: 'Bitcoin has no counterparty risk. No company. No country. No creditor. No currency. No competitor. No culture. Not even chaos.'
Maybe Bitcoin can be destroyed by mankind, never say never, but what I do know is that at the moment Bitcoin is destroying gold like a maniac.
https://i.ibb.co/ZR8ZB4d5/Go7d-KCqak-AA-D7-R.jpg
Oh and please do know. Gold may have the history, but Bitcoin has the scarcity. https://i.ibb.co/xt971vJc/Gp-FMy-AXQAAJzi-M.jpg
In 2013, you couldn't even buy 1 ounce of gold with 1 Bitcoin.
Then in 2017, you could buy 9 ounces of gold with 1 Bitcoin.
Today you can buy 25 ounces of gold with 1 Bitcoin.
At some point, you'll be able to buy 100 ounces of gold with 1 Bitcoin.
➡️Investment firm Abraxas Capital bought $250m Bitcoin in just 4 days.
On the 20th of April:
➡️The Bitcoin network is projected to be 70% powered by sustainable energy sources by 2030. https://i.ibb.co/nsqsfVY4/Go-r7-LEWo-AAt6-H2.jpg
➡️FORBES: "Converting existing assets like Fort Knox gold into bitcoin makes sense. It would be budget-neutral and an improvement since BTC does everything that gold can, but better" Go on Forbes, and say it louder for the people at the back!
On the 21st of April:
➡️Bitcoin has now recovered the full price dip from Trump's tariff announcement.
➡️Michael Saylor's STRATEGY just bought another 6,556 Bitcoin worth $555.8m. MicroStrategy now owns 2.7% of all Bitcoin in circulation. At what point do we stop celebrating Saylor stacking more?
On the same day, Metaplanet acquired 330 BTC for $28.2M, reaching 4,855 BTC in total holdings.
➡️'Northern Forum, a non-profit member organization of UNDP Climate Change Adaptation, just wrote a well-researched article on how Bitcoin mining is aiding climate objectives (stabilizing grids, aiding microgrids, stopping renewable waste)' -Daniel Batten
https://northernforum.net/how-bitcoin-mining-is-transforming-the-energy-production-game/
💸Traditional Finance / Macro:
On the 16th of April:
👉🏽The Nasdaq Composite is now on track for its 5th-largest daily point decline in history.
👉🏽'Foreign investors are dumping US stocks at a rapid pace: Investors from overseas withdrew ~$6.5 billion from US equity funds over the last week, the second-largest amount on record. Net outflows were only below the $7.5 billion seen during the March 2023 Banking Crisis. According to Apollo, foreigners own a massive $18.5 trillion of US stocks or 20% of the total US equity market. Moreover, foreign holdings of US Treasuries are at $7.2 trillion, or 30% of the total. Investors from abroad also hold 30% of the total corporate credit market, for a total of $4.6 trillion.' TKL
On the 17th of April:
👉🏽'Historically, the odds of a 10% correction are 40%, a 25% bear market 20%, and a 50% bear market 2%. That means that statistically speaking the further the market falls the more likely it is to recover. Yes, some 20% declines become 50% “super bears,” but more often than not the market has historically started to find its footing at -20%, as it appears to have done last week.' - Jurrien Timmer - Dir. of Global Macro at Fidelity
https://i.ibb.co/B2bVDLqM/Gow-Hn-PRWYAAX79z.jpg
👉🏽The S&P 500 is down 10.3% in the first 72 trading days of 2025, the 5th worst start to a year in history.
🏦Banks:
👉🏽 'Another amazing piece of reporting by Nic Carter on a truly sordid affair. Nic Carter is reporting that prominent Biden officials killed signature bank — though solvent — to expand Silvergate/SVB collapses into a national issue, allowing FDIC to invoke a “systemic risk exemption” to bail out SVB at Pelosi’s request.' - Alex Thorne
https://www.piratewires.com/p/signature-didnt-have-to-die-either-chokepoint-nic-carter
Signature, Silvergate, and SVB were attacked by Democrats to kneecap crypto and distance themselves from FTX. The chaos created unintended negative consequences. Signature was solvent but they forced a collapse to invoke powers which they used to clean up the mess they made.
🌎Macro/Geopolitics:
Every nation in the world is in debt and no one wants to say who the creditor is.
On the 14th of April:
👉🏽'US financial conditions are now their tightest since the 2020 pandemic, per ZeroHedge. Financial conditions are even tighter than during one of the most rapid Fed hike cycles of all time, in 2022. Conditions have tightened rapidly as stocks have pulled back, while credit spreads have risen. To put it differently, the availability and cost of financing for economic activity have worsened. That suggests the economy may slow even further in the upcoming months.' -TKL
👉🏽Global Repricing of Duration Risk... 'It opens the door to a global repricing of duration risk. This isn’t a blip. It’s a sovereign-level alarm bell.
"I find Japan fascinating on many levels not just its financial history. Just watch the Netflix documentary: Watch Age of Samurai: Battle for Japan 'I only discovered the other day that the Bank of Japan was the first to use Quantitative Easing. Perhaps that's when our global finance system was first broken & it's been sticking plasters ever since.' - Jane Williams
A must-read…the U.S. bond market is being driven down by Japanese selling and not because they want to…because they have to. It’s looking more and more dangerous.
The BoJ has lost control over long-term bond yields. Since inflation broke out in Japan, the BoJ can no longer suppress them...
EndGame Macro: "This is one of the clearest signals yet that the Bank of Japan has lost control of the long end of the curve. Japan’s 30-year yield hitting 2.845% its highest since 2004 isn’t just a local event. This has global knock-on effects: Japan is the largest foreign holder of U.S. Treasuries and a key player in the global carry trade. Rising JGB yields force Japanese institutions to repatriate capital, unwind overseas positions, and pull back on USD asset exposure adding pressure to U.S. yields and FX volatility. This spike also signals the end of the deflationary regime that underpinned global risk assets for decades. If Japan once the global anchor of low yields can’t suppress its bond market anymore, it opens the door to a global repricing of duration risk. This isn’t a blip. It’s a sovereign-level alarm bell."
https://i.ibb.co/LXsCDZMf/Gow-XGL8-Wk-AEDXg-P.jpg
You're distracted by China, but it's always been about Japan.
On the 16th of April:
👉🏽Over the last 20 years, gold has now outperformed stocks, up +620% compared to a +580% gain in the S&P 500 (dividends included). Over the last 9 months, gold has officially surged by over +$1,000/oz. Gold hit another all-time high and is now up over 27% in 2025. On pace for its best year since 1979.
https://i.ibb.co/9kZbtPVF/Goftd-OSW8-AA2a-9.png
Meanwhile, imports of physical gold have gotten so large that the Fed has released a new GDP metric. Their GDPNow tool now adjusts for gold imports. Q1 2025 GDP contraction including gold is expected to be -2.2%, and -0.1% net of gold. Gold buying is at recession levels.
👉🏽Von der Leyen: "The West as we knew it no longer exists. [..] We need another, new European Union ready to go out into the big wide world and play a very active role in shaping this new world order"
Her imperial aspirations have long been on display. Remember she was not elected.
👉🏽Fed Chair Jerome Powell says crypto is going mainstream, a legal framework for stablecoins is a good idea, and there will be loosening of bank rules on crypto.
On the 17th of April:
👉🏽The European Central Bank cuts interest rates by 25 bps for their 7th consecutive cut as tariffs threaten economic growth. ECB's focus shifted to 'downside risk to the growth outlook.' Markets price in the deposit rate will be at 1.58% in Dec, from 1.71% before the ECB's statement. Great work ECB, as inflation continues to decline and economic growth prospects worsen. Unemployment is also on the rise. Yes, the economy is doing just great!
On the same day, Turkey reversed course and hiked rates for the first time since March 2024, 350 bps move up to 46%.
👉🏽'Global investors have rarely been this bearish: A record ~50% of institutional investors intend to reduce US equity exposure, according to a Bank of America survey released Monday. Allocation to US stocks fell 13 percentage points over the last month, to a net 36% underweight, the lowest since the March 2023 Banking Crisis. Since February, investors' allocation to US equities has dropped by ~53 percentage points, marking the largest 2-month decline on record. Moreover, a record 82% of respondents are now expecting the world economy to weaken. As a result, global investor sentiment fell to just 1.8 points, the 4th-lowest reading since 2008. We have likely never seen such a rapid shift in sentiment.' -TKL
👉🏽'US large bankruptcies jumped 49% year-over-year in Q1 2025, to 188, the highest quarterly count since 2010. Even during the onset of the 2020 pandemic, the number of filings was lower at ~150. This comes after 694 large companies went bankrupt last year, the most in 14 years. The industrial sector recorded the highest number of bankruptcies in Q1 2025, at 32. This was followed by consumer discretionary and healthcare, at 24 and 13. Bankruptcies are rising.' -TKL
👉🏽DOGE‘s success is simply breathtaking.
https://i.ibb.co/zV45sYBn/Govy-G6e-XIAA8b-NI.jpg
Although it would've been worse without DOGE, nothing stops this train.
👉🏽GLD update: Custodian JPM added 4 tons, bringing their total to a new all-time high of 887 tons. JPM has added 50 tons in a month and is 163 tons from surpassing Switzerland to become the 7th largest gold holder in the world.
Now ask yourself and I quote Luke Gromen:
a) why JPM decided to become a GLD custodian after 18 years, & then in just over 2 years, shift 90%+ of GLD gold to its vaults, &;
b) why the Atlanta Fed has continued to report real GDP with- and without (~$500B of) gold imports YTD?
👉🏽Sam Callahan: After slowing the pace of QT twice—and now hinting at a potential return to QE—the Fed’s balance sheet is settling into a new higher plateau. "We’ve been very clear that this is a temporary measure...We’ll normalize the balance sheet and reduce the size of holdings...It would be quite a different matter if we were buying these assets and holding them indefinitely. It would be a monetization. We are not doing that." - Ben Bernanke, Dec. 12, 2012
Temporary measures have a funny way of becoming structural features. The balance sheet didn’t ‘normalize’.. it evolved. What was once an ‘emergency’ is now a baseline. This isn’t QE or QT anymore. It’s a permanent intervention dressed as policy. Remember, 80% of all dollars were created in the last 5 years.
👉🏽Global Fiat Money Supply Is Exploding 🚨
The fiat system is on full tilt as central banks flood markets with unprecedented liquidity:
🇺🇸 U.S.: Money supply nearing new all-time highs
🇨🇳 China: At record levels
🇯🇵 Japan: Close to historic peaks
🇪🇺 EU: Printing into new ATHs https://i.ibb.co/Cp4z97Jx/Go4-Gx2-MXIAAzdi9.jpg
This isn’t growth—it’s monetary debasement. Governments aren’t solving problems; they’re papering over them with inflation.
Bitcoin doesn’t need bailouts. It doesn’t print. It doesn’t inflate.
As fiat currencies weaken under the weight of endless expansion, Bitcoin stands alone as a fixed-supply, incorruptible alternative.
👉🏽Fed Funds Rate: Market Expectations...
-May 2025: Hold
-Jun 2025: 25 bps cut to 4.00-4.25%
-July 2025: 25 bps cut to 3.75-4.00%
-Sep 2025: 25 bps cut to 3.50-3.75%
-Oct 2025: Hold
-Dec 2025: 25 bps cut to 3.25-3.50%
Anyway, short-term fugazi... 500 years of interest rates, visualized: https://i.ibb.co/84zs0yvD/Go7i7k-ZXYAAt-MRH.jpg
👉🏽'A net 49% of 164 investors with $386 billion in assets under management (AUM) believe a HARD LANDING is the most likely outcome for the world economy, according to a BofA survey. This is a MASSIVE shift in sentiment as 83% expected no recession in March.' -Global Markets Investor
👉🏽The EU’s New Role as Tax Collector: A Turning Point for Sovereignty
Beginning in 2027, a new chapter in the European Union’s influence over national life will begin. With the introduction of ETS2, the EU will extend its Emissions Trading System to include not just businesses, but private individuals as well. This means that CO₂ emissions from household gas consumption and vehicle fuel will be taxed—directly impacting the daily lives of citizens across the continent.
In practice, this shift transforms the EU into a (in)direct tax collector, without national consent, without a democratic mandate, and without the explicit approval of the people it will affect. The financial burden will be passed down through energy suppliers and fuel providers, but make no mistake: the cost will land squarely on the shoulders of European citizens, including millions of Dutch households.
This raises a fundamental question: What does sovereignty mean if a foreign or supranational entity can 'tax' your citizens? When a people are 'taxed' by a power beyond their borders—by an unelected body headquartered in Brussels—then either their nation is no longer sovereign, or that sovereignty has been surrendered or sold off by those in power.
There are only two conclusions to draw: We are living under soft occupation, with decisions made elsewhere that bind us at home. Or our sovereignty has been handed away voluntarily—a slow erosion facilitated by political elites who promised integration but delivered subordination. Either way, the more aggressively the EU enforces this trajectory, the more it reveals the futility of reforming the Union from within. The democratic deficit is not shrinking—it’s expanding. And with every new policy imposed without a national vote, the case for fundamental change grows stronger.
If Brussels continues down this path, there will come a point when only one option remains: A clean and decisive break. My view: NEXIT
Source: https://climate.ec.europa.eu/eu-action/eu-emissions-trading-system-eu-ets/ets2-buildings-road-transport-and-additional-sectors_en
For the Dutch readers:
https://www.businessinsider.nl/directe-co2-heffing-van-eu-op-gas-en-benzine-kan-huishoudens-honderden-euros-per-jaar-kosten-er-komt-ook-een-sociaal-klimaatfonds/?tid=TIDP10342314XEEB363B26CB34FB48054B929DB743E99YI5
On the 18th of April:
👉🏽'Two lost decades. Grotesque overregulation, bureaucracy, lack of innovation, and left redistribution mindset have their price. Europe is on its way to becoming an open-air museum. What a pity to watch.' -Michael A. Arouet
https://i.ibb.co/PsxdC3Kw/Goz-Ffwy-XMAAJk-Af.jpg
Looking at the chart, fun fact, the Lisbon Treaty was signed in 2007. That treaty greatly empowered the European bureaucracy. Reforms are needed in Europe, as soon as possible.
👉🏽CBP says latest tariffs have generated $500 million, well below Trump’s estimate — CNBC
Yikes! 'Probably one of the biggest economic blunders in history. $500 million in 15 days means $12 billion a year in additional tax revenue. Literal trillions in wealth destroyed, interest on the debt increased, the dollar weakened, businesses wiped out all over the world, and literally every single country in the world antagonized... all for raising a yearly amount of taxes that can fund the US military budget for just 4 days. $500 million pays for exactly 37 minutes of the US budget.' - Arnaud Bertrand
YIKES!!
👉🏽Belarus to launch a "digital ruble" CBDC by the end of 2026.
On the 20th of April:
👉🏽'China’s central bank increased its gold holdings by 5 tonnes in March, posting its 5th consecutive monthly purchase. This brings total China’s gold reserves to a record 2,292 tonnes. Chinese gold holdings now reflect 6.5% of its total official reserve assets. According to Goldman Sachs, China purchased a whopping 50 tonnes of gold in February, or 10 times more than officially reported. Over the last 3 years, China's purchases of gold on the London OTC market have significantly surpassed officially reported numbers. China is accumulating gold at a rapid pace.' -TKL
On the 21st of April:
👉🏽Gold officially breaks above $3,400/oz for the first time in history. Gold funds attracted $8 BILLION in net inflows last week, the most EVER. This is more than DOUBLE the records seen during the 2020 CRISIS. Gold is up an impressive 29% year-to-date.
https://i.ibb.co/Nn7gr45q/Goq7km-PW8-AALegt.png
👉🏽I want to finish this segment and the weekly recap with a chart, a chart to think about! A chart to share with friends, family, and co-workers:
https://i.ibb.co/9xFxRrw/Gp-A3-ANWbs-AAO5nl.jpg
side note: 'Labeled periods like the "Era of Populism" (circa 2010-2020) suggest a link between growing wealth disparity and populist movements, supported by studies like those http://on.tandfonline.com, which note income inequality as a driver for populist party support in Europe due to economic insecurities and distrust in elites.'
The fiat system isn’t broken. It’s doing exactly what it was designed to do: Transfer wealth from savers to the state. This always ends in either default or debasement There’s one exit - Bitcoin.
Great thread: https://x.com/fitcoiner/status/1912932351677792703 https://i.ibb.co/84sFWW9q/Gow-Zv-SFa4-AAu2-Ag.jpg
🎁If you have made it this far I would like to give you a little gift, well in this case two gifts:
Bitcoin Nation State Adoption Paradox - A Trojan Horse with Alex Gladstein. Exploring the paradoxes of Bitcoin adoption in nation-states and its radical role in human rights, freedom, and financial sovereignty. https://youtu.be/pLIxmIMHL44
Credit: I have used multiple sources!
My savings account: Bitcoin. The tool I recommend for setting up a Bitcoin savings plan: PocketBitcoin, especially suited for beginners or people who want to invest in Bitcoin with an automated investment plan once a week or monthly.
Use the code SE3997
Get your Bitcoin out of exchanges. Save them on a hardware wallet, run your own node...be your own bank. Not your keys, not your coins. It's that simple.
Do you think this post is helpful to you?
If so, please share it and support my work with a zap.
▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃
⭐ Many thanks⭐
Felipe - Bitcoin Friday!
▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃
-
@ 975e4ad5:8d4847ce
2025-04-29 08:26:50
With the advancement of quantum computers, a new threat emerges for the security of cryptocurrencies and blockchain technologies. These powerful machines have the potential to expose vulnerabilities in traditional cryptographic systems, which could jeopardize the safety of digital wallets. But don’t worry—modern wallets are already equipped to handle this threat with innovative solutions that make your funds nearly impossible to steal, even by a quantum computer. Let’s explore how this works and why you can rest easy.
The Threat of Quantum Computers
To understand how wallets protect us, we first need to grasp what makes quantum computers so dangerous. At the core of most cryptocurrencies, like Bitcoin, lies public and private key cryptography. The public key (or address) is like your bank account number—you share it to receive funds. The private key is like your PIN—it allows you to send funds and must remain secret.
Traditional cryptography, such as the ECDSA algorithm, relies on mathematical problems that are extremely difficult to solve with conventional computers. For instance, deriving a private key from a public key is practically impossible, as it would take millions of years of computation. However, quantum computers, thanks to algorithms like Shor’s, can significantly speed up this process. Theoretically, a sufficiently powerful quantum computer could uncover a private key from a public key in minutes or even seconds.
This is a problem because if someone gains access to your private key, they can send all your funds to their own address. But here’s the good news—modern wallets use a clever solution to render this threat powerless.
How Do Wallets Protect Us?
One of the most effective defenses against quantum computers is the use of one-time addresses in wallets. This means that for every transaction—whether receiving or sending funds—the wallet automatically generates a new public address. The old address, once used, remains in the transaction history but no longer holds any funds, as they are transferred to a new address.
Why Does This Work?
Imagine you’re sending or receiving cryptocurrency. Your wallet creates a new address for that transaction. After the funds are sent or received, that address becomes “used,” and the wallet automatically generates a new one for the next transaction. If a quantum computer manages to derive the private key from the exposed public key of a used address, it will find nothing—because that address is already empty. Your funds are safely transferred to a new address, whose public key has not yet been exposed.
This strategy is known as HD (Hierarchical Deterministic) wallets. It allows the wallet to generate an infinite number of addresses from a single master key (seed) without compromising security. Each new address is unique and cannot be linked to the previous ones, making it impossible to trace your funds, even with a quantum computer.
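For illustration, here is a heavily simplified sketch of deterministic address rotation from a single seed. It is not BIP32/BIP44-compliant and does not use real elliptic-curve keys; it only shows the mechanic of handing out a fresh, never-reused receive address for each transaction.

```python
# Heavily simplified illustration of deterministic address rotation from one seed.
# NOT BIP32/BIP44 and not real elliptic-curve key derivation: it only shows how a
# wallet can hand out a fresh, never-reused receive address for every transaction.
import hashlib
import hmac

SEED = hashlib.sha256(b"example entropy - do not use for real funds").digest()
next_index = 0

def derive_child_secret(seed: bytes, index: int) -> bytes:
    """Derive the i-th child secret from the master seed."""
    return hmac.new(seed, index.to_bytes(4, "big"), hashlib.sha512).digest()

def address_for(seed: bytes, index: int) -> str:
    # Stand-in for a "public address": a hash of the child secret.
    return hashlib.sha256(derive_child_secret(seed, index)).hexdigest()[:40]

def get_fresh_receive_address() -> str:
    """What the wallet does every time the user taps 'Receive'."""
    global next_index
    addr = address_for(SEED, next_index)
    next_index += 1                 # never hand out the same address twice
    return addr

print(get_fresh_receive_address())  # address for transaction 1
print(get_fresh_receive_address())  # a different address for transaction 2
```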
Automation Makes It Effortless
The best part? You don’t need to worry about this process—it’s fully automated. When you use a modern wallet like MetaMask, Ledger, Trezor, or software wallets for Bitcoin, everything happens behind the scenes. You simply click “receive” or “send,” and the wallet takes care of generating new addresses. There’s no need to understand the complex technical details or manually manage your keys.
For example:
- You want to receive 0.1 BTC. Your wallet provides a new address, which you share with the sender.
- After receiving the funds, the wallet automatically prepares a new address for the next transaction.
- If you send some of the funds, the remaining amount (known as “change”) is sent to another new address generated by the wallet.
This system ensures that public addresses exposed on the blockchain no longer hold funds, making quantum attacks pointless.
Additional Protection: Toward Post-Quantum Cryptography
Beyond one-time addresses, blockchain developers are also working on post-quantum cryptography—algorithms that are resistant to quantum computers. Some blockchain networks are already experimenting with such solutions, like algorithms based on lattices (lattice-based cryptography). These methods don’t rely on the same mathematical problems that quantum computers can solve, offering long-term protection.
In the meantime, one-time addresses combined with current cryptographic standards provide enough security to safeguard your funds until post-quantum solutions become widely adopted.
Why You Shouldn’t Worry
Modern wallets are designed with the future in mind. They not only protect against today’s threats but also anticipate future risks, such as those posed by quantum computers. One-time addresses make exposed public keys useless to hackers, and automation ensures you don’t need to deal with the technicalities. HD wallets, which automatically generate new addresses, make the process seamless and secure for users.
Public key exposure only happens when necessary, reducing the risk of attacks, even from a quantum computer. In conclusion, while quantum computers pose a potential threat, modern wallets already offer effective solutions that make your cryptocurrencies nearly impossible to steal. With one-time addresses and the upcoming adoption of post-quantum cryptography, you can be confident that your funds are safe—today and tomorrow.
-
@ ebdee929:513adbad
2025-04-23 21:06:02
Screen flicker is a subtle and often overlooked cause of eye strain that many of us deal with daily. We understand this issue firsthand and are working hard to solve it, which is why we build for a different, more caring screen technology. This guide will help you understand screen flicker, how it affects you, and why better screen technology can make a real difference.
A silent epidemic in a LED-driven world
Tired eyes and a drained mind are almost a universal feeling at the end of a work day. That is, if you work a job that requires you to be in front of a computer screen all day… which today is most of us.
Slow motion shows flicker: It's not just screens; nearly all LED environments could flicker. Reddit: PWM_Sensitive
"Digital eye strain" refers to the negative symptoms (dry eyes, blurred vision, headaches, eye fatigue, light sensitivity, neck pain, etc.) that arise from use of digital devices for a prolonged period of time. It is also known as computer vision syndrome. Numbers are hard to pin down for such a commonly occurring issue, but pre COVID (2020) researchers estimated up to 70% prevalence in modern society.
Since COVID-19, things have gotten much worse.
"Digital eye strain has been on the rise since the beginning of the COVID-19 pandemic. An augmented growth pattern has been experienced with prevalence ranging from 5 to 65% in pre-COVID-19 studies to 80–94% in the COVID-19 era. The sudden steep increase in screen and chair time has led way to other silent pandemics like digital eye strain, myopia, musculoskeletal problems, obesity, diabetes etc."
The most common cause outlined by the researchers compiling these digital eye strain reviews is excessive screen time. And they outline the reason for screen time being an issue for the following reasons:
- Technological devices being in a short field of vision
- Devices causing a reduced blink rate
- Poor ergonomics
These are certainly all reasonable causes to highlight, but from our perspective two other key potential causes of digital eye strain are missing: screen flicker and blue light.
Multiple studies show that blue light in isolation can cause mitochondrial dysfunction and oxidative stress in the retina. To learn more about blue light, its potentially harmful effects, and how to mitigate them, read our "Definitive Guide on Blue Light".
In this discussion we are going to focus on screen flicker only.
FLICKER: AN INVISIBLE ISSUE
Flicker could be one of the most underrated stressors to our biology, as it is something we are exposed to constantly due to the nature of modern lighting and screens. It is widely agreed upon by both electrical/electronic engineers and scientific researchers that light flicker can cause:
- Headaches, eye strain, blurred vision and migraines
- Aggravation of autism symptoms in children
- Photo epilepsy
This is documented in the Institute of Electrical and Electronics Engineers (IEEE) 1789 standard for best practice in LED lighting applications, amongst other scientific reviews.
The P1789 committee from IEEE identified the following major effects of flicker:
- Photo epilepsy
- Increased repetitive behaviour among people suffering from autism
- Migraine or intense paroxysmal headache
- Asthenopia (eye strain); including fatigue, blurred vision, headache and diminished sight-related task performance
- Anxiety, panic attacks
- Vertigo
Light flicker is pervasive, mainly due to the ubiquitous nature of LEDs in our modern indoor work environments. We are being exposed to light flicker constantly from both light bulb sources and the screens that we stare at all day. This is a main reason why indoor, screen based work seems so draining. The good news is that this can be avoided (from an engineering perspective).
What is flicker?
We must first understand what "flicker actually is" before we can discuss how to avoid it or how to engineer flicker free light solutions.
In its most simple form, flicker can be defined as "a rapid and repeated change in the brightness of light over time (IEEE - PAR1789)".
Flicker can be easily conceptualized when it is visible, however the flicker we are talking about in regards to modern lighting & LEDs is unfortunately invisible to the human eye…which is part of the problem.
Most humans are unable to perceive flicker in oscillation rates above 60-90Hz (60-90 cycles per second). When we can't see something, we have a much more challenging time as a species grasping its effect on how we feel. The above mentioned health effects are directly related to the invisible flicker in terms of its effects on our biology. We can't see it, but our eyes and our brains react to it.
Slow-motion footage comparing DC-1's DC Dimming versus regular PWM Dimming.
For this article, we want to focus specifically on the flicker coming from LEDs used in modern personal electronics. This type of flicker can be shown in the above video of multiple smartphones being filmed with a slow motion camera.
What causes flicker in smartphones and computers?
There are a few different characteristics of a modern electronic display that cause flicker, but the main culprit is something called "PWM dimming".
PWM (Pulse Width Modulation) is an electronics control mechanism that uses pulsed signals as the LED driver function to control the brightness of the device display.
PWM dimming has become the standard way to drive LEDs because it has specific advantages when it comes to retaining color consistency at lower brightness, and is also typically more power efficient. In a PWM dimming application, the diodes are being modulated to turn on and off very rapidly (faster than our eyes can perceive) to reduce the overall appearance of brightness of the light emission of the LEDs (aka luminance).
Brightness control in regular devices is just rapid flickering that looks steady to our eyes.
The lower the brightness setting, the longer the "off time". The "duty cycle" refers to the ratio of the LED being modulated "on" vs the total period of the cycle. Higher screen brightness setting = higher % duty cycle = more "time on" for the LED. This can be visualized in the graphic below.
PWM dimming controls brightness by quickly pulsing the backlight on and off.
PWM dimming has been chosen as the industry standard because of the intrinsic characteristics of the semiconductors in a light-emitting diode (LED) making it challenging to retain color consistency when modulating output illuminance with direct current, also known as Constant Current Reduction (CCR). CCR or "DC dimming" can utilize simpler control circuitry, but at the cost of less precision over the LED performance, especially at low brightness/luminance settings. PWM dimming can also save on overall power consumption.
DC Dimming maintains consistent light output by adjusting direct electrical current.
The downside of PWM dimming is obvious when you see the slow motion videos of the implementation in smartphone displays. The less obvious downside is that a PWM dimmed light means that we are consuming light at its peak output no matter the brightness setting. Because PWM is turning the light on/off constantly, the "ON" portion is always at peak intensity. This combined with the imbalanced light spectrum (blue heavy) can further exacerbate potential concerns of negatively affecting eye health and sleep quality.
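A tiny numerical sketch of that difference, using an assumed peak luminance, makes it obvious: at any brightness setting, a PWM-dimmed panel still pulses between zero and full intensity, while a DC-dimmed panel simply runs steadily at the lower level.

```python
# Illustrative numbers only: average brightness vs. what the panel actually emits.
PEAK_NITS = 600.0   # assumed peak panel luminance

def pwm_average(duty_cycle: float) -> float:
    """Time-averaged brightness of a PWM-dimmed panel (each pulse is still peak)."""
    return PEAK_NITS * duty_cycle

def dc_dimmed(current_fraction: float) -> float:
    """A DC-dimmed panel emits continuously at the reduced level."""
    return PEAK_NITS * current_fraction

for setting in (1.0, 0.5, 0.2):
    print(f"{setting:.0%} brightness: "
          f"PWM avg {pwm_average(setting):.0f} nits (pulsing 0 <-> {PEAK_NITS:.0f}), "
          f"DC {dc_dimmed(setting):.0f} nits (steady)")
```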
The question we must ask then: is it more important for better LED and electronics performance, or is it more important to have screens that are not causing immense stress to our biology?
PWM Flicker on OLED screens vs LCD screens
Not all PWM flicker is created equal. The flicker frequency used for PWM dimming is directly related to how potentially stressful it can be to our eyes and brains. It is well agreed upon that the lower the frequency is, the more it can stress us out and cause eye strain. This is because at a high enough frequency, the oscillations are happening so rapidly that your brain basically perceives them as a continuous signal.
The "risk factor" of flicker is also dependent on the modulation % (similar to duty cycle) of the flicker as well, but since we all use our devices across different brightness settings and modulation % 's, it is best to focus on the frequency as the independent variable in our control.
Left: Non-PWM Flicker Device | Right: PWM Dimming Device. Nick Sutrich YouTube
Up to and including the iPhone 11, liquid crystal displays (LCD) were the standard for smartphones. A big switch was made to OLED display technology and the tech giants have never looked back. When it comes to PWM dimming frequency, there was a big shift when this swap occurred:
- Most LCD displays use a PWM frequency of 1000Hz+ or no PWM at all.
- Nearly all OLED smartphones use a PWM frequency of 240Hz or 480Hz.
THE HEALTH RISK OF FLICKERING DEVICES
So why don't OLED screens use higher PWM frequencies? Because of the nature of OLEDs being controlled as singular pixels, they need the lower PWM frequency to maintain that extremely precise color consistency at low brightness settings. This is of course why they use PWM in the first place.
According to the IEEE1789 flicker risk chart for negative health effects, a 480Hz PWM smartphone (iPhone 15 Pro) would be high risk at any level above 40% modulation and a 240Hz PWM phone (Google Pixel 7) would be high risk above 20%, whereas a phone that uses a 1000Hz-2000Hz PWM frequency (Nothing, Xiaomi 15) would only be "low risk".
- California law (Title 24) requires that LEDs used in certain applications have a "reduced flicker operation," meaning the percent amplitude modulation (flicker) must be less than 30% at frequencies below 200 Hz → The Google Pixel 7, Galaxy S23, and many iPhones operate at 240Hz and 60-95% flicker...just above the legal limit!
- The report that recommended these levels states that: "Excessive flicker, even imperceptible flicker, can have deleterious health effects, and lesser amounts can be annoying or impact productivity."
For PWM frequencies above 3000Hz, there is "no risk" according to IEEE1789. If you have ever felt that staring at your iPhone is far more "straining on the eyes" compared to your MacBook, the PWM flicker is likely a large reason for that (alongside the size of the display itself and distance held from the eyes)...because MacBooks have an LCD display and a PWM flicker frequency of 10-20kHz. At that PWM frequency, your brain is perceiving the oscillating light as a continuous signal.
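The figures above follow the commonly cited IEEE 1789 rules of thumb: roughly "low risk" when the modulation percentage stays below 0.08 × frequency, "no observable effect" below about 0.033 × frequency, and "no risk" above roughly 3 kHz. A small sketch under those assumed cutoffs:

```python
# Rough IEEE 1789-style risk check for PWM backlights.
# Cutoffs follow commonly cited rules of thumb (low risk if modulation % is
# below 0.08 x frequency, "no observable effect" below ~0.033 x frequency,
# "no risk" above ~3000 Hz); treat them as assumptions, not the standard itself.
def flicker_risk(freq_hz: float, modulation_pct: float) -> str:
    if freq_hz >= 3000:
        return "no risk"
    if modulation_pct < 0.0333 * freq_hz:
        return "no observable effect"
    if modulation_pct < 0.08 * freq_hz:
        return "low risk"
    return "high risk"

devices = [
    ("OLED phone, 240 Hz PWM, 80% modulation", 240, 80),
    ("OLED phone, 480 Hz PWM, 45% modulation", 480, 45),
    ("LCD laptop, 10 kHz PWM, 100% modulation", 10_000, 100),
]
for name, freq, mod in devices:
    print(f"{name}: {flicker_risk(freq, mod)}")
```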
Other causes of flicker
Although PWM dimming is widely agreed upon as the main cause of light flicker in modern consumer electronics displays, it is not the only cause. There are two other potential causes of light flicker we are aware of:
TEMPORAL DITHERING (AKA FRAME RATE CONTROL)
- "Pixel" dithering is a technique used to produce more colors than what a display's panel is capable of by rapidly changing between two different pixel colors. This technique unlocks a tremendous amount of more color possibilities - for example showing colors with 10 bit color depth results in billions of colors vs an 8 bit color depth results in millions of colors. Temporal dithering helps bridge the gap for 8 bit color depth displays.
- OLED displays are more likely to have better (10-bit) color depth vs LCD displays but use of temporal dithering can certainly vary across display technologies.
- Temporal dithering example (video)
AMORPHOUS SILICON (A-SI) THIN FILM TRANSISTOR (TFT) BACKPLANES
- Most commercial displays use a-Si TFT semiconductor technology in the backplanes of their LCD panels.
- This technology works well, but can have a high amount of photo-induced leakage current under back light illumination conditions, which can cause non uniformity of the light output and flicker.
- In simple language, the standard a-Si transistors are less "efficient" in a backlight application…which can lead to inconsistent light output and thus flicker.
The Daylight Computer: 100% Flicker Free
The DC-1 was designed and built purposefully to be flicker free. We wanted to provide a solution both for those suffering with severe eye strain and also to prevent negative optical and cognitive repercussions of flicker for any end consumer.
### HOW THE DC-1 ACHIEVES A FLICKER FREE DISPLAY:
- Using DC dimming instead of PWM dimming
- The most deliberate change made in our electrical design was centered around using a DC/CCR LED driver (aka Constant Current Reduction) instead of a PWM driver. This means that there is no pulsed circuit control around our LED backlight, and therefore no flicker from PWM lighting control.
- Has zero temporal dithering, as is a monochrome display
- The benefit of being black and white is there is no need to have intense pixel switching to create the mirage of billions of different color combinations.
- Uses Indium Gallium Zinc Oxide (IGZO) TFT Technology
- New semiconductor technology that provides better and more efficient performance vs a-Si TFT panels. Results in no flicker at the transistor level.
- Verified by light experts to be flicker free
- "Flicker testing yielded a perfect result using my highly sensitive audio-based flicker meter and the photodiode based FFT testing method: not even a trace of light modulation could be demonstrated with both methods!" — Dr. Alexander Wunsch (M.D., P.hD), Light Scientist
This commitment to a flicker-free experience isn't just theoretical; it's changing lives. We're incredibly moved by stories from users like Tiffany and Juan Diego, who found relief and regained possibilities with the DC-1:
For someone with eye disability, the DC-1 is a dream device. The display is so soft and smooth on my eyes that I was able to take my life back off of hold and return to medical school after a multi year absence.
— Tiffany Yang, Medical student
It took a couple of weeks to transition all my work screen time to the DC-1, but when I did, my eye strain completely went away. Plus, it let me work outside on my terrace.
— Juan Diego
Our eye-strain pilot study
Here at Daylight, we are all about proof of work. That is why we have already kicked off an initial pilot study to see if the DC-1 is actually more "eye friendly" than standard consumer electronic devices…specifically for those suffering from severe digital eye strain.
We have partnered with Dr. Michael Destefano, a neuro-optometrist at the Visual Symptoms Treatment Center in Illinois, to coordinate this pilot study.
MORE PARTICIPANTS NEEDED
Do you suffer from severe digital eye strain, computer vision syndrome, or visual snow syndrome? If you are interested in trying a DC-1 for 30 days as part of the Eye Strain Pilot Study, please send an email to drdestefanoOD@gmail.com with a background on your visual affliction.
Our favorite ways to reduce digital eye strain
Cutting screen time is not always possible, so here are some options that can help:
- Use DC dimming devices whenever possible
- Try minimizing screen time on your smartphone, utilizing a laptop with a high PWM frequency (or no PWM at all) instead
- Try switching to an LCD smartphone or OLED smartphone with a high PWM frequency
- Turn "White Point" mode ON on your smartphone to increase the duty cycle and reduce the PWM dimming effect
Dive deeper with our curated resources
#### Potential Biological and Ecological Effects of Flickering Artificial Light - PMC
Light Emitting Diode Lighting Flicker, its Impact on Health and the Need to Minimise it
Digital Eye Strain- A Comprehensive Review
Nick Sutrich (Youtube) - Screen PWM Testing and Reviews
Eye Phone Review - Screen Health Reviews
Flicker Measurement NEMA77 and IEEE1789 White Paper
-
@ a296b972:e5a7a2e8
2025-04-29 07:24:49
28.04.2025, 4:17 p.m.:
Russian hobby pilots managed to stay under the radar in their sport aircraft named "Andromeda" and cut the power lines along the main transmission corridors with a Ukrainian onion knife. In a sharp right turn, an aircraft door must have come open and the pilot's passport fell out unnoticed. The identity of the perpetrators could therefore be established quickly.
28.04.2025, 4:43 p.m.:
Trump has bought the European power grid and deactivated the American-made chips installed in the substations. Power will only come back once the Coalition of the Willing joins the peace negotiations for Ukraine. Trump deliberately started with the sun-rich countries Spain and Portugal; this is meant as a warning to all of Europe. Mrs. von der Leyen has already lodged a sharp protest, but Trump deleted the text message immediately.
28.04.2025, 5:12 p.m.:
Zelensky has launched a cyberattack on the European power grid. He is furious because Macron let himself be brushed off in the 15-minute conversation with Trump on the sidelines of the Pope's funeral in Rome. He will only end the power blockade once Spain, Portugal, and France force Germany to finally deliver Taurus missiles. Asked how he pulled this off, Zelensky is said to have answered: "As you can see, we can do it."
All of this is just a joke, of course! The point was simply to show, with reference to the sabotage of the Nord Stream 2 pipelines, how quickly even people like us can come up with absurd explanations that cannot possibly hold up.
This article was written with the Pareto client.
(Image from Pixabay)
-
@ d34e832d:383f78d0
2025-04-23 20:19:15
A Look into Traffic Analysis and What WebSocket Patterns Reveal at the Network Level
While WebSocket encryption (typically via WSS) is essential for protecting data in transit, traffic analysis remains a potent method of uncovering behavioral patterns, data structure inference, and protocol usage—even when payloads are unreadable. This study investigates the visibility of encrypted WebSocket communications using Wireshark and similar packet inspection tools. We explore what metadata remains visible, how traffic flow can be modeled, and what risks and opportunities exist for developers, penetration testers, and network analysts. The study concludes by discussing mitigation strategies and the implications for privacy, application security, and protocol design.
Consider
In the age of real-time web applications, WebSockets have emerged as a powerful protocol enabling low-latency, bidirectional communication. From collaborative tools and chat applications to financial trading platforms and IoT dashboards, WebSockets have become foundational for interactive user experiences.
However, encryption via WSS (WebSocket Secure, running over TLS) gives developers and users a sense of security. The payload may be unreadable, but what about the rest of the connection? Can patterns, metadata, and traffic characteristics still leak critical information?
This thesis seeks to answer those questions by leveraging Wireshark, the de facto tool for packet inspection, and exploring the world of traffic analysis at the network level.
Background and Related Work
The WebSocket Protocol
Defined in RFC 6455, WebSocket operates over TCP and provides a persistent, full-duplex connection. The protocol upgrades an HTTP connection, then communicates through a simple frame-based structure.
Encryption with WSS
WSS connections use TLS (usually on port 443), making them indistinguishable from HTTPS traffic at the packet level. Payloads are encrypted, but metadata such as IP addresses, timing, packet size, and connection duration remain visible.
Traffic Analysis
Traffic analysis—despite encryption—has long been a technique used in network forensics, surveillance, and malware detection. Prior studies have shown that encrypted protocols like HTTPS, TLS, and SSH still reveal behavioral information through patterns.
Methodology
Tools Used:
- Wireshark (latest stable version)
- TLS decryption with local keys (when permitted)
- Simulated and real-world WebSocket apps (chat, games, IoT dashboards)
- Scripts to generate traffic patterns (Python using websockets and aiohttp)
Test Environments:
- Controlled LAN environments with known server and client
- Live observation of open-source WebSocket platforms (e.g., Matrix clients)
Data Points Captured:
- Packet timing and size
- TLS handshake details
- IP/TCP headers
- Frame burst patterns
- Message rate and directionality
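The traffic-generation scripts mentioned under Tools Used can be as simple as the following sketch, which opens a WSS connection and sends fixed-size messages at a fixed interval so the resulting flow can be captured and compared against real applications. The endpoint URL, message size, and interval are placeholders, not part of any tested setup.

```python
# Minimal traffic generator: connects to a WebSocket endpoint and emits
# messages with a chosen size and interval so the resulting flow can be
# captured in Wireshark and compared against real applications.
# The endpoint URL and message parameters are illustrative placeholders.
import asyncio
import websockets  # pip install websockets

async def generate(url: str, payload_size: int, interval: float, count: int) -> None:
    async with websockets.connect(url) as ws:
        payload = "x" * payload_size
        for i in range(count):
            await ws.send(payload)          # fixed-size outbound frame
            reply = await ws.recv()         # echo or server response
            print(f"msg {i}: sent {len(payload)} bytes, received {len(reply)} bytes")
            await asyncio.sleep(interval)   # fixed inter-message gap

if __name__ == "__main__":
    # e.g. simulate a chat-like pattern: small frames every 2 seconds
    asyncio.run(generate("wss://echo.example.org", payload_size=92, interval=2.0, count=30))
```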
Findings
1. Metadata Leaks
Even without payload access, the following data is visible:
- Source/destination IP
- Port numbers (typically 443)
- Server certificate info
- Packet sizes and intervals
- TLS handshake fingerprinting (e.g., JA3 hashes)
2. Behavioral Patterns
- Chat apps show consistent message frequency and short message sizes.
- Multiplayer games exhibit rapid bursts of small packets.
- IoT devices often maintain idle connections with periodic keepalives.
- Typing indicators, heartbeats, or "ping/pong" mechanisms are visible even under encryption.
3. Timing and Packet Size Fingerprinting
Even encrypted payloads can be fingerprinted by:
- Regularity in payload size (e.g., 92 bytes every 15s)
- Distinct bidirectional patterns (e.g., send/ack/send per user action)
- TLS record sizes which may indirectly hint at message length
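To make this concrete, the raw features behind such fingerprinting (per-packet sizes and inter-arrival gaps for a single flow) can be pulled out of a capture with a few lines of scapy. The capture filename and server address below are placeholders; the same measurements can also be exported from Wireshark.

```python
# Extract per-packet sizes and inter-arrival times for one TLS/WSS flow
# from a capture file; these are the raw features used for the
# fingerprinting described above. The pcap name and server IP are placeholders.
from scapy.all import rdpcap, IP, TCP  # pip install scapy

packets = rdpcap("wss_capture.pcap")
server = "203.0.113.10"

sizes, times = [], []
for pkt in packets:
    if IP in pkt and TCP in pkt and server in (pkt[IP].src, pkt[IP].dst):
        if len(pkt[TCP].payload) > 0:          # skip bare ACKs
            sizes.append(len(pkt))
            times.append(float(pkt.time))

gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
print("packets:", len(sizes))
print("most common sizes:", sorted(set(sizes), key=sizes.count, reverse=True)[:5])
print("median inter-arrival (s):", sorted(gaps)[len(gaps) // 2] if gaps else None)
```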
Side-Channel Risks in Encrypted WebSocket Communication
Although WebSocket payloads transmitted over WSS (WebSocket Secure) are encrypted, they remain susceptible to side-channel analysis, a class of attacks that exploit observable characteristics of the communication channel rather than its content.
Side-Channel Risks Include:
1. User Behavior Inference
Adversaries can analyze packet timing and frequency to infer user behavior. For example, typing indicators in chat applications often trigger short, regular packets. Even without payload visibility, a passive observer may identify when a user is typing, idle, or has closed the application. Session duration, message frequency, and bursts of activity can be linked to specific user actions.
2. Application Fingerprinting
TLS handshake metadata and consistent traffic patterns can allow an observer to identify specific client libraries or platforms. For example, the sequence and structure of TLS extensions (via JA3 fingerprinting) can differentiate between browsers, SDKs, or WebSocket frameworks. Application behavior—such as timing of keepalives or frequency of updates—can further reinforce these fingerprints.
3. Usage Pattern Recognition
Over time, recurring patterns in packet flow may reveal application logic. For instance, multiplayer game sessions often involve predictable synchronization intervals. Financial dashboards may show bursts at fixed polling intervals. This allows for profiling of application type, logic loops, or even user roles.
4. Leakage Through Timing
Time-based attacks can be surprisingly revealing. Regular intervals between message bursts can disclose structured interactions—such as polling, pings, or scheduled updates. Fine-grained timing analysis may even infer when individual keystrokes occur, especially in sparse channels where interactivity is high and payloads are short.
5. Content Length Correlation
While encrypted, the size of a TLS record often correlates closely to the plaintext message length. This enables attackers to estimate the size of messages, which can be linked to known commands or data structures. Repeated message sizes (e.g., 112 bytes every 30s) may suggest state synchronization or batched updates.
6. Session Correlation Across Time
Using IP, JA3 fingerprints, and behavioral metrics, it’s possible to link multiple sessions back to the same client. This weakens anonymity, especially when combined with data from DNS logs, TLS SNI fields (if exposed), or consistent traffic habits. In anonymized systems, this can be particularly damaging.
The subsections below revisit several of these risks in more detail, with concrete examples.
1. Behavior Inference
Even with end-to-end encryption, adversaries can make educated guesses about user actions based on traffic patterns:
- Typing detection: In chat applications, short, repeated packets every few hundred milliseconds may indicate a user typing.
- Voice activity: In VoIP apps using WebSockets, a series of consistent-size packets followed by silence can reveal when someone starts and stops speaking.
- Gaming actions: Packet bursts at high frequency may correlate with real-time game movement or input actions.
2. Session Duration
WebSocket connections are persistent by design. This characteristic allows attackers to:
- Measure session duration: Knowing how long a user stays connected to a WebSocket server can infer usage patterns (e.g., average chat duration, work hours).
- Identify session boundaries: Connection start and end timestamps may be enough to correlate with user login/logout behavior.
3. Usage Patterns
Over time, traffic analysis may reveal consistent behavioral traits tied to specific users or devices:
- Time-of-day activity: Regular connection intervals can point to habitual usage, ideal for profiling or surveillance.
- Burst frequency and timing: Distinct intervals of high or low traffic volume can hint at backend logic or user engagement models.
Example Scenario: Encrypted Chat App
Even though a chat application uses end-to-end encryption and transports data over WSS:
- A passive observer sees:
- TLS handshake metadata
- IPs and SNI (Server Name Indication)
- Packet sizes and timings
- They might then infer:
- When a user is online or actively chatting
- Whether a user is typing, idle, or receiving messages
- Usage patterns that match a specific user fingerprint
This kind of intelligence can be used for traffic correlation attacks, profiling, or deanonymization — particularly dangerous in regimes or situations where privacy is critical (e.g., journalists, whistleblowers, activists).
Fingerprinting Encrypted WebSocket Applications via Traffic Signatures
Even when payloads are encrypted, adversaries can leverage fingerprinting techniques to identify the specific WebSocket libraries, frameworks, or applications in use based on unique traffic signatures. This is a critical vector in traffic analysis, especially when full encryption lulls developers into a false sense of security.
1. Library and Framework Fingerprints
Different WebSocket implementations generate traffic patterns that can be used to infer what tool or framework is being used, such as:
- Handshake patterns: The WebSocket upgrade request often includes headers that differ subtly between:
- Browsers (Chrome, Firefox, Safari)
- Python libs (
websockets
,aiohttp
,Autobahn
) - Node.js clients (
ws
,socket.io
) - Mobile SDKs (Android’s
okhttp
, iOSStarscream
) - Heartbeat intervals: Some libraries implement default ping/pong intervals (e.g., every 20s in
socket.io
) that can be measured and traced back to the source.
2. Payload Size and Frequency Patterns
Even with encryption, metadata is exposed:
- Frame sizes: Libraries often chunk or batch messages differently.
- Initial message burst: Some apps send a known sequence of messages on connection (e.g., auth token → subscribe → sync events).
- Message intervals: Unique to libraries using structured pub/sub or event-driven APIs.
These observable patterns can allow a passive observer to identify not only the app but potentially which feature is being used, such as messaging, location tracking, or media playback.
3. Case Study: Identifying Socket.IO vs Raw WebSocket
Socket.IO, although layered on top of WebSockets, introduces a handshake sequence of HTTP polling → upgrade → packetized structured messaging with preamble bytes (even in encrypted form, the size and frequency of these frames is recognizable). A well-equipped observer can differentiate it from a raw WebSocket exchange using only timing and packet length metrics.
Security Implications
- Targeted exploitation: Knowing the backend framework (e.g., `Django Channels` or `FastAPI + websockets`) allows attackers to narrow down known CVEs or misconfigurations.
- De-anonymization: Apps that are widely used in specific demographics (e.g., Signal clones, activist chat apps) become fingerprintable even behind HTTPS or WSS.
- Nation-state surveillance: Traffic fingerprinting lets governments block or monitor traffic associated with specific technologies, even without decrypting the data.
Leakage Through Timing: Inferring Behavior in Encrypted WebSocket Channels
Encrypted WebSocket communication does not prevent timing-based side-channel attacks, where an adversary can deduce sensitive information purely from the timing, size, and frequency of encrypted packets. These micro-behavioral signals, though not revealing actual content, can still disclose high-level user actions — sometimes with alarming precision.
1. Typing Detection and Keystroke Inference
Many real-time chat applications (Matrix, Signal, Rocket.Chat, custom WebSocket apps) implement "user is typing..." features. These generate recognizable message bursts even when encrypted:
- Small, frequent packets sent at irregular intervals often correspond to individual keystrokes.
- Inter-keystroke timing analysis — often accurate to within tens of milliseconds — can help reconstruct typed messages’ length or even guess content using language models (e.g., inferring "hello" vs "hey").
2. Session Activity Leaks
WebSocket sessions are long-lived and often signal usage states by packet rhythm:
- Idle vs active user patterns become apparent through heartbeat frequency and packet gaps.
- Transitions — like joining or leaving a chatroom, starting a video, or activating a voice stream — often result in bursts of packet activity.
- Even without payload access, adversaries can profile session structure, determining which features are being used and when.
3. Case Study: Real-Time Editors
Collaborative editing tools (e.g., Etherpad, CryptPad) leak structure:
- When a user edits, each keystroke or operation may result in a burst of 1–3 WebSocket frames.
- Over time, a passive observer could infer:
- Whether one or multiple users are active
- Who is currently typing
- The pace of typing
- Collaborative vs solo editing behavior
4. Attack Vectors Enabled by Timing Leaks
- Target tracking: Identify active users in a room, even on anonymized or end-to-end encrypted platforms.
- Session replay: Attackers can simulate usage patterns for further behavioral fingerprinting.
- Network censorship: Governments may block traffic based on WebSocket behavior patterns suggestive of forbidden apps (e.g., chat tools, Tor bridges).
Mitigations and Countermeasures
While timing leakage cannot be entirely eliminated, several techniques can obfuscate or dampen signal strength:
- Uniform packet sizing (padding to fixed lengths)
- Traffic shaping (constant-time message dispatch)
- Dummy traffic injection (noise during idle states)
- Multiplexing WebSocket streams with unrelated activity
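As a rough illustration of the third technique above, dummy traffic injection, the sketch below runs a background task that sends fixed-size padding frames at randomized intervals while the real application is idle. The message format and intervals are assumptions of this sketch; a real deployment would need the server to recognise and discard the padding.

```python
# Idle-time cover traffic: alongside the real sender, a background task
# emits fixed-size dummy frames at randomized intervals so an observer
# cannot tell active use from idle keepalive. The message format is
# illustrative; the receiver is assumed to drop "pad" frames.
import asyncio
import json
import random
import websockets  # pip install websockets

PAD = json.dumps({"type": "pad", "data": "0" * 256})

async def cover_traffic(ws) -> None:
    while True:
        await asyncio.sleep(random.uniform(5.0, 15.0))  # jittered idle interval
        await ws.send(PAD)                              # constant-size dummy frame

async def main(url: str) -> None:
    async with websockets.connect(url) as ws:
        noise = asyncio.create_task(cover_traffic(ws))
        try:
            await ws.send(json.dumps({"type": "msg", "data": "hello"}))
            await asyncio.sleep(60)                     # stand-in for real app logic
        finally:
            noise.cancel()

if __name__ == "__main__":
    asyncio.run(main("wss://chat.example.org"))
```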
Visibility Without Clarity — Privacy Risks in Encrypted WebSocket Traffic
This thesis demonstrates that while encryption secures the contents of WebSocket payloads, it does not conceal behavioral patterns. Through tools like Wireshark, analysts — and adversaries alike — can inspect traffic flows to deduce session metadata, fingerprint applications, and infer user activity, even without decrypting a single byte.
The paradox of encrypted WebSockets is thus revealed:
They offer confidentiality, but not invisibility.
As shown through timing analysis, fingerprinting, and side-channel observation, encrypted WebSocket streams can still leak valuable information. These findings underscore the importance of privacy-aware design choices in real-time systems:
- Padding variable-size messages to fixed-length formats
- Randomizing or shaping packet timing
- Mixing in dummy traffic during idle states
- Multiplexing unrelated data streams to obscure intent
Without such obfuscation strategies, encrypted WebSocket traffic — though unreadable — remains interpretable.
In closing, developers, privacy researchers, and protocol designers must recognize that encryption is necessary but not sufficient. To build truly private real-time systems, we must move beyond content confidentiality and address the metadata and side-channel exposures that lie beneath the surface.
Mitigation Strategies: Reducing Metadata Leakage in Encrypted WebSocket Traffic
Abstract
While WebSocket traffic is often encrypted using TLS, it remains vulnerable to metadata-based side-channel attacks. Adversaries can infer behavioral patterns, session timing, and even the identity of applications through passive traffic analysis. This thesis explores four key mitigation strategies—message padding, batching and jitter, TLS fingerprint randomization, and connection multiplexing—that aim to reduce the efficacy of such analysis. We present practical implementations, limitations, and trade-offs associated with each method and advocate for layered, privacy-preserving protocol design.
1. Consider
The rise of WebSockets in real-time applications has improved interactivity but also exposed new privacy attack surfaces. Even when encrypted, WebSocket traffic leaks observable metadata—packet sizes, timing intervals, handshake properties, and connection counts—that can be exploited for fingerprinting, behavioral inference, and usage profiling.
This Idea focuses on mitigation rather than detection. The core question addressed is: How can we reduce the information available to adversaries from metadata alone?
2. Threat Model and Metadata Exposure
Passive attackers situated at any point between client and server can:
- Identify application behavior via timing and message frequency
- Infer keystrokes or user interaction states ("user typing", "user joined", etc.)
- Perform fingerprinting via TLS handshake characteristics
- Link separate sessions from the same user by recognizing traffic patterns
Thus, we must treat metadata as a leaky abstraction layer, requiring proactive obfuscation even in fully encrypted sessions.
3. Mitigation Techniques
3.1 Message Padding
Variable-sized messages create unique traffic signatures. Message padding involves standardizing the frame length of WebSocket messages to a fixed or randomly chosen size within a predefined envelope.
- Pro: Hides exact payload size, making compression side-channel and length-based analysis ineffective.
- Con: Increases bandwidth usage; not ideal for mobile/low-bandwidth scenarios.
Implementation: Client libraries can pad all outbound messages to, for example, 512 bytes or the next power of two above the actual message length.
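A minimal sketch of such padding, assuming a simple length-prefix framing chosen purely for illustration (not a standard), might look like this:

```python
# Pad every outbound message to the next power of two (minimum 512 bytes)
# before encryption, so TLS record sizes no longer track plaintext length.
# A 2-byte length prefix lets the receiver strip the padding; this framing
# is an assumption of the sketch, not part of any protocol.
import struct

MIN_ENVELOPE = 512

def pad(message: bytes) -> bytes:
    body = struct.pack(">H", len(message)) + message   # real length + payload
    size = MIN_ENVELOPE
    while size < len(body):
        size *= 2                                       # next power-of-two envelope
    return body + b"\x00" * (size - len(body))

def unpad(frame: bytes) -> bytes:
    (length,) = struct.unpack(">H", frame[:2])
    return frame[2 : 2 + length]

assert unpad(pad(b"hello")) == b"hello"
assert len(pad(b"hello")) == 512
```

The bandwidth trade-off noted above is visible directly: a 5-byte message costs 512 bytes on the wire.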
3.2 Batching and Jitter
Packet timing is often the most revealing metric. Delaying messages to create jitter and batching multiple events into a single transmission breaks correlation patterns.
- Pro: Prevents timing attacks, typing inference, and pattern recognition.
- Con: Increases latency, possibly degrading UX in real-time apps.
Implementation: Use an event queue with randomized intervals for dispatching messages (e.g., 100–300ms jitter windows).
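One possible shape for that event queue, assuming an asyncio-based client and a transport-provided send callable, is sketched below; the 100–300 ms window matches the example above.

```python
# Event-queue dispatcher: outgoing messages are queued and flushed in small
# batches after a randomized 100-300 ms delay, decoupling send timing from
# user actions. The send() callable is whatever the transport provides.
import asyncio
import random

class JitterDispatcher:
    def __init__(self, send, low: float = 0.1, high: float = 0.3):
        self.send = send
        self.low, self.high = low, high
        self.queue: asyncio.Queue = asyncio.Queue()

    def submit(self, message: bytes) -> None:
        self.queue.put_nowait(message)

    async def run(self) -> None:
        while True:
            batch = [await self.queue.get()]              # wait for at least one event
            await asyncio.sleep(random.uniform(self.low, self.high))
            while not self.queue.empty():
                batch.append(self.queue.get_nowait())      # batch anything that arrived
            await self.send(b"".join(batch))               # one transmission per window
```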
3.3 TLS Fingerprint Randomization
TLS fingerprints—determined by the ordering of cipher suites, extensions, and fields—can uniquely identify client libraries and platforms. Randomizing these fields on the client side prevents reliable fingerprinting.
- Pro: Reduces ability to correlate sessions or identify tools/libraries used.
- Con: Requires deeper control of the TLS stack, often unavailable in browsers.
Implementation: Modify or wrap lower-level TLS clients (e.g., via OpenSSL or rustls) to introduce randomized handshakes in custom apps.
3.4 Connection Reuse or Multiplexing
Opening multiple connections creates identifiable patterns. By reusing a single persistent connection for multiple data streams or users (in proxies or edge nodes), the visibility of unique flows is reduced.
- Pro: Aggregates traffic, preventing per-user or per-feature traffic separation.
- Con: More complex server-side logic; harder to debug.
Implementation: Use multiplexing protocols (e.g., WebSocket subprotocols or application-level routing) to share connections across users or components.
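A sketch of the application-level routing variant, assuming a simple JSON envelope with a channel id (the envelope format is invented here for illustration), could look like this:

```python
# Application-level multiplexing: several logical channels share one
# WebSocket connection by wrapping every message in a small envelope with a
# channel id, so separate features no longer produce separate flows.
import json

def wrap(channel: str, payload: dict) -> str:
    return json.dumps({"ch": channel, "body": payload})

def unwrap(frame: str) -> tuple[str, dict]:
    msg = json.loads(frame)
    return msg["ch"], msg["body"]

# e.g. chat, presence, and telemetry all ride the same connection:
frames = [
    wrap("chat", {"text": "hello"}),
    wrap("presence", {"state": "online"}),
    wrap("telemetry", {"cpu": 0.42}),
]
for f in frames:
    print(unwrap(f))
```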
4. Combined Strategy and Defense-in-Depth
No single strategy suffices. A layered mitigation approach—combining padding, jitter, fingerprint randomization, and multiplexing—provides defense-in-depth against multiple classes of metadata leakage.
The recommended implementation pipeline:
1. Pad all outbound messages to a fixed size
2. Introduce random batching and delay intervals
3. Obfuscate TLS fingerprints using low-level TLS stack configuration
4. Route data over multiplexed WebSocket connections via reverse proxies or edge routers
This creates a high-noise communication channel that significantly impairs passive traffic analysis.
5. Limitations and Future Work
Mitigations come with trade-offs: latency, bandwidth overhead, and implementation complexity. Additionally, some techniques (e.g., TLS randomization) are hard to apply in browser-based environments due to API constraints.
Future work includes:
- Standardizing privacy-enhancing WebSocket subprotocols
- Integrating these mitigations into mainstream libraries (e.g., Socket.IO, Phoenix)
- Using machine learning to auto-tune mitigation levels based on threat environment
6. Case In Point
Encrypted WebSocket traffic is not inherently private. Without explicit mitigation, metadata alone is sufficient for behavioral profiling and application fingerprinting. This thesis has outlined practical strategies for obfuscating traffic patterns at various protocol layers. Implementing these defenses can significantly improve user privacy in real-time systems and should become a standard part of secure WebSocket deployments.
-
@ 4c96d763:80c3ee30
2025-04-23 19:43:04Changes
William Casarin (28):
- dave: constrain power for now
- ci: bump ubuntu runner
- dave: initial note rendering
- note: fix from_hex crash on bad note ids
- dave: improve multi-note display
- dave: cleanly separate ui from logic
- dave: add a few docs
- dave: add readme
- dave: improve docs with ai
- docs: add some ui-related guides
- docs: remove test hallucination
- docs: add tokenator docs
- docs: add notedeck docs
- docs: add notedeck_columns readme
- docs: add notedeck_chrome docs
- docs: improve top-level docs
- dave: add new chat button
- dave: ensure system prompt is included when reset
- enostr: rename to_bech to npub
- name: display_name before name in NostrName
- ui: add note truncation
- ui: add ProfilePic::from_profile_or_default
- dave: add query rendering, fix author queries
- dave: return tool errors back to the ai
- dave: give present notes a proper tool response
- dave: more flexible env config
- dave: bubble note actions to chrome
- chrome: use actual columns noteaction executor
kernelkind (13):
- remove unnecessary `#[allow(dead_code)]`
- extend `ZapAction`
- UserAccount use builder pattern
- `Wallet` token parser shouldn't parse all
- move `WalletState` to UI
- add default zap
- introduce `ZapWallet`
- use `ZapWallet`
- propagate `DefaultZapState` to wallet ui
- wallet: helper method to get current wallet
- accounts: check if selected account has wallet
- ui: show default zap amount in wallet view
- use default zap amount for zap
pushed to notedeck:refs/heads/master
-
@ 000002de:c05780a7
2025-04-23 16:27:56Natalie Brunell had comedian T.J. Miller, known for Silicon Valley, on her show Coin Stories. I was kinda surprised. Not sure why, but I recognized his voice right away; that's how my brain works, I can forget a face but never a voice.
So which famous person is also secretly a bitcoiner? Not someone who just holds the ETF or whatever, but someone who actually gets it and has for a while.
I think it's pretty widely believed that Mark Zuckerberg is a bitcoiner, so that's the person I'd list. No clue beyond that. There have to be quite a few well-known people who also get bitcoin and just don't talk about it.
SO, who do you think is in the club?
originally posted at https://stacker.news/items/955179
-
@ 8f69ac99:4f92f5fd
2025-04-23 14:39:01We are told that inflation is necessary. But what if it is, in fact, the root of the economic dysfunction we face?
The mainstream belief is clear: to stimulate growth, governments must be able to devalue their currency, essentially creating money out of thin air. Supposedly, this encourages investment, boosts consumption, and makes it possible to respond to economic crises. This narrative has been repeated so many times that it has become almost an axiom, rarely questioned.
At the heart of this view is the fiat-Keynesian logic: a stable economy requires a central bank willing to manipulate the value of money to achieve certain policy goals. This approach, inspired by John Maynard Keynes, advocates state intervention as a way to stabilize the economy during recessions. In theory, investors and consumers benefit from artificial interest rates and greater purchasing power, a supposed win for everyone.
But there is another perspective: the sound money view. Rooted in the Austrian school and the principles of individual liberty, it holds that monetary manipulation is not merely unnecessary; it is harmful. A stable currency, not subject to arbitrary depreciation, is essential for fostering voluntary exchange, entrepreneurship, and genuine economic growth.
It is time to challenge this conventional wisdom. Over the coming chapters, we will examine the flawed assumptions underpinning the fiat-Keynesian logic and explore the benefits of a system based on sound money, such as Bitcoin. We will show why debasing the currency is morally questionable and economically harmful, and propose more ethical and effective alternatives.
This article (which comes in response to the "guru" Miguel Milhões) aims to illuminate the differences between these two opposing views and to present a sounder, fairer approach to economic policy, centered on personal freedom, individual responsibility, and the preservation of healthy financial institutions.
The Fiat Argument: Why They Say the Currency Must Be Devalued
This argument usually stems from a Keynesian and/or statist view of economics and rests on two main ideas: the incentive to invest and the need to respond to emergencies.
Incentive to Invest
According to proponents of the fiat system, if a currency such as gold or bitcoin appreciates over time, people will tend to "hoard" that wealth instead of investing it in productive businesses. The fear is that if holding money becomes more profitable than investing it, the economy will stagnate.
This idea rests on a simplistic view of human behaviour. In reality, people make financial decisions based on many factors. While it is true that appreciating assets are attractive, that does not mean investment disappears. On the contrary, the emergence of assets like bitcoin creates new opportunities for innovation and investment.
Historically, there was economic growth during periods of sound money, such as under the gold standard. A stable, predictable currency can encourage investment by giving confidence in future returns.
Responding to Emergencies
The second thesis is that governments need to print money quickly in times of crisis: pandemics, wars, or recessions. This capacity for intervention is seen as essential to "save" the economy.
According to Keynesian economists, a rapid injection of liquidity can stabilize the economy and prevent social collapse. However, this argument ignores several fundamental points:
- Monetary policy is no substitute for fiscal responsibility: the ability to print money does not automatically make economic stimulus effective.
- Inflation is a likely consequence: printing money can create inflationary pressures, reducing consumers' purchasing power and undermining the very stimulus intended. We are now reaping the "fruits" of the money printing carried out during the pandemic.
- Timing is critical: poorly timed interventions can make the situation worse.
We will see next why these arguments do not hold up.
Rebutting the Arguments
Investment Does Not Die Under Sound Money
The claim that sound money kills investment fails to grasp the link between saving and capital. In a sound system, saving is not mere hoarding; it is capital available to finance new projects. This leads to more sustainable growth, based on quality rather than speculation.
By contrast, the fiat system, with its cheap credit, generates bubbles and crashes, as we saw in 2008 or in the dot-com bubble. These examples illustrate the dangers of speculation enabled by artificial monetary policy.
In a sound money system, such as the one growing around Bitcoin, we see investment in mining, startups, education, and art. Investors remain active, but they make more responsible, longer-term choices.
Printing Money Does Not Solve Crises
The idea that printing money is essential in times of crisis rests on a dangerous illusion. The inflation that follows erodes purchasing power and hits the poorest hardest; it is a hidden form of taxation.
Moreover, decentralized solutions, such as markets, community networks, and savings, are often more effective. The response to COVID-19 illustrates this: large corporations were bailed out, while small businesses and families were left behind. The latter received an amuse-bouche, while the former ate the main course, the soup, the dessert, and took the leftovers home.
The truth is that printing money does not create value; it merely redistributes it unjustly. Real resilience comes from organized communities and a healthy economic foundation, not from political decrees.
Two Worlds: Fiat vs. Sound Money
| Dimension | Fiat-Keynesian System | Sound Money System |
|----------|--------------------------|-----------------------------|
| Investment | Driven by easy credit, feeding bubbles | Based on real savings and sustainable opportunities |
| Crisis response | Centralized, via money printing | Decentralized, based on savings and solidarity |
| Time preference | High: focus on immediate consumption | Low: focus on saving and the future |
| Wealth distribution | Favours those closest to power (Cantillon Effect) | The benefits of deflation are distributed more fairly |
| Moral foundation | Coercive and redistributive | Voluntary and based on individual liberty |
These contrasts show that the choice between the two systems goes far beyond economics; it is also an ethical question.
Consequences of Each System
The Fiat World
In a world dominated by the fiat system, cycles of euphoria and collapse are the norm. Inequality grows, with those closest to power profiting from inflation and money printing. Savings lose value, and people's financial autonomy shrinks.
As the state gains more control over the economy, citizens lose the ability to choose and become ever more dependent on government support. This dependence destroys the spirit of initiative and breeds conformity.
The result? Stagnation, social conflict, and loss of freedom.
The Sound Money World
With sound money, growth is based on real value. People save more, invest better, and become more financially independent. Communities become more resilient, and cooperation replaces dependence on the state.
Key benefits:
- Real savings: the currency does not lose value, and wealth can be built with stability.
- Decentralized resilience: mutual support between individuals and communities in hard times.
- Economic freedom: less political interference and more room for innovation and personal initiative.
Conclusion
Currency debasement is not a solution; it is a problem. Fiat systems are designed to transfer wealth and power opaquely, perpetuating injustice and instability.
Sound money, such as Bitcoin, on the other hand, offers a credible and ethical alternative. It promotes freedom, responsibility, and transparency. It prevents abuses of power and exposes the true costs of bad governance.
We do not need more inflation; we need more integrity.
It is time to take back control of our financial lives. To reject the systems that slowly impoverish us and to build a future in which money serves people, not political interests.
The future of money can and must be different. Together, we can create a fairer, freer, more resilient economy, where prosperity is shared and individual dignity is respected.
Photo by rc.xyz NFT gallery on Unsplash
-
@ 7d33ba57:1b82db35
2025-04-23 13:51:02You don’t need a fancy camera to dive into the miniature universe—your phone + a few tricks are all it takes! Macro photography with a smartphone can reveal incredible textures, patterns, insects, flowers, and everyday details most people miss. Here’s how to get the best out of it:
🔧 1. Use a Macro Lens Attachment (If Possible)
- Clip-on macro lenses are affordable and boost your phone's close-up power
- Look for 10x–20x lenses for best results
- Make sure it’s aligned perfectly with your phone’s lens
✨ 2. Get Really Close (but Not Too Close)
- Phones typically focus best at 2–5 cm in macro mode
- Slowly move your phone toward the subject until it comes into sharp focus
- If it blurs, back off slightly—tiny shifts matter a lot!
📸 3. Tap to Focus & Adjust Exposure
- Tap on your subject to lock focus
- Adjust brightness manually if your phone allows—slightly underexposed often looks better in macro
🌤️ 4. Use Natural Light or a Diffused Flash
- Soft natural light (like early morning or cloudy day) gives the best macro results
- Use white paper to bounce light or your hand to gently shade direct sun
- If using flash, try diffusing it with a tissue or tape for a softer effect
🧍♂️ 5. Steady Yourself
- Use both hands, brace against something, or use a tripod for stability
- Try your phone’s timer or remote shutter (via headphones or Bluetooth) to avoid shake
🧽 6. Clean Your Lens
- Macro shows everything—including dust and fingerprints
- Always wipe your lens gently before shooting
🌀 7. Explore Textures & Patterns
- Get creative with leaves, feathers, skin, fabrics, ice, fruit, rust, insects—anything with rich texture
- Look for symmetry, contrast, or repetition in tiny subjects
🧑🎨 8. Edit Smart
- Use apps like Snapseed, Lightroom Mobile, or your built-in editor
- Adjust sharpness, contrast, warmth, and cropping carefully
- Avoid over-sharpening—it can make things look unnatural
🎯 Bonus Tip: Try Manual Camera Apps
- Apps like Halide (iOS), ProCamera, or Camera+ 2 let you control ISO, shutter speed, and focus manually
- Great for getting extra precision
Macro phoneography is about patience and curiosity. Once you start noticing the tiny wonders around you, you’ll see the world a little differently.
-
@ 7d33ba57:1b82db35
2025-04-23 12:54:11Texel, the largest of the Dutch Wadden Islands, is a peaceful, windswept escape in the North Sea, known for its wide sandy beaches, unique landscapes, and laid-back island vibe. Just a short ferry ride from the mainland, Texel offers a perfect mix of nature, wildlife, local culture, and coastal relaxation.
🏖️ Top Things to Do on Texel
🚲 Cycle Across the Island
- With over 140 km of bike paths, cycling is the best way to explore
- Ride through dunes, forests, sheep pastures, and cute villages like De Koog and Oudeschild
🌾 Explore Dunes & Beaches
- Visit Dunes of Texel National Park—a coastal dream with rolling dunes, hiking trails, and wildflowers
- Relax on vast, quiet beaches perfect for swimming, kite flying, or just soaking up sea air
🐑 Texel Sheep & Local Farms
- Meet the island’s famous Texel sheep—known for their wool and adorable lambs
- Stop by local farms for cheese tastings, ice cream, or a farm tour
🐦 Spot Wildlife at De Slufter
- A unique salt marsh where tidal water flows in naturally
- Great for birdwatching—home to spoonbills, geese, and waders
- Beautiful walking trails with views over the dunes and out to sea
🐋 Ecomare Marine Center
- Learn about Texel’s marine life, see seals and seabirds, and explore interactive exhibits
- A hit for families and nature lovers alike
🍺 Taste Texel
- Try local specialties like Texels beer, lamb dishes, cranberry treats, and fresh seafood
- Cozy beach pavilions and harbor-side restaurants offer stunning sunset views
🚢 Getting to Texel
- Take the ferry from Den Helder (crossing time: ~20 minutes)
- Cars, bikes, and pedestrians all welcome
- Once on the island, biking or local buses make getting around easy
🏡 Where to Stay
- Choose from beachside hotels, charming B&Bs, cozy cabins, or campsites
- Many places offer serene views of dunes, fields, or sea
-
@ 6ad3e2a3:c90b7740
2025-04-23 12:31:54There’s an annoying trend on Twitter wherein the algorithm feeds you a lot of threads like “five keys to gaining wealth” or “10 mistakes to avoid in relationships” that list a bunch of hacks for some ostensibly desirable state of affairs which for you is presumably lacking. It’s not that the hacks are wrong per se, more that the medium is the message. Reading threads about hacks on social media is almost surely not the path toward whatever is promised by them.
. . .
I’ve tried a lot of health supplements over the years. These days creatine is trendy, and of course Vitamin D (which I still take.) I don’t know if this is helping me, though it surely helps me pass my blood tests with robust levels. The more I learn about health and nutrition, the less I’m sure of anything beyond a few basics. Yes, replacing processed food with real food, moving your body and getting some sun are almost certainly good, but it’s harder to know how particular interventions affect me.
Maybe some of them work in the short term and then lose their effect. Maybe some work better for particular phenotypes, but not for mine. Maybe my timing in the day is off, or I’m not combining them correctly for my lifestyle and circumstances. The body is a complex system, and complex systems are characterized by having unpredictable outputs given changes to initial conditions (inputs).
. . .
I started getting into Padel recently — a mini-tennis-like game where you can hit the ball off the back walls. I’d much rather chase a ball around for exercise than run or work out, and there’s a social aspect I enjoy. (By “social aspect”, I don’t really mean getting to know the people with whom I’m playing, but just the incidental interactions you get during the game, joking about it, for example, when you nearly impale someone at the net with a hard forehand.)
A few months ago, I was playing with some friends, and I was a little off. It’s embarrassing to play poorly at a sport, especially when (as is always the case in Padel) you have a doubles partner you’re letting down. Normally I’d be excoriating myself for my poor play, coaching myself to bend my knees more, not go for winners so much. But that day, I was tired — for some reason I hadn’t slept well — and I didn’t have the energy for much internal monologue. I just mishit a few balls, felt stupid about it and kept playing.
After a few games, my fortunes reversed. I was hitting the ball cleanly, smashing winners, rarely making errors. My partner and I started winning games and then sets. I was enjoying myself. In the midst of it I remember hitting an easy ball into the net and reflexively wanting to self-coach again. I wondered, “What tips did I give to right the ship when I had been playing poorly at the outset?” I racked my brain as I waited for the serve and realized, to my surprise, there had been none. The turnaround in my play was not due to self-coaching but its absence. I had started playing better because my mind had finally shut the fuck up for once.
Now when I’m not playing well, I resist, to the extent I’m capable, the urge to meddle. I intend to be more mind-less. Not so much telling the interior coach to shut up but not buying into the premise there is a problem to be solved at all. The coach isn’t just ignored, he’s fired. And he’s not just fired, his role was obsoleted.
You blew the point, you’re embarrassed about it and there’s nothing that needs to be done about it. Or that you started coaching yourself like a fool and made things worse. No matter how much you are doing the wrong thing nothing needs to be done about any of it whatsoever. There is always another ball coming across the net that needs to be struck until the game is over.
. . .
Most of the hacks, habits and heuristics we pick up to manage our lives only serve as yet more inputs in unfathomably complex systems whose outputs rarely track as we’d like. There are some basic ones that are now obvious to everyone like not injecting yourself with heroin (or mRNA boosters), but for the most part we just create more baggage for ourselves which justifies ever more hacks. It’s like taking medication for one problem that causes side effects, and then you need another medicine for that side effect, rinse and repeat, ad infinitum.
But this process can be reverse-engineered too. For every heuristic you drop, the problem it was put into place to solve re-emerges and has a chance to be observed. Observing won’t solve it, it’ll just bring it into the fold, give the complex system of which it is a part a chance to achieve an equilibrium with respect to it on its own.
You might still be embarrassed when you mishit the ball, but embarrassment is not a problem. And if embarrassment is not a problem, then mishitting a ball isn’t that bad. And if mishitting a ball isn’t that bad, then maybe you’re not worrying about what happens if you botch the next shot, instead fixing your attention on the ball. And so you disappear a little bit into the game, and it’s more fun as a result.
I honestly wish there were a hack for this — being more mindless — but I don’t know of any. And in any event, hack Substacks won’t get you any farther than hack Twitter threads.
-
@ c3b2802b:4850599c
2025-04-29 06:48:44Have you ever taken an intelligence test? It involves completing a variety of rational thinking tasks as precisely and correctly as possible within a limited time, if you would like to be certified with a high intelligence quotient. Our entire school education is geared predominantly toward developing this kind of cognitive potential. It is, however, less a matter of independent thinking. Mostly it is "supervised" thinking within prescribed conceptual categories and with prescribed question-and-answer schemas that must be memorized beforehand if one wants to pass.
And how do things stand with compassion for other creatures: for grasshoppers, ants, plants, fish, pigs, or cattle? And what about compassion for people who suffer severe physical and psychological harm, or die, as a result of acts of war? Were such questions addressed in your education? My memory brings up the dissection of frogs and fish in biology class, where questions to the teachers about the animals' suffering were dismissed as inappropriate. Did any of your teachers or professors ever encourage you to ask the survivors among our ancestors of the world wars about the suffering they endured?
If you resonate up to this point and feel similarly, that in our society our potential for compassion is cultivated and nurtured far less than our cognitive potential, then a path may open up for strengthening our emerging regional society.
When we open our sensory channels to the state of the creation we are moving through, great sources of energy become accessible. These can help us find our way back from a world disfigured by hypertrophied thinking into our own center. On the one hand, we can perceive how life pulses in and around us, and share in the magical process that we are privileged to experience in a special way in the spring that is awakening right now. On the other hand, we will perceive creatures in need or distress, and can thereby experience that it also does us good to act on their behalf. Anyone who has ever healed an animal or stood up for victims of violence knows this.
These immediate sources of energy give us strength, affirmation, and satisfaction, and every person can tap into this energy at any time of their life.
The only prerequisite is that we learn to interrupt the familiar "supervised" thought processes we have been programmed with. That we leave behind the verbal diarrhea and twisting of meaning by unfeeling or callous people and their lackeys, who currently want to incite us to acts of war. That we do not let ourselves be whipped up against other peoples, as in the 1930s, with words like "war-readiness" (Kriegstüchtigkeit). When we stop paying attention to such verbal derailments, we create time and space for compassion.
Compassion gives us ethical orientation, shows us ways to find our purpose, and gives us the strength to shape our lives according to our values. It answers our questions about WHERE TO and WHAT FOR.
Thinking for ourselves is a tool which, once we have found our center, can serve us in answering the question of HOW.
With like-minded, compassionate, and independently thinking people, it is possible to work on our regional society and bring functioning initiatives and communities based on trust into bloom. Some of them have been described in my blog posts since 2018. And one psychological connection seems remarkable: in such communities, the participants themselves also feel well. Scientific studies on this are available; you can find them in my list of publications.
This article was written with the Pareto client.
-
@ 7d33ba57:1b82db35
2025-04-23 11:40:24Perched on the northern coast of Poland, Gdańsk is a stunning port city with a unique blend of Hanseatic charm, maritime heritage, and resilience through centuries of dramatic history. With its colorful façades, cobbled streets, and strong cultural identity, Gdańsk is one of Poland’s most compelling cities—perfect for history buffs, architecture lovers, and coastal wanderers.
🏛️ What to See & Do in Gdańsk
🌈 Stroll Down Długi Targ (Long Market)
- The heart of Gdańsk’s Old Town, lined with beautifully restored colorful merchant houses
- Admire the Neptune Fountain, Artus Court, and the grand Main Town Hall
⚓ The Crane (Żuraw) & Motława River
- Gdańsk’s medieval port crane is an iconic symbol of its maritime past
- Walk along the Motława River promenade, with boats, cafés, and views of historic granaries
⛪ St. Mary’s Church (Bazylika Mariacka)
- One of the largest brick churches in the world
- Climb the tower for panoramic views of the city and harbor
🕊️ Learn Gdańsk’s Layers of History
🏰 Westerplatte
- The site where World War II began in 1939
- A powerful memorial and museum amid coastal nature
🛠️ European Solidarity Centre
- A striking modern museum dedicated to the Solidarity movement that helped end communism in Poland
- Insightful, moving, and highly interactive
🏖️ Relax by the Baltic Sea
- Head to Brzeźno Beach or nearby Sopot for golden sands, seaside promenades, and beach cafés
- In summer, the Baltic vibes are strong—swimming, sunsets, and pier strolls
🍽️ Tastes of Gdańsk
- Try pierogi, fresh Baltic fish, golden smoked cheese, and żurek soup
- Visit a local milk bar or enjoy a craft beer at one of Gdańsk’s buzzing breweries
- Don’t miss the amber jewelry shops—Gdańsk is known as the Amber Capital of the World
🚆 Getting There
- Easily reached by train or plane from Warsaw and other major European cities
- Compact city center—walkable and scenic
-
@ 7d33ba57:1b82db35
2025-04-23 10:28:49Lake Bled is straight out of a storybook—an emerald alpine lake with a tiny island crowned by a church, surrounded by forested hills and overlooked by a clifftop castle. Just an hour from Ljubljana, this Slovenian gem is perfect for romantic getaways, outdoor adventures, or a peaceful escape into nature.
🌊 Top Things to Do in Bled
🛶 Bled Island & Church of the Assumption
- Take a traditional pletna boat or rent a rowboat to reach the only natural island in Slovenia
- Ring the church bell and make a wish—it’s a local tradition!
- Enjoy serene lake views from the island’s stone steps
🏰 Bled Castle (Blejski Grad)
- Perched on a cliff 130 meters above the lake
- Explore the medieval halls, museum, and wine cellar
- The terrace views? Absolutely unforgettable—especially at sunset
🚶♂️ Walk or Cycle the Lakeside Path
- A 6 km flat path circles the lake—perfect for a leisurely stroll or bike ride
- Stop for lakeside cafés, photo ops, or a quick swim in summer
🌄 Outdoor Adventures Beyond the Lake
- Hike to Mala Osojnica Viewpoint for the most iconic panoramic view of Lake Bled
- Go paddleboarding, kayaking, or swimming in warmer months
- Nearby Vintgar Gorge offers a stunning wooden path through a narrow, turquoise canyon
🍰 Try the Famous Bled Cream Cake (Kremšnita)
- A must-try dessert with layers of vanilla custard, cream, and crispy pastry
- Best enjoyed with a coffee on a terrace overlooking the lake
🏡 Where to Stay
- Lakeside hotels, cozy guesthouses, or charming Alpine-style B&Bs
- Some even offer views of the lake, castle, or Triglav National Park
🚗 Getting There
- Around 1 hour from Ljubljana by car, bus, or train
- Easy to combine with stops like Lake Bohinj or Triglav National Park
-
@ 83279ad2:bd49240d
2025-04-29 05:53:52test
-
@ d34e832d:383f78d0
2025-04-22 23:35:05For Secure Inheritance Planning and Offline Signing
The setup described ensures that any 2 out of 3 participants (hardware wallets) must sign a transaction before it can be broadcast, offering robust protection against theft, accidental loss, or mismanagement of funds.
1. Preparation: Tools and Requirements
Hardware Required
- 3× COLDCARD Mk4 hardware wallets (or newer)
- 3× MicroSD cards (one per COLDCARD)
- MicroSD card reader (for your computer)
- Optional: USB data blocker (for safe COLDCARD connection)
Software Required
- Sparrow Wallet: Version 1.7.1 or later (download: https://sparrowwallet.com/)
- COLDCARD Firmware: Version 5.1.2 or later (update guide: https://coldcard.com/docs/upgrade)
Other Essentials
- Durable paper or steel backup tools for seed phrases
- Secure physical storage for backups and devices
- Optional: encrypted external storage for Sparrow wallet backups
Security Tip:
Always verify software signatures before installation. Keep your COLDCARDs air-gapped (no USB data transfer) whenever possible.
2. Initializing Each COLDCARD Wallet
- Power on each COLDCARD and choose “New Wallet”.
- Write down the 24-word seed phrase (DO NOT photograph or store digitally).
- Confirm the seed and choose a strong PIN code (both prefix and suffix).
- (Optional) Enable BIP39 Passphrase for additional entropy.
- Save an encrypted backup to the MicroSD card: Go to Advanced > Danger Zone > Backup.
- Repeat steps 1–5 for all three COLDCARDs.
Best Practice:
Store each seed phrase securely and in separate physical locations. Test wallet recovery before storing real funds.
3. Exporting XPUBs from COLDCARD
Each hardware wallet must export its extended public key (XPUB) for multisig setup:
- Insert MicroSD card into a COLDCARD.
- Navigate to: Settings > Multisig Wallets > Export XPUB.
- Select the appropriate derivation path. Recommended:
  - Native SegWit: `m/84'/0'/0'` (bc1 addresses)
  - Alternatively, Nested SegWit: `m/49'/0'/0'` (addresses start with 3)
- Repeat for the remaining COLDCARDs.
4. Creating the 2-of-3 Multisig Wallet in Sparrow
- Launch Sparrow Wallet.
- Click File > New Wallet and name your wallet.
- In the Keystore tab, choose Multisig.
- Select 2-of-3 as your multisig policy.
- For each cosigner:
- Choose Add cosigner > Import XPUB from file.
- Load XPUBs exported from each COLDCARD.
- Once all 3 cosigners are added, confirm the configuration.
- Click Apply, then Create Wallet.
- Sparrow will display a receive address. Fund the wallet using this.
Tip:
You can export the multisig policy (wallet descriptor) as a backup and share it among cosigners.
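For orientation, a 2-of-3 native SegWit descriptor of the kind Sparrow exports has roughly the following shape; the fingerprints and xpubs below are placeholders, not real keys, and your own export may differ in derivation details:

```
wsh(sortedmulti(2,
  [aaaaaaaa/84'/0'/0']xpubPlaceholderKey1/0/*,
  [bbbbbbbb/84'/0'/0']xpubPlaceholderKey2/0/*,
  [cccccccc/84'/0'/0']xpubPlaceholderKey3/0/*))
```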
5. Saving and Verifying the Wallet Configuration
- After creating the wallet, click Wallet > Export > Export Wallet File (.json).
- Save this file securely and distribute to all participants.
- Verify that the addresses match on each COLDCARD using the wallet descriptor file (optional but recommended).
6. Creating and Exporting a PSBT (Partially Signed Bitcoin Transaction)
- In Sparrow, click Send, fill out recipient details, and click Create Transaction.
- Click Finalize > Save PSBT to MicroSD card.
- The file will be saved as a `.psbt` file.
Note: No funds are moved until 2 signatures are added and the transaction is broadcast.
7. Signing the PSBT with COLDCARD (Offline)
- Insert the MicroSD with the PSBT into COLDCARD.
- From the main menu: Ready To Sign > Select PSBT File.
- Verify transaction details and approve.
- COLDCARD will create a signed version of the PSBT (`signed.psbt`).
- Repeat the signing process with a second COLDCARD (different signer).
8. Finalizing and Broadcasting the Transaction
- Load the signed PSBT files back into Sparrow.
- Sparrow will detect two valid signatures.
- Click Finalize Transaction > Broadcast.
- Your Bitcoin transaction will be sent to the network.
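If you prefer to combine and broadcast with your own Bitcoin Core node instead of Sparrow, a sketch of that final step is shown below. It assumes bitcoin-cli is available and the two signed PSBT files from the COLDCARDs are on disk; the filenames are placeholders.

```python
# Optional alternative to broadcasting from Sparrow: combine the two signed
# PSBTs with a local Bitcoin Core node, finalize, and broadcast the result.
# Assumes bitcoin-cli is installed and the node is synced; file names are
# placeholders for the PSBTs produced by the two COLDCARDs.
import base64
import json
import subprocess

def cli(*args: str) -> str:
    return subprocess.check_output(["bitcoin-cli", *args], text=True).strip()

def read_psbt(path: str) -> str:
    # COLDCARD writes binary PSBT files; Bitcoin Core expects base64
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

combined = cli("combinepsbt", json.dumps([read_psbt("signed-cc1.psbt"),
                                           read_psbt("signed-cc2.psbt")]))
result = json.loads(cli("finalizepsbt", combined))
if result["complete"]:
    print("txid:", cli("sendrawtransaction", result["hex"]))
else:
    print("not enough signatures yet")
```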
9. Inheritance Planning with Multisig
Multisig is ideal for inheritance scenarios:
Example Inheritance Setup
- Signer 1: Yourself (active user)
- Signer 2: Trusted family member or executor
- Signer 3: Lawyer, notary, or secure backup
Only 2 signatures are needed. If one party loses access or passes away, the other two can recover the funds.
Best Practices for Inheritance
- Store each seed phrase in separate, tamper-proof, waterproof containers.
- Record clear instructions for heirs (without compromising seed security).
- Periodically test recovery with cosigners.
- Consider time-locked wallets or third-party escrow if needed.
Security Tips and Warnings
- Never store seed phrases digitally or online.
- Always verify addresses and signatures on the COLDCARD screen.
- Use Sparrow only on secure, malware-free computers.
- Physically secure your COLDCARDs from unauthorized access.
- Practice recovery procedures before storing real value.
Consider
A 2-of-3 multisignature wallet using COLDCARD and Sparrow Wallet offers a highly secure, flexible, and transparent Bitcoin custody model. Whether for inheritance planning or high-security storage, it mitigates risks associated with single points of failure while maintaining usability and privacy.
By following this guide, Bitcoin users can significantly increase the resilience of their holdings while enabling thoughtful succession strategies.
-
@ d34e832d:383f78d0
2025-04-22 22:48:30What is pfSense?
pfSense is a free, open-source firewall and router software distribution based on FreeBSD. It includes a web-based GUI and supports advanced features like:
- Stateful packet inspection (SPI)
- Virtual Private Network (VPN) support (OpenVPN, WireGuard, IPSec)
- Dynamic and static routing
- Traffic shaping and QoS
- Load balancing and failover
- VLANs and captive portals
- Intrusion Detection/Prevention (Snort, Suricata)
- DNS, DHCP, and more
Use Cases
- Home networks with multiple devices
- Small to medium businesses
- Remote work VPN gateway
- IoT segmentation
- Homelab firewalls
- Wi-Fi network segmentation
2. Essential Hardware Components
When building a pfSense router, you must match your hardware to your use case. The system needs at least two network interfaces—one for WAN, one for LAN.
Core Components
| Component | Requirement | Budget-Friendly Example |
|---------------|------------------------------------|----------------------------------------------|
| CPU | Dual-core 64-bit x86 (AES-NI support recommended) | Intel Celeron J4105, AMD GX-412HC, or Intel i3 6100T |
| Motherboard | Mini-ITX or Micro-ATX with support for selected CPU | ASRock J4105-ITX (includes CPU) |
| RAM | Minimum 4GB (8GB preferred) | Crucial 4GB DDR4 |
| Storage | 16GB+ SSD or mSATA/NVMe (for longevity and speed) | Kingston A400 120GB SSD |
| NICs | At least two Intel gigabit ports (Intel NICs preferred) | Intel PRO/1000 Dual-Port PCIe or onboard |
| Power Supply | 80+ Bronze rated or PicoPSU for SBCs | EVGA 400W or PicoPSU 90W |
| Case | Depends on form factor | Mini-ITX case (e.g., InWin Chopin) |
| Cooling | Passive or low-noise | Stock heatsink or case fan |
3. Recommended Affordable Hardware Builds
Build 1: Super Budget (Fanless)
- Motherboard/CPU: ASRock J4105-ITX (quad-core, passive cooling, AES-NI)
- RAM: 4GB DDR4 SO-DIMM
- Storage: 120GB SATA SSD
- NICs: 1 onboard + 1 PCIe Intel Dual Port NIC
- Power Supply: PicoPSU with 60W adapter
- Case: Mini-ITX fanless enclosure
- Estimated Cost: ~$150–180
Build 2: Performance on a Budget
- CPU: Intel i3-6100T (low power, AES-NI support)
- Motherboard: ASUS H110M-A/M.2 (Micro-ATX)
- RAM: 8GB DDR4
- Storage: 120GB SSD
- NICs: 2-port Intel PCIe NIC
- Case: Compact ATX case
- Power Supply: 400W Bronze-rated PSU
- Estimated Cost: ~$200–250
4. Assembling the Hardware
Step-by-Step Instructions
- Prepare the Workspace:
- Anti-static mat or surface
- Philips screwdriver
- Install CPU (if required):
- Align and seat CPU into socket
- Apply thermal paste and attach cooler
- Insert RAM into DIMM slots
- Install SSD and connect to SATA port
- Install NIC into PCIe slot
- Connect power supply to motherboard, SSD
- Place system in case and secure all components
- Plug in power and monitor
5. Installing pfSense Software
What You'll Need
- A 1GB+ USB flash drive
- A separate computer with internet access
Step-by-Step Guide
- Download pfSense ISO:
- Visit: https://www.pfsense.org/download/
- Choose AMD64, USB Memstick Installer, and mirror site
- Create Bootable USB:
- Use tools like balenaEtcher or Rufus to write ISO to USB
- Boot the Router from USB:
- Enter BIOS → Set USB as primary boot
- Save and reboot
- Install pfSense:
- Accept defaults during installation
- Choose ZFS or UFS (UFS is simpler for small SSDs)
- Install to SSD, remove USB post-installation
6. Basic Configuration Settings
After the initial boot, pfSense will assign:
- WAN to one interface (via DHCP)
- LAN to another (default IP: 192.168.1.1)
Access WebGUI
- Connect a PC to LAN port
- Open browser → Navigate to `http://192.168.1.1`
- Default login: admin / pfsense
Initial Setup Wizard
- Change admin password
- Set hostname and DNS
- Set time zone
- Confirm WAN/LAN settings
- Enable DHCP server for LAN
- Optional: Enable SSH
7. Tips and Best Practices
Security Best Practices
- Change default password immediately
- Block all inbound traffic by default
- Enable DNS over TLS (with Unbound)
- Regularly update pfSense firmware and packages
- Use strong encryption for VPNs
- Limit admin access to specific IPs
Performance Optimization
- Use Intel NICs for reliable throughput
- Offload DNS, VPN, and DHCP to dedicated packages
- Disable unnecessary services to reduce CPU load
- Monitor system logs for errors and misuse
- Enable traffic shaping if managing VoIP or streaming
Useful Add-ons
- pfBlockerNG: Ad-blocking and geo-blocking
- Suricata: Intrusion Detection System
- OpenVPN/WireGuard: VPN server setup
- Zabbix Agent: External monitoring
8. Consider
With a modest investment and basic technical skills, anyone can build a powerful, flexible, and secure pfSense router. Choosing the right hardware for your needs ensures a smooth experience without overpaying or underbuilding. Whether you're enhancing your home network, setting up a secure remote office, or learning network administration, a custom pfSense router is a versatile, long-term solution.
Appendix: Example Hardware Component List
| Component | Item | Price (Approx.) |
|------------------|--------------------------|------------------|
| Motherboard/CPU | ASRock J4105-ITX | $90 |
| RAM | Crucial 4GB DDR4 | $15 |
| Storage | Kingston A400 120GB SSD | $15 |
| NIC | Intel PRO/1000 Dual PCIe | $20 |
| Case | Mini-ITX InWin Chopin | $40 |
| Power Supply | PicoPSU 60W + Adapter | $25 |
| Total | | ~$205 |
-
@ d34e832d:383f78d0
2025-04-22 21:32:40The Domain Name System (DNS) is a foundational component of the internet. It translates human-readable domain names into IP addresses, enabling the functionality of websites, email, and services. However, traditional DNS is inherently insecure—queries are typically sent in plaintext, making them vulnerable to interception, spoofing, and censorship.
DNSCrypt is a protocol designed to authenticate communications between a DNS client and a DNS resolver. By encrypting DNS traffic and validating the source of responses, it thwarts man-in-the-middle attacks and DNS poisoning. Despite its security advantages, widespread adoption remains limited due to usability and deployment complexity.
This idea introduces an affordable, lightweight DNSCrypt proxy server capable of providing secure DNS resolution in both home and enterprise environments. Our goal is to democratize secure DNS through low-cost infrastructure and transparent architecture.
2. Background
2.1 Traditional DNS Vulnerabilities
- Lack of Encryption: DNS queries are typically unencrypted (UDP port 53), exposing user activity.
- Spoofing and Cache Poisoning: Attackers can forge DNS responses to redirect users to malicious websites.
- Censorship: Governments and ISPs can block or alter DNS responses to control access.
2.2 Introduction to DNSCrypt
DNSCrypt mitigates these problems by:
- Encrypting DNS queries using X25519 + XSalsa20-Poly1305 or X25519 + ChaCha20-Poly1305
- Authenticating resolvers via public key infrastructure (PKI)
- Supporting relay servers and anonymized DNS, enhancing metadata protection
2.3 Current Landscape
DNSCrypt proxies are available in commercial routers and services (e.g., Cloudflare DNS over HTTPS), but full control remains in the hands of centralized entities. Additionally, hardware requirements and setup complexity can be barriers to entry.
3. System Architecture
3.1 Overview
Our system is designed around the following components:
- Client Devices: Use DNSCrypt-enabled stub resolvers (e.g., dnscrypt-proxy)
- DNSCrypt Proxy Server: Accepts DNSCrypt queries, decrypts and validates them, then forwards to recursive resolvers (e.g., Unbound)
- Recursive Resolver (Optional): Provides DNS resolution without reliance on upstream services
- Relay Support: Adds anonymization via DNSCrypt relays
3.2 Protocols and Technologies
- DNSCrypt v2: Core encrypted DNS protocol
- X25519 Key Exchange: Lightweight elliptic curve cryptography
- Poly1305 AEAD Encryption: Fast and secure authenticated encryption
- UDP/TCP Fallback: Supports both transport protocols to bypass filtering
- DoH Fallback: Optional integration with DNS over HTTPS
3.3 Hardware Configuration
- Platform: Raspberry Pi 4B or x86 mini-PC (e.g., Lenovo M710q)
- Cost: Under $75 total (device + SD card or SSD)
- Operating System: Debian 12 or Ubuntu Server 24.04
- Memory Footprint: <100MB RAM idle
- Power Consumption: ~3-5W idle
4. Design Considerations
4.1 Affordability
- Hardware Sourcing: Use refurbished or SBCs to cut costs
- Software Stack: Entirely open source (dnscrypt-proxy, Unbound)
- No Licensing Fees: FOSS-friendly deployment for communities
4.2 Security
- Ephemeral Key Pairs: New keypairs every session prevent replay attacks
- Public Key Verification: Resolver keys are pre-published and verified
- No Logging: DNSCrypt proxies are configured to avoid retaining user metadata
- Anonymization Support: With relay chaining for metadata privacy
4.3 Maintainability
- Containerization (Optional): Docker-compatible setup for simple updates
- Remote Management: Secure shell access with fail2ban and SSH keys
- Auto-Updating Scripts: Systemd timers to refresh certificates and relay lists
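One possible shape for the auto-updating scripts mentioned above is sketched below: a daily systemd timer that restarts the proxy so certificates and relay lists are re-fetched. The unit names and the `dnscrypt-proxy.service` name are assumptions based on the Debian package, which already refreshes its source lists on its own, so this is optional.

```bash
# Sketch only: oneshot service plus daily timer (names are assumptions).
sudo tee /etc/systemd/system/dnscrypt-refresh.service > /dev/null <<'EOF'
[Unit]
Description=Restart dnscrypt-proxy to refresh certificates and relay lists

[Service]
Type=oneshot
ExecStart=/bin/systemctl restart dnscrypt-proxy.service
EOF

sudo tee /etc/systemd/system/dnscrypt-refresh.timer > /dev/null <<'EOF'
[Unit]
Description=Daily dnscrypt-proxy refresh

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now dnscrypt-refresh.timer
```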
5. Implementation
5.1 Installation Steps
- Install OS and dependencies:
  ```bash
  sudo apt update && sudo apt install dnscrypt-proxy unbound
  ```
- Configure `dnscrypt-proxy.toml` (a configuration sketch follows this list):
  - Define the listening port, relay list, and trusted resolvers
  - Enable Anonymized DNS and fallback to DoH
- Configure Unbound (optional):
  - Run it as the recursive backend
- Firewall hardening:
  - Allow only the DNSCrypt port (default: 443 or 5353)
  - Block all inbound traffic except SSH (optional via Tailscale)
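The following is a minimal configuration sketch for the steps above. The resolver and relay names are placeholders, the file path assumes the Debian package layout, and the ufw rules are one assumed way to implement the firewall hardening step; the stock dnscrypt-proxy.toml shipped with the package remains the authoritative template.

```bash
# Sketch: key dnscrypt-proxy.toml settings (placeholder resolver/relay names).
sudo tee /etc/dnscrypt-proxy/dnscrypt-proxy.toml > /dev/null <<'EOF'
listen_addresses = ['0.0.0.0:5353']     # port the proxy listens on for clients
server_names = ['example-resolver']     # placeholder: pick names from the public resolver list
require_dnssec = true
require_nolog = true
dnscrypt_servers = true
doh_servers = true                      # allow DoH fallback

[anonymized_dns]
# Route queries to the resolver through a relay (placeholder names).
routes = [
  { server_name = 'example-resolver', via = ['anon-example-relay'] }
]
EOF

# Basic firewall hardening with ufw (assumed; adapt to nftables/iptables as preferred).
sudo ufw default deny incoming
sudo ufw allow ssh                      # optional; omit if management goes over Tailscale only
sudo ufw allow 5353/udp                 # DNSCrypt listening port
sudo ufw allow 5353/tcp
sudo ufw enable
```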
5.2 Challenges
- Relay Performance Variability: Some relays introduce latency; solution: geo-filtering
- Certificate Refresh: Mitigated with daily cron jobs
- IP Rate-Limiting: Mitigated with DNS load balancing
6. Evaluation
6.1 Performance Benchmarks
- Query Resolution Time (mean):
- Local resolver: 12–18ms
- Upstream via DoH: 25–35ms
- Concurrent Users Supported: 100+ without degradation
- Memory Usage: ~60MB (dnscrypt-proxy + Unbound)
- CPU Load: <5% idle on ARM Cortex-A72
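The latency figures above can be spot-checked with a short sketch; it assumes the `dig` utility (from dnsutils) is installed and that the proxy listens on 127.0.0.1:5353 as in the configuration sketch earlier.

```bash
# Measure resolution time through the local DNSCrypt proxy (5 samples).
for i in $(seq 1 5); do
  dig @127.0.0.1 -p 5353 example.com +noall +stats | grep "Query time"
done
```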
6.2 Security Audits
- Verified with dnsleaktest.com and `tcpdump`
- No plaintext DNS observed over the interface
- Verified resolver keys via DNSCrypt community registry
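The plaintext check described above can be repeated with a single hedged command; the interface name eth0 is an assumption, so substitute the actual interface.

```bash
# Watch for unencrypted DNS leaving the box; expect no output while clients browse.
sudo tcpdump -ni eth0 'udp port 53 or tcp port 53'
```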
7. Use Cases
7.1 Personal/Home Use
- Secure DNS for all home devices via router or Pi-hole integration
7.2 Educational Institutions
- Provide students with censorship-free DNS in oppressive environments
7.3 Community Mesh Networks
- Integrate DNSCrypt into decentralized networks (e.g., Nostr over Mesh)
7.4 Business VPNs
- Secure internal DNS without relying on third-party resolvers
8. Consider
This idea has presented a practical, affordable approach to deploying a secure DNSCrypt proxy server. By leveraging open-source tools, minimalist hardware, and careful design choices, it is possible to democratize access to encrypted DNS. Our implementation meets the growing need for privacy-preserving infrastructure without introducing prohibitive costs.
We demonstrated that even modest devices can sustain dozens of encrypted DNS sessions concurrently while maintaining low latency. Beyond privacy, this system empowers individuals and communities to control their own DNS without corporate intermediaries.
9. Future Work
- Relay Discovery Automation: Dynamic quality-of-service scoring for relays
- Web GUI for Management: Simplified frontend for non-technical users
- IPv6 and Tor Integration: Expanding availability and censorship resistance
- Federated Resolver Registry: Trust-minimized alternative to current resolver key lists
References
- DNSCrypt Protocol Specification v2 – https://dnscrypt.info/protocol
- dnscrypt-proxy GitHub Repository – https://github.com/DNSCrypt/dnscrypt-proxy
- Unbound Recursive Resolver – https://nlnetlabs.nl/projects/unbound/about/
- DNS Security Extensions (DNSSEC) – IETF RFCs 4033, 4034, 4035
- Bernstein, D.J. – Cryptographic Protocols using Curve25519 and Poly1305
- DNS over HTTPS (DoH) – RFC 8484
-
@ d34e832d:383f78d0
2025-04-22 21:14:46Minecraft remains one of the most popular sandbox games in the world. For players who wish to host private or community-based servers, monthly hosting fees can quickly add up. Furthermore, setting up a server from scratch often requires technical knowledge in networking, system administration, and Linux.
This idea explores a do-it-yourself (DIY) method for deploying a low-cost Minecraft server using common secondhand hardware and a simple software stack, with a focus on energy efficiency, ease of use, and full control over the server environment.
2. Objective
To build and deploy a dedicated Minecraft server that:
- Costs less than $75 in total
- Consumes minimal electricity (<10W idle)
- Is manageable via a graphical user interface (GUI)
- Supports full server management including backups, restarts, and plugin control
- Requires no port forwarding or complex network configuration
- Delivers performance suitable for a small-to-medium number of concurrent players
3. Hardware Overview
3.1 Lenovo M710Q Mini-PC (~$55 used)
- Intel Core i5 (6th/7th Gen)
- 8GB DDR4 RAM
- Compact size and low power usage
- Widely available refurbished
3.2 ID Sonics 512GB NVMe SSD (~$20)
- Fast storage with sufficient capacity for multiple Minecraft server instances
- SSDs reduce world loading lag and improve backup performance
Total Hardware Cost: ~$75
4. Software Stack
4.1 Ubuntu Server 24.04
- Stable, secure, and efficient operating system
- Headless installation, ideal for server use
- Supports automated updates and system management via CLI
4.2 CasaOS
- A lightweight operating system layer and GUI on top of Ubuntu
- Built for managing Docker containers with a clean web interface
- Allows app store-like deployment of various services
4.3 Crafty Controller (via Docker)
- Web-based server manager for Minecraft
- Features include:
- Automatic backups and restore
- Scheduled server restarts
- Plugin management
- Server import/export
- Server logs and console access
5. Network and Remote Access
5.1 PlayIt.gg Integration
PlayIt.gg creates a secure tunnel to your server via a relay node, removing the need for traditional port forwarding.
Benefits:
- Works even behind Carrier-Grade NAT (common on mobile or fiber ISPs)
- Ideal for users with no access to router settings
- Ensures privacy by hiding the IP address from public exposure
6. Setup Process Summary
- Install Ubuntu Server 24.04 on the M710Q
- Install CasaOS via script provided by the project
- Use CasaOS to deploy Crafty Controller in a Docker container
- Configure Minecraft server inside Crafty (Vanilla, Paper, Spigot, etc.)
- Integrate PlayIt.gg to expose the server to friends
- Access Crafty via browser for daily management
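A condensed, hedged sketch of steps 2 and 3 above follows. The CasaOS installer URL and the Crafty Controller image name, ports, and volume paths are assumptions drawn from those projects' public documentation and should be verified before use; CasaOS can also deploy the same container from its web interface.

```bash
# Install CasaOS on top of Ubuntu Server (assumed one-line installer).
# CasaOS installs Docker if it is not already present, per its installer docs.
curl -fsSL https://get.casaos.io | sudo bash

# Deploy Crafty Controller 4 as a container (assumed image; 8443 = web UI,
# 25500-25600 = Minecraft server port range).
sudo docker run -d \
  --name crafty \
  --restart unless-stopped \
  -p 8443:8443 \
  -p 25500-25600:25500-25600 \
  -v /srv/crafty/backups:/crafty/backups \
  -v /srv/crafty/servers:/crafty/servers \
  registry.gitlab.com/crafty-controller/crafty-4:latest
```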
7. Power Consumption and Performance
- Idle Power Draw: ~7.5W
- Load Power Draw (2–5 players): ~15W
- M710Q fan runs quiet and rarely under load
- Performance sufficient for:
- Vanilla or optimized Paper server
- Up to 10 concurrent players with light mods
8. Cost Analysis vs Hosted Services
| Solution | Monthly Cost | Annual Cost | Control Level | Mods Support |
|-----------------------|--------------|-------------|----------------|---------------|
| Commercial Hosting | $5–$15 | $60–$180 | Limited | Yes |
| This Build (One-Time) | $75 | $0 | Full | Yes |
Return on Investment (ROI):
Break-even is reached in roughly 5 to 15 months, depending on which hosting tier ($5–$15/month) the build replaces.
9. Advantages
- No Subscription: Single upfront investment
- Local Control: Full access to server files and environment
- Privacy Respecting: No third-party data mining
- Modular: Can add mods, backups, maps with full access
- Low Energy Use: Ideal for 24/7 uptime
10. Limitations
- Not Ideal for >20 players: CPU and RAM constraints
- Local Hardware Dependency: Physical failure risk
- Requires Basic Setup Time: CLI familiarity useful but not required
11. Future Enhancements
- Add Dynmap with reverse proxy and TLS via CasaOS
- Integrate Nextcloud for managing world backups
- Use Watchtower for automated container updates
- Schedule daily email logs using system cron
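For the Watchtower and cron items above, a hedged sketch is shown below; the Watchtower image name follows that project's documentation, and the mail step assumes a locally configured MTA plus the mail command from mailutils.

```bash
# Auto-update running containers daily with Watchtower (assumed image name).
sudo docker run -d \
  --name watchtower \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup --interval 86400

# Daily log summary by email via cron (address and mail setup are assumptions).
( crontab -l 2>/dev/null; echo '0 7 * * * journalctl --since "1 day ago" -u docker | mail -s "Daily server log" admin@example.com' ) | crontab -
```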
12. Consider
This idea presents a practical and sustainable approach to self-hosting Minecraft servers using open-source software and refurbished hardware. With a modest upfront cost and minimal maintenance, users can enjoy full control over their game worlds without recurring fees or technical hassle. This method democratizes game hosting and aligns well with educational environments, small communities, and privacy-conscious users.