-
@ 42342239:1d80db24
2024-10-23 12:28:41
TL;DR: The mathematics of trust says that news reporting will fall flat when the population becomes suspicious of the media. Which is now the case for growing subgroups in the U.S. as well as in Sweden.
A recent wedding celebration for Sweden Democrats leader Jimmie Åkesson resulted in controversy, as one of the guests in attendance was reportedly linked to organized crime. Following this “wedding scandal”, a columnist noted that the party’s voters had not been significantly affected. Instead of a decrease in trust - which one might have expected - 10% of them stated that their confidence in the party had actually increased. “Over the years, the Sweden Democrats have surprisingly emerged unscathed from their numerous scandals,” she wrote. But is this really so surprising?
In mathematics, a probability is expressed as the likelihood of something occurring given one or more conditions. For example, one can express a probability as “the likelihood that a certain stock will rise in price, given that the company has presented a positive quarterly report.” In this case, the company’s quarterly report is the basis for the assessment. If we add more information, such as the company’s strong market position and a large order from an important customer, the probability increases further. The more information we have to go on, the more precise we can be in our assessment.
From this perspective, the Sweden Democrats’ “numerous scandals” should lead to a more negative assessment of the party. But this perspective omits something important.
A couple of years ago, the term “gaslighting” was chosen as the word of the year in the US. The term comes from a 1944 film of the same name and refers to a type of psychological manipulation, as applied to the lovely Ingrid Bergman. Today, the term is used in politics, for example, when a large group of people is misled to achieve political goals. The techniques used can be very effective but have a limitation. When the target becomes aware of what is happening, everything changes. Then the target becomes vigilant and views all new information with great suspicion.
The Sweden Democrats’ “numerous scandals” should lead to a more negative assessment of the party. But if SD voters, to a greater extent than others, believe that the source of the information is unreliable - for example, that it omits relevant information or adds irrelevant information - the conclusion is different. The Swedish SOM survey shows that these voters have lower trust in journalists and also lower confidence in the objectivity of the news. Like a victim of gaslighting, they view negative reporting with suspicion. The arguments can no longer get through. A kind of immunity has developed.
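The mechanism can be made concrete with Bayes' theorem (a stylized sketch; the numbers below are illustrative assumptions, not survey data). Let H be the hypothesis that the party deserves lower confidence, and R the event that negative reporting appears:

```latex
P(H \mid R) = \frac{P(R \mid H)\,P(H)}{P(R \mid H)\,P(H) + P(R \mid \neg H)\,P(\neg H)}

% Voter who trusts the media:  P(H)=0.5, P(R|H)=0.9, P(R|not H)=0.20
%   P(H|R) = 0.45 / (0.45 + 0.100) = 0.82 (approx.)
% Voter who believes the source reports negatively regardless:
%                               P(H)=0.5, P(R|H)=0.9, P(R|not H)=0.85
%   P(H|R) = 0.45 / (0.45 + 0.425) = 0.51 (approx.)
```

For the trusting voter, the scandal is informative and confidence drops sharply. For the distrusting voter, a negative report is almost equally expected whether or not the party deserves it, so the same report barely moves the needle. That is the immunity described above, in numbers.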
In the US, trust in the media is at an all-time low. So when American media write that “Trump speaks like Hitler, Stalin, and Mussolini,” claim that his idea of deporting illegal immigrants would cost hundreds of billions of dollars, or get worked up over his soda consumption, the consequence is likely to be similar to what we see here at home.
The mathematics of trust says that reporting will fall flat when the population becomes suspicious of the media. Or as the Swedish columnist put it: like water off a duck’s back.
Cover image: Ingrid Bergman 1946. RKO Radio Pictures - eBay, Public Domain, Wikimedia Commons
-
@ 42342239:1d80db24
2024-10-22 07:57:17
It was recently reported that Sweden's Minister for Culture, Parisa Liljestrand, wishes to put an end to anonymous accounts on social media. The issue has been at the forefront following revelations of political parties using pseudonymous accounts on social media platforms earlier this year.
The political importance of the internet is also well-known. As early as 2015, Roberta Alenius, who was then press secretary for Fredrik Reinfeldt (Moderate Party), spoke openly about her experiences with the Social Democrats' and Moderates' internet activists: Twitter, she said, actually set the agenda for journalism at the time.
The Minister for Culture now claims, amongst other things, that anonymous accounts pose a threat to democracy, that they deceive people, and that they can be used to mislead, etc. It is indeed easy to find arguments against anonymity; perhaps the most common one is the 'nothing to hide, nothing to fear' argument.
One of the many problems with this argument is that it assumes that abuse of power never occurs. History has much to teach us here. Authorities can sometimes act in ways that are arbitrary, discriminatory, or even oppressive - at least when judged in hindsight. Take, for instance, the struggles of the homosexual community, the courageous dissidents who defied communist regimes, or the women of the suffragette movement who fought for their right to vote.
It was difficult for homosexuals to be open about their sexuality in Sweden in the 1970s. Many risked losing their jobs, being ostracised, or harassed. Anonymity was therefore a necessity for many. Homosexuality was actually classified as a mental illness in Sweden until 1979.
A couple of decades earlier, dissidents in Europe's communist regimes used pseudonyms when publishing samizdat magazines. The Czech author and dissident Václav Havel, who later became President of the Czech Republic, used a pseudonym when publishing his texts. The same was true of the Russian author and Nobel laureate Alexander Solzhenitsyn. Indeed, in Central and Eastern Europe, anonymity was of the utmost importance.
One hundred years ago, women all over the world fought for the right to vote and to be treated as equals. Many were open in their struggle, but for others, anonymity was a necessity as they risked being socially ostracised, losing their jobs, or even being arrested.
Full transparency is not always possible or desirable. Anonymity can promote creativity and innovation as it gives people the opportunity to experiment and try out new ideas without fear of being judged or criticised. This applies not only to individuals but also to our society, in terms of ideas, laws, norms, and culture.
It is also a strange paradox that those who wish to limit freedom of speech and abolish anonymity simultaneously claim to be concerned about the possible return of fascism. The solutions they advocate are, in fact, precisely what would make it easier for a tyrannical regime to maintain its power. To advocate for the abolition of anonymity, one must also be of the (absurd) opinion that the development of history has now reached its definitive end.
-
@ 42342239:1d80db24
2024-09-26 07:57:04
The boiling frog is a simple tale that illustrates the danger of gradual change: if you put a frog in boiling water, it will quickly jump out to escape the heat. But if you place a frog in warm water and gradually increase the temperature, it won't notice the change and will eventually cook itself. Might the decline in cash usage be construed as an example of this tale?
As long as individuals can freely transact with each other and conduct purchases and sales without intermediaries[^1] such as with cash, our freedoms and rights remain secure from potential threats posed by the payment system. However, as we have seen in several countries such as Sweden over the past 15 years, the use of cash and the amount of banknotes and coins in circulation have decreased. All to the benefit of various intermediated[^1] electronic alternatives.
The reasons for this trend include:
- The costs associated with cash usage have been increasing.
- Increased regulatory burdens due to stricter anti-money laundering regulations.
- Closed bank branches and fewer ATMs.
- The Riksbank's aggressive banknote switches, which resulted in notes no longer being recognized.
Market forces or "market forces"?
Some may argue that the "de-cashing" of society is a consequence of market forces. But does this hold true? Leading economists at times recommend interventions with the express purpose of misleading the public, such as proposing measures that are "opaque to most voters."
Such reasoning, and even recommendations, can be found in a 2017 working paper on de-cashing from the International Monetary Fund (IMF). IMF economist Alexei Kireyev, formerly a professor at MGIMO (an institute associated with the Soviet Union's KGB) and economic adviser to Mikhail Gorbachev from 1989 to 1991, wrote that:
- "Social conventions may also be disrupted as de-cashing may be viewed as a violation of fundamental rights, including freedom of contract and freedom of ownership."
- Letting the private sector lead "the de-cashing" is preferable, as it will seem "almost entirely benign". The "tempting attempts to impose de-cashing by a decree should be avoided"
- "A targeted outreach program is needed to alleviate suspicions related to de-cashing"
In the text, he also offered suggestions on the most effective approach to diminish the use of cash:
- The de-cashing process could build on the initial and largely uncontested steps, such as the phasing out of large denomination bills, the placement of ceilings on cash transactions, and the reporting of cash moves across the borders.
- Include creating economic incentives to reduce the use of cash in transactions
- Simplify "the opening and use of transferrable deposits, and further computerizing the financial system."
As is customary in such contexts, it is noted that the paper only describes research and does not necessarily reflect the IMF's views. However, isn't it remarkable that all of these proposals have come to fruition and that the process continues? Central banks have phased out banknotes with higher denominations. Banks' regulatory complexity seemingly increases by the day (try to get a bank to handle any larger amounts of cash). Transferring cash from one nation to another has become increasingly burdensome. The European Union has recently introduced restrictions on cash transactions. Even the law governing the Swedish central bank is written so as to guarantee a further undermining of cash. All while the market share of alternatives such as transferable deposits[^1] keeps growing.
The old European disease
The Czech Republic's former president Václav Havel, who played a key role in advocating for human rights during the communist repression, was once asked what the new EU member states could do to repay the older member states for all the economic support they had received. He replied that the European Union still suffers from the old European disease: the tendency to compromise with evil. The new members, who have recent experience of totalitarianism, are therefore obliged to take a more principled stance - sometimes a necessary one - to monitor the European Union in this regard, and to educate it.
The American computer scientist and cryptographer David Chaum said in 1996 that "[t]he difference between a bad electronic cash system and well-developed digital cash will determine whether we will have a dictatorship or a real democracy". If Václav Havel were alive today, he would likely share Chaum's sentiment. Indeed, on the current path of "de-cashing", we risk abolishing or limiting our liberties and rights, "including freedom of contract and freedom of ownership" - and this according to an economist at the IMF(!).
As the frog was unwittingly boiled alive, our freedoms are quietly being undermined. The temperature is rising. Will people take notice before our liberties are irreparably damaged?
[^1]: Transferable deposits are intermediated. Intermediated payments involve one or several intermediaries, such as a bank, a card issuer, or a payment processor. In contrast, a disintermediated payment entails a direct transaction between the parties without go-betweens, as with cash.
-
@ ee11a5df:b76c4e49
2024-09-11 08:16:37
Bye-Bye Reply Guy
There is a camp of nostr developers that believe spam filtering needs to be done by relays. Or at the very least by DVMs. I concur. In this way, once you configure what you want to see, it applies to all nostr clients.
But we are not there yet.
In the meantime we have ReplyGuy, and gossip needed some changes to deal with it.
Strategies in Short
- WEB OF TRUST: Only accept events from people you follow, or people they follow - this avoids new people entirely until somebody else that you follow friends them first, which is too restrictive for some people.
- TRUSTED RELAYS: Allow every post from relays that you trust to do good spam filtering.
- REJECT FRESH PUBKEYS: Only accept events from people you have seen before - this allows you to find new people, but you will miss their very first post (by their second post they count as someone you have seen before, even if you discarded the first post)
- PATTERN MATCHING: Scan for known spam phrases and words and block those events, based on content, metadata, or other fields.
- TIE-IN TO EXTERNAL SYSTEMS: Require a valid NIP-05, or other nostr event binding their identity to some external identity
- PROOF OF WORK: Require a minimum proof-of-work
All of these strategies are useful, but they have to be combined properly.
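As a sketch of how the proof-of-work strategy combines with the fresh-pubkey rule, consider the following (Python is used here purely as illustration; gossip itself is written in Rust and scripted with Rhai, as shown further down). Under nostr's NIP-13, difficulty is the number of leading zero bits in the event id; the threshold of 25 matches the example script below:

```python
def pow_difficulty(event_id_hex: str) -> int:
    """Count leading zero bits in a hex-encoded event id (NIP-13)."""
    bits = 0
    for ch in event_id_hex:
        nibble = int(ch, 16)
        if nibble == 0:
            bits += 4                        # a zero nibble is 4 zero bits
        else:
            bits += 4 - nibble.bit_length()  # leading zeros inside this nibble
            break
    return bits

def accept(event_id_hex: str, seconds_known: int, min_pow: int = 25) -> bool:
    """Accept strong-PoW events, or events from an author we already know."""
    return pow_difficulty(event_id_hex) >= min_pow or seconds_known > 2

# Six leading hex zeros = 24 zero bits:
assert pow_difficulty("000000" + "f" * 58) == 24
```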
filter.rhai
Gossip loads a file called "filter.rhai" from your gossip directory if it exists. It must be a Rhai language script that meets certain requirements (see the example in the gossip source code directory). Gossip then applies it to filter spam.
This spam filtering code is currently being updated. It is not even on the unstable branch yet, but it will probably be there sometime tomorrow. Then to master. Eventually to a release.
Here is an example using all of the techniques listed above:
```rhai
// This is a sample spam filtering script for the gossip nostr
// client. The language is called Rhai, details are at:
// https://rhai.rs/book/
//
// For gossip to find your spam filtering script, put it in
// your gossip profile directory. See
// https://docs.rs/dirs/latest/dirs/fn.data_dir.html
// to find the base directory. A subdirectory "gossip" is your
// gossip data directory which for most people is their profile
// directory too. (Note: if you use a GOSSIP_PROFILE, you'll
// need to put it one directory deeper into that profile
// directory).
//
// This filter is used to filter out and refuse to process
// incoming events as they flow in from relays, and also to
// filter which events get displayed in certain circumstances.
// It is only run on feed-displayable event kinds, and only by
// authors you are not following. In case of error, nothing is
// filtered.
//
// You must define a function called 'filter' which returns one
// of these constant values:
//   DENY  (the event is filtered out)
//   ALLOW (the event is allowed through)
//   MUTE  (the event is filtered out, and the author is
//          automatically muted)
//
// Your script will be provided the following global variables:
//   'caller'        - a string that is one of "Process",
//                     "Thread", "Inbox" or "Global" indicating
//                     which part of the code is running your
//                     script
//   'content'       - the event content as a string
//   'id'            - the event ID, as a hex string
//   'kind'          - the event kind as an integer
//   'muted'         - if the author is in your mute list
//   'name'          - if we have it, the name of the author
//                     (or your petname), else an empty string
//   'nip05valid'    - whether nip05 is valid for the author,
//                     as a boolean
//   'pow'           - the Proof of Work on the event
//   'pubkey'        - the event author public key, as a hex
//                     string
//   'seconds_known' - the number of seconds that the author
//                     of the event has been known to gossip
//   'spamsafe'      - true only if the event came in from a
//                     relay marked as SpamSafe during Process
//                     (even if the global setting for SpamSafe
//                     is off)

fn filter() {
    // Show spam on global
    // (global events are ephemeral; these won't grow the
    // database)
    if caller == "Global" {
        return ALLOW;
    }

    // Block ReplyGuy
    if name.contains("ReplyGuy") || name.contains("ReplyGal") {
        return DENY;
    }

    // Block known DM spam
    // (giftwraps are unwrapped before the content is passed to
    // this script)
    if content.to_lower().contains(
        "Mr. Gift and Mrs. Wrap under the tree, KISSING!"
    ) {
        return DENY;
    }

    // Reject events from new pubkeys, unless they have a high
    // PoW or we somehow already have a nip05valid for them
    //
    // If this turns out to be a legit person, we will start
    // hearing their events 2 seconds from now, so we will
    // only miss their very first event.
    if seconds_known <= 2 && pow < 25 && !nip05valid {
        return DENY;
    }

    // Mute offensive people
    if content.to_lower().contains(" kike")
        || content.to_lower().contains("kike ")
        || content.to_lower().contains(" nigger")
        || content.to_lower().contains("nigger ")
    {
        return MUTE;
    }

    // Reject events from muted people
    //
    // Gossip already does this internally, and since we are
    // not Process, this is rather redundant. But this works
    // as an example.
    if muted {
        return DENY;
    }

    // Accept if the PoW is large enough
    if pow >= 25 {
        return ALLOW;
    }

    // Accept if their NIP-05 is valid
    if nip05valid {
        return ALLOW;
    }

    // Accept if the event came through a spamsafe relay
    if spamsafe {
        return ALLOW;
    }

    // Reject the rest
    DENY
}
```
-
@ 42342239:1d80db24
2024-09-02 12:08:29
The ongoing debate surrounding freedom of expression may revolve more around determining who gets to control the dissemination of information rather than any claimed notion of safeguarding democracy. Similarities can be identified from 500 years ago, following the invention of the printing press.
What has been will be again, what has been done will be done again; there is nothing new under the sun.
-- Ecclesiastes 1:9
The debate over freedom of expression and its limits continues to rage on. In the UK, citizens are being arrested for sharing humorous images. In Ireland, it may soon become illegal to possess "reckless" memes. Australia is trying to get X to hide information. Venezuela's Maduro blocked X earlier this year, as did a judge on Brazil's Supreme Court. In the US, a citizen has been imprisoned for spreading misleading material following a controversial court ruling. In Germany, the police are searching for a social media user who called a politician overweight. Many are also expressing concerns about deep fakes (AI-generated videos, images, or audio that are designed to deceive).
These questions are not new, however. What we perceive as new questions are often just a reflection of earlier times. After Gutenberg invented the printing press in the 15th century, there were soon hundreds of printing presses across Europe. The Church began using printing presses to mass-produce indulgences. "As soon as the coin in the coffer rings, the soul from purgatory springs" was a phrase used by a traveling monk who sold such indulgences at the time. Martin Luther questioned the reasonableness of this practice. Eventually, he posted the 95 theses on the church door in Wittenberg. He also translated the Bible into German. A short time later, his works, also mass-produced, accounted for a third of all books sold in Germany. Luther refused to recant his provocations when the Church's central authority demanded it. He was excommunicated by the Pope in 1521 and soon declared an outlaw by the Holy Roman Emperor.
This did not stop him. Instead, Luther referred to the Pope as "Pope Fart-Ass" and as the "Ass-God in Rome". He also commissioned caricatures, such as woodcuts showing a female demon giving birth to the Pope and cardinals, or German peasants responding to a papal edict by showing the Pope their backsides and breaking wind, and more.
Gutenberg's printing presses contributed to the spread of information in a way similar to how the internet does in today's society. The Church's ability to control the flow of information was undermined, much like how newspapers, radio, and TV have partially lost this power today. The Pope excommunicated Luther, which is reminiscent of those who are de-platformed or banned from various platforms today. The Emperor declared Luther an outlaw, which is similar to how the UK's Prime Minister is imprisoning British citizens today. Luther called the Pope derogatory names, which is reminiscent of the individual who recently had the audacity to call an overweight German minister overweight.
Freedom of expression must be curtailed to combat the spread of false or harmful information in order to protect democracy, or so it is claimed. But perhaps it is more about who gets to control the flow of information?
As is often the case, there is nothing new under the sun.
-
@ eac63075:b4988b48
2024-10-21 08:11:11
Imagine sending a private message to a friend, only to learn that authorities could be scanning its contents without your knowledge. This isn't a scene from a dystopian novel but a potential reality under the European Union's proposed "Chat Control" measures. Aimed at combating serious crimes like child exploitation and terrorism, these proposals could significantly impact the privacy of everyday internet users. As encrypted messaging services become the norm for personal and professional communication, understanding Chat Control is essential. This article delves into what Chat Control entails, why it's being considered, and how it could affect your right to private communication.
https://www.fountain.fm/episode/coOFsst7r7mO1EP1kSzV
https://open.spotify.com/episode/0IZ6kMExfxFm4FHg5DAWT8?si=e139033865e045de
Sections:
- Introduction
- What Is Chat Control?
- Why Is the EU Pushing for Chat Control?
- The Privacy Concerns and Risks
- The Technical Debate: Encryption and Backdoors
- Global Reactions and the Debate in Europe
- Possible Consequences for Messaging Services
- What Happens Next? The Future of Chat Control
- Conclusion
What Is Chat Control?
"Chat Control" refers to a set of proposed measures by the European Union aimed at monitoring and scanning private communications on messaging platforms. The primary goal is to detect and prevent the spread of illegal content, such as child sexual abuse material (CSAM) and to combat terrorism. While the intention is to enhance security and protect vulnerable populations, these proposals have raised significant privacy concerns.
At its core, Chat Control would require messaging services to implement automated scanning technologies that can analyze the content of messages—even those that are end-to-end encrypted. This means that the private messages you send to friends, family, or colleagues could be subject to inspection by algorithms designed to detect prohibited content.
Origins of the Proposal
The initiative for Chat Control emerged from the EU's desire to strengthen its digital security infrastructure. High-profile cases of online abuse and the use of encrypted platforms by criminal organizations have prompted lawmakers to consider more invasive surveillance tactics. The European Commission has been exploring legislation that would make it mandatory for service providers to monitor communications on their platforms.
How Messaging Services Work
Most modern messaging apps, like Signal, Session, SimpleX, Veilid, Protonmail and Tutanota (among others), use end-to-end encryption (E2EE). This encryption ensures that only the sender and the recipient can read the messages being exchanged. Not even the service providers can access the content. This level of security is crucial for maintaining privacy in digital communications, protecting users from hackers, identity thieves, and other malicious actors.
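To make the E2EE model concrete, here is a minimal sketch using Python's PyNaCl library (an illustration of the principle only; real messengers layer much more on top, such as forward secrecy via key ratcheting):

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each party generates a keypair and shares only the public half.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob; a relaying server sees only ciphertext.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Only Bob's private key (with Alice's public key) can decrypt it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```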
Key Elements of Chat Control
- Automated Content Scanning: Service providers would use algorithms to scan messages for illegal content.
- Circumvention of Encryption: To scan encrypted messages, providers might need to alter their encryption methods, potentially weakening security.
- Mandatory Reporting: If illegal content is detected, providers would be required to report it to authorities.
- Broad Applicability: The measures could apply to all messaging services operating within the EU, affecting both European companies and international platforms.
Why It Matters
Understanding Chat Control is essential because it represents a significant shift in how digital privacy is handled. While combating illegal activities online is crucial, the methods proposed could set a precedent for mass surveillance and the erosion of privacy rights. Everyday users who rely on encrypted messaging for personal and professional communication might find their conversations are no longer as private as they once thought.
Why Is the EU Pushing for Chat Control?
The European Union's push for Chat Control stems from a pressing concern to protect its citizens, particularly children, from online exploitation and criminal activities. With the digital landscape becoming increasingly integral to daily life, the EU aims to strengthen its ability to combat serious crimes facilitated through online platforms.
Protecting Children and Preventing Crime
One of the primary motivations behind Chat Control is the prevention of child sexual abuse material (CSAM) circulating on the internet. Law enforcement agencies have reported a significant increase in the sharing of illegal content through private messaging services. By implementing Chat Control, the EU believes it can more effectively identify and stop perpetrators, rescue victims, and deter future crimes.
Terrorism is another critical concern. Encrypted messaging apps can be used by terrorist groups to plan and coordinate attacks without detection. The EU argues that accessing these communications could be vital in preventing such threats and ensuring public safety.
Legal Context and Legislative Drivers
The push for Chat Control is rooted in several legislative initiatives:
- ePrivacy Directive: This directive regulates the processing of personal data and the protection of privacy in electronic communications. The EU is considering amendments that would allow for the scanning of private messages under specific circumstances.
- Temporary Derogation: In 2021, the EU adopted a temporary regulation permitting voluntary detection of CSAM by communication services. The current proposals aim to make such measures mandatory and more comprehensive.
- Regulation Proposals: The European Commission has proposed regulations that would require service providers to detect, report, and remove illegal content proactively. This would include the use of technologies to scan private communications.
Balancing Security and Privacy
EU officials argue that the proposed measures are a necessary response to evolving digital threats. They emphasize the importance of staying ahead of criminals who exploit technology to harm others. By implementing Chat Control, they believe law enforcement can be more effective without entirely dismantling privacy protections.
However, the EU also acknowledges the need to balance security with fundamental rights. The proposals include provisions intended to limit the scope of surveillance, such as:
- Targeted Scanning: Focusing on specific threats rather than broad, indiscriminate monitoring.
- Judicial Oversight: Requiring court orders or oversight for accessing private communications.
- Data Protection Safeguards: Implementing measures to ensure that data collected is handled securely and deleted when no longer needed.
The Urgency Behind the Push
High-profile cases of online abuse and terrorism have heightened the sense of urgency among EU policymakers. Reports of increasing online grooming and the widespread distribution of illegal content have prompted calls for immediate action. The EU posits that without measures like Chat Control, these problems will continue to escalate unchecked.
Criticism and Controversy
Despite the stated intentions, the push for Chat Control has been met with significant criticism. Opponents argue that the measures could be ineffective against savvy criminals who can find alternative ways to communicate. There is also concern that such surveillance could be misused or extended beyond its original purpose.
The Privacy Concerns and Risks
While the intentions behind Chat Control focus on enhancing security and protecting vulnerable groups, the proposed measures raise significant privacy concerns. Critics argue that implementing such surveillance could infringe on fundamental rights and set a dangerous precedent for mass monitoring of private communications.
Infringement on Privacy Rights
At the heart of the debate is the right to privacy. By scanning private messages, even with automated tools, the confidentiality of personal communications is compromised. Users may no longer feel secure sharing sensitive information, fearing that their messages could be intercepted or misinterpreted by algorithms.
Erosion of End-to-End Encryption
End-to-end encryption (E2EE) is a cornerstone of digital security, ensuring that only the sender and recipient can read the messages exchanged. Chat Control could necessitate the introduction of "backdoors" or weaken encryption protocols, making it easier for unauthorized parties to access private data. This not only affects individual privacy but also exposes communications to potential cyber threats.
Concerns from Privacy Advocates
Organizations like Signal and Tutanota, which offer encrypted messaging services, have voiced strong opposition to Chat Control. They warn that undermining encryption could have far-reaching consequences:
- Security Risks: Weakening encryption makes systems more vulnerable to hacking, espionage, and cybercrime.
- Global Implications: Changes in EU regulations could influence policies worldwide, leading to a broader erosion of digital privacy.
- Ineffectiveness Against Crime: Determined criminals might resort to other, less detectable means of communication, rendering the measures ineffective while still compromising the privacy of law-abiding citizens.
Potential for Government Overreach
There is a fear that Chat Control could lead to increased surveillance beyond its original scope. Once the infrastructure for scanning private messages is in place, it could be repurposed or expanded to monitor other types of content, stifling free expression and dissent.
Real-World Implications for Users
- False Positives: Automated scanning technologies are not infallible and could mistakenly flag innocent content, leading to unwarranted scrutiny or legal consequences for users.
- Chilling Effect: Knowing that messages could be monitored might discourage people from expressing themselves freely, impacting personal relationships and societal discourse.
- Data Misuse: Collected data could be vulnerable to leaks or misuse, compromising personal and sensitive information.
Legal and Ethical Concerns
Privacy advocates also highlight potential conflicts with existing laws and ethical standards:
- Violation of Fundamental Rights: The European Convention on Human Rights and other international agreements protect the right to privacy and freedom of expression.
- Questionable Effectiveness: The ethical justification for such invasive measures is challenged if they do not significantly improve safety or if they disproportionately impact innocent users.
Opposition from Member States and Organizations
Countries like Germany and organizations such as the European Digital Rights (EDRi) have expressed opposition to Chat Control. They emphasize the need to protect digital privacy and caution against hasty legislation that could have unintended consequences.
The Technical Debate: Encryption and Backdoors
The discussion around Chat Control inevitably leads to a complex technical debate centered on encryption and the potential introduction of backdoors into secure communication systems. Understanding these concepts is crucial to grasping the full implications of the proposed measures.
What Is End-to-End Encryption (E2EE)?
End-to-end encryption is a method of secure communication that prevents third parties from accessing data while it's transferred from one end system to another. In simpler terms, only the sender and the recipient can read the messages. Even the service providers operating the messaging platforms cannot decrypt the content.
- Security Assurance: E2EE ensures that sensitive information—be it personal messages, financial details, or confidential business communications—remains private.
- Widespread Use: Popular messaging apps like Signal, Session, SimpleX, Veilid, Protonmail and Tutanota (among others) rely on E2EE to protect user data.
How Chat Control Affects Encryption
Implementing Chat Control as proposed would require messaging services to scan the content of messages for illegal material. To do this on encrypted platforms, providers might have to:
- Introduce Backdoors: Create a means for third parties (including the service provider or authorities) to access encrypted messages.
- Client-Side Scanning: Install software on users' devices that scans messages before they are encrypted and sent, effectively bypassing E2EE.
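A hypothetical sketch of the client-side scanning approach (plain SHA-256 matching against a blocklist; real proposals involve perceptual hashes such as PhotoDNA, which also match near-duplicates) shows why critics say it bypasses E2EE - the inspection happens on the plaintext, before any encryption:

```python
import hashlib

# Hypothetical blocklist of known-illegal content hashes, pushed to
# every client by the provider or an authority.
blocklist = {hashlib.sha256(b"example banned payload").digest()}

def client_side_scan(plaintext: bytes) -> bool:
    """Runs on the user's device, on the plaintext, before encryption.
    Returns True if the message would be flagged and reported."""
    return hashlib.sha256(plaintext).digest() in blocklist

message = b"hello, this is private"
if not client_side_scan(message):
    pass  # only now would the message be E2EE-encrypted and sent
```

The encryption math stays intact, but the confidentiality guarantee does not: every message is inspected first, and whoever controls the blocklist controls what gets flagged.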
The Risks of Weakening Encryption
1. Compromised Security for All Users
Introducing backdoors or client-side scanning tools can create vulnerabilities:
- Exploitable Gaps: If a backdoor exists, malicious actors might find and exploit it, leading to data breaches.
- Universal Impact: Weakening encryption doesn't just affect targeted individuals; it potentially exposes all users to increased risk.
2. Undermining Trust in Digital Services
- User Confidence: Knowing that private communications could be accessed might deter people from using digital services or push them toward unregulated platforms.
- Business Implications: Companies relying on secure communications might face increased risks, affecting economic activities.
3. Ineffectiveness Against Skilled Adversaries
- Alternative Methods: Criminals might shift to other encrypted channels or develop new ways to avoid detection.
- False Sense of Security: Weakening encryption could give the impression of increased safety while adversaries adapt and continue their activities undetected.
Signal’s Response and Stance
Signal, a leading encrypted messaging service, has been vocal in its opposition to the EU's proposals:
- Refusal to Weaken Encryption: Signal's president Meredith Whittaker has stated that the company would rather cease operations in the EU than compromise its encryption standards.
- Advocacy for Privacy: Signal emphasizes that strong encryption is essential for protecting human rights and freedoms in the digital age.
Understanding Backdoors
A "backdoor" in encryption is an intentional weakness inserted into a system to allow authorized access to encrypted data. While intended for legitimate use by authorities, backdoors pose several problems:
- Security Vulnerabilities: They can be discovered and exploited by unauthorized parties, including hackers and foreign governments.
- Ethical Concerns: The existence of backdoors raises questions about consent and the extent to which governments should be able to access private communications.
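One classic backdoor design is key escrow. The sketch below is hypothetical (it does not describe any named messenger): each session key is additionally encrypted to an authority's public key, which creates exactly the single point of failure critics warn about:

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox
from nacl.utils import random

authority_key = PrivateKey.generate()  # the escrow key

# Normal encryption of a message under a fresh session key:
session_key = random(SecretBox.KEY_SIZE)
ciphertext = SecretBox(session_key).encrypt(b"meet at noon")

# The "backdoor": an escrowed copy of the session key travels along,
# readable by whoever holds the authority's private key.
escrowed = SealedBox(authority_key.public_key).encrypt(session_key)

# Anyone who obtains authority_key - lawfully or not - reads everything:
recovered = SealedBox(authority_key).decrypt(escrowed)
assert SecretBox(recovered).decrypt(ciphertext) == b"meet at noon"
```

Every message ever escrowed becomes readable the moment that one key leaks, which is why security engineers treat escrow keys as catastrophic single points of failure.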
The Slippery Slope Argument
Privacy advocates warn that introducing backdoors or mandatory scanning sets a precedent:
- Expanded Surveillance: Once in place, these measures could be extended to monitor other types of content beyond the original scope.
- Erosion of Rights: Gradual acceptance of surveillance can lead to a significant reduction in personal freedoms over time.
Potential Technological Alternatives
Some suggest that it's possible to fight illegal content without undermining encryption:
- Metadata Analysis: Focusing on patterns of communication rather than content.
- Enhanced Reporting Mechanisms: Encouraging users to report illegal content voluntarily.
- Investing in Law Enforcement Capabilities: Strengthening traditional investigative methods without compromising digital security.
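As a sketch of the first alternative (hypothetical data and thresholds), metadata analysis can flag suspicious fan-out patterns without ever reading message content:

```python
from collections import defaultdict

# Hypothetical metadata records: (sender, recipient, hour-of-day).
# Content is never inspected - only communication patterns.
records = [
    ("acct_a", "user_1", 3), ("acct_a", "user_2", 3),
    ("acct_a", "user_3", 3), ("acct_b", "user_1", 14),
]

# Flag accounts that contact many distinct recipients within the same
# hour, a fan-out pattern associated with spam or mass-contact abuse.
fanout = defaultdict(set)
for sender, recipient, hour in records:
    fanout[(sender, hour)].add(recipient)

flagged = {s for (s, hour), rcpts in fanout.items() if len(rcpts) >= 3}
print(flagged)  # {'acct_a'}
```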
The technical community largely agrees that weakening encryption is not the solution:
- Consensus on Security: Strong encryption is essential for the safety and privacy of all internet users.
- Call for Dialogue: Technologists and privacy experts advocate for collaborative approaches that address security concerns without sacrificing fundamental rights.
Global Reactions and the Debate in Europe
The proposal for Chat Control has ignited a heated debate across Europe and beyond, with various stakeholders weighing in on the potential implications for privacy, security, and fundamental rights. The reactions are mixed, reflecting differing national perspectives, political priorities, and societal values.
Support for Chat Control
Some EU member states and officials support the initiative, emphasizing the need for robust measures to combat online crime and protect citizens, especially children. They argue that:
- Enhanced Security: Mandatory scanning can help law enforcement agencies detect and prevent serious crimes.
- Responsibility of Service Providers: Companies offering communication services should play an active role in preventing their platforms from being used for illegal activities.
- Public Safety Priorities: The protection of vulnerable populations justifies the implementation of such measures, even if it means compromising some aspects of privacy.
Opposition within the EU
Several countries and organizations have voiced strong opposition to Chat Control, citing concerns over privacy rights and the potential for government overreach.
Germany
- Stance: Germany has been one of the most vocal opponents of the proposed measures.
- Reasons:
- Constitutional Concerns: The German government argues that Chat Control could violate constitutional protections of privacy and confidentiality of communications.
- Security Risks: Weakening encryption is seen as a threat to cybersecurity.
- Legal Challenges: Potential conflicts with national laws protecting personal data and communication secrecy.
Netherlands
- Recent Developments: The Dutch government decided against supporting Chat Control, emphasizing the importance of encryption for security and privacy.
- Arguments:
- Effectiveness Doubts: Skepticism about the actual effectiveness of the measures in combating crime.
- Negative Impact on Privacy: Concerns about mass surveillance and the infringement of citizens' rights.
Table reference: Patrick Breyer, "Chat Control", 23 September 2024.
Privacy Advocacy Groups
European Digital Rights (EDRi)
- Role: A network of civil and human rights organizations working to defend rights and freedoms in the digital environment.
- Position:
- Strong Opposition: EDRi argues that Chat Control is incompatible with fundamental rights.
- Awareness Campaigns: Engaging in public campaigns to inform citizens about the potential risks.
- Policy Engagement: Lobbying policymakers to consider alternative approaches that respect privacy.
Politicians and Activists
Patrick Breyer
- Background: A Member of the European Parliament (MEP) from Germany, representing the Pirate Party.
- Actions:
- Advocacy: Actively campaigning against Chat Control through speeches, articles, and legislative efforts.
- Public Outreach: Using social media and public events to raise awareness.
- Legal Expertise: Highlighting the legal inconsistencies and potential violations of EU law.
Global Reactions
International Organizations
- Human Rights Watch and Amnesty International: These organizations have expressed concerns about the implications for human rights, urging the EU to reconsider.
Technology Companies
- Global Tech Firms: Companies like Apple and Microsoft are monitoring the situation, as EU regulations could affect their operations and user trust.
- Industry Associations: Groups representing tech companies have issued statements highlighting the risks to innovation and competitiveness.
The Broader Debate
The controversy over Chat Control reflects a broader struggle between security interests and privacy rights in the digital age. Key points in the debate include:
- Legal Precedents: How the EU's decision might influence laws and regulations in other countries.
- Digital Sovereignty: The desire of nations to control digital spaces within their borders.
- Civil Liberties: The importance of protecting freedoms in the face of technological advancements.
Public Opinion
- Diverse Views: Surveys and public forums show a range of opinions, with some citizens prioritizing security and others valuing privacy above all.
- Awareness Levels: Many people are still unaware of the potential changes, highlighting the need for public education on the issue.
The EU is at a crossroads, facing the challenge of addressing legitimate security concerns without undermining the fundamental rights that are central to its values. The outcome of this debate will have significant implications for the future of digital privacy and the balance between security and freedom in society.
Possible Consequences for Messaging Services
The implementation of Chat Control could have significant implications for messaging services operating within the European Union. Both large platforms and smaller providers might need to adapt their technologies and policies to comply with the new regulations, potentially altering the landscape of digital communication.
Impact on Encrypted Messaging Services
Signal and Similar Platforms
- Compliance Challenges: Encrypted messaging services like Signal rely on end-to-end encryption to secure user communications. Complying with Chat Control could force them to weaken their encryption protocols or implement client-side scanning, conflicting with their core privacy principles.
- Operational Decisions: Some platforms may choose to limit their services in the EU or cease operations altogether rather than compromise on encryption. Signal, for instance, has indicated that it would prefer to withdraw from European markets than undermine its security features.
Potential Blocking or Limiting of Services
- Regulatory Enforcement: Messaging services that do not comply with Chat Control regulations could face fines, legal action, or even be blocked within the EU.
- Access Restrictions: Users in Europe might find certain services unavailable or limited in functionality if providers decide not to meet the regulatory requirements.
Effects on Smaller Providers
- Resource Constraints: Smaller messaging services and startups may lack the resources to implement the required scanning technologies, leading to increased operational costs or forcing them out of the market.
- Innovation Stifling: The added regulatory burden could deter new entrants, reducing competition and innovation in the messaging service sector.
User Experience and Trust
- Privacy Concerns: Users may lose trust in messaging platforms if they know their communications are subject to scanning, leading to a decline in user engagement.
- Migration to Unregulated Platforms: There is a risk that users might shift to less secure or unregulated services, including those operated outside the EU or on the dark web, potentially exposing them to greater risks.
Technical and Security Implications
- Increased Vulnerabilities: Modifying encryption protocols to comply with Chat Control could introduce security flaws, making platforms more susceptible to hacking and data breaches.
- Global Security Risks: Changes made to accommodate EU regulations might affect the global user base of these services, extending security risks beyond European borders.
Impact on Businesses and Professional Communications
- Confidentiality Issues: Businesses that rely on secure messaging for sensitive communications may face challenges in ensuring confidentiality, affecting sectors like finance, healthcare, and legal services.
- Compliance Complexity: Companies operating internationally will need to navigate a complex landscape of differing regulations, increasing administrative burdens.
Economic Consequences
- Market Fragmentation: Divergent regulations could lead to a fragmented market, with different versions of services for different regions.
- Loss of Revenue: Messaging services might experience reduced revenue due to decreased user trust and engagement or the costs associated with compliance.
Responses from Service Providers
- Legal Challenges: Companies might pursue legal action against the regulations, citing conflicts with privacy laws and user rights.
- Policy Advocacy: Service providers may increase lobbying efforts to influence policy decisions and promote alternatives to Chat Control.
Possible Adaptations
- Technological Innovation: Some providers might invest in developing new technologies that can detect illegal content without compromising encryption, though the feasibility remains uncertain.
- Transparency Measures: To maintain user trust, companies might enhance transparency about how data is handled and what measures are in place to protect privacy.
The potential consequences of Chat Control for messaging services are profound, affecting not only the companies that provide these services but also the users who rely on them daily. The balance between complying with legal requirements and maintaining user privacy and security presents a significant challenge that could reshape the digital communication landscape.
What Happens Next? The Future of Chat Control
The future of Chat Control remains uncertain as the debate continues among EU member states, policymakers, technology companies, and civil society organizations. Several factors will influence the outcome of this contentious proposal, each carrying significant implications for digital privacy, security, and the regulatory environment within the European Union.
Current Status of Legislation
- Ongoing Negotiations: The proposed Chat Control measures are still under discussion within the European Parliament and the Council of the European Union. Amendments and revisions are being considered in response to the feedback from various stakeholders.
- Timeline: While there is no fixed date for the final decision, the EU aims to reach a consensus to implement effective measures against online crime without undue delay.
Key Influencing Factors
1. Legal Challenges and Compliance with EU Law
- Fundamental Rights Assessment: The proposals must be evaluated against the Charter of Fundamental Rights of the European Union, ensuring that any measures comply with rights to privacy, data protection, and freedom of expression.
- Court Scrutiny: Potential legal challenges could arise, leading to scrutiny by the European Court of Justice (ECJ), which may impact the feasibility and legality of Chat Control.
2. Technological Feasibility
- Development of Privacy-Preserving Technologies: Research into methods that can detect illegal content without compromising encryption is ongoing. Advances in this area could provide alternative solutions acceptable to both privacy advocates and security agencies.
- Implementation Challenges: The practical aspects of deploying scanning technologies across various platforms and services remain complex, and technical hurdles could delay or alter the proposed measures.
3. Political Dynamics
- Member State Positions: The differing stances of EU countries, such as Germany's opposition, play a significant role in shaping the final outcome. Consensus among member states is crucial for adopting EU-wide regulations.
- Public Opinion and Advocacy: Growing awareness and activism around digital privacy can influence policymakers. Public campaigns and lobbying efforts may sway decisions in favor of stronger privacy protections.
4. Industry Responses
- Negotiations with Service Providers: Ongoing dialogues between EU authorities and technology companies may lead to compromises or collaborative efforts to address concerns without fully implementing Chat Control as initially proposed.
- Potential for Self-Regulation: Messaging services might propose self-regulatory measures to combat illegal content, aiming to demonstrate effectiveness without the need for mandatory scanning.
Possible Scenarios
Optimistic Outcome:
- Balanced Regulation: A revised proposal emerges that effectively addresses security concerns while upholding strong encryption and privacy rights, possibly through innovative technologies or targeted measures with robust oversight.
Pessimistic Outcome:
- Adoption of Strict Measures: Chat Control is implemented as initially proposed, leading to weakened encryption, reduced privacy, and potential withdrawal of services like Signal from the EU market.
Middle Ground:
- Incremental Implementation: Partial measures are adopted, focusing on voluntary cooperation with service providers and emphasizing transparency and user consent, with ongoing evaluations to assess effectiveness and impact.
How to Stay Informed and Protect Your Privacy
- Follow Reputable Sources: Keep up with news from reliable outlets, official EU communications, and statements from privacy organizations to stay informed about developments.
- Engage in the Dialogue: Participate in public consultations, sign petitions, or contact representatives to express your views on Chat Control and digital privacy.
- Utilize Secure Practices: Regardless of legislative outcomes, adopting good digital hygiene—such as using strong passwords and being cautious with personal information—can enhance your online security.
The Global Perspective
- International Implications: The EU's decision may influence global policies on encryption and surveillance, setting precedents that other countries might follow or react against.
- Collaboration Opportunities: International cooperation on developing solutions that protect both security and privacy could emerge, fostering a more unified approach to addressing online threats.
Looking Ahead
The future of Chat Control is a critical issue that underscores the challenges of governing in the digital age. Balancing the need for security with the protection of fundamental rights is a complex task that requires careful consideration, open dialogue, and collaboration among all stakeholders.
As the situation evolves, staying informed and engaged is essential. The decisions made in the coming months will shape the digital landscape for years to come, affecting how we communicate, conduct business, and exercise our rights in an increasingly connected world.
Conclusion
The debate over Chat Control highlights a fundamental challenge in our increasingly digital world: how to protect society from genuine threats without eroding the very rights and freedoms that define it. While the intention to safeguard children and prevent crime is undeniably important, the means of achieving this through intrusive surveillance measures raise critical concerns.
Privacy is not just a personal preference but a cornerstone of democratic societies. End-to-end encryption has become an essential tool for ensuring that our personal conversations, professional communications, and sensitive data remain secure from unwanted intrusion. Weakening these protections could expose individuals and organizations to risks that far outweigh the proposed benefits.
The potential consequences of implementing Chat Control are far-reaching:
- Erosion of Trust: Users may lose confidence in digital platforms, impacting how we communicate and conduct business online.
- Security Vulnerabilities: Introducing backdoors or weakening encryption can make systems more susceptible to cyberattacks.
- Stifling Innovation: Regulatory burdens may hinder technological advancement and competitiveness in the tech industry.
- Global Implications: The EU's decisions could set precedents that influence digital policies worldwide, for better or worse.
As citizens, it's crucial to stay informed about these developments. Engage in conversations, reach out to your representatives, and advocate for solutions that respect both security needs and fundamental rights. Technology and policy can evolve together to address challenges without compromising core values.
The future of Chat Control is not yet decided, and public input can make a significant difference. By promoting open dialogue, supporting privacy-preserving innovations, and emphasizing the importance of human rights in legislation, we can work towards a digital landscape that is both safe and free.
In a world where digital communication is integral to daily life, striking the right balance between security and privacy is more important than ever. The choices made today will shape the digital environment for generations to come, determining not just how we communicate, but how we live and interact in an interconnected world.
Thank you for reading this article. We hope it has provided you with a clear understanding of Chat Control and its potential impact on your privacy and digital rights. Stay informed, stay engaged, and let's work together towards a secure and open digital future.
Read more:
- https://www.patrick-breyer.de/en/posts/chat-control/
- https://www.patrick-breyer.de/en/new-eu-push-for-chat-control-will-messenger-services-be-blocked-in-europe/
- https://edri.org/our-work/dutch-decision-puts-brakes-on-chat-control/
- https://signal.org/blog/pdfs/ndss-keynote.pdf
- https://tuta.com/blog/germany-stop-chat-control
- https://cointelegraph.com/news/signal-president-slams-revised-eu-encryption-proposal
- https://mullvad.net/en/why-privacy-matters
-
@ c3f12a9a:06c21301
2024-10-23 17:59:45
Destination: Medieval Venice, 1100 AD
How did Venice build its trade empire through collaboration, innovation, and trust in decentralized networks?
Venice: A City of Merchants and Innovators
In medieval times, Venice was a symbol of freedom and prosperity. Unlike the monarchies that dominated Europe, Venice was a republic - a city-state where important decisions were made by the Council of Nobles and the Doge (the elected chief magistrate of Venice, who served as a ceremonial head of state with limited power), not by an absolute ruler. Venice became synonymous with innovative approaches to governance, banking, and trade networks, which led to its rise as one of the most powerful trading centers of its time.
Decentralized Trade Networks
The success of Venice lay in its trust in decentralized trade networks. Every merchant, known as a patrician, could freely develop their trade activities and connect with others. While elsewhere trade was often controlled by kings and local lords, Venetians believed that prosperity would come from a free market and collaboration between people.
Unlike feudal models based on hierarchy and absolute power, Venetian trade networks were open and based on mutual trust. Every merchant could own ships and trade with the Middle East, and this decentralized ownership of trade routes led to unprecedented prosperity.
Story: The Secret Venetian Alliance
A young merchant named Marco, who inherited a shipyard from his father, was trying to make a name for himself in the bustling Venetian spice market. In Venice, there were many competitors, and competition was fierce. Marco, however, learned that an opportunity lay outside the traditional trade networks – among small merchants who were trying to maintain their independence from the larger Venetian patricians.
Marco decided to form an alliance with several other small merchants. Together, they began to share ships, crew costs, and information about trade routes. By creating this informal network, they were able to compete with the larger patricians who controlled the major trade routes. Through collaboration and shared resources, they began to achieve profits they would never have achieved alone.
In the end, Marco and his fellow merchants succeeded, not because they had the most wealth or influence, but because they trusted each other and worked together. They proved that small players could thrive, even in a market dominated by powerful patricians.
Satoshi ends his journey here, enlightened by the lesson that even in a world where big players exist, trust and collaboration can ensure that the market remains free and open for everyone.
Venice and Trust in Decentralized Systems
Venice was a symbol of how decentralization could lead to prosperity. There was no need for kings or powerful rulers, but instead, trust and collaboration among merchants led to the creation of a wealthy city-state.
Venice demonstrated that when people collaborate and share resources, they can achieve greater success than in a hierarchical system with a single central ruler.
A Lesson from Venice for Today's World
Today, as we think about the world of Bitcoin and decentralized finance (DeFi), Venice reminds us that trust among individuals and collaboration are key to maintaining independence and freedom. Just as in Venice, where smaller merchants found strength in collaboration, we can also find ways to keep the crypto world decentralized, open, and fair.
Key
| Term | Explanation |
|------|-------------|
| Doge | The elected chief magistrate of Venice who served as a ceremonial head of state with limited power. |
| Patrician | A member of the noble class in Venice, typically involved in trade and governance. |
| Decentralized Finance (DeFi) | A financial system that does not rely on central financial intermediaries, instead using blockchain technology and smart contracts. |
originally posted at https://stacker.news/items/737232
-
@ 42342239:1d80db24
2024-08-30 06:26:21Quis custodiet ipsos custodes?
-- Juvenal (Who will watch the watchmen?)
In mid-July, numerous media outlets reported on the assassination attempt on Donald Trump. FBI Director Christopher Wray stated later that same month that what hit former President Trump was a bullet. A few days later, various sources reported that search engines no longer acknowledged that an assassination attempt on ex-President Trump had taken place. When users used autocomplete in Google and Bing (91% and 4% market share, respectively), these search engines suggested only earlier presidents such as Harry Truman and Theodore Roosevelt, along with Russian President Vladimir Putin, as people who could have been subjected to assassination attempts.
The reports were comprehensive enough for the Republican district attorney of Missouri to say that he would investigate the matter. The senator from Kansas - also a Republican - planned to make an official request to Google. Google responded through a spokesman to the New York Post that the company had not "manually changed" search results, but that its system includes "protection" against search results "connected to political violence."
A similar phenomenon occurred during the 2016 presidential election. At the time, reports emerged that Google, unlike other less widely used search engines, rarely or never suggested negative search results for Hillary Clinton. The company did, however, provide negative search results for then-candidate Trump. Then, as today, the company denied deliberately favouring any specific political candidate.
These occurrences led to research on how such search suggestions can influence public opinion and voting preferences. For example, the impact of simply removing negative search suggestions has been investigated. A study published in June 2024 reports that such search results can dramatically affect undecided voters. Reducing negative search suggestions can turn a 50/50 split into a 90/10 split in favour of the candidate for whom negative search suggestions were suppressed. The researchers concluded that search suggestions can have "a dramatic impact," that this can "shift a large number of votes" and do so without leaving "any trace for authorities to follow." How search engines operate should therefore be considered of great importance by anyone who claims to take democracy seriously. And this holds regardless of one's political sympathies.
A well-known thought experiment in philosophy asks: "If a tree falls in the forest and no one hears it, does it make a sound?" Translated to today's media landscape: If an assassination attempt took place on a former president, but search engines don't want to acknowledge it, did it really happen?
-
@ fa0165a0:03397073
2024-10-23 17:19:41Chef's notes
This recipe is for 48 buns. Total cooking time takes at least 90 minutes, but 60 minutes of that is letting the dough rest in between processing.
The baking is a simple three-step process:
1. Making the wheat dough
2. Making and applying the filling
3. Garnishing and baking in the oven
When done: Enjoy during Fika!
PS:

- Can be frozen and thawed in the microwave for later enjoyment as well.
- If you need unit conversion, this site may be of help: https://www.unitconverters.net/
- Traditionally we use something we call "pearl sugar", which is optimal, but normal sugar or sprinkles are okay too. Pearl sugar (Pärlsocker) looks like this: https://search.brave.com/images?q=p%C3%A4rlsocker
Ingredients
- 150 g butter
- 5 dl milk
- 50 g baking yeast (normal or for sweet dough)
- 1/2 teaspoon salt
- 1-1 1/2 dl sugar
- (Optional) 2 teaspoons of crushed or ground cardamom seeds.
- 1.4 liters of wheat flour
- Filling: 50-75 g butter, room temperature
- Filling: 1/2 - 1 dl sugar
- Filling: 1 teaspoon crushed or ground cardamom and 1 teaspoon ground cinnamon (or 2 teaspoons of cinnamon)
- Garnish: 1 egg, sugar or Almond Shavings
Directions
- Melt the butter/margarine in a saucepan.
- Pour in the milk and allow the mixture to warm to body temperature (approx. +37 °C).
- Dissolve the yeast in a dough bowl with the help of the salt.
- Add the 37 °C milk/butter mixture, the sugar, the optional cardamom if you choose (I like this option!), and just over 2/3 of the flour.
- Work the dough until shiny and smooth, about 4 minutes with a machine or 8 minutes by hand.
- If necessary, add additional flour, but save at least 1 dl for shaping.
- Let the dough rise covered (by a kitchen towel), about 30 minutes.
- Work the dough in the bowl and then turn it out onto a floured workbench. Knead the dough until smooth. Divide the dough into 2 parts. Roll out each piece into a rectangular sheet.
- Stir together the ingredients for the filling and spread it.
- Roll up and cut each roll into 24 pieces.
- Place them in paper molds or directly on baking paper with the cut surface facing up. Let them rise covered with a kitchen towel, about 30 minutes.
- Brush the buns with beaten egg and sprinkle your chosen topping.
- Bake in the middle of the oven at 250 °C for 5-8 minutes.
- Allow to cool on a wire rack under a kitchen towel.
-
@ 3bf0c63f:aefa459d
2024-03-23 08:57:08Nostr is not decentralized nor censorship-resistant
Peter Todd has been saying this for a long time, and all the while I have been thinking he is misunderstanding everything, but I guess a more charitable interpretation is that he is right.
Nostr today is indeed centralized.
Yesterday I published two harmless notes with the exact same content at the same time. In two minutes the notes had a noticeable difference in responses:
The top one was published to `wss://nostr.wine`, `wss://nos.lol`, `wss://pyramid.fiatjaf.com`. The second was published to the relay where I generally publish all my notes, `wss://pyramid.fiatjaf.com`, and that is announced on my NIP-05 file and on my NIP-65 relay list.

A few minutes later I published that screenshot again in two identical notes to the same sets of relays, asking if people understood the implications. The difference in quantity of responses can still be seen today:
These results are skewed now by the fact that the two notes got rebroadcasted to multiple relays after some time, but the fundamental point remains.
What happened was that a huge lot more of people saw the first note compared to the second, and if Nostr was really censorship-resistant that shouldn't have happened at all.
Some people implied in the comments, with an air of obviousness, that publishing the note to "more relays" should have predictably resulted in more replies, which, again, shouldn't be the case if Nostr is really censorship-resistant.
What happens is that most people who engaged with the note are following me, in the sense that they have instructed their clients to fetch my notes on their behalf and present them in the UI, and clients are failing to do that despite me making it clear in multiple ways that my notes are to be found on `wss://pyramid.fiatjaf.com`.

If we were talking not about me, but about some public figure that was being censored by the State and got banned (or shadowbanned) by the 3 biggest public relays, the sad reality would be that the person would immediately get their reach reduced to ~10% of what they had before. This is not at all unlike what happened to dozens of personalities that were banned from the corporate social media platforms and then moved to other platforms -- how many of their original followers switched to these other platforms? Probably some small percentage close to 10%. In that sense Nostr today is similar to what we had before.
Peter Todd is right that if the way Nostr works is that you just subscribe to a small set of relays and expect to get everything from them then it tends to get very centralized very fast, and this is the reality today.
Peter Todd is wrong that Nostr is inherently centralized or that it needs a protocol change to become what it has always purported to be. He is in fact wrong today, because what is written above is not valid for all clients of today, and if we drive in the right direction we can successfully make Peter Todd be more and more wrong as time passes, instead of the contrary.
See also:
-
@ 42342239:1d80db24
2024-07-28 08:35:26Jerome Powell, Chairman of the US Federal Reserve, stated during a hearing in March that the central bank has no plans to introduce a central bank digital currency (CBDC) or consider it necessary at present. He said this even though the material Fed staff presents to Congress suggests otherwise - that CBDCs are described as one of the Fed's key duties.
A CBDC is a state-controlled and programmable currency that could allow the government or its intermediaries the possibility to monitor all transactions in detail and also to block payments based on certain conditions.
Critics argue that the introduction of CBDCs could undermine citizens' constitutionally guaranteed freedoms and rights. Republican House Majority Leader Tom Emmer, the sponsor of a bill aimed at preventing the central bank from unilaterally introducing a CBDC, believes that if they do not mimic cash, they would only serve as a "CCP-style [Chinese Communist Party] surveillance tool" and could "undermine the American way of life". Emmer's proposed bill has garnered support from several US senators, including Republican Ted Cruz from Texas, who introduced the bill to the Senate. Similarly to how Swedish cash advocates risk missing the mark, Tom Emmer and the US senators risk the same outcome with their bill. If the central bank is prevented from introducing a central bank digital currency, nothing would stop major banks from implementing similar systems themselves, with similar consequences for citizens.
Indeed, the entity controlling your money becomes less significant once it is no longer you. Even if central bank digital currencies are halted in the US, a future administration could easily outsource financial censorship to the private banking system, similar to how the Biden administration is perceived by many to have circumvented the First Amendment by getting private companies to enforce censorship. A federal court in New Orleans ruled last fall against the Biden administration for compelling social media platforms to censor content. The Supreme Court has now begun hearing the case.
Deng Xiaoping, China’s paramount leader who played a vital role in China’s modernization, once said, “It does not matter if the cat is black or white. What matters is that it catches mice.” This statement reflected a pragmatic approach to economic policy, focusing on results foremost. China’s economic growth during his tenure was historic.
The discussion surrounding CBDCs and their negative impact on citizens’ freedoms and rights would benefit from a more practical and comprehensive perspective. Ultimately, it is the outcomes that matter above all. So too for our freedoms.
-
@ 57d1a264:69f1fee1
2024-10-23 16:11:26Welcome back to our weekly JABBB, Just Another Bitcoin Bubble Boom, a comics and meme contest crafted for you, creative stackers!

If you'd like to learn more, check our welcome post here.
This week's sticker:
Bitcoin Students Network
You can download the source file directly from the HereComesBitcoin website in SVG and PNG.
The task
Starting from this sticker, design a comic frame or a meme, and add a message that perfectly captures the sentiment of the current most hilarious take on the Bitcoin space. You can contextualize it or not, it's up to you: you choose the message, the context and anything else that will help you submit your comic art masterpiece.
Are you a meme creator? There's space for you too: select the most fitting shot from the GIFs hosted on the Gif Station section and craft your best meme... Let's Jabbb!
If you enjoy designing and memeing, feel free to check our JABBB archive and create more to spread Bitcoin awareness to the moon.
Submit each proposal on the relevant thread; bounties will be distributed next week together with the release of the next JABBB contest.
₿ creative and have fun!
originally posted at https://stacker.news/items/736536
-
@ ee11a5df:b76c4e49
2024-07-11 23:57:53What Can We Get by Breaking NOSTR?
"What if we just started over? What if we took everything we have learned while building nostr and did it all again, but did it right this time?"
That is a question I've heard quite a number of times, and it is a question I have pondered quite a lot myself.
My conclusion (so far) is that I believe that we can fix all the important things without starting over. There are different levels of breakage, starting over is the most extreme of them. In this post I will describe these levels of breakage and what each one could buy us.
Cryptography
Your key-pair is the most fundamental part of nostr. That is your portable identity.
If the cryptography changed from secp256k1 to ed25519, all current nostr identities would not be usable.
This would be a complete start over.
Every other break listed in this post could be done as well to no additional detriment (save for reuse of some existing code) because we would be starting over.
Why would anyone suggest making such a break? What does this buy us?
- Curve25519 is a safe curve, meaning a bunch of specific cryptography things that we mortals do not understand, but we are assured that it is somehow better.
- Ed25519 is more modern, said to be faster, and has more widespread code/library support than secp256k1.
- Nostr keys could be used as TLS server certificates. TLS 1.3 using RFC 7250 Raw Public Keys allows raw public keys as certificates. No DNS or certification authorities required, removing several points of failure. These ed25519 keys could be used in TLS, whereas secp256k1 keys cannot as no TLS algorithm utilizes them AFAIK. Since relays currently don't have assigned nostr identities but are instead referenced by a websocket URL, this doesn't buy us much, but it is interesting. This idea is explored further below (keep reading) under a lesser level of breakage.
Besides breaking everything, another downside is that people would not be able to manage nostr keys with bitcoin hardware.
I am fairly strongly against breaking things this far. I don't think it is worth it.
Signature Scheme and Event Structure
Event structure is the next most fundamental part of nostr. Although events can be represented in many ways (clients and relays usually parse the JSON into data structures and/or database columns), the nature of the content of an event is well defined as seven particular fields. If we changed those, that would be a hard fork.
This break is quite severe. All current nostr events wouldn't work in this hard fork. We would be preserving identities, but all content would be starting over.
It would be difficult to bridge between this fork and current nostr because the bridge couldn't create the different signature required (not having anybody's private key) and current nostr wouldn't be generating the new kind of signature. Therefore any bridge would have to do identity mapping just like bridges to entirely different protocols do (e.g. mostr to mastodon).
What could we gain by breaking things this far?
- We could have a faster event hash and id verification: the current signature scheme of nostr requires lining up 5 JSON fields into a JSON array and using that as hash input. There is a performance cost to copying this data in order to hash it (see the sketch after this list).
- We could introduce a subkey field, and sign events via that subkey, while preserving the pubkey as the author everybody knows and searches by. Note however that we can already get a remarkably similar thing using something like NIP-26 where the actual author is in a tag, and the pubkey field is the signing subkey.
- We could refactor the kind integer into composable bitflags (that could apply to any application) and an application kind (that specifies the application).
- Surely there are other things I haven't thought of.
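For reference, here is a minimal sketch of the id computation being discussed, assuming the `@noble/hashes` package (field names as in NIP-01; this is illustrative, not the author's draft code):

```typescript
import { sha256 } from "@noble/hashes/sha256";
import { bytesToHex } from "@noble/hashes/utils";

interface NostrEvent {
  pubkey: string;
  created_at: number;
  kind: number;
  tags: string[][];
  content: string;
}

// NIP-01: the 5 fields are lined up in a JSON array (prefixed with the
// constant 0), serialized, and hashed -- the copy/serialization step is
// the cost the bullet above wants to avoid.
function eventId(e: NostrEvent): string {
  const serialized = JSON.stringify([
    0, e.pubkey, e.created_at, e.kind, e.tags, e.content,
  ]);
  return bytesToHex(sha256(new TextEncoder().encode(serialized)));
}
```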
I am currently against this kind of break. I don't think the benefits even come close to outweighing the cost. But if I learned about other things that we could "fix" by restructuring the events, I could possibly change my mind.
Replacing Relay URLs
Nostr is defined by relays that are addressed by websocket URLs. If that changed, that would be a significant break. Many (maybe even most) current event kinds would need superseding.
The most reasonable change is to define relays with nostr identities, specifying their pubkey instead of their URL.
What could we gain by this?
- We could ditch reliance on DNS. Relays could publish events under their nostr identity that advertise their current IP address(es).
- We could ditch certificates because relays could generate ed25519 keypairs for themselves (or indeed just self-signed certificates which might be much more broadly supported) and publish their public ed25519 key in the same replaceable event where they advertise their current IP address(es).
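To make this concrete, a relay with its own keypair could advertise its address and TLS key in a replaceable event roughly like the one below. This is only a sketch: the kind number and every tag name are invented here for illustration and are not specified anywhere.

```typescript
// Hypothetical relay self-announcement event (id/sig fields omitted).
// Kind 39998 and all tag names below are invented for illustration.
const relayAnnouncement = {
  kind: 39998,                                   // made-up replaceable kind
  pubkey: "<relay-pubkey-hex>",                  // the relay's own identity
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["ip4", "203.0.113.7:4447"],                 // current IPv4 address:port
    ["ip6", "[2001:db8::7]:4447"],               // current IPv6 address:port
    ["tls-key", "<ed25519-raw-public-key-hex>"], // RFC 7250 raw public key
  ],
  content: "",
};
```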
This is a gigantic break. Almost all event kinds need redefining and pretty much all nostr software will need fairly major upgrades. But it also gives us a kind of Internet liberty that many of us have dreamt of our entire lives.
I am ambivalent about this idea.
Protocol Messaging and Transport
The protocol messages of nostr are the next level of breakage. We could preserve keypair identities, all current events, and current relay URL references, but just break the protocol of how clients and relay communicate this data.
This would not necessarily break relay and client implementations at all, so long as the new protocol were opt-in.
What could we get?
- The new protocol could transmit events in binary form for increased performance (no more JSON parsing with its typical many small memory allocations and string escaping nightmares). I think event throughput could double (wild guess).
- It could have clear expectations of who talks first, and when and how AUTH happens, avoiding a lot of current miscommunication between clients and relays.
- We could introduce bitflags for feature support so that new features could be added later and clients would not bother trying them (and getting an error or timing out) on relays that didn't signal support. This could replace much of NIP-11 (see the sketch after this list).
- We could then introduce something like negentropy or negative filters (but not that... probably something else solving that same problem) without it being a breaking change.
- The new protocol could just be a few websocket-binary messages enhancing the current protocol, continuing to leverage the existing websocket-text messages we currently have, meaning newer relays would still support all the older stuff.
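As a rough illustration of the feature-bitflag idea, an opt-in binary envelope could reserve a type byte and a flags field. Every constant below is invented for illustration and says nothing about what the actual draft will specify:

```typescript
// Sketch of a hypothetical binary envelope: 1 type byte, 4 flag bytes, payload.
enum MsgType { Event = 1, Req = 2, Auth = 3 } // invented values

const FEATURE_NEGENTROPY = 1 << 0;  // hypothetical sync-extension flag
const FEATURE_NEG_FILTERS = 1 << 1; // hypothetical negative-filter flag

function encodeEnvelope(type: MsgType, flags: number, payload: Uint8Array): Uint8Array {
  const out = new Uint8Array(5 + payload.length);
  const view = new DataView(out.buffer);
  view.setUint8(0, type);          // message type
  view.setUint32(1, flags, false); // big-endian feature bitflags
  out.set(payload, 5);
  return out;
}
```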
The downsides are just that if you want this new stuff you have to build it. It makes the protocol less simple, having now multiple protocols, multiple ways of doing the same thing.
Nonetheless, this I am in favor of. I think the trade-offs are worth it. I will be pushing a draft PR for this soon.
The path forward
I propose then the following path forward:
- A new nostr protocol over websockets binary (draft PR to be shared soon)
- Subkeys brought into nostr via NIP-26 (but let's use a single letter tag instead, OK?) via a big push to get all the clients to support it (the transition will be painful - most major clients will need to support this before anybody can start using it).
- Some kind of solution to the negative-filter-negentropy need added to the new protocol as its first optional feature.
- We seriously consider replacing Relay URLs with nostr pubkeys assigned to the relay, and then have relays publish their IP address and TLS key or certificate.
We sacrifice these:
- Faster event hash/verification
- Composable event bitflags
- Safer, faster, more well-supported crypto curve
- Nostr keys themselves as TLS 1.3 RawPublicKey certificates
-
@ 42342239:1d80db24
2024-07-06 15:26:39Claims that we need greater centralisation, more EU, or more globalisation are prevalent across the usual media channels. The climate crisis, environmental destruction, pandemics, the AI-threat, yes, everything will apparently be solved if a little more global coordination, governance and leadership can be brought about.
But, is this actually true? One of the best arguments for this conclusion stems implicitly from the futurist Eliezer Yudkowsky, who once proposed a new Moore's Law, though this time not for computer processors but instead for mad science: "every 18 months, the minimum IQ necessary to destroy the world drops by one point".
Perhaps we simply have to tolerate more centralisation, globalisation, control, surveillance, and so on, to prevent all kinds of fools from destroying the world?
Note: a Swedish version of this text is available at Affärsvärlden.
At the same time, more centralisation, globalisation, etc. is also what we have experienced. Power has been shifting from the local, and from the majorities, to central-planning bureaucrats working in remote places. This has been going on for several decades. The EU's subsidiarity principle, i.e. the idea that decisions should be made at the lowest expedient level, and which came to everyone's attention ahead of Sweden's EU vote in 1994, is today swept under the rug as untimely and outdated, perhaps even retarded.
At the same time, there are many crises, more than usual it would seem. If it is not a crisis of criminality, a logistics/supply chain crisis or a water crisis, then it is an energy crisis, a financial crisis, a refugee crisis or a climate crisis. It is almost as if one starts to suspect that all this centralisation may be leading us down the wrong path. Perhaps centralisation is part of the problem, rather than the capital S solution?
Why centralisation may cause rather than prevent problems
There are several reasons why centralisation, etc, may actually be a problem. And though few seem to be interested in such questions today (or perhaps they are too timid to mention their concerns?), it has not always been this way. In this short essay we'll note four reasons (though there are several others):
- Political failures (Buchanan et al)
- Local communities & skin in the game (Ostrom and Taleb)
- The local knowledge problem (von Hayek)
- Governance by sociopaths (Hare)
James Buchanan, who was awarded the so-called Nobel prize in economics in the eighties, once said: "politicians and bureaucrats are no different from the rest of us. They will maximise their incentives just like everybody else."
Buchanan was prominent in research on rent-seeking and political failures, i.e. when political "solutions" to so-called market failures make everything worse. Rent-seeking is when a company spends resources (e.g. lobbying) to get legislators or other decision makers to pass laws or create regulations that benefit the company instead of it having to engage in productive activities. The result is regulatory capture. The more centralised decision-making is, the greater the negative consequences from such rent-seeking will be for society at large. This is known.
Another economist, Elinor Ostrom, was given the same prize in the great financial crisis year of 2009. In her research, she had found that local communities where people had influence over rules and regulations, as well as how violations there-of were handled, were much better suited to look after common resources than centralised bodies. To borrow a term from the combative Nassim Nicholas Taleb: everything was better handled when decision makers had "skin in the game".
A third economist, Friedrich von Hayek, was given this prize as early as 1974, partly because he showed that central planning could not possibly take into account all relevant information. The information needed in economic planning is by its very nature distributed, and will never be available to a central planning committee, or even to an AI.
Moreover, human systems are complex and not just complicated. When you realise this, you also understand why the forecasts made by central planners often end up wildly off the mark - and at times in a catastrophic way. (This in itself is an argument for relying more on factors outside of the models in the decision-making process.)
From Buchanan's, Ostrom's, Taleb's or von Hayek's perspectives, it also becomes difficult to believe that today's bureaucrats are the most suited to manage and price e.g. climate risks. One can compare with the insurance industry, which has both a long habit of pricing risks as well as "skin in the game" - two things sorely missing in today's planning bodies.
Instead of preventing fools, we may be enabling madmen
An even more troubling conclusion is that centralisation tends to transfer power to people who perhaps shouldn't have more of that good. "Not all psychopaths are in prison - some are in the boardroom," psychologist Robert Hare once said during a lecture. Most people have probably known for a long time that those with sharp elbows and who don't hesitate to stab a colleague in the back can climb quickly in organisations. In recent years, this fact seems to have become increasingly well known even in academia.
You will thus tend to encounter an increased prevalence of individuals with narcissistic and sociopathic traits the higher up you get in the status hierarchy. And if working in large organisations (such as the European Union or Congress) or in large corporations is perceived as higher status - which is generally the case - then it follows that the more we centralise, the more we will be governed by people with less flattering Dark Triad traits.
By their fruits ye shall know them
Perhaps it is thus not a coincidence that we have so many crises. Perhaps centralisation, globalisation, etc. cause crises. Perhaps the "elites" and their planning bureaucrats are, in fact, not the salt of the earth and the light of the world. Perhaps President Trump even had a point when he said "they are not sending their best".
https://www.youtube.com/watch?v=w4b8xgaiuj0
The opposite of centralisation is decentralisation. And while most people may still be aware that decentralisation can be a superpower within the business world, it's time we remind ourselves that this also applies to the economy - and society - at large, and preferably before the next Great Leap Forward is fully thrust upon us.
-
@ b12b632c:d9e1ff79
2024-05-29 12:10:18Another day on Nostr, another app!
Today I'll present a new self-hosted Nostr blog web application recently released on GitHub by dtonon, Oracolo:
https://github.com/dtonon/oracolo
Oracolo is a minimalist blog powered by Nostr that consists of a single HTML file, weighing only ~140 kB. You can use whatever Nostr client that supports the long format (habla.news, yakihonne, highlighter.com, etc.) to write your posts, and your personal blog is automatically updated. It also works without a web server; for example, you can send it via email as a business card.

Oracolo fetches Nostr data, builds the page, executes the JavaScript code and displays the articles on a clean and sober blog (a dark theme would be awesome 👀).
Blog articles are Nostr events you published or will publish on Nostr relays through long-note applications like the ones quoted above.
Don't forget to use a NIP-07 web browser extension to log in on those websites. The old times where we were forced to fill in our nsec key are nearly over!
For those of you in a hurry, you can find the Oracolo demo with my Nostr long notes here. It will include this article when I publish it on Nostr!
https://oracolo.fractalized.net/
How to self-host Oracolo?
You can build the application locally or use a Docker Compose stack to run it (or any other method). I just built a Docker Compose stack with Traefik and an Oracolo Docker image to let you run it quickly.
The oracolo-docker github repo is available here:
https://github.com/PastaGringo/oracolo-docker
PS: don't freak out about the number of commits; oracolo has been the lucky one to let me practice Docker image CI/CD build/push with Forgejo. That went well, but it took me a while to figure out how to make the Forgejo runner DooD (Docker-outside-of-Docker) work 😆. Please ping me on Nostr if you are interested in an article on this topic!
This repo is a mirror of my new Forgejo git instance, where the code was originally published and will be updated if needed (I think it will):
https://git.fractalized.net/PastaGringo/oracolo-docker
Here is how to do it.
1) First, you need to create an A DNS record in your domain.tld zone. You can create an A record for "oracolo".domain.tld or "*".domain.tld. The second one will allow Traefik to handle all future subdomain.domain.tld hosts without having to create them in advance. You can verify DNS records with the website https://dnschecker.org.
2) Clone the oracolo-docker repository:
```bash
git clone https://git.fractalized.net/PastaGringo/oracolo-docker.git
cd oracolo-docker
```
3) Rename the .env.example file:
```bash
mv .env.example .env
```
4) Modify and update your .env file with your own infos:
```bash
# Let's Encrypt email used to generate the SSL certificate
LETSENCRYPT_EMAIL=

# Domain for oracolo. Ex: oracolo.fractalized.net
ORACOLO_DOMAIN=

# Npub of the author, in "npub" format, not HEX.
NPUB=

# Relays where Oracolo will retrieve the Nostr events.
# Ex: "wss://nostr.fractalized.net, wss://rnostr.fractalized.net"
RELAYS=

# Number of blog articles with a thumbnail. Ex: 4
TOP_NOTES_NB=
```
5) Compose Oracolo:
```bash
docker compose up -d && docker compose logs -f oracolo traefik
```

```
[+] Running 2/0
 ✔ Container traefik  Running  0.0s
 ✔ Container oracolo  Running  0.0s
WARN[0000] /home/pastadmin/DEV/FORGEJO/PLAY/oracolo-docker/docker-compose.yml: `version` is obsolete
traefik | 2024-05-28T19:24:18Z INF Traefik version 3.0.0 built on 2024-04-29T14:25:59Z version=3.0.0
oracolo |
oracolo | ___ ____ ____ __ ___ _ ___
oracolo | / \ | \ / | / ] / \ | | / \
oracolo | | || D )| o | / / | || | | |
oracolo | | O || / | |/ / | O || |___ | O |
oracolo | | || \ | _ / \_ | || || |
oracolo | | || . \| | \ || || || |
oracolo | \___/ |__|\_||__|__|\____| \___/ |_____| \___/
oracolo |
oracolo | Oracolo dtonon's repo: https://github.com/dtonon/oracolo
oracolo |
oracolo | ╭────────────────────────────╮
oracolo | │ Docker Compose Env Vars ⤵️ │
oracolo | ╰────────────────────────────╯
oracolo |
oracolo | NPUB         : npub1ky4kxtyg0uxgw8g5p5mmedh8c8s6sqny6zmaaqj44gv4rk0plaus3m4fd2
oracolo | RELAYS       : wss://nostr.fractalized.net, wss://rnostr.fractalized.net
oracolo | TOP_NOTES_NB : 4
oracolo |
oracolo | ╭───────────────────────────╮
oracolo | │ Configuring Oracolo... ⤵️ │
oracolo | ╰───────────────────────────╯
oracolo |
oracolo | > Updating npub key with npub1ky4kxtyg0uxgw8g5p5mmedh8c8s6sqny6zmaaqj44gv4rk0plaus3m4fd2... ✅
oracolo | > Updating nostr relays with wss://nostr.fractalized.net, wss://rnostr.fractalized.net... ✅
oracolo | > Updating TOP_NOTE with value 4... ✅
oracolo |
oracolo | ╭───────────────────────╮
oracolo | │ Installing Oracolo ⤵️ │
oracolo | ╰───────────────────────╯
oracolo |
oracolo | added 122 packages, and audited 123 packages in 8s
oracolo |
oracolo | 20 packages are looking for funding
oracolo |   run `npm fund` for details
oracolo |
oracolo | found 0 vulnerabilities
oracolo | npm notice
oracolo | npm notice New minor version of npm available! 10.7.0 -> 10.8.0
oracolo | npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.8.0
oracolo | npm notice To update run: npm install -g npm@10.8.0
oracolo | npm notice
oracolo |
oracolo | >>> done ✅
oracolo |
oracolo | ╭─────────────────────╮
oracolo | │ Building Oracolo ⤵️ │
oracolo | ╰─────────────────────╯
oracolo |
oracolo | > oracolo@0.0.0 build
oracolo | > vite build
oracolo |
oracolo | 7:32:49 PM [vite-plugin-svelte] WARNING: The following packages have a svelte field in their package.json but no exports condition for svelte.
oracolo |
oracolo | @splidejs/svelte-splide@0.2.9
oracolo | @splidejs/splide@4.1.4
oracolo |
oracolo | Please see https://github.com/sveltejs/vite-plugin-svelte/blob/main/docs/faq.md#missing-exports-condition for details.
oracolo | vite v5.2.11 building for production...
oracolo | transforming...
oracolo | ✓ 84 modules transformed.
oracolo | rendering chunks...
oracolo |
oracolo | Inlining: index-C6McxHm7.js
oracolo | Inlining: style-DubfL5gy.css
oracolo | computing gzip size...
oracolo | dist/index.html 233.15 kB │ gzip: 82.41 kB
oracolo | ✓ built in 7.08s
oracolo |
oracolo | >>> done ✅
oracolo |
oracolo | > Copying Oracolo built index.html to nginx usr/share/nginx/html... ✅
oracolo |
oracolo | ╭────────────────────────╮
oracolo | │ Configuring Nginx... ⤵️ │
oracolo | ╰────────────────────────╯
oracolo |
oracolo | > Copying default nginx.conf file... ✅
oracolo |
oracolo | ╭──────────────────────╮
oracolo | │ Starting Nginx... 🚀 │
oracolo | ╰──────────────────────╯
```
If you don't have any issue with the Traefik container, Oracolo should be live! 🔥
You can now access it by going to the ORACOLO_DOMAIN URL configured in the .env file.
Have a good day!
Don't hesitate to follow dtonon on Nostr to keep up with future updates ⚡🔥
See you soon in another Fractalized story!
PastaGringo 🤖⚡ -
@ 4523be58:ba1facd0
2024-05-28 11:05:17NIP-116
Event paths
Description
Event kind `30079` denotes an event defined by its event path rather than its event kind.

The event directory path is included in the event path, specified in the event's `d` tag. For example, an event path might be `user/profile/name`, where `user/profile` is the directory path.

Relays should parse the event directory from the event path `d` tag and index the event by it. Relays should support "directory listing" of kind `30079` events using the `#f` filter, such as `{"#f": ["user/profile"]}`.

For backward compatibility, the event directory should also be saved in the event's `f` tag (for "folder"), which is already indexed by some relay implementations, and can be queried using the `#f` filter.

Event content should be a JSON-encoded value. An empty object `{}` signifies that the entry at the event path is itself a directory. For example, when saving `user/profile/name`: `Bob`, you should also save `user/profile`: `{}` so the subdirectory can be listed under `user`.

In directory names, slashes should be escaped with a double slash.

Example

Event

```json
{
  "tags": [
    ["d", "user/profile/name"],
    ["f", "user/profile"]
  ],
  "content": "\"Bob\"",
  "kind": 30079,
  ...
}
```

Query

```json
{
  "#f": ["user/profile"],
  "authors": ["[pubkey]"]
}
```

Motivation

To make Nostr an "everything app," we need a sustainable way to support new kinds of applications. Browsing Nostr data by human-readable nested directories and paths rather than obscure event kind numbers makes the data more manageable.

Numeric event kinds are not sustainable for the infinite number of potential applications. With numeric event kinds, developers need to find an unused number for each new application and announce it somewhere, which is cumbersome and not scalable.

Directories can also replace monolithic list events like follow lists or profile details. You can update a single directory entry such as `user/profile/name` or `groups/follows/[pubkey]` without causing an overwrite of the whole profile or follow list when your client is out-of-sync with the most recent list version, as often happens on Nostr.

Using `d`-tagged replaceable events for reactions, such as `{tags: [["d", "reactions/[eventId]"]], content: "\"👍\"", kind: 30079, ...}`, would make un-reacting trivial: just publish a new event with the same `d` tag and an empty content. Toggling a reaction on and off would not cause a flurry of new reaction & delete events that all need to be persisted.

Implementations

- Relays that support tag-replaceable events and indexing by arbitrary tags (in this case `f`) already support this feature.
- IrisDB client side library: treelike data structure with subscribable nodes.

https://github.com/nostr-protocol/nips/pull/1266
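To illustrate the intended flow, a client could write and list entries roughly as below. This is a sketch only: `publish` and `query` are hypothetical stand-ins for whatever relay-pool helpers a client already has.

```typescript
// Hypothetical relay-pool helpers; any real client would use its own.
declare function publish(evt: { kind: number; tags: string[][]; content: string }): Promise<void>;
declare function query(filter: Record<string, unknown>): Promise<unknown[]>;

// Write one entry at an event path (kind 30079), indexing its directory in `f`.
async function setEntry(path: string, value: unknown): Promise<void> {
  const i = path.lastIndexOf("/");
  const dir = i >= 0 ? path.slice(0, i) : ""; // "user/profile/name" -> "user/profile"
  await publish({
    kind: 30079,
    tags: [["d", path], ["f", dir]],
    content: JSON.stringify(value),
  });
}

// "Directory listing": fetch all entries under a directory via the #f filter.
async function listDir(pubkey: string, dir: string): Promise<unknown[]> {
  return query({ kinds: [30079], authors: [pubkey], "#f": [dir] });
}

async function demo() {
  await setEntry("user/profile/name", "Bob"); // the entry itself
  await setEntry("user/profile", {});         // mark the parent as a directory
  console.log(await listDir("<pubkey>", "user/profile"));
}
```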
-
@ b12b632c:d9e1ff79
2024-04-24 20:21:27What's Blossom?
Blossom offers a bunch of HTTP endpoints that let Nostr users stash and fetch binary data on public servers using the SHA256 hash as a universal ID.
You can find more precise information about Blossom in the Nostr article published today by hzrd149, the developer behind it:
nostr:naddr1qqxkymr0wdek7mfdv3exjan9qgszv6q4uryjzr06xfxxew34wwc5hmjfmfpqn229d72gfegsdn2q3fgrqsqqqa28e4v8zy
You find the Blossom github repo here:
GitHub - hzrd149/blossom: Blobs stored simply on mediaservers https://github.com/hzrd149/blossom
Meet Blobs
Blobs are files with SHA256 hashes as IDs, making them unique and secure. You can compute these IDs from the files themselves using the sha256 hashing algorithm (what you get when you run `sha256sum bitcoin.pdf`).
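For instance, computing a blob ID client-side is just a SHA-256 over the raw bytes, hex-encoded, and should match what `sha256sum` prints (a small sketch assuming the `@noble/hashes` package):

```typescript
import { sha256 } from "@noble/hashes/sha256";
import { bytesToHex } from "@noble/hashes/utils";

// A blob's ID is the SHA-256 hash of its raw bytes, hex-encoded.
function blobId(fileBytes: Uint8Array): string {
  return bytesToHex(sha256(fileBytes));
}
```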
Meet Drives
Drives are like organized events on Nostr, mapping blobs to filenames and extra info. It's like setting up a roadmap for your data.
How do Servers Work?
Blossom servers have four endpoints for users to upload and handle blobs:
- `GET /<sha256>`: Get blobs by their SHA256 hash, maybe with a file extension.
- `PUT /upload`: Chuck your blobs onto the server, verified with signed Nostr events.
- `GET /list/<pubkey>`: Peek at a list of blobs tied to a specific public key for smooth management.
- `DELETE /<sha256>`: Trash blobs from the server when needed, keeping things tidy.
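A minimal sketch of how a client might talk to the first two endpoints (the server URL is an example; the exact shape of the signed authorization event is defined by the Blossom spec, so the header below is only indicative):

```typescript
// GET /<sha256>: fetch a blob by its hash.
async function getBlob(server: string, sha256hex: string): Promise<Uint8Array> {
  const res = await fetch(`${server}/${sha256hex}`);
  if (!res.ok) throw new Error(`blob not found: ${sha256hex}`);
  return new Uint8Array(await res.arrayBuffer());
}

// PUT /upload: store a blob, authorized by a signed Nostr event.
// The exact auth-event format is specified by Blossom; this header is indicative.
async function uploadBlob(server: string, data: Uint8Array, signedAuthEvent: object) {
  return fetch(`${server}/upload`, {
    method: "PUT",
    headers: { Authorization: `Nostr ${btoa(JSON.stringify(signedAuthEvent))}` },
    body: data,
  });
}
```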
You can find detailed information about the Blossom server implementation here:
https://github.com/hzrd149/blossom/blob/master/Server.md
...and the blossom-server source code is here:
https://github.com/hzrd149/blossom-server
What's Blossom Drive?
Think of Blossom Drive as the "front-end" (or a public cloud drive) of Blossom servers, letting you upload, manage, and share your file and folder blobs.
Source code is available here:
https://github.com/hzrd149/blossom-drive
Developers
If you want to add Blossom to your Nostr client/app, the blossom-client-sdk, which explains how it works (with a few examples 🙏), is published here:
https://github.com/hzrd149/blossom-client-sdk
How to self-host Blossom server & Blossom Drive
We'll use Docker Compose to set up Blossom server & drive. I included Nginx Proxy Manager because it's the web proxy I use for all the Fractalized self-hosted services:
Create a new docker-compose file:
```bash
~$ nano docker-compose.yml
```
Insert this content into the file:
```yaml
version: '3.8'

services:

  blossom-drive:
    container_name: blossom-drive
    image: pastagringo/blossom-drive-docker
    ports:
      - '80:80'

  blossom-server:
    container_name: blossom-server
    image: 'ghcr.io/hzrd149/blossom-server:master'
    ports:
      - '3000:3000'
    volumes:
      - './blossom-server/config.yml:/app/config.yml'
      - 'blossom_data:/app/data'

  nginxproxymanager:
    container_name: nginxproxymanager
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./nginxproxymanager/data:/data
      - ./nginxproxymanager/letsencrypt:/etc/letsencrypt
      - ./nginxproxymanager/_hsts_map.conf:/app/templates/_hsts_map.conf

volumes:
  blossom_data:
```
You now need to personalize the blossom-server config.yml:
```bash
~$ mkdir blossom-server
~$ nano blossom-server/config.yml
```
Insert this content to the file (CTRL+X & Y to save/exit):
```yaml
# Used when listing blobs
publicDomain: https://blossom.fractalized.net

databasePath: data/sqlite.db

discovery:
  # find files by querying nostr relays
  nostr:
    enabled: true
    relays:
      - wss://nostrue.com
      - wss://relay.damus.io
      - wss://nostr.wine
      - wss://nos.lol
      - wss://nostr-pub.wellorder.net
      - wss://nostr.fractalized.net
  # find files by asking upstream CDNs
  upstream:
    enabled: true
    domains:
      - https://cdn.satellite.earth # don't set your blossom server here!

storage:
  # local or s3
  backend: local
  local:
    dir: ./data
  # s3:
  #   endpoint: https://s3.endpoint.com
  #   bucket: blossom
  #   accessKey: xxxxxxxx
  #   secretKey: xxxxxxxxx
  #   If this is set the server will redirect clients when loading blobs
  #   publicURL: https://s3.region.example.com/

# rules are checked in descending order. if a blob matches a rule it is kept
# "type" (required) the type of the blob, "*" can be used to match any type
# "expiration" (required) time passed since last accessed
# "pubkeys" (optional) a list of owners
# any blobs not matching the rules will be removed
rules:
  # mime type of blob
  - type: text/*
    # time since last accessed
    expiration: 1 month
  - type: "image/*"
    expiration: 1 week
  - type: "video/*"
    expiration: 5 days
  - type: "model/*"
    expiration: 1 week
  - type: "*"
    expiration: 2 days

upload:
  # enable / disable uploads
  enabled: true
  # require auth to upload
  requireAuth: true
  # only check rules that include "pubkeys"
  requirePubkeyInRule: false

list:
  requireAuth: false
  allowListOthers: true

tor:
  enabled: false
  proxy: ""
```
You need to update a few values with your own:

- Your own Blossom server public domain:

```yaml
publicDomain: https://YourBlossomServer.YourDomain.tld
```

  and the upstream domains where Nostr clients will also verify whether the Blossom server owns the file blob:

```yaml
upstream:
  enabled: true
  domains:
    - https://cdn.satellite.earth # don't set your blossom server here!
```

- The Nostr relays where you want to publish your Blossom events (I added my own Nostr relay):

```yaml
discovery:
  # find files by querying nostr relays
  nostr:
    enabled: true
    relays:
      - wss://nostrue.com
      - wss://relay.damus.io
      - wss://nostr.wine
      - wss://nos.lol
      - wss://nostr-pub.wellorder.net
      - wss://nostr.fractalized.net
```
Everything is set up! You can now compose your docker-compose file:
```bash
~$ docker compose up -d
```
I will let you check this article to learn how to configure and use Nginx Proxy Manager.
You can check both Blossom containers' logs with this command:

```bash
~$ docker compose logs -f blossom-drive blossom-server
```
Regarding the Nginx Proxy Manager settings for Blossom, here is the configuration I used:
PS: it seems the naming convention for this kind of web service is "CDN" (content delivery network). It's not impossible that in the near future I rename my subdomain blossom.fractalized.net to cdn.blossom.fractalized.net and blossom-drive.fractalized.net to blossom.fractalized.net 😅
Do what you prefer!
After having configured everything, you can now access the Blossom server by going to your Blossom server subdomain. You should see a homepage like the one below:
Same thing for the Blossom Drive, you should see this homepage:
You can now log in with your preferred method. In my case, I log in on Blossom Drive with my NIP-07 Chrome extension.
You now need to go to the "Servers" tab to add some Blossom servers, including the fresh one you just installed.
You can now create your first Blossom Drive by clicking on "+ New" > "Drive" on the top left button:
Fill in your desired Blossom drive name, select the media servers where you want to host your files, and click on "Create":
PS: you can enable the "Encrypted" option, but as hzrd149 said in his Nostr note about Blossom:
"There is also the option to encrypt drives using NIP-49 password encryption. although its not tested at all so don't trust it, verify"
You are now able to upload some files (a picture for instance):
And obtain the HTTP direct link by clicking on the "Copy Link" button:
If you check URL image below, you'll see that it is served by Blossom:
It's done ! ✅
You can now upload your files to Blossom across several Blossom servers to let them survive the future internet apocalypse.
Blossom was released just a few days ago; many new features will come!

Don't hesitate to follow hzrd149 on Nostr to keep up with future updates ⚡🔥
See you soon in another Fractalized story!
PastaGringo 🤖⚡ -
@ 42342239:1d80db24
2024-04-05 08:21:50Trust is a topic increasingly being discussed. Whether it is trust in each other, in the media, or in our authorities, trust is generally seen as a cornerstone of a strong and well-functioning society. The topic was also the theme of the World Economic Forum at its annual meeting in Davos earlier this year. Even among central bank economists, the subject is becoming more prevalent. Last year, Agustín Carstens, head of the BIS ("the central bank of central banks"), said that "[w]ith trust, the public will be more willing to accept actions that involve short-term costs in exchange for long-term benefits" and that "trust is vital for policy effectiveness".
It is therefore interesting when central banks or others pretend as if nothing has happened even when trust has been shattered.
Just as in Sweden and in hundreds of other countries, Canada is planning to introduce a central bank digital currency (CBDC), a new form of money where the central bank or its intermediaries (the banks) will have complete insight into citizens' transactions. Payments or money could also be made programmable. Everything from transferring ownership of a car automatically after a successful payment to the seller, to payments being denied if you have traveled too far from home.
"If Canadians decide a digital dollar is necessary, our obligation is to be ready" says Carolyn Rogers, Deputy Head of Bank of Canada, in a statement shared in an article.
So, what do the citizens want? According to a report from the Bank of Canada, a whopping 88% of those surveyed believe that the central bank should refrain from developing such a currency. About the same number (87%) believe that authorities should guarantee the opportunity to pay with cash instead. And nearly four out of five people (78%) do not believe that the central bank will care about people's opinions. What about trust again?
Canadians likely remember the Trudeau government's actions against the "Freedom Convoy". The Freedom Convoy consisted of, among others, truck drivers protesting the country's strict pandemic policies, blocking roads in the capital Ottawa at the beginning of 2022. The government invoked never-before-used emergency measures to, among other things, "freeze" people's bank accounts. Suddenly, truck drivers and those with a "connection" to the protests were unable to pay their electricity bills or insurance premiums, for instance. Superficially, this may not sound so serious, but ultimately, it could mean that their families end up in cold houses (due to electricity being cut off) and that they lose the ability to work (driving uninsured vehicles is not taken lightly). And this applied not only to the truck drivers but also to those with a "connection" to the protests. No court rulings were required.
Without the freedom to pay for goods and services, i.e. the freedom to transact, one has no real freedom at all, as several participants in the protests experienced.
In January of this year, a federal judge concluded that the government's actions two years ago were unlawful when it invoked the emergency measures. The use did not display "features of rationality - motivation, transparency, and intelligibility - and was not justified in relation to the relevant factual and legal limitations that had to be considered". He also argued that the use was not in line with the constitution. There are also reports alleging that the government fabricated evidence to go after the demonstrators. The case is set to continue to the highest court. Prime Minister Justin Trudeau and Finance Minister Chrystia Freeland have also recently been sued for the government's actions.
The Trudeau government's use of emergency measures two years ago sadly only provides a glimpse of what the future may hold if CBDCs or similar systems replace the current monetary system with commercial bank money and cash. In Canada, citizens do not want the central bank to proceed with the development of a CBDC. In Canada, citizens want to strengthen the role of cash. In Canada, citizens suspect that the central bank will not listen to them. All while the central bank feverishly continues working on the new system...
"Trust is vital", said Agustín Carstens. But if policy-makers do not pause for a thoughtful reflection even when trust has been utterly shattered as is the case in Canada, are we then not merely dealing with lip service?
And how much trust do these policy-makers then deserve?
-
@ 42342239:1d80db24
2024-03-31 11:23:36Biologist Stuart Kauffman introduced the concept of the "adjacent possible" in evolutionary biology in 1996. A bacterium cannot suddenly transform into a flamingo; rather, it must rely on small exploratory changes (of the "adjacent possible") if it is ever to become a beautiful pink flying creature. The same principle applies to human societies, all of which exemplify complex systems. It is indeed challenging to transform shivering cave-dwellers into a space travelers without numerous intermediate steps.
Imagine a water wheel – in itself, perhaps not such a remarkable invention. Yet the water wheel transformed the hard-to-use energy of water into easily exploitable rotational energy. A little of the "adjacent possible" had now been explored: water mills, hammer forges, sawmills, and textile factories soon emerged. People who had previously ground by hand or threshed with the help of oxen could now spend their time on other things. The principles of the water wheel also formed the basis for wind power. Yes, a multitude of possibilities arose – reminiscent of the rapid development during the Cambrian explosion. When the inventors of bygone times constructed humanity's first water wheel, they thus expanded the "adjacent possible". Surely, the experts of old sought swift prohibitions. Not long ago, our expert class claimed that the internet was going to be a passing fad, or that it would only have the same modest impact on the economy as the fax machine. For what it's worth, there were even attempts to ban the number zero back in the day.
The pseudonymous creator of Bitcoin, Satoshi Nakamoto, wrote in Bitcoin's whitepaper that "[w]e have proposed a system for electronic transactions without relying on trust." The Bitcoin system enables participants to agree on what is true without needing to trust each other, something that has never been possible before. In light of this, it is worth noting that trust in the federal government in the USA is among the lowest levels measured in almost 70 years. Trust in media is at record lows. Moreover, in countries like the USA, the proportion of people who believe that one can trust "most people" has decreased significantly. "Rebuilding trust" was even the theme of the World Economic Forum at its annual meeting. It is evident, even in the international context, that trust between countries is not at its peak.
Over a fifteen-year period, Bitcoin has enabled electronic transactions without its participants needing to rely on a central authority, or even on each other. This may not sound like a particularly remarkable invention in itself. But like the water wheel, one must acknowledge that new potential seems to have been put in place, potential that is just beginning to be explored. Kauffman's "adjacent possible" has expanded. And despite dogmatic statements to the contrary, no one can know for sure where this might lead.
The discussion of Bitcoin or crypto currencies would benefit from greater humility and openness, not only from employees or CEOs of money laundering banks but also from forecast-failing central bank officials. When for instance Chinese Premier Zhou Enlai in the 1970s was asked about the effects of the French Revolution, he responded that it was "too early to say" - a far wiser answer than the categorical response of the bureaucratic class. Isn't exploring systems not based on trust is exactly what we need at this juncture?
-
@ 8d34bd24:414be32b
2024-10-23 15:30:53Check out earlier posts in the God Makes Himself Known series:

- God Demonstrates His Power: Part 1 (Egypt)
- God Defends His Honor: Part 2 (Philistines)
- The One True God: Part 3 (Who deserves worship)
- Jesus is God: Part 4 (Jews & Gentiles)
Throughout history, God has shown His power and glory at different times and different places and to different people, but never shown it to everyone at once. The time is coming when God will show His power, glory, and judgement to all mankind.
But we do not want you to be uninformed, brethren, about those who are asleep, so that you will not grieve as do the rest who have no hope. For if we believe that Jesus died and rose again, even so God will bring with Him those who have fallen asleep in Jesus. For this we say to you by the word of the Lord, that we who are alive and remain until the coming of the Lord, will not precede those who have fallen asleep. For the Lord Himself will descend from heaven with a shout, with the voice of the archangel and with the trumpet of God, and the dead in Christ will rise first. Then we who are alive and remain will be caught up together with them in the clouds to meet the Lord in the air, and so we shall always be with the Lord. (1 Thessalonians 4:13-17) {emphasis mine}
God’s first end times miracle will be the rapture of the church. All true believers (dead & alive) will be taken out of this world, given new, perfect, eternal bodies, and join Jesus in paradise. The world will see these Christians disappear. They will be there one moment and gone the next.
I always thought it strange that, after the Bible's prophecies of the church being raptured, most people would not believe God had removed His followers. I think something (maybe fallen angels pretending to be aliens who have rescued mankind) will make people not panic about millions of Christians all disappearing in an instant. Maybe it will be based solely on people's hard hearts and their unwillingness to consider an unpleasant thought (that they were left behind), but life will go on.
The Bible makes it sound like there will be a short time of disarray, but then a man will come who will make peace for a time (the Anti-Christ). After 3.5 years, things will get worse than they have ever been in the history of mankind.
The 7 Seals
In the end times, the world will experience unfathomably hard times. At first, everything that happens will be things that have happened before, just worse. In Revelation 6, we see each of the 7 seals opened:

- White Horse = Conquering
- Red Horse = Takes away peace so men slay one another
- Black Horse = Famine
- Pale (Ashen) Horse = A quarter of the earth killed by war, famine, pestilence, and animals.
- Those who trust in Jesus are martyred.
- Awesome, terrifying natural disasters:
  - a great earthquake,
  - the sun becomes black,
  - the moon becomes like blood,
  - the stars fall to earth,
  - the sky is split like a scroll,
  - every mountain & island moved out of its place,
  - everyone from kings to slaves hides in fear due to the wrath of the Lord.
The 7 Trumpets
The opening of the 7th Seal leads to the 7 trumpets, which are more terrifying than the first 6 seals and come in faster succession. These miraculous signs of God's wrath are beyond anything mankind has ever experienced, and to a degree never experienced. First there is a half hour of silence and reprieve before the sounding of the trumpets in Revelation 8 & 9:

- Hail & fire, mixed with blood, fall to earth and "a third of the earth was burned up, and a third of the trees were burned up, and all the green grass was burned up." This burning will likely destroy crops and kill livestock, causing famine.
- "Something like a great mountain burning with fire [asteroid?] was thrown into the sea; and a third of the sea became blood, and a third of the creatures which were in the sea and had life, died; and a third of the ships were destroyed." A mountain-sized asteroid would cause terrible tsunamis. The death of a third of the sea creatures will cause worse famine. The destruction of a third of the ships will cause a disruption in international trade, leading to shortages and skyrocketing prices.
- "A great star [comet?] fell from heaven, burning like a torch, and it fell on a third of the rivers and on the springs of waters." A third of the waters of earth are poisoned. If you don't have clean water, you die. People will be dying of thirst and of drinking the poisoned waters out of desperation.
- "A third of the sun and a third of the moon and a third of the stars were struck, so that a third of them would be darkened." The world will experience fearful darkness. Sinful man hid his evil deeds under the cover of darkness and will now experience a darkness he doesn't want.
- A "star from heaven" (most likely a fallen angel) falls to earth with the key to the bottomless pit, and "smoke went up out of the pit, like the smoke of a great furnace; and the sun and the air were darkened by the smoke of the pit. Then out of the smoke came locusts [fallen angels that procreated with women: see Genesis 6] upon the earth." These "locusts" will torment those who reject God and are forbidden from harming the remaining plants. The torment will last 5 months and will be so bad that people will desperately want to die.
- Four angels will "kill a third of mankind." Death comes through an army of 200 million that kills with "fire and smoke and brimstone" that comes out of their mouths. The fourth seal resulted in the death of a quarter of mankind. The sixth trumpet will then lead to an additional third of mankind dying.
The 2 Witnesses
God’s wrath will be unimaginably bad, but God still cares and he sends two witnesses to witness to the world to make sure that every person on earth has the opportunity to repent and turn to God. Many will, but many more will reject God despite His miraculous judgements upon them.
And I will grant authority to my two witnesses, and they will prophesy for twelve hundred and sixty days, clothed in sackcloth.” These are the two olive trees and the two lampstands that stand before the Lord of the earth. And if anyone wants to harm them, fire flows out of their mouth and devours their enemies; so if anyone wants to harm them, he must be killed in this way. These have the power to shut up the sky, so that rain will not fall during the days of their prophesying; and they have power over the waters to turn them into blood, and to strike the earth with every plague, as often as they desire. (Revelation 11:3-6) {emphasis mine}
God’s two witnesses will:
- Prophesy
- Shoot flames out of their mouths to devour those who want to harm them
- Stop the rain as judgement
- Turn water into blood
- Strike the earth with every plague
After 1260 days, God will allow:
- The beast to kill them
- Their bodies to lie in the streets, watched by the world as it celebrates
But after 3.5 days, the witnesses will be raised from the dead in view of every person on earth and taken up in the air to heaven. God's unmistakable power will be demonstrated in such a way that no person will have any excuse to reject Him as God and Creator.
And in that hour there was a great earthquake, and a tenth of the city fell; seven thousand people were killed in the earthquake, and the rest were terrified and gave glory to the God of heaven. (Revelation 11:13)
At this point the unholy trinity (Satan always copies God because he can’t create anything himself) will rule the world and will begin to seriously persecute all those who have believed in Jesus and trusted Him as savior.
Amazingly, after all that God has demonstrated, the majority of people will worship the Anti-Christ rather than their creator who is the one, true God.
Additional Witnesses
As if the two witnesses seen by every person on earth were not enough, God sends other witnesses. He makes sure that every person truly knows who He is, what everyone is expected to do, and the consequences of refusing.
First God marks 144,000 Jews, 12,000 from every tribe, to be witnesses throughout the world. Then God sends an angel up in the sky to witness:
And I saw another angel flying in midheaven, having an eternal gospel to preach to those who live on the earth, and to every nation and tribe and tongue and people; and he said with a loud voice, “Fear God, and give Him glory, because the hour of His judgment has come; worship Him who made the heaven and the earth and sea and springs of waters.” (Revelation 14:6-7)
He sends a second angel to warn that Babylon the Great has fallen. He then sends a third angel to give warning to people against following the Beast (Anti-Christ) or receiving his mark.
“If anyone worships the beast and his image, and receives a mark on his forehead or on his hand, he also will drink of the wine of the wrath of God, which is mixed in full strength in the cup of His anger; and he will be tormented with fire and brimstone in the presence of the holy angels and in the presence of the Lamb. And the smoke of their torment goes up forever and ever; they have no rest day and night, those who worship the beast and his image, and whoever receives the mark of his name.” (Revelation 14:9b-11) {emphasis mine}
Most of those who believe and are saved during the seven year tribulation will be martyred, but they are promised “rest from their labors” and that “their deeds follow with them”:
… “Blessed are the dead who die in the Lord from now on!’ ” “Yes,” says the Spirit, “so that they may rest from their labors, for their deeds follow with them.” (Revelation 14:13b)
The 7 Bowls
The final miracles that show the wrath of God against those who refuse to trust in Him come in the 7 bowls (or vials). These are poured out in very rapid succession, probably hours or a few days.
- “… a loathsome and malignant sore on the people who had the mark of the beast and who worshiped his image.” (Revelation 16:2b)
- “the sea … became blood like that of a dead man; and every living thing in the sea died.” (Revelation 16:3b) Notice that this blood isn’t like the other instances, where it is like fresh blood. This blood is “like that of a dead man.” Every creature in the sea died, and I’m sure it caused a rotting, putrid mess that smelled of death.
- “… rivers and the springs of waters … became blood.” (Revelation 16:4b) God gave those who had murdered the prophets blood as the only thing they could drink, as a just punishment.
- “… the sun … was given to it to scorch men with fire.” (Revelation 16:8b)
- “… his [the beast’s] kingdom became darkened; and they gnawed their tongues because of pain.” (Revelation 16:10b) [clarification mine]
- “… the Euphrates … water was dried up, so that the way would be prepared for the kings from the east.” (Revelation 16:12b)
- “… a loud voice came out of the temple from the throne, saying, “It is done.” (Revelation 16:17b) and “… there was a great earthquake, such as there had not been since man came to be upon the earth …” (Revelation 16:18b) “And every island fled away, and the mountains were not found. And huge hailstones, about one hundred pounds each, came down from heaven upon men. …” (Revelation 16:20-21a)
More than half of the population of earth will be killed during the tribulation. Some will repent and turn to God, but then pay with their lives. Others will willfully disobey and reject God and refuse to submit to Him despite knowing who He is and why He should be worshipped. God will have shared His mercy and then His wrath in order to turn people back to Him, but too many will be hard hearted and reject Him. They will be guilty and every punishment they receive will be well deserved. Justice will ultimately be served.
The 4 Hallelujahs
Right before the end the whole earth will hear the voice of a great multitude in heaven saying:
“Hallelujah! Salvation and glory and power belong to our God; because His judgments are true and righteous; for He has judged the great harlot who was corrupting the earth with her immorality, and He has avenged the blood of His bond-servants on her.” And a second time they said, “Hallelujah! Her smoke rises up forever and ever.” And the twenty-four elders and the four living creatures fell down and worshiped God who sits on the throne saying, “Amen. Hallelujah!” And a voice came from the throne, saying, “Give praise to our God, all you His bond-servants, you who fear Him, the small and the great.” Then I heard something like the voice of a great multitude and like the sound of many waters and like the sound of mighty peals of thunder, saying, “Hallelujah! For the Lord our God, the Almighty, reigns. (Revelation 19:4-6) {emphasis mine}
The 2nd Coming of Jesus
Last of all is the greatest moment in all of history when Jesus returns to earth as Lord and King to claim His own and to judge those who rejected Him and lived evil lives.
And I saw heaven opened, and behold, a white horse, and He who sat on it is called Faithful and True, and in righteousness He judges and wages war. His eyes are a flame of fire, and on His head are many diadems; and He has a name written on Him which no one knows except Himself. He is clothed with a robe dipped in blood, and His name is called The Word of God. (Revelation 19:11-13)
The most magnificent man who ever lived, the eternal, creator God comes down from heaven in the sight of all with all of His followers behind Him dressed in white, but all those, both man and angel, who refused to worship Him and submit to Him decide to line up for battle against Him. They somehow think they have a chance against the very one who upholds their life with the power of His mind. All of the men who took the mark of the beast will line up for battle against their creator.
The Millennium
Of course these rebels have no chance. The beast and false prophet are thrown into the lake of fire. The rest are destroyed by the sword in the mouth of Jesus while His followers watch, never having to lift a hand or dirty their white attire. Satan is bound for a thousand years, and Jesus reigns over all the earth for a thousand years, fulfilling the promises to Abraham, Isaac, Jacob, Moses, Joshua, David, and everyone else in the Bible. There will be a thousand years without the evil influences of Satan and his fallen angels. What a wonderful time that will be, but sadly, not all will fully submit to the perfect, sinless, creator God, and one more moment of glory will be shown before the earth is burned with fire.
Judgement Throne of God
At the end of 1,000 years, there will be one more rebellion when Satan is released.
I used to think it strange that God would allow Satan to be released again to mislead, but I think it is a test to expose those who are not really trusting in Jesus. During the millennial reign, there will not be much outright sin, but not all will follow with all their heart, mind, soul, and strength. Many will be going through the motions, going along to stay out of trouble, not because their hearts are following Jesus. When temptation comes along, many will turn away from God again into judgment. Satan will gather people from all the nations to surround the saints, but God will send fire down from heaven to devour all of the rebels.
And the devil who deceived them was thrown into the lake of fire and brimstone, where the beast and the false prophet are also; and they will be tormented day and night forever and ever. (Revelation 20:10)
Great White Throne Judgement
In the end, everyone that rejected Jesus will have their lives judged at the Great White Throne.
And I saw the dead, the great and the small, standing before the throne, and books were opened; and another book was opened, which is the book of life; and the dead were judged from the things which were written in the books, according to their deeds. (Revelation 20:12)
We are all sinners and do not want to be given what we deserve. We want God’s grace. Those who had trusted Jesus before the tribulation were judged by Jesus to determine their rewards. Those who rejected Jesus will be judged at the Great White Throne Judgement and will experience the just wrath of our holy, creator God. I hope you will be one of those who trust Jesus, otherwise you will receive your just punishment:
And if anyone’s name was not found written in the book of life, he was thrown into the lake of fire. (Revelation 20:15)
By the end every person who has ever lived on earth will know who God is and will understand how much they have failed Him. Every knee will bow — whether in awe or in terror.
God made Himself known through His creation. God made Himself known through His blessings. God made Himself known by coming to earth to die for mankind to take away our sins. God made Himself known through His written word. God made Himself known through His wrath. Nobody can reject Him and claim they did not know.
Trust Jesus.

your sister in Christ,
Christy
Bible verses are NASB (New American Standard Bible) 1995 edition unless otherwise stated
-
-
@ a012dc82:6458a70d
2024-10-23 15:23:26Table Of Content
- The Origins of the Debate
- The Rise of Bitcoin
- HODLing: A Strategy to Weather the Storm
- The Skepticism of Peter Schiff
- The Bitcoin Community's Response
- Conclusion
- FAQ
In the world of finance, few debates have garnered as much attention as the showdown between Peter Schiff and Bitcoin. Peter Schiff, a well-known economist and outspoken critic of the cryptocurrency, has long been skeptical of its value and viability. However, as the Bitcoin market experienced unprecedented growth and adoption, the concept of "HODLing" emerged victorious. This article delves into the clash between Schiff and Bitcoin, exploring the reasons behind their differing views and ultimately highlighting how the strategy of HODLing prevailed.
The Origins of the Debate
It all began when Bitcoin, the first decentralized cryptocurrency, was introduced to the world in 2009. While many were intrigued by the concept of digital currency, Schiff expressed skepticism and voiced his concerns regarding its lack of intrinsic value. Schiff, a proponent of traditional investments like gold, believed that Bitcoin's meteoric rise was nothing more than a speculative bubble waiting to burst. This clash of ideologies set the stage for an ongoing battle between Schiff and Bitcoin enthusiasts.
The Rise of Bitcoin
Despite Schiff's reservations, Bitcoin steadily gained traction and captured the imagination of investors worldwide. Its decentralized nature, limited supply, and potential as a store of value attracted a growing number of individuals looking to diversify their portfolios. Bitcoin's disruptive technology and its ability to facilitate fast and secure transactions also played a significant role in its ascent.
HODLing: A Strategy to Weather the Storm
As Bitcoin's price exhibited volatility, a strategy emerged among its proponents: HODLing. This term, originating from a misspelling of "hold" in a Bitcoin forum post, refers to the practice of holding onto Bitcoin for the long term, irrespective of short-term market fluctuations. HODLers believed that Bitcoin's underlying technology and its potential to revolutionize the financial industry made it a worthwhile investment, regardless of temporary setbacks.
The Skepticism of Peter Schiff
While Bitcoin continued to surge in popularity, Peter Schiff remained steadfast in his skepticism. He argued that Bitcoin's lack of intrinsic value made it a speculative asset rather than a legitimate currency or store of wealth. Schiff often compared Bitcoin to gold, highlighting the tangible and historical significance of the precious metal. He warned investors of the potential risks associated with Bitcoin, emphasizing the possibility of a catastrophic collapse.
The Bitcoin Community's Response
Bitcoin enthusiasts fervently defended their chosen asset against Schiff's criticism. They pointed out that while Bitcoin may not possess intrinsic value like gold, its value is derived from its decentralized network, scarcity, and the trust placed in its underlying technology. They highlighted the benefits of Bitcoin's borderless transactions, low fees, and the potential for financial inclusion. The community also stressed that Bitcoin's volatility was merely a temporary characteristic during its early stages of adoption.
Conclusion
In the ultimate showdown between Peter Schiff and Bitcoin, the concept of HODLing emerged victorious. Despite Schiff's skepticism, Bitcoin's decentralized nature, technological innovation, and growing adoption propelled it forward. The strategy of HODLing, fueled by the belief in Bitcoin's long-term potential, allowed investors to weather the storm of market volatility. As the world continues to embrace digital currencies and explore the possibilities they offer, the clash between traditionalists and innovators will persist. The story of Peter Schiff vs Bitcoin serves as a testament to the ever-evolving landscape of finance.
FAQ
Is Bitcoin a reliable investment in the long term?
Bitcoin's reliability as an investment depends on one's risk tolerance and long-term perspective. While it has exhibited significant growth, its volatility and regulatory uncertainties make it a speculative asset that carries inherent risks.

Can Bitcoin replace traditional forms of currency?
Bitcoin's potential to replace traditional forms of currency is still highly debated. While it offers certain advantages, such as faster and cheaper cross-border transactions, widespread adoption and regulatory clarity would be necessary for it to become a mainstream currency.

How does HODLing differ from traditional investing?
HODLing differs from traditional investing as it involves holding onto an asset for the long term, regardless of short-term market fluctuations. Traditional investing often involves active management, buying and selling based on market trends and analysis.
That's all for today
If you want more, be sure to follow us on:
NOSTR: croxroad@getalby.com
Instagram: @croxroadnews.co
Youtube: @croxroadnews
Store: https://croxroad.store
Subscribe to CROX ROAD Bitcoin Only Daily Newsletter
https://www.croxroad.co/subscribe
DISCLAIMER: None of this is financial advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. Please be careful and do your own research.
-
-
@ b12b632c:d9e1ff79
2024-03-23 16:42:49CASHU AND ECASH ARE EXPERIMENTAL PROJECTS. BY THE VERY NATURE OF CASHU ECASH, IT'S REALLY EASY TO LOSE YOUR SATS THROUGH LACK OF KNOWLEDGE OF THE SYSTEM MECHANICS. PLEASE, FOR YOUR OWN GOOD, ALWAYS USE SMALL AMOUNTS OF SATS IN THE BEGINNING TO FULLY UNDERSTAND HOW THE SYSTEM WORKS. ECASH IS BASED ON A TRUST RELATIONSHIP BETWEEN YOU AND THE MINT OWNER; PLEASE DON'T TRUST AN ECASH MINT YOU DON'T KNOW. IT IS POSSIBLE TO GENERATE UNLIMITED ECASH TOKENS FROM A MINT; THE ONLY WAY TO VALIDATE THE REAL EXISTENCE OF ECASH TOKENS IS TO DO A MULTIMINT SWAP (BETWEEN MINTS). PLEASE, ALWAYS DO A MULTIMINT SWAP IF YOU RECEIVE ECASH FROM SOMEONE YOU DON'T KNOW/TRUST. NEVER TRUST A MINT YOU DON'T KNOW!
IF YOU WANT TO RUN AN ECASH MINT WITH A BTC LIGHTNING NODE AS BACK-END, PLEASE DEDICATE THAT LN NODE TO YOUR ECASH MINT. BAD MANAGEMENT OF YOUR LN NODE COULD CAUSE PEOPLE TO LOSE THEIR SATS BECAUSE THEY ONCE TRUSTED YOUR MINT AND YOU DID NOT MANAGE THINGS RIGHT.
What's ecash/Cashu ?
I recently listened to a fascinating interview with calle 👁️⚡👁, invited by the podcast What Bitcoin Did, about the (no longer so) new Cashu protocol.
Cashu is a free and open-source Chaumian ecash project built on Bitcoin, recently created to let users send/receive ecash over the BTC Lightning Network. Cashu's main goal is to finally provide a privacy-by-design mechanism that allows anonymous Bitcoin transactions.
Ecash for your privacy.
A Cashu mint does not know who you are, what your balance is, or who you're transacting with. Users of a mint can exchange ecash privately without anyone being able to know who the involved parties are. Bitcoin payments are executed without anyone able to censor specific users.
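To make that privacy claim concrete, here is a toy sketch of the blind Diffie-Hellman key exchange idea behind Cashu, written in Python with the third-party `coincurve` secp256k1 library (my choice for illustration; Nutshell ships its own crypto code). The `hash_to_point` helper is a simplified stand-in, NOT Cashu's real hash_to_curve construction. The point is that the mint signs a blinded point without ever learning the wallet's secret, so it cannot later link a redeemed token to the user it was minted for.

```
# Toy sketch of the BDHKE blind-signature flow; illustrative only.
import hashlib
from coincurve import PrivateKey, PublicKey

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def hash_to_point(secret: bytes) -> PublicKey:
    # Simplified stand-in: derive a scalar from the secret, multiply the generator.
    scalar = int.from_bytes(hashlib.sha256(secret).digest(), "big") % (N - 1) + 1
    return PrivateKey(scalar.to_bytes(32, "big")).public_key

k = PrivateKey()            # mint's private key
K = k.public_key            # mint's public key, published with its keyset

# Wallet: blind a random secret before sending it to the mint
secret = b"my-random-secret"
Y = hash_to_point(secret)
r = PrivateKey()                                   # blinding factor, kept by the wallet
B_ = PublicKey.combine_keys([Y, r.public_key])     # B_ = Y + r*G

# Mint: sign blindly; it sees B_, never Y or the secret
C_ = B_.multiply(k.secret)                         # C_ = k*B_

# Wallet: unblind the signature
rK = K.multiply(r.secret)
neg_rK = rK.multiply((N - 1).to_bytes(32, "big"))  # negate a point via scalar n-1
C = PublicKey.combine_keys([C_, neg_rK])           # C = C_ - r*K = k*Y

# Mint, later: a proof (secret, C) is valid iff k*hash_to_point(secret) == C
assert hash_to_point(secret).multiply(k.secret).format() == C.format()
print("ecash proof verified")
```

The real protocol adds a proper hash-to-curve construction, DLEQ proofs, and per-denomination keysets; see the NUTs repository linked below.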
Here are some useful links to begin with Cashu ecash :
Github repo: https://github.com/cashubtc
Documentation: https://docs.cashu.space
To support the project: https://docs.cashu.space/contribute
A Proof of Liabilities Scheme for Ecash Mints: https://gist.github.com/callebtc/ed5228d1d8cbaade0104db5d1cf63939
Like NOSTR and its own NIPS, here is the list of the Cashu ecash NUTs (Notation, Usage, and Terminology): https://github.com/cashubtc/nuts?tab=readme-ov-file
I won't explain a lot more about what Cashu ecash is; you need to figure it out by yourself. That's really important, in order to avoid mistakes with your sats (which you would probably regret).
If you don't have so much time, you can check their FAQ right here: https://docs.cashu.space/faq
I strongly advise you to listen to Calle's interviews at What Bitcoin Did to "fully" understand the concept and the Cashu ecash mechanism before using it:
Scaling Bitcoin Privacy with Calle
While I was writing this article, Calle did another really interesting interview with ODELL from CitadelDispatch:
CD120: BITCOIN POWERED CHAUMIAN ECASH WITH CALLE
Which ecash apps?
There are several ways to send/receive ecash tokens: you can use mobile applications like eNuts or Minibits, or web applications like Cashu.me, Nutstash, or even npub.cash. On these topics, the BTC Sessions YouTube channel offers high-quality content and easy-to-understand walkthroughs of how to use these applications:
Minibits BTC Wallet: Near Perfect Privacy and Low Fees - FULL TUTORIAL
Cashu Tutorial - Chaumian Ecash On Bitcoin
Unlock Perfect Privacy with eNuts: Instant, Free Bitcoin Transactions Tutorial
Cashu ecash is a very large and complex topic for beginners. I'm still learning every day how it works, and the project moves really fast thanks to its committed developer community. Don't forget to follow their updates on Nostr to learn more about the project, but also to get a better understanding of the technical and political implications of Cashu ecash.
There is also a Matrix chat available if you want to participate to the project:
https://matrix.to/#/#cashu:matrix.org
How to self-host your ecash mint with Nutshell
Cashu Nutshell is a Chaumian Ecash wallet and mint for Bitcoin Lightning. Cashu Nutshell is the reference implementation in Python.
Github repo:
https://github.com/cashubtc/nutshell
Today, Nutshell is the most advanced mint in town for self-hosting your ecash mint. The installation is relatively straightforward with Docker because a docker-compose file is available in the GitHub repo.
Nutshell is not the only Cashu ecash mint server available; you can check other mint servers here:
https://docs.cashu.space/mints
The only "external" requirement is to have a funding source. One back-end funding source where ecash will mint your ecash from your Sats and initiate BTC Lightning Netwok transactions between ecash mints and BTC Ligtning nodes during a multimint swap. Current backend sources supported are: FakeWallet*, LndRestWallet, CoreLightningRestWallet, BlinkWallet, LNbitsWallet, StrikeUSDWallet.
*FakeWallet can generate unlimited ecash tokens. Please use it carefully: ecash tokens issued by the FakeWallet can be sent to, and accepted as legit by, other people's ecash wallets if they trust your mint. Conversely, if someone sends you 2-3M ecash tokens, don't trust the mint in the first place: force a multimint swap with a BTC LN transaction. If that fails, someone has probably tried to fool you.
I used a Voltage.cloud BTC LN node instance to back-end my Nutshell ecash mint:
SPOILER: my Nutshell mint is working, but I get an "insufficient balance" error message when I request a multimint swap from wallet.cashu.me or the eNuts application. To make it work, I need to add some sats of liquidity (I can't right now) to the node and open a few channels with good balance capacity. If you don't have an ecash mint capable of doing multimint swaps, you'll only be able to mint ecash on your own mint and send ecash tokens to people who trust your mint. It works, yes, but you need to be able to do multimint swaps if you/everyone want to fully benefit from the ecash system.
Once you have created your account and got your node, you need to git clone the Nutshell GitHub repo:
git clone https://github.com/cashubtc/nutshell.git
Next, update the docker-compose file with your own settings. You can comment out the wallet container if you don't need it.
To generate a private key for your node, you can use this openssl command
```
openssl rand -hex 32
054de2a00a1d8e3038b30e96d26979761315cf48395aa45d866aeef358c91dd1
```
The CLI Cashu wallet is not needed right now, but I'll show you how to use it at the end of this article. Feel free to comment it out or not.
```
version: "3"
services:
  mint:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: mint
    ports:
      - "3338:3338"
    environment:
      - DEBUG=TRUE
      - LOG_LEVEL=DEBUG
      # Replace the YourXxx placeholders with your own values
      - MINT_URL=https://YourMintURL
      - MINT_HOST=YourMintDomain.tld
      - MINT_LISTEN_HOST=0.0.0.0
      - MINT_LISTEN_PORT=3338
      - MINT_PRIVATE_KEY=YourPrivateKeyFromOpenSSL
      - MINT_INFO_NAME=YourMintInfoName
      - MINT_INFO_DESCRIPTION=YourShortInfoDesc
      - MINT_INFO_DESCRIPTION_LONG=YourLongInfoDesc
      - MINT_LIGHTNING_BACKEND=LndRestWallet
      #- MINT_LIGHTNING_BACKEND=FakeWallet
      - MINT_INFO_CONTACT=[["email","YourContact@email"], ["twitter","@YourTwitter"], ["nostr", "YourNPUB"]]
      - MINT_INFO_MOTD=Thanks for using my mint!
      - MINT_LND_REST_ENDPOINT=https://YourVoltageNodeDomain:8080
      - MINT_LND_REST_MACAROON=YourDefaultAdminMacaroonBase64
      - MINT_MAX_PEG_IN=100000
      - MINT_MAX_PEG_OUT=100000
      - MINT_PEG_OUT_ONLY=FALSE
    command: ["poetry", "run", "mint"]

  wallet-voltage:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: wallet-voltage
    ports:
      - "4448:4448"
    depends_on:
      - mint
    environment:
      - DEBUG=TRUE
      - MINT_URL=http://mint:3338   # must match the mint service name above
      - API_HOST=0.0.0.0
    command: ["poetry", "run", "cashu", "-d"]
```
To build, run and see the container logs:
docker compose up -d && docker logs -f mint
```
0.15.1
2024-03-22 14:45:45.490 | WARNING | cashu.lightning.lndrest:__init__:49 - no certificate for lndrest provided, this only works if you have a publicly issued certificate
2024-03-22 14:45:45.557 | INFO | cashu.core.db:__init__:135 - Creating database directory: data/mint
2024-03-22 14:45:45.68 | INFO | Started server process [1]
2024-03-22 14:45:45.69 | INFO | Waiting for application startup.
2024-03-22 14:45:46.12 | INFO | Loaded 0 keysets from database.
2024-03-22 14:45:46.37 | INFO | Current keyset: 003dba9e589023f1
2024-03-22 14:45:46.37 | INFO | Using LndRestWallet backend for method: 'bolt11' and unit: 'sat'
2024-03-22 14:45:46.97 | INFO | Backend balance: 1825000 sat
2024-03-22 14:45:46.97 | INFO | Data dir: /root/.cashu
2024-03-22 14:45:46.97 | INFO | Mint started.
2024-03-22 14:45:46.97 | INFO | Application startup complete.
2024-03-22 14:45:46.98 | INFO | Uvicorn running on http://0.0.0.0:3338 (Press CTRL+C to quit)
2024-03-22 14:45:47.27 | INFO | 172.19.0.22:48528 - "GET /v1/keys HTTP/1.1" 200
2024-03-22 14:45:47.34 | INFO | 172.19.0.22:48544 - "GET /v1/keysets HTTP/1.1" 200
2024-03-22 14:45:47.38 | INFO | 172.19.0.22:48552 - "GET /v1/info HTTP/1.1" 200
```
If you see the line:

Uvicorn running on http://0.0.0.0:3338 (Press CTRL+C to quit)

then Nutshell has started correctly.
I won't explain here how to create a reverse proxy to Nutshell; you can find how to do it in my previous article. Here is the reverse proxy config in Nginx Proxy Manager:
If everything is well configured and you go to your mint URL (https://yourminturl), you should see this:

It doesn't look like much, because at first glance it seems not to be working, but it is. You can also check these URL paths to confirm:
- https://yourminturl/keys and https://yourminturl/keysets
or
- https://yourminturl/v1/keys and https://yourminturl/v1/keysets
Depending on when you read this article, the first URL paths might have been migrated to v1. Here is why:
https://github.com/cashubtc/nuts/pull/55
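For a scripted check of those endpoints, here is a minimal probe using Python's requests library. The URL is a placeholder for your own mint, and the JSON field names follow the v1 API as served by Nutshell 0.15.x; adjust if the spec has moved on since:

```
# Minimal sanity probe of a freshly deployed mint over its public v1 API.
import requests

MINT_URL = "https://yourminturl"  # placeholder: replace with your own mint URL

info = requests.get(f"{MINT_URL}/v1/info", timeout=10).json()
print("name:   ", info.get("name"))
print("version:", info.get("version"))
print("motd:   ", info.get("motd"))

keysets = requests.get(f"{MINT_URL}/v1/keysets", timeout=10).json()
print("keysets:", [ks.get("id") for ks in keysets.get("keysets", [])])
```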
The final test is to add your mint to your preferred ecash wallets.

SPOILER: AT THIS POINT, YOU SHOULD KNOW THAT IF YOU RESET YOUR LOCAL BROWSER CACHE, YOU'LL LOSE YOUR MINTED ECASH TOKENS. IF NOT, PLEASE READ THE DOCUMENTATION AGAIN.
For instance, if we use wallet.cashu.me:

You can go into the "Settings" tab and add your mint:

If everything went fine, you should see this:

You can now mint some ecash from your mint by creating a sats invoice:
You can now scan the displayed QR code with your preferred BTC LN wallet. If everything is OK, you should receive the funds:

Errors may sometimes pop up. If you are curious and want to know what happened, the Cashu wallet has a debug console you can activate by going to the "Settings" page and clicking "OPEN DEBUG TERMINAL". A little gear icon will then be displayed at the bottom of the screen. Click on it, go to settings, and enable "Auto Display If Error Occurs" and "Display Extra Information". After enabling these settings, you can close the popup window and leave the gear icon enabled. If an error occurs, the window will open again and show it to you:

Now that you have some sats in your balance, you can try to send some ecash. Open another ecash wallet, like Nutstash, in a new window.
Add your mint again:

Return to the Cashu wallet. The ecash amount you see on the Cashu wallet home page is the total of all ecash tokens you have across all connected mints.

Next, click on "Send ecash". Insert the amount of ecash you want to transfer to your other wallet. You can select the wallet to draw the funds from by clicking on the little arrow near the currently selected sats balance:

Now click on "SEND TOKENS". That will open a popup with a QR code and a text code CONTAINING YOUR ECASH TOKENS (really).

You can now return to Nutstash, click on the "Receive" button, and paste the code you got from the Cashu wallet:
Click on "RECEIVE" again:
Congrats, you transferred your first ecash tokens to yourself! 🥜⚡
From time to time you may need to transfer ecash tokens between your wallets and mints; there is a feature for that called "multimint swaps".

Before that, if you need new mints, you can check the brand-new website Bitcoinmints.com, which lets you see existing ecash mints and their ratings:

Don't forget: choose your mint carefully, because you don't know who's behind it.
Let's take a mint and add it to our Cashu wallet:
If you want to transfer, let's say, 20 sats from the Minibits mint to the bitcointxoko mint, go just below to the "Multimint swap" section. Select the source mint in "Swap from mint" and the target mint in "Swap to mint", then click on "SWAP":

A popup window will appear and request the ecash tokens from the source mint. It will automatically request the ecash amount via a Lightning transaction and add the funds to your other wallet on the target mint. As it's a Lightning Network transaction, you can expect some small fees.
If everything is OK with the mints, the swap will be successful and the ecash received.
You can now see that the previous sats have been transferred (minus 2 sats in fees).
Well done, you did your first multimint swap ! 🥜⚡
One last interesting thing: you can also use the CLI ecash wallet. If you kept the wallet service in the docker-compose file, the container should already be running.

Here are some commands you can run.
To verify which mint is currently connected:

```
docker exec -it wallet-voltage poetry run cashu info

2024-03-22 21:57:24.91 | DEBUG | cashu.wallet.wallet:__init__:738 | Wallet initialized
2024-03-22 21:57:24.91 | DEBUG | cashu.wallet.wallet:__init__:739 | Mint URL: https://nutshell-voltage.fractalized.net
2024-03-22 21:57:24.91 | DEBUG | cashu.wallet.wallet:__init__:740 | Database: /root/.cashu/wallet
2024-03-22 21:57:24.91 | DEBUG | cashu.wallet.wallet:__init__:741 | Unit: sat
2024-03-22 21:57:24.92 | DEBUG | cashu.wallet.wallet:__init__:738 | Wallet initialized
2024-03-22 21:57:24.92 | DEBUG | cashu.wallet.wallet:__init__:739 | Mint URL: https://nutshell-voltage.fractalized.net
2024-03-22 21:57:24.92 | DEBUG | cashu.wallet.wallet:__init__:740 | Database: /root/.cashu/wallet
2024-03-22 21:57:24.92 | DEBUG | cashu.wallet.wallet:__init__:741 | Unit: sat
Version: 0.15.1
Wallet: wallet
Debug: True
Cashu dir: /root/.cashu
Mints:
    - https://nutshell-voltage.fractalized.net
```
To verify your balance:

```
docker exec -it wallet-voltage poetry run cashu balance

2024-03-22 21:59:26.67 | DEBUG | cashu.wallet.wallet:__init__:738 | Wallet initialized
2024-03-22 21:59:26.67 | DEBUG | cashu.wallet.wallet:__init__:739 | Mint URL: https://nutshell-voltage.fractalized.net
2024-03-22 21:59:26.67 | DEBUG | cashu.wallet.wallet:__init__:740 | Database: /root/.cashu/wallet
2024-03-22 21:59:26.67 | DEBUG | cashu.wallet.wallet:__init__:741 | Unit: sat
2024-03-22 21:59:26.68 | DEBUG | cashu.wallet.wallet:__init__:738 | Wallet initialized
2024-03-22 21:59:26.68 | DEBUG | cashu.wallet.wallet:__init__:739 | Mint URL: https://nutshell-voltage.fractalized.net
2024-03-22 21:59:26.68 | DEBUG | cashu.wallet.wallet:__init__:740 | Database: /root/.cashu/wallet
2024-03-22 21:59:26.68 | DEBUG | cashu.wallet.wallet:__init__:741 | Unit: sat
Balance: 0 sat
```
To create a sats invoice to get ecash:
```
docker exec -it wallet-voltage poetry run cashu invoice 20

2024-03-22 22:00:59.12 | DEBUG | cashu.wallet.wallet:_load_mint_info:275 | Mint info: name='nutshell.fractalized.net' pubkey='02008469922e985cbc5368ce16adb6ed1aaea0f9ecb21639db4ded2e2ae014a326' version='Nutshell/0.15.1' description='Official Fractalized Mint' description_long='TRUST THE MINT' contact=[['email', 'pastagringo@fractalized.net'], ['twitter', '@pastagringo'], ['nostr', 'npub1ky4kxtyg0uxgw8g5p5mmedh8c8s6sqny6zmaaqj44gv4rk0plaus3m4fd2']] motd='Thanks for using official ecash fractalized mint!' nuts={4: {'methods': [['bolt11', 'sat']], 'disabled': False}, 5: {'methods': [['bolt11', 'sat']], 'disabled': False}, 7: {'supported': True}, 8: {'supported': True}, 9: {'supported': True}, 10: {'supported': True}, 11: {'supported': True}, 12: {'supported': True}}
Balance: 0 sat

Pay invoice to mint 20 sat:

Invoice: lnbc200n1pjlmlumpp5qh68cqlr2afukv9z2zpna3cwa3a0nvla7yuakq7jjqyu7g6y69uqdqqcqzzsxqyz5vqsp5zymmllsqwd40xhmpu76v4r9qq3wcdth93xthrrvt4z5ct3cf69vs9qyyssqcqppurrt5uqap4nggu5tvmrlmqs5guzpy7jgzz8szckx9tug4kr58t4avv4a6437g7542084c6vkvul0ln4uus7yj87rr79qztqldggq0cdfpy

You can use this command to check the invoice: cashu invoice 20 --id 2uVWELhnpFcNeFZj6fWzHjZuIipqyj5R8kM7ZJ9_

Checking invoice .................2024-03-22 22:03:25.27 | DEBUG | cashu.wallet.wallet:verify_proofs_dleq:1103 | Verified incoming DLEQ proofs. Invoice paid.

Balance: 20 sat
```
To pay an invoice, paste an invoice received from yourself or other people:
```
docker exec -it wallet-voltage poetry run cashu pay lnbc150n1pjluqzhpp5rjezkdtt8rjth4vqsvm50xwxtelxjvkq90lf9tu2thsv2kcqe6vqdq2f38xy6t5wvcqzzsxqrpcgsp58q9sqkpu0c6s8hq5pey8ls863xmjykkumxnd8hff3q4fvxzyh0ys9qyyssq26ytxay6up54useezjgqm3cxxljvqw5vq2e94ru7ytqc0al74hr4nt5cwpuysgyq8u25xx5la43mx4ralf3mq2425xmvhjzvwzqp54gp0e3t8e

2024-03-22 22:04:37.23 | DEBUG | cashu.wallet.wallet:_load_mint_info:275 | Mint info: name='nutshell.fractalized.net' pubkey='02008469922e985cbc5368ce16adb6ed1aaea0f9ecb21639db4ded2e2ae014a326' version='Nutshell/0.15.1' description='Official Fractalized Mint' description_long='TRUST THE MINT' contact=[['email', 'pastagringo@fractalized.net'], ['twitter', '@pastagringo'], ['nostr', 'npub1ky4kxtyg0uxgw8g5p5mmedh8c8s6sqny6zmaaqj44gv4rk0plaus3m4fd2']] motd='Thanks for using official ecash fractalized mint!' nuts={4: {'methods': [['bolt11', 'sat']], 'disabled': False}, 5: {'methods': [['bolt11', 'sat']], 'disabled': False}, 7: {'supported': True}, 8: {'supported': True}, 9: {'supported': True}, 10: {'supported': True}, 11: {'supported': True}, 12: {'supported': True}}
Balance: 20 sat
2024-03-22 22:04:37.45 | DEBUG | cashu.wallet.wallet:get_pay_amount_with_fees:1529 | Mint wants 0 sat as fee reserve.
2024-03-22 22:04:37.45 | DEBUG | cashu.wallet.cli.cli:pay:189 | Quote: quote='YpNkb5f6WVT_5ivfQN1OnPDwdHwa_VhfbeKKbBAB' amount=15 fee_reserve=0 paid=False expiry=1711146847
Pay 15 sat? [Y/n]: y
Paying Lightning invoice ...2024-03-22 22:04:41.13 | DEBUG | cashu.wallet.wallet:split:613 | Calling split. POST /v1/swap
2024-03-22 22:04:41.21 | DEBUG | cashu.wallet.wallet:verify_proofs_dleq:1103 | Verified incoming DLEQ proofs.
Error paying invoice: Mint Error: Lightning payment unsuccessful. insufficient_balance (Code: 20000)
```
Yes, it didn't work. That's the issue I mentioned earlier; it would work with a well-configured and well-balanced Lightning node.
That's all! You should now be able to use ecash as you want! 🥜⚡

See you on NOSTR! 🤖⚡
PastaGringo
-
@ ee11a5df:b76c4e49
2024-03-22 23:49:09Implementing The Gossip Model
version 2 (2024-03-23)
Introduction
History
The gossip model is a general concept that allows clients to dynamically follow the content of people, without specifying which relay. The clients have to figure out where each person puts their content.
Before NIP-65, the gossip client did this in multiple ways:
- Checking kind-3 contents, which had relay lists for configuring some clients (originally Astral and Damus), and recognizing that wherever they were writing our client could read from.
- NIP-05 specifying a list of relays in the `nostr.json` file. I added this to NIP-35 which got merged down into NIP-05.
- Recommended relay URLs that are found in 'p' tags
- Users manually making the association
- History of where events happen to have been found. Whenever an event came in, we associated the author with the relay.
Each of these associations was given a score (recommended relay URLs are 3rd-party info, so they got a low score).
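As a sketch of that scoring idea in Python (the weights here are invented for illustration; the real gossip client uses its own values):

```
# Hypothetical weights per association source; 'p' tag hints are 3rd-party
# info, so they score low. Observations are aggregated into a per-relay score.
SOURCE_WEIGHT = {
    "kind3_content": 0.8,    # relay list found in kind-3 contents
    "nip05": 0.9,            # relays listed in the nostr.json file
    "p_tag_hint": 0.2,       # recommended relay URL in a 'p' tag
    "manual": 1.0,           # user made the association themselves
    "seen_event_here": 0.5,  # we received one of their events from this relay
}

def rank_relays(observations: list[tuple[str, str]]) -> list[tuple[str, float]]:
    """observations: (source, relay_url) pairs for one author."""
    scores: dict[str, float] = {}
    for source, url in observations:
        scores[url] = scores.get(url, 0.0) + SOURCE_WEIGHT.get(source, 0.1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```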
Later, NIP-65 made a new kind of relay list where someone could advertise to others which relays they use. The flag "write" is now called an OUTBOX, and the flag "read" is now called an INBOX.
The idea of inboxes came about during the development of NIP-65. They are a way to send an event to a person to make sure they get it... because putting it on your own OUTBOX doesn't guarantee they will read it -- they may not follow you.
The outbox model is the use of NIP-65. It is a subset of the gossip model, which uses every other resource at its disposal.
Rationale
The gossip model keeps nostr decentralized. If all the (major) clients were using it, people could spin up small relays for both INBOX and OUTBOX and still be fully connected, have their posts read, and get replies and DMs. This is not to say that many people should spin up small relays. But the task of being decentralized necessitates that people must be able to spin up their own relay in case everybody else is censoring them. We must make it possible. In reality, congregating around 30 or so popular relays as we do today is not a problem. Not until somebody becomes very unpopular with bitcoiners (it will probably be a shitcoiner), and then that person is going to need to leave those popular relays and that person shouldn't lose their followers or connectivity in any way when they do.
A lot more rationale has been discussed elsewhere and right now I want to move on to implementation advice.
Implementation Advice
Read NIP-65
NIP-65 will contain great advice on which relays to consult for which purposes. This post does not supersede NIP-65. NIP-65 may be getting some smallish changes, mostly the addition of a private inbox for DMs, but also changes to whether you should read or write to just some or all of a set of relays.
How often to fetch kind-10002 relay lists for someone
This is up to you. Refreshing them every hour seems reasonable to me. Keeping track of when you last checked so you can check again every hour is a good idea.
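A minimal sketch of that bookkeeping (a real client would persist the timestamps, but the idea is just a TTL check):

```
import time

RELAY_LIST_TTL = 60 * 60               # re-fetch kind-10002 lists hourly, as suggested above
_last_checked: dict[str, float] = {}   # pubkey -> unix time of the last fetch

def relay_list_is_stale(pubkey: str) -> bool:
    return time.time() - _last_checked.get(pubkey, 0.0) > RELAY_LIST_TTL

def mark_checked(pubkey: str) -> None:
    _last_checked[pubkey] = time.time()
```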
Where to fetch events from
If your user follows another user (call them jack), then you should fetch jack's events from jack's OUTBOX relays. I think it's a good idea to use 2 of those relays. If one of those choices fails (errors), then keep trying until you get 2 of them that worked. This gives some redundancy in case one of them is censoring. You can bump that number up to 3 or 4, but more than that is probably just wasting bandwidth.
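Here is a sketch of that redundancy logic, assuming a hypothetical fetch(relay_url, author) coroutine that returns events and raises on failure:

```
async def fetch_with_redundancy(fetch, outbox_relays, author, want=2):
    """Collect `author`'s events from `want` working OUTBOX relays.

    `fetch` is any coroutine fetch(relay_url, author) -> list of events
    that raises on connection errors (a hypothetical helper).
    """
    events, succeeded = [], 0
    for relay in outbox_relays:
        if succeeded >= want:
            break  # enough redundancy; more would just waste bandwidth
        try:
            events.extend(await fetch(relay, author))
            succeeded += 1
        except Exception:
            continue  # relay failed (offline or censoring): try the next one
    return events
```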
To find events tagging your user, look in your user's INBOX relays for those. In this case, look into all of them because some clients will only write to some of them (even though that is no longer advised).
Picking relays dynamically
Since your user follows many other users, it is very useful to find a small subset of all of their OUTBOX relays that covers everybody followed. I wrote some code to do this (it is used by gossip) that you can look at for an example.
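That relay-picking task is essentially a set-cover problem; here is a minimal greedy sketch of it (not the actual gossip code, which is Rust and weighs relay scores as well):

```
def pick_relays(outboxes: dict[str, list[str]], redundancy: int = 2) -> dict[str, set[str]]:
    """Greedily choose relays so each followed pubkey is covered `redundancy` times.

    `outboxes` maps pubkey -> that person's OUTBOX relay URLs (from kind-10002).
    Returns relay URL -> the pubkeys we will ask that relay for.
    """
    needed = {pk: redundancy for pk in outboxes}  # coverage still required
    coverage: dict[str, set[str]] = {}            # relay -> pubkeys found there
    for pk, relays in outboxes.items():
        for url in relays:
            coverage.setdefault(url, set()).add(pk)

    assignments: dict[str, set[str]] = {}
    while coverage and any(n > 0 for n in needed.values()):
        # pick the relay covering the most still-needy pubkeys
        best = max(coverage, key=lambda u: sum(needed[pk] > 0 for pk in coverage[u]))
        gain = {pk for pk in coverage.pop(best) if needed[pk] > 0}
        if not gain:
            break  # whoever is left has no usable outbox relays
        assignments[best] = gain
        for pk in gain:
            needed[pk] -= 1
    return assignments
```

Greedy works well here because an optimal cover is NP-hard to compute and the inputs are small (hundreds of follows, a handful of relays each).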
Where to post events to
Post all events (except DMs) to all of your user's OUTBOX relays. Also post the events to all the INBOX relays of anybody who was tagged or mentioned in the contents in a nostr bech32 link (if desired). That way all these mentioned people are aware of the reply (or quote or repost).
DMs should be posted only to INBOX relays (in the future, to PRIVATE INBOX relays). You should post it to your own INBOX relays also, because you'll want a record of the conversation. In this way, you can see all your DMs inbound and outbound at your INBOX relay.
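Putting the two paragraphs above together, here is a sketch of computing the set of relays to publish to (the relay-list inputs are assumed to come from previously fetched kind-10002 events):

```
def publish_targets(
    p_tagged_pubkeys: list[str],
    my_outboxes: list[str],
    my_inboxes: list[str],
    inboxes_of,            # callable: pubkey -> that person's INBOX relay URLs
    is_dm: bool = False,
) -> set[str]:
    """Relays an event should be posted to, per the advice above."""
    if is_dm:
        targets = set(my_inboxes)   # our own INBOX too, to keep a record of the conversation
    else:
        targets = set(my_outboxes)  # where people who follow us will look
    for pk in p_tagged_pubkeys:
        targets.update(inboxes_of(pk))  # tagged people learn of the reply/mention
    return targets
```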
Where to publish your user's kind-10002 event to
This event was designed to be small and not require moderation, plus it is replaceable so there is only one per user. For this reason, at the moment, just spread it around to lots of relays especially the most popular relays.
For example, the gossip client automatically determines which relays to publish to based on whether they seem to be working (several hundred) and does so in batches of 10.
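A tiny sketch of that batching pattern (post_to_all and working_relays are hypothetical names):

```
from itertools import islice

def batches(iterable, size=10):
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

# e.g. publish the kind-10002 event to working relays, 10 at a time:
# for group in batches(working_relays):
#     post_to_all(group, relay_list_event)
```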
How to find replies
If all clients used the gossip model, you could find all the replies to any post in the author's INBOX relays for any event with an 'e' tag tagging the event you want replies to... because gossip model clients will publish them there.
But given the non-gossip-model clients, you should also look where the event was seen and look on those relays too.
Clobbering issues
Please read your user's kind-10002 event before clobbering it. You should look in many places to make sure you didn't miss the newest one.
If the old relay list had tags you don't understand (e.g. neither "read" nor "write"), then preserve them.
How users should pick relays
Today, nostr relays are not uniform. They have all kinds of different rule-sets and purposes. We severely lack a way to advise non-technical users which relays make good OUTBOX relays and which ones make good INBOX relays. But you are a dev; you can figure that out pretty well. For example, INBOX relays must accept notes from anyone, meaning they can't be paid-subscription relays.
Bandwidth isn't a big issue
The outbox model doesn't require excessive bandwidth when done right. You shouldn't be downloading the same note many times... only 2-4 times depending on the level of redundancy your user wants.
Downloading 1000 events from 100 relays is in theory the same amount of data as downloading 1000 events from 1 relay.
But in practice, due to redundancy concerns, you will end up downloading 2000-3000 events from those 100 relays instead of just the 1000 you would in a single relay situation. Remember, per person followed, you will only ask for their events from 2-4 relays, not from all 100 relays!!!
Also in practice, the cost of opening and maintaining 100 network connections is more than the cost of opening and maintaining just 1. But this isn't usually a big deal unless...
Crypto overhead on Low-Power Clients
Verifying Schnorr signatures in the secp256k1 cryptosystem is not cheap. Setting up SSL key exchange is not cheap either. But most clients will do a lot more event signature validations than they will SSL setups.
For this reason, connecting to 50-100 relays is NOT hugely expensive for clients that are already verifying event signatures, as the number of events far surpasses the number of relay connections.
But for low-power clients that can't do event signature verification, there is a case for them not doing a lot of SSL setups either. Those clients would benefit from a different architecture, where half of the client was on a more powerful machine acting as a proxy for the low-power half of the client. These halves need to trust each other, so perhaps this isn't a good architecture for a business relationship, but I don't know what else to say about the low-power client situation.
Unsafe relays
Some people complain that the outbox model directs their client to relays that their user has not approved. I don't think it is a big deal, as such users can use VPNs or Tor if they need privacy. But for such users that still have concerns, they may wish to use clients that give them control over this. As a client developer you can choose whether to offer this feature or not.
The gossip client allows users to require whitelisting for connecting to new relays and for AUTHing to relays.
See Also
-
@ 4ba8e86d:89d32de4
2024-10-23 15:19:11Snowflake helps circumvent Internet censorship in places where access to Tor is blocked. It is a kind of Tor bridge that lets users connect to the Tor network through a distributed system of volunteers.

The story of Snowflake began in 2019, when the Tor Project realized that many people in places with restricted Internet access were resorting to VPNs and proxies to get around censorship. However, many of those solutions were being blocked by government authorities, which created the need to find new ways around the blocks. That is where the idea for Snowflake came from: it was launched as a way to help increase Tor's capacity to circumvent Internet censorship around the world.

Snowflake works through a distributed system of volunteers who offer their proxies to help circumvent censorship. When a user connects to Snowflake, their traffic is routed through a set of volunteer proxies. These proxies are distributed around the world, which helps ensure there is always an option available for users who want to access the Internet freely.

Snowflake solves the problem of Internet access in places where access to Tor is blocked. It allows users to circumvent Internet censorship and surveillance, reaching sites and applications that would otherwise be blocked in their regions. With Snowflake, users can browse the Internet with more privacy and security, avoiding detection by Internet censors.

"Privacy is necessary for an open society in the electronic age. Privacy is not secrecy. A free society requires privacy in communication, as well as privacy in searching and in association." - Eric Hughes
https://snowflake.torproject.org/
https://youtu.be/ZC6GXRJOWmo
-
@ 4ba8e86d:89d32de4
2024-10-23 13:28:09LocalSend enables the secure sharing of files and messages between nearby devices on a local Wi-Fi network, without the need for an Internet connection.

LocalSend is a cross-platform application that uses a REST API and HTTPS encryption to keep communication secure. Unlike other messaging applications that depend on external servers, LocalSend requires no Internet access and no third-party servers.

To ensure maximum security, LocalSend uses a secure communication protocol in which all data is sent over HTTPS, with a TLS/SSL certificate generated on the fly on each device during communication.
https://f-droid.org/pt_BR/packages/org.localsend.localsend_app/
https://github.com/localsend/localsend
-
@ 9e69e420:d12360c2
2024-10-23 13:05:29Former Democratic Representative Tulsi Gabbard has officially joined the Republican Party, announcing her decision at a Donald Trump rally in North Carolina. Once one of the few voices of reason within the Democratic Party, Gabbard has now completed her journey from Democratic presidential candidate to Republican Party member[1]. Her transition began with her exit from the Democratic Party in 2022, when she accurately called it an "elitist cabal of warmongers"[3].
Throughout her congressional career, Gabbard demonstrated a consistent anti-interventionist stance and a strong defense of civil liberties. She was an original member of the bi-partisan 4th Amendment Caucus, fighting against warrantless searches and championing privacy rights in the digital age[4]. Her principled opposition to the surveillance state and bulk data collection set her apart from the establishment of both major parties.
While some may view her political evolution as opportunistic, Gabbard's core positions on foreign policy have remained remarkably consistent. She has maintained her opposition to unnecessary foreign interventions and has been vocal about the risks of nuclear war[2]. Her criticism of the military-industrial complex and her calls for the Democratic party to renounce the influence of military contractors and corporate lobbyists demonstrate a commitment to principles over party loyalty[4]. Though her recent alignment with Trump may raise eyebrows, her consistent advocacy for peace and opposition to the warfare state make her a unique figure in American politics[1].
Sauce: [1] Tulsi Gabbard's Political Evolution | TIME https://time.com/7096376/tulsi-gabbard-democrat-republican-political-evolution-history-trump/ [2] Tulsi Gabbard wants to serve under Trump, but do their policies align? https://www.usatoday.com/story/news/politics/elections/2024/06/19/tulsi-gabbard-trump-vp-secretary-policy/73968445007/ [3] Tulsi Gabbard announces she is leaving Democratic Party https://abcnews.go.com/Politics/tulsi-gabbard-announces-leaving-democratic-party/story?id=91326164%2F [4] Political positions of Tulsi Gabbard - Wikipedia https://en.wikipedia.org/wiki/Political_positions_of_Tulsi_Gabbard [5] Tulsi Gabbard Turning Republican Is 'Surprise' to Donald Trump https://www.newsweek.com/tulsi-gabbard-donald-trump-republican-party-1973340
-
@ 4ba8e86d:89d32de4
2024-10-23 12:46:23Developed by the Tor Project, OnionShare is an open-source project that aims to protect users' privacy while sharing files.

Created in 2014 by Micah Lee, a software developer and privacy activist, OnionShare emerged as a solution for secure and anonymous file sharing over the Tor network. OnionShare is part of the ongoing development around the Tor Project, the same team responsible for the Tor Browser. Micah Lee recognized the need for a solution that respected the principles and values of the Tor network while providing privacy to its users. OnionShare uses Tor hidden services to create temporary web servers with unique .onion addresses, so that shared files can be accessed only through the Tor network. Since its launch, OnionShare has been improved and updated by the Tor Project developer community, ensuring continued compatibility with the principles and protocols of the Tor network. The tool has become widely recognized as a reliable option for sharing files privately and securely, used by journalists, activists, and privacy-conscious people around the world.
How OnionShare works:

- Generating the .onion address: When OnionShare starts, the tool generates a unique .onion address for sharing the files. This address consists of a string of random characters that serves as the unique identifier for the shared files.
- Setting up the temporary web server: OnionShare creates a temporary web server on the user's device, allowing the files to be accessed through that server. The temporary web server is bound to the generated .onion address.
- Sharing the .onion address: The user can share the generated .onion address with the people who should access the shared files. This can be done via messages, email, or any other communication channel.
- Accessing the shared files: People who receive the .onion address can use the Tor Browser to connect to the Tor network. By entering the .onion address in the Tor Browser, they are directed to OnionShare's temporary web server.
- Downloading the files: Once a person has reached the temporary web server through the .onion address, they can view and download the shared files. OnionShare transfers files directly from the sender's device to the recipient's device, preserving the privacy and security of the data during the transfer.
- Ending the share: OnionShare lets you set a specific duration for the share. After the specified period, the temporary web server is shut down and the files are no longer available for download.

It is important to note that OnionShare uses the Tor network to encrypt communications and protect users' privacy. It provides an additional layer of security by allowing files to be shared directly between users' devices, without intermediaries or cloud storage services.

OnionShare solves several problems related to file sharing by providing a secure and anonymous alternative.
- Privacy: With OnionShare, users can share files privately and securely. The encryption and the technology of the Tor network ensure that only people with the specific .onion address can access the shared files, keeping the data private.
- Anonymity: OnionShare uses the Tor network, which helps hide users' identities and protect their location. This allows users to share files anonymously, without revealing their identity or geographic location to the recipients.
- Data security: OnionShare offers a secure way to share files, preventing interception and data breaches during the transfer. End-to-end encryption and the temporary nature of the web server ensure that only the intended recipients have access to the shared files.
- Circumventing censorship and surveillance: By using the Tor network, OnionShare makes it possible to bypass online censorship and surveillance. Tor's encryption and traffic routing help prevent third parties, such as governments or Internet service providers, from monitoring or restricting access to the shared files.
- Eliminating intermediaries: With OnionShare, files are transferred directly between the sender's and the recipient's devices, without intermediaries or cloud storage services. This reduces the risk of privacy violations or unauthorized access to the shared files.
- Ease of use: OnionShare was designed to be easy to use, even for non-technical users. With a simple interface, users can generate .onion addresses and share files conveniently, without advanced technical knowledge.

OnionShare addresses problems of privacy, anonymity, data security, and censorship, offering a reliable and accessible way to share files securely and anonymously over the Tor network. It is a simple and effective solution for protecting users' privacy during file transfers.
https://onionshare.org/
https://github.com/onionshare/onionshare
-
-
@ 42342239:1d80db24
2024-03-21 09:49:01It has become increasingly evident that our financial system has started to undermine our constitutionally guaranteed freedoms and rights. Payment giants like PayPal, Mastercard, and Visa sometimes block the ability to donate money. Individuals, companies, and associations lose bank accounts — or struggle to open new ones. In bank offices, people nowadays risk undergoing something resembling a cross-examination. The regulations are becoming so cumbersome that their mere presence risks tarnishing the banks' reputation.
The rules are so complex that even within the same bank, different compliance officers can provide different answers to the same question! There are even departments where some of the compliance officers are reluctant to provide written responses and prefer to answer questions over an unrecorded phone call. The lawyer named Sweden's "Corporate Lawyer of the Year" last year recently complained about troublesome bureaucracy, and that's from the perspective of a very large corporation. We may not even fathom how smaller businesses — the keys to a nation's prosperity — experience it.
Where do all these rules come from?
Where do all these rules come from, and how well do they work? Today's regulations on money laundering (AML) and customer due diligence (KYC - know your customer) primarily originate from a G7 meeting in the summer of 1989. (The G7 comprises the seven advanced economies: the USA, Canada, the UK, Germany, France, Italy, and Japan, along with the EU.) During that meeting, the intergovernmental organization FATF (Financial Action Task Force) was established with the aim of combating organized crime, especially drug trafficking. Since then, its mandate has expanded to include fighting money laundering, terrorist financing, and the financing of the proliferation of weapons of mass destruction(!). One might envisage the rules soon being aimed against proliferation of GPUs (Graphics Processing Units used for AI/ML). FATF, dominated by the USA, provides frameworks and recommendations for countries to follow. Despite its influence, the organization often goes unnoticed. Had you heard of it?
FATF offered countries "a deal they couldn't refuse"
On the advice of the USA and G7 countries, the organization decided to begin grading countries in "blacklists" and "grey lists" in 2000, naming countries that did not comply with its recommendations. The purpose was to apply "pressure" to these countries if they wanted to "retain their position in the global economy." The countries were offered a deal they couldn't refuse, and the number of member countries rapidly increased. Threatening with financial sanctions in this manner has even been referred to as "extraterritorial bullying." Some at the time even argued that the process violated international law.
If your local Financial Supervisory Authority (FSA) were to fail in enforcing compliance with FATF's many checklists among financial institutions, the risk of your country and its banks being barred from the US-dominated financial markets would loom large. This could have disastrous consequences.
A cost-benefit analysis of AML and KYC regulations
Economists use cost-benefit analysis to determine whether an action or a policy is successful. Let's see what such an analysis reveals.
What are the benefits (or revenues) after almost 35 years of more and more rules and regulations? The United Nations Office on Drugs and Crime estimated that only 0.2% of criminal proceeds are confiscated. Other estimates suggest a success rate for such anti-money laundering rules of 0.07%, a rounding error for organized crime. Europol expects to recover 1.2 billion euros annually, equivalent to about 1% of the revenue generated in the European drug market (110 billion euros). However, the percentage may be considerably lower, as the size of the drug market is likely underestimated. Moreover, there are many more "criminal industries" than just the drug trade; human trafficking is just one example. In other words, criminal organizations retain at least 99%, perhaps even 99.93%, of their profits, despite all the cumbersome rules regarding money laundering and customer due diligence.
What constitutes the total cost of this bureaucratic activity, costs that eventually burden taxpayers and households via higher fees? Within Europe, private financial firms are estimated to spend approximately 144 billion euros on compliance. According to some estimates, the global cost is twice as high, perhaps even eight times as much.
For Europe, the cost may thus be about 120 times (144/1.2) higher than the revenues from these measures. These "compliance costs" bizarrely exceed the total profits from the drug market, as one researcher put it. Even though the calculations are uncertain, it is challenging — perhaps impossible — to legitimize these regulations from a cost-benefit perspective.
But it doesn't end there, unfortunately. The cost of maintaining this compliance circus, with around 80 international organizations, thousands of authorities, far more employees, and all this across hundreds of countries, remains a mystery. But it's unlikely to be cheap.
The purpose of a system is what it does
In Economic Possibilities for our Grandchildren (1930), John Maynard Keynes foresaw that thanks to technological development, we could have had a 15-hour workweek by now. This has clearly not happened. Perhaps jobs have been created that are entirely meaningless? Anthropologist David Graeber argued precisely this in Bullshit Jobs in 2018. In that case, a significant number of people spend their entire working lives performing tasks they suspect deep down don't need to be done.
"The purpose of a system is what it does" is a heuristic coined by Stafford Beer. He observed there is "no point in claiming that the purpose of a system is to do what it constantly fails to do. What the current regulatory regime fails to do is combat criminal organizations. Nor does it seem to prevent banks from laundering money as never before, or from providing banking services to sex-offending traffickers
What the current regulatory regime does do is: i) create armies of meaningless jobs, ii) thereby undermine mental health as well as economic prosperity, and iii) erode our freedoms and rights.
What does this say about the purpose of the system?
-
@ 41ed0635:762415fc
2024-10-23 12:17:11What Is a RAM Disk?
A RAM Disk is a storage area created in the system's RAM and treated by the operating system as a physical storage device, similar to a hard drive or an SSD. Its key characteristic is that it is backed by volatile memory, which means the stored data is lost when the system is shut down or rebooted. This technology offers an extremely fast temporary storage alternative, ideal for situations that demand quick access to data without the need for long-term persistence.
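On Linux, the quickest way to experiment with the idea is a tmpfs mount, which is RAM-backed and behaves like a RAM disk for most purposes (the size and mount point below are arbitrary choices):

```console
$ sudo mkdir -p /mnt/ramdisk
$ sudo mount -t tmpfs -o size=1G tmpfs /mnt/ramdisk
$ df -h /mnt/ramdisk   # verify the mount; contents vanish on unmount or reboot
```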
Advantages and Disadvantages of a RAM Disk
Advantages:
- Superior speed: much faster access than NAND-flash-based storage devices or traditional SSDs.
- Minimal latency: a significant reduction in read/write latency.
- Isolation of temporary data: ideal for storing temporary data that does not need to persist after shutdown.
Disadvantages:
- Volatility: data is lost on shutdown or power failure.
- Memory consumption: it uses the system's RAM, which is a limited resource.
Memory Consumption and Alternative Solutions
An innovative approach to the memory consumption problem is implementing a SATA SSD on an FPGA, using LiteSATA with LiteDRAM as the memory backend. This method requires selecting a suitable FPGA board with the necessary resources, including transceivers capable of operating at the desired SATA speed and compatible DRAM. The process involves configuring LiteDRAM and LiteSATA, implementing control logic, integrating the components, compiling and synthesizing the design, and programming the FPGA. This solution offers a flexible, customizable alternative for high-speed storage.
1. FPGA Board Selection
Choose a board with suitable transceivers and compatible DRAM.
2. LiteDRAM and LiteSATA Configuration
Instantiate and configure the LiteDRAM and LiteSATA cores in the FPGA design.
3. Control Logic Implementation
Develop a controller to interface LiteSATA with LiteDRAM.
4. Programming and Testing
Compile, synthesize and program the FPGA, followed by testing and optimization.
LiteDRAM Configuration
1. Instantiation
Instantiate the LiteDRAM core in your design.
2. Configuration
Configure LiteDRAM to match the type and configuration of the DRAM on your board (for example, DDR3 or DDR4).
3. Parameters
Set parameters such as operating frequency, bandwidth and latencies.
LiteSATA Configuration
Instantiation
Start by instantiating the LiteSATA core in your FPGA design.
SATA PHY
Next, configure the SATA PHY according to your FPGA board's hardware, including the desired transfer rate. For example, for a SATA 3.0 link the transfer rate will be 6 Gbps.
Pin Mapping
Finally, make sure the FPGA pins are correctly mapped to the SATA connectors.
Developing the Controller with a Finite State Machine
Developing a controller that interfaces LiteSATA with LiteDRAM using a Finite State Machine (FSM) is crucial for managing the communication between the SATA protocol and the DRAM. This controller is responsible for receiving and interpreting SATA commands, managing read/write operations on the DRAM, and synchronizing data between the two systems. The FSM consists of several states, each representing a specific step of the processing. Transitions between states are driven by events or conditions, such as the arrival of a new command or the completion of a read/write operation.
Controller Goal
Manage the communication between SATA and DRAM, translating commands and guaranteeing data integrity.
FSM Structure
States represent processing steps, with transitions driven by events or conditions.
Implementation
Develop the logic for each state, ensuring the correct flow of operations and proper error handling.
Finite State Machine States
The controller's Finite State Machine (FSM) consists of several essential states. IDLE is the starting point, waiting for commands. COMMAND_DECODE decodes received commands. READ_SETUP and WRITE_SETUP prepare the read and write operations, respectively. READ_EXECUTE and WRITE_EXECUTE perform the read and write operations on the DRAM. STATUS_UPDATE updates the operation status and informs the host. ERROR_HANDLE deals with errors that occur during operations. Each state has specific actions and transition criteria, ensuring an efficient and reliable operation flow.
- IDLE: waits for commands; transitions to COMMAND_DECODE when a new command is received.
- COMMAND_DECODE: decodes the received command and prepares the parameters.
- READ/WRITE_SETUP: sets up read or write operations on the DRAM.
- READ/WRITE_EXECUTE: executes read or write operations on the DRAM.
- STATUS_UPDATE: updates the operation status and informs the host.
Comparison between RAM Disk and SATA SSD on FPGA
A RAM Disk and a SATA SSD on FPGA with LiteSATA and LiteDRAM are similar in terms of superior speed and use of volatile memory. Both offer read/write speeds above those of conventional storage devices. However, they differ significantly in hardware implementation and flexibility. The RAM Disk is a purely software solution that uses the system's internal RAM, while the FPGA SSD involves custom hardware connected to external DRAM. Scalability and flexibility also differ, with the FPGA SSD offering greater expansion potential.

| Characteristic | RAM Disk | SATA SSD on FPGA |
| --- | --- | --- |
| Implementation | Software | Custom hardware |
| Scalability | Limited by system RAM | Expandable with DRAM modules |
| Flexibility | Low | High (adjustable via FPGA) |
| Persistence | Volatile | Volatile (with backup options) |
Integration with the Swap Partition on Linux
Integrating a SATA SSD on FPGA with the Linux swap partition can significantly speed up system performance. Linux uses swap as an extension of RAM, and a faster swap device lets the system move memory pages out to swap and bring them back more quickly, effectively increasing the perceived performance of RAM. This integration reduces the waiting time for processes that need data residing in swap, improving overall system efficiency. In high-performance applications such as databases, virtual machines or big-data processing, a fast swap device can prevent bottlenecks when physical memory is exhausted. (A minimal setup sketch follows the list below.)
1. Better Memory Management
Faster movement of pages between RAM and swap, increasing the perceived memory performance.
2. Reduced Waiting Time
Processes access data in swap more quickly, improving system responsiveness.
3. Gains for High-Performance Applications
Avoids bottlenecks in memory-hungry workloads such as databases and big-data processing.
Benefits and Drawbacks of the Integration
Integrating a SATA SSD on FPGA with LiteSATA and LiteDRAM as a swap partition on Linux offers several benefits. Reduced latency and high transfer rates speed up read and write operations on swap. The absence of write wear on DRAM increases the device's longevity. Capacity can be expanded by adding more DRAM modules, and updates are made easy via firmware. There are, however, drawbacks to consider. The implementation requires additional hardware (an FPGA and external DRAM), which can result in a higher cost compared to conventional RAM Disk solutions. The complexity of the implementation can also be a limiting factor for some users.
Benefits:
- Reduced latency
- High transfer rates
- No write wear
- Expandable capacity
- Easy updates
Drawbacks:
- Requires additional hardware
- Potentially higher cost
- Implementation complexity
Final Considerations and Resources
Protocol Compliance
Make sure your design complies with the SATA specifications to avoid compatibility problems.
Hardware Limitations
Be aware of your FPGA's limitations, such as the available logic resources and the capabilities of the transceivers.
Data Safety
Consider implementing error correction or data protection features if needed.
Useful resources:
- LiteSATA repository: https://github.com/enjoy-digital/litesata
- LiteDRAM repository: https://github.com/enjoy-digital/litedram
- LiteX documentation: https://github.com/enjoy-digital/litex/wiki
-
@ ee11a5df:b76c4e49
2024-03-21 00:28:47I'm glad to see more activity and discussion about the gossip model. Glad to see fiatjaf and Jack posting about it, as well as many developers pitching in in the replies. There are difficult problems we need to overcome, and finding notes while remaining decentralized without huge note-copying overhead was just the first. While the gossip model (including the outbox model, which is just the NIP-65 part) completely eliminates the need to copy notes around to lots of relays, and keeps us decentralized, it brings about its own set of new problems. No community is ever of the same mind on any issue, and this issue is no different. We have a lot of divergent opinions. This note will be my updated thoughts on these topics.
COPYING TO CENTRAL RELAYS IS A NON-STARTER: The idea that you can configure your client to use a few popular "centralized" relays and everybody will copy notes into those central relays is a non-starter. It destroys the entire raison d'être of nostr. I've heard people say that more decentralization isn't our biggest issue. But decentralization is THE reason nostr exists at all, so we need to make sure we live up to the hype. Otherwise we may as well just all join Bluesky. It has other problems too: the central relays get overloaded, and the notes get copied to too many relays, which is both space-inefficient and network-bandwidth inefficient.
ISSUE 1: Which notes should I fetch from which relays? This is described pretty well now in NIP-65. But that is only the "outbox" model part. The "gossip model" part is to also work out what relays work for people who don't publish a relay list.
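For reference, a NIP-65 relay list is just a kind 10002 event whose "r" tags advertise where the author writes and reads; a minimal example (the relay URLs are placeholders):

```json
{
  "kind": 10002,
  "tags": [
    ["r", "wss://both.example.com"],
    ["r", "wss://inbox.example.com", "read"],
    ["r", "wss://outbox.example.com", "write"]
  ],
  "content": ""
}
```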
ISSUE 2: Automatic failover. Apparently Peter Todd's definition of decentralized includes a concept of automatic failover, where new resources are brought up and users don't need to do anything. Besides this not being part of any definition of decentralized I have never heard of, we kind of have this. If a user has 5 outboxes, and 3 fail, everything still works. Redundancy is built in. No user intervention needed in most cases, at least in the short term. But we also don't have any notion of administrators who can fix this behind the scenes for the users. Users are sovereign and that means they have total control, but also take on some responsibility. This is obvious when it comes to keypair management, but it goes further. Users have to manage where they post and where they accept incoming notes, and when those relays fail to serve them they have to change providers. Putting the users in charge, and not having administrators, is kinda necessary to be truly decentralized.
ISSUE 3: Connecting to unvetted relays feels unsafe. It might even be the NSA tracking you! First off, this happens with your web browser all the time: you go visit a web page and it instructs your browser to fetch a font from google. If you don't like it, you can use uBlock origin and manage it manually. In the nostr world, if you don't like it, you can use a client that puts you more in control of this. The gossip client for example has options for whether you want to manually approve relay connections and AUTHs, just once or always, and always lets you change your mind later. If you turn those options on, initially it is a giant wall of approval requests... but that situation resolves rather quickly. I've been running with these options on for a long time now, and only about once a week do I have to make a decision for a new relay.
But these features aren't really necessary for the vast majority of users who don't care if a relay knows their IP address. Those users use VPNs or Tor when they want to be anonymous, and don't bother when they don't care (me included).
ISSUE 4: Mobile phone clients may find the gossip model too costly in terms of battery life. Bandwidth is actually not a problem: under the gossip model (if done correctly) events for user P are only downloaded from N relays (default for gossip client is N=2), which in general is FEWER events retrieved than other models which download the same event maybe 8 or more times. Rather, the problem here is the large number of network connections and in particular, the large number of SSL setups and teardowns. If it weren't for SSL, this wouldn't be much of a problem. But setting up and tearing down SSL on 50 simultaneous connections that drop and pop up somewhat frequently is a battery drain.
The solution to this that makes the most sense to me is to have a client proxy. What I mean by that is a piece of software on a server in a data centre. The client proxy would be a headless nostr client that uses the gossip model and acts on behalf of the phone client. The phone client doesn't even have to be a nostr client, but it might as well be a nostr client that just connects to this fixed proxy to read and write all of its events. Now the SSL connection issue is solved. These proxies can serve many clients and have local storage, whereas the phones might not even need local storage. Since very few users will set up such things for themselves, this is a business opportunity for people, and a better business opportunity IMHO than running a paid-for relay. This doesn't decentralize nostr as there can be many of these proxies. It does however require a trust relationship between the phone client and the proxy.
ISSUE 5: Personal relays still need moderation. I wrongly thought for a very long time that personal relays could act as personal OUTBOXes and personal INBOXes without needing moderation. Recently it became clear to me that clients should probably read from other people's INBOXes to find replies to events written by the user of that INBOX (which outbox model clients should be putting into that INBOX). If that is happening, then personal relays will need to serve to the public events that were just put there by the public, thus exposing them to abuse. I'm greatly disappointed to come to this realization and not quite settled about it yet, but I thought I had better make this known.
-
@ 4ba8e86d:89d32de4
2024-10-23 12:16:57Unlike traditional messaging apps, Briar does not rely on a central server: messages are synchronized directly between the users' devices. If the internet goes down, Briar can sync via Bluetooth or Wi-Fi, keeping information flowing in a crisis. If the internet is up, Briar can sync via the Tor network, protecting users and their relationships from surveillance.
Briar was created in 2014 by a group of German developers led by Michael Rogers. The goal was to develop an app secure and private enough to be used in situations of government repression and in other regions facing online censorship and surveillance. The team received funding from the Open Technology Fund and other sponsors to develop the app.
Briar uses direct, encrypted connections between users to prevent surveillance and censorship. https://nostr.build/i/nostr.build_5fd2ffa577e4d9978199ba8e957cf9334efd40b3a648e504f24728e94c2a961a.jpg
Briar can share data via Wi-Fi, Bluetooth and the Internet. https://nostr.build/i/nostr.build_1a2762f68f623f5598174801f01d89967197c5333dc52a0ff81a850d9afbefa9.png
Briar provides private messaging, public forums and blogs that are protected against the following surveillance and censorship threats:
• Metadata surveillance. Briar uses the Tor network to prevent eavesdroppers from discovering which users are talking to each other. Each user's contact list is encrypted and stored on their own device.
• Content surveillance. All communication between devices is end-to-end encrypted, protecting the content against eavesdropping or tampering.
• Content filtering. Briar's end-to-end encryption prevents keyword filtering, and thanks to its decentralized design there are no servers to block.
• Takedown orders. Every user who subscribes to a forum keeps a copy of its content, so there is no single point where a post can be deleted.
• Denial-of-service attacks. Briar forums have no central server to attack, and every subscriber has access to the content even when offline.
• Internet blackouts. Briar can operate over Bluetooth and Wi-Fi to keep information flowing during blackouts.
Go to F-Droid or the Google Play Store on your Android device.
Create an account. When you open the Briar app for the first time, you will be invited to create an account. Your account will be stored securely on your device, encrypted with your password.
Choose a nickname carefully, because you won't be able to change it later. You can pick the same nickname as someone else, just like in real life.
Choose a password that is hard to guess but easy to remember. If you forget your password, there will be no way to regain access to your account.
If you need to delete your account quickly, just uninstall the Briar app.
Add a contact. After creating your account you will see an empty contact list. To add a contact, tap the plus (+) button. There are two options, depending on whether the person you want to add is nearby.
If the person you want to add is nearby, choose "Add contact nearby". If the person is not nearby and you have internet access, choose "Add contact at a distance".
Add a nearby contact. When you choose "Add contact nearby", Briar will ask for permission to use your camera to scan your contact's QR code. Briar will also ask for permission to access your location, so that it can connect to your contact via Bluetooth. Briar does not store, share or upload your location, but this permission is required to discover nearby Bluetooth devices.
Finally, Briar will ask for permission to turn on Bluetooth and make your device visible to nearby Bluetooth devices for a short period of time.
After granting all these permissions, Briar will show a QR code and a camera view. Scan your contact's QR code and let them scan yours. After about 30 seconds your devices should be connected, and your contact will be added to your contact list.
If your devices fail to connect, you both need to go back to the contact list and start the process again.
Add a contact at a distance. When you choose "Add contact at a distance", Briar will show a link that you must send to the person you want to add. You can send the link through another app such as Element or SimpleX Chat. Your contact also needs to send you their link. Paste your contact's link and choose a nickname for them. If you and your contact are connected to Briar and have internet access, the contact should be added to your contact list within a few minutes.
Send a message. After adding your first contact, tap their name in the contact list to send them your first message. The circle next to your contact's name turns green when Briar is connected to that contact via the Internet, Wi-Fi or Bluetooth.
Introduce your contacts. You can use the introduction feature to introduce two of your contacts to each other, so they don't need to meet in person to add one another.
Communicate without the Internet. Bluetooth and Wi-Fi have a range of about 10 meters, depending on obstacles. That is clearly not enough to communicate across a city, or even a large building. So when Briar receives a message from a nearby contact, it stores the message and can later relay it to other contacts when they come into range (for example, when you move from one place to another).
Please note that Briar will only sync messages with your contacts, not with nearby strangers who happen to have Briar. And it will only sync the messages you have chosen to share with each contact. For example, if you invite your contacts X and Y to join a forum and they accept, the messages in that forum will be synced with X or Y whenever they are in range. So you can receive forum messages from X in one place, travel to another place, and deliver those messages to Y.
But this does not work for private messages: those are only synced directly between the sender and the recipient.
Connect with your contacts without the Internet. When you meet one of your Briar contacts in person, you can use the "Connect via Bluetooth" feature on the conversation screen to establish a Bluetooth connection between your devices. After doing this once, your devices should connect automatically in the future, although it can take a minute or two after your contact comes into range. If you don't want to wait, you can use "Connect via Bluetooth" again to connect immediately.
If a group of people within Wi-Fi range wants to communicate, it can be useful to create a Wi-Fi hotspot on one person's phone. Even if the hotspot has no internet access, Briar can use it to communicate with contacts connected to the same hotspot.
If carrying your phone from one place to another is too risky (because of police checkpoints, for example), you can sync encrypted messages using a USB stick or an SD card to carry them more discreetly.
https://youtu.be/sKuljekMzTc
https://github.com/briar/briar
-
@ 23acd1fa:0484c9e0
2024-10-23 09:21:33Chef's notes
Cocoa powder: You can use 100% natural unsweetened cocoa powder or Dutch-processed cocoa powder – both work well.
Gluten free flour: There are many different gluten free flours on the market. I tested this recipe using White Wings All Purpose Gluten Free flour. I recommend choosing a gluten free flour that says it can be subbed 1:1 for regular plain or all purpose flour.
*Chocolate chips: Double check your chocolate chips are gluten free if you are making this brownie for someone who is celiac.
Cook times: Cook times will vary depending on your oven, but you’ll know these brownies are done when they firm up around the edges and no longer wobble in the middle. Keep in mind they will continue to cook slightly as they cool. You can also check they’re done by inserting a skewer into the middle of the brownie. If the skewer emerges with only a few crumbs on it, they’re ready. If it is covered in wet, gooey batter, keep baking the brownies and check them in another 5 minutes.
Storage: Brownies will keep well in an airtight container at room temperature or in the fridge for up to 5 days. To serve warm, microwave each brownie for 20 seconds. You can also freeze these brownies to enjoy at a later date. Simply thaw at room temperature and then microwave if you prefer them warm.
Serving Size: 1 brownie | Calories: 278 | Sugar: 26.4 g | Sodium: 22.9 mg | Fat: 15.5 g | Carbohydrates: 34.1 g | Protein: 3 g | Cholesterol: 77.3 mg. Nutrition information is a guide only.
Details
- ⏲️ Prep time: 20 min
- 🍳 Cook time: 35 min
- 🍽️ Servings: 12
Ingredients
- 170 grams (3/4 cup) unsalted butter, melted
- 200 grams (1 cup) caster sugar or granulated sugar
- 90 grams (1/2 cup) brown sugar
- 1 teaspoon vanilla extract
- 3 large eggs
- 40 grams (1/2 cup) cocoa powder
- 70 grams (1/2 cup) gluten free plain or all purpose flour
- 75 grams milk or dark chocolate chips*
Directions
- Preheat the oven to 180 C (350 F) standard / 160 C (320 F) fan-forced. Grease and line an 8-inch square pan with baking or parchment paper, ensuring two sides overhang.
- In a large mixing bowl, add melted butter and sugars and gently whisk together. Add vanilla extract and stir.
- Add the eggs, one at a time, stirring in-between, then sift in the cocoa powder and flour. Stir until just combined. Add chocolate chips.
- Pour the brownie batter in the prepared pan and place in the oven. Bake brownies for approximately 30-35 minutes or until they no longer wobble in the middle.
- Leave brownie in pan and transfer to a wire rack to cool completely. These brownies are quite fragile so if you can, transfer to the fridge for an hour before cutting into squares to serve.
-
@ ee7d2dbe:4a5410b0
React Native App Development Services from Agicent?
We used to use various flavors of JavaScript long before React Native came into being, and once it arrived, we went all in on it. There was a time when React Native and other cross-platform technologies (like Flutter, Ionic, Qt) were only good for creating app MVPs and had a lot of issues, such as integration with third-party libraries, with the device's own hardware capabilities, and so on. But today, as of mid-2022, we can safely say that React Native can help build apps as good as or superior to what native tech would produce, and with a single coding effort. It's like the Java of old: "Write once, run anywhere".
Theoretically, an app that you can make in React Native could also be created in Flutter, Ionic, or native technologies. However, if we have to rate the cross-platform technologies, React Native rules the roost hands down because of its larger community support, flexible frameworks, and its ability to generate native code, which sets it apart from the other platforms.
Rates for react native App Developers on Demand
Following is the standard rate card for different experience and skill levels of on-demand React Native app developers. Besides this, we can also create a custom on-demand app team and optimize the monthly rate based on your specific needs.
Junior React Native Developer
Exp. Level: 1-2 Years
Hands on react native Development.
3 Projects experience minimum.
Agicent’s inhouse trained.
Familiar with PM Tools
Perfect for rapid MVPs and maintenance react native works
Starting at $ 2200 /mo.
Mid-level React Native Developer
Exp. Level: 2-5 Years
All of Jr. Dev +
10 Projects experience minimum.
Has Backend Experience.
Hands on CI/CD pipeline.
Manages Jr. Developers.
Perfect for complex React Native projects and fast development
Starting at $ 2900 /mo.
Senior React Native Developer
Exp. Level: 5+ Years
All of Mid-level Exp +
15 Projects experience minimum.
Full Stack Developer.
Participate in Architecture.
Ability to play Tech. Lead Role.
Perfect for bigger size projects with multi teams
Starting at $ 3800 /mo
BEST Practices followed by Agicent React Native App Development Company
First and foremost, we critically analyze whether the app project is a good candidate for cross-platform or React Native development at all. For some pretty niche apps, native can still be the technology of choice, so ruling out this possibility is the most important first step. Once it is identified that React Native is the tech of choice, we then figure out the backend stack (like Node.js or GraphQL, or a traditional LAMP stack) and the web front-end third-party libraries, like Vue.js, TypeScript, Redux, etc. If it's a regular kind of app that we do time and again (like a dating app, ecommerce app, or healthcare app), then we decide on the tech stack in a few hours; if it is a niche, one-of-a-kind project (like an AI-based app suggesting medicine dosages, or an app that heavily uses third-party APIs for its core function, such as creating digital avatars or facilitating holoportation), then we take more than a few hours to check the libraries and their scalability with React Native, and then decide.
Performance optimization, Build optimization
Native applications are top performers because they use stock APIs and patterns and enjoy the best support from the OEM's OS and hardware; you achieve great performance by virtue of the platform. However, when creating a React Native app, which is cross-platform by nature, you have to use a variety of testing tools (like Appium, Jest, Detox, etc.), be more meticulous about performance parameters, and optimize your code for the best performance across different devices. It can be a time-consuming exercise at times, but it is totally worth it and warranted.
For React Native app development, you have to take care of multi-threading, integrating third-party libraries in an optimized way, image compression, APK or IPA file size optimization, and a lot more that you don't really have to do in native app development.
Limitations of React Native App Development
Lack of native libraries:
If the app has a lot of features, React Native can slow down the development process due to a lack of native libraries and reliance on external, third-party libraries.
Takes more time to initialize:
The issue with React Native is that it takes more time to initialize the runtime for gadgets and devices, mostly because of the JavaScript thread which takes time to initialize.
Excessive Device Support required
Due to the variety of OEM device sizes, types, versions, and OS versions, it is challenging for a developer to provide full support to all of an app's users in one go, so extending support to more and more devices becomes a continuous exercise.
Still in a premature phase
React Native's latest version is 0.68 as of June 2022, which shows that it is still evolving, and that is why it still lacks some functionality. The good thing is that it is continuously maturing, has community support from big tech giants like Facebook and Tesla, and is easy to learn and understand, even for beginners.
React Native Doesn’t Fully Support NFC
NFC enables communication between nearby devices. But React Native still does not fully support NFC or provide complete access to NFC communication.
Future of React Native Development
Start-up first choice
Many big names like Facebook, Instagram, and Tesla have apps built on React Native, and React Native has become one of the most discussed libraries on Stack Overflow. Many startups and even enterprises are adopting it because a single team can manage both the Android and iOS app development, which saves time, resources, and money.
Better integration with Device’s and external Hardware
We are already working on some React Native projects where we interact with external hardware (using Silabs or infi semiconductors), and we have found that React Native doesn't always get priority support; however, this is going to change in the future. React Native will become more scalable and easier to integrate with the device's own hardware as well as with external hardware (Bluetooth, NFC devices).
Open the gate for new open-source frameworks
Domain-specific engineers meet up and hold conferences where each platform brings its own players who are working on similar problems. On the web, React (which powers React Native) commonly draws inspiration from other open-source web frameworks like Vue, Preact, and Svelte. On mobile, React Native was inspired by other open-source mobile frameworks, and we learned from other mobile frameworks built within Facebook.
Source: https://www.agicent.com/react-native-development-company
-
@ b12b632c:d9e1ff79
The decentralized Nostr network is growing exponentially day by day, and new stuff comes out every day. We can now use a NIP46 server to proxy our nsec key, so we can avoid using it directly to log in on Nostr websites and possibly leaking it, by mistake or to malicious persons. That's the point of this tutorial: setting up the NIP46 server Nsec.app with its own Nostr relay. You'll be able to use it yourself and let other people use it; all data is stored locally in your internet browser. It's a non-custodial application, like wallets!
It's a nearly perfect solution (because nothing is perfect, as we know), and it makes the daily use of Nostr keys much more secure and, you'll see, much more sexy! Look:
Nsec.app is not the only NIP46 server; in fact, @PABLOF7z was the first to create a NIP46 server, called nsecBunker. You can also self-host nsecBunkerd; you can find a detailed explanation here: nsecbunkerd. I may write a how-to on self-hosting nsecBunkerd soon.
If you want more information about the bunker and what's behind this tutorial, you can check these links:
A few things before beginning
Spoiler: I didn't automate everything. The goal here is not to give you a full one-click installation process; it's more to let you see and understand all the little things to configure, and how Nsec.app and NIP46 work. There is a little bit of work, yes, but you'll be happy when it works! Believe me.
Before entering the battlefield, you must have a few things:
- A working VPS with direct access to the internet, or a computer at home (but NAT will certainly make your life hell; use a VPS instead, on DigitalOcean, Linode, Scaleway, as you wish).
- A web domain that you own, because we need at least four DNS A records (you can choose the subdomains you like): domain.tld, relay.domain.tld, noauth.domain.tld and noauthd.domain.tld.
- Some programs already installed: git, docker, docker-compose, nano/vi.
If you tick all the boxes, we can move forward!
Let's install everything !
I build a repo with a docker-compose file with all the required stuff to make the Bunker works :
Nsec.app front-end : noauth Nsec.app back-end : noauthd Nostr relay : strfry Nostr NIP05 : easy-nip5
The first thing to do is clone the "nsec-app-docker" repo from my repository:
```console
$ git clone https://github.com/PastaGringo/nsec-app-docker.git
$ cd nsec-app-docker
```
When it's done, you'll have to do several things to make it work. 1) You need to generate some keys for the web-push library (keep them for later) :
```console
$ docker run pastagringo/web-push-generate-keys

Generating your web-push keys...

Your private key : rQeqFIYKkInRqBSR3c5iTE3IqBRsfvbq_R4hbFHvywE
Your public key : BFW4TA-lUvCq_az5fuQQAjCi-276wyeGUSnUx4UbGaPPJwEemUqp3Rr3oTnxbf0d4IYJi5mxUJOY4KR3ZTi3hVc
```
2) Generate a new keys pair (nsec/npub) for the NIP46 server by clicking on "Generate new key" from NostrTool website: nostrtool.com.
You should have something like this :
```console
Nostr private key (nsec): keep this -> nsec1zcyanx8zptarrmfmefr627zccrug3q2vhpfnzucq78357hshs72qecvxk6
Nostr private key (hex): 1609d998e20afa31ed3bca47a57858c0f888814cb853317300f1e34f5e178794
Nostr public key (npub): npub1ywzwtnzeh64l560a9j9q5h64pf4wvencv2nn0x4h0zw2x76g8vrq68cmyz
Nostr public key (hex): keep this -> 2384e5cc59beabfa69fd2c8a0a5f550a6ae6667862a7379ab7789ca37b483b06
```
You need to keep Nostr private key (nsec) & Nostr public key (npub). 3) Open (nano/vi) the .env file located in the current folder and fill all the required info :
```console
# traefik
EMAIL=pastagringo@fractalized.net <-- replace with your own email
NSEC_ROOT_DOMAIN=plebes.ovh <-- replace with your own domain
RELAY_DOMAIN=relay.plebes.ovh <-- replace with your own relay domain
NOAUTH_DOMAIN=noauth.plebes.ovh <-- replace with your own noauth domain
NOAUTHD_DOMAIN=noauthd.plebes.ovh <-- replace with your own noauthd domain

# noauth
APP_WEB_PUSH_PUBKEY=BGVa7TMQus_KVn7tAwPkpwnU_bpr1i6B7D_3TT-AwkPlPd5fNcZsoCkJkJylVOn7kZ-9JZLpyOmt7U9rAtC-zeg <-- replace with your own web push public key
APP_NOAUTHD_URL=https://$NOAUTHD_DOMAIN
APP_DOMAIN=$NSEC_ROOT_DOMAIN
APP_RELAY=wss://$RELAY_DOMAIN

# noauthd
PUSH_PUBKEY=$APP_WEB_PUSH_PUBKEY
PUSH_SECRET=_Sz8wgp56KERD5R4Zj5rX_owrWQGyHDyY4Pbf5vnFU0 <-- replace with your own web push private key
ORIGIN=https://$NOAUTHD_DOMAIN
DATABASE_URL=file:./prod.db
BUNKER_NSEC=nsec1f43635rzv6lsazzsl3hfsrum9u8chn3pyjez5qx0ypxl28lcar2suy6hgn <-- replace with your bunker nsec key
BUNKER_RELAY=wss://$RELAY_DOMAIN
BUNKER_DOMAIN=$NSEC_ROOT_DOMAIN
BUNKER_ORIGIN=https://$NOAUTH_DOMAIN
```
Be aware of noauth and noauthd (the d letter). Next, save and quit. 4) You now need to modify the nostr.json file used for the NIP05 to indicate which relay your bunker will use. You need to set the bunker HEX PUBLIC KEY (I replaced the info with the one I get from NostrTool before) :
```console
$ nano easy-nip5/nostr.json
```

```json
{
  "names": {
    "_": "ServerHexPubKey"
  },
  "nip46": {
    "ServerHexPubKey": [
      "wss://ReplaceWithYourRelayDomain"
    ]
  }
}
```
5) You can now run the docker compose file by running the command (first run can take a bit of time because the noauth container needs to build the npm project):
```console
$ docker compose up -d
```
6) Before creating our first user into the Nostr Bunker, we need to test if all the required services are up. You should have :
noauth :
noauthd :
```console
CANNOT GET /
```
https://noauthd.yourdomain.tld/name :
console { "error": "Specify npub" }
https://yourdomain.tld/.well-known/nostr.json :
console { "names": { "_": "ServerHexPubKey" }, "nip46": { "ServerHexPubKey": [ "wss://ReplaceWithYourRelayDomain" ] } }
If you have everything working, we can try to create a new user!
7) Connect to noauth and click on "Get Started" :
At the bottom of the screen, click on "Sign up":
Fill in a username and click on "Create account":
If everything has been correctly configured, you should see a pop-up message saying "Account created for "XXXX"":
PS: to check whether noauthd is serving the nostr.json file correctly, you can check this URL: https://yourdomain.tld/.well-known/nostr.json?name=YourUser. You should see that the user now has NIP05/NIP46 entries:
If the user creation failed, you'll see a red pop-up saying "Something went wrong!" :
To understand what happened, you need to inspect the web page to find the error :
For this example, I tried to recreate the user "jack", which had already been created. You may encounter a lot of different errors depending on your configuration: the relay may not be reachable over wss://, noauthd may not be accessible, etc. Every answer should be found in this place.
To completely finish the tests, you need to enable browser notifications, otherwise you won't see the pop-up when you log in on a Nostr web client. Do this by clicking on "Enable background service":
You need to click on allow notifications :
You should see this green confirmation popup at the top right of your screen:
Well... Everything works now !
8) You try to use your brand new proxyfied npub by clicking on "Connect App" and buy copying your bunker URL :
You can now to for instance on Nostrudel Nostr web client to login with it. Select the relays you want (Popular is better ; if you don't have multiple relay configured on your Nostr profile, avoid "Login to use your relay") :
Click on "Sign in" :
Click on "Show Advanced" :
Click on "Nostr connect / Bunker" :
Paste your bunker URL and click on "Connect" :
The first time, tour browser (Chrome here) may blocks the popup, you need to allow it :
If the browser blocked the popup, NoStrudel will wait for your confirmation to log in:
You have to go back to your bunker URL to allow the NoStrudel connection request by clicking on "Connect":
The first connections may be a bit annoying with all the popup authorizations, but once that's done you can forget about them; it will connect without any issue. Congrats! You are connected on NoStrudel with a proxified npub key! ⚡
You can check which applications you have granted permissions to, as well as the activity history, in noauth by selecting your user:
If you want to import your real Nostr profile, the one that everyone knows, you can import your nsec key by adding a new account, selecting "Import key" and adding your precious nsec key (reminder: your nsec key stays in your browser! The noauth provider won't have access to it!):
You can see that my profile picture has been retrieved and updated in noauth:
I can now use this new pubkey attached to my nsec.app server to log in on NoStrudel again:
Accounts/keys management in noauthd: you can list the keys created in your bunker by running this command (CTRL+C to exit):
```console
$ docker exec -it noauthd node src/index.js list_names
[ '/usr/local/bin/node', '/noauthd/src/index.js', 'list_names' ]
1 jack npub1hjdw2y0t44q4znzal2nxy7vwmpv3qwrreu48uy5afqhxkw6d2nhsxt7x6u 1708173927920n
2 peter npub1yp752u5tr5v5u74kadrzgfjz2lsmyz8dyaxkdp4e0ptmaul4cyxsvpzzjz 1708174748972n
3 john npub1xw45yuvh5c73sc5fmmc3vf2zvmtrzdmz4g2u3p2j8zcgc0ktr8msdz6evs 1708174778968n
4 johndoe npub1xsng8c0lp9dtuan6tkdljy9q9fjdxkphvhj93eau07rxugrheu2s38fuhr 1708174831905n
```
If you want to delete someone key, you have to do :
```console
$ docker exec -it noauthd node src/index.js delete_name johndoe
[ '/usr/local/bin/node', '/noauthd/src/index.js', 'delete_name', 'johndoe' ]
deleted johndoe {
  id: 4,
  name: 'johndoe',
  npub: 'npub1xsng8c0lp9dtuan6tkdljy9q9fjdxkphvhj93eau07rxugrheu2s38fuhr',
  timestamp: 1708174831905n
}

$ docker exec -it noauthd node src/index.js list_names
[ '/usr/local/bin/node', '/noauthd/src/index.js', 'list_names' ]
1 jack npub1hjdw2y0t44q4znzal2nxy7vwmpv3qwrreu48uy5afqhxkw6d2nhsxt7x6u 1708173927920n
2 peter npub1yp752u5tr5v5u74kadrzgfjz2lsmyz8dyaxkdp4e0ptmaul4cyxsvpzzjz 1708174748972n
3 john npub1xw45yuvh5c73sc5fmmc3vf2zvmtrzdmz4g2u3p2j8zcgc0ktr8msdz6evs 1708174778968n
```
It would be pretty easy to create a script to handle key management, but I think @Brugeman may create a web interface for that. Noauth is still very young; changes are committed every day to fix and enhance the application! As """everything""" is stored locally in your browser, you have to clear the cache of your bunker noauth URL to clean everything. This Chrome extension is very useful for that. Check these settings in the extension options:
You can now enjoy even more Nostr ⚡ See you soon in another Fractalized story!
-
@ ee11a5df:b76c4e49
2023-11-09 05:20:37A lot of terms have been bandied about regarding relay models: Gossip relay model, outbox relay model, and inbox relay model. But this term "relay model" bothers me. It sounds stuffy and formal and doesn't actually describe what we are talking about very well. Also, people have suggested maybe there are other relay models. So I thought maybe we should rethink this all from first principles. That is what this blog post attempts to do.
Nostr is notes and other stuff transmitted by relays. A client puts an event onto a relay, and subsequently another client reads that event. OK, strictly speaking it could be the same client. Strictly speaking it could even be that no other client reads the event, that the event was intended for the relay (think about nostr connect). But in general, the reason we put events on relays is for other clients to read them.
Given that fact, I see two ways this can occur:
1) The reader reads the event from the same relay that the writer wrote the event to (this I will call relay rendezvous), or
2) The event was copied between relays by something.
This second solution is perfectly viable, but it is less scalable and less immediate, as it requires copies, which means that resources will be consumed more rapidly than if we can come up with workable relay rendezvous solutions. That doesn't mean there aren't other considerations which could weigh heavily in favor of copying events. But I am not aware of them, so I will be discussing relay rendezvous.
We can then divide the relay rendezvous situation into several cases: one-to-one, one-to-many, and one-to-all, where the many are a known set, and the all are an unbounded unknown set. I cannot conceive of many-to-anything for nostr so we will speak no further of it.
For a rendezvous to take place, not only do the parties need to agree on a relay (or many relays), but there needs to be some way that readers can become aware that the writer has written something.
So the one-to-one situation works out well by the writer putting the message onto a relay that they know the reader checks for messages on. This we call the INBOX model. It is akin to sending them an email into their inbox where the reader checks for messages addressed to them.
The one-to-(known)-many model is very similar, except the writer has to write to many people's inboxes. Still we are dealing with the INBOX model.
The final case, one-to-(unknown)-all, there is no way the writer can place the message into every person's inbox because they are unknown. So in this case, the writer can write to their own OUTBOX, and anybody interested in these kinds of messages can subscribe to the writer's OUTBOX.
Notice that I have covered every case already, and that I have not even specified what particular types of scenarios call for one-to-one or one-to-many or one-to-all, but that every scenario must fit into one of those models.
So that is basically it. People need INBOX and OUTBOX relays and nothing else for relay rendezvous to cover all the possible scenarios.
That is not to say that other kinds of concerns might not modulate this. There is a suggestion for a DM relay (which is really an INBOX but with a special associated understanding), which is perfectly fine by me. But I don't think there are any other relay models. There is also the case of a live event where two parties are interacting over the same relay, but in terms of rendezvous this isn't a new case, it is just that the shared relay is serving as both parties' INBOX (in the case of a closed chat) and/or both parties' OUTBOX (in the case of an open one) at the same time.
So anyhow that's my thinking on the topic. It has become a fairly concise and complete set of concepts, and this makes me happy. Most things aren't this easy.
-
@ b12b632c:d9e1ff79
2023-08-08 00:02:31"Welcome to the Bitcoin Lightning Bolt Card, the world's first Bitcoin debit card. This revolutionary card allows you to easily and securely spend your Bitcoin at lightning compatible merchants around the world." Bolt Card
I discovered the Bolt Card a few days ago, and I have to say it's pretty amazing. Thinking that we can pay daily with Bitcoin sats the same way we pay with our Visa/Mastercard debit cards is really something huge ⚡ (based on the fact that sellers accept bitcoin, obviously!)
To use Bolt Card you have three choices :
- Use their (Bolt Card) own Bolt Card HUB and their own BTC Lightning node
- Use your own self hosted Bolt Card Hub and an external BTC Lightning node
- Use your own self hosted Bolt Card Hub and your BTC Lightning node (where you shoud have active Lightning channels)
⚡ The first choice is the quickest and simplest way to have an NFC Bolt Card. It will take you a few seconds (for real). You'll wait much longer to receive your NFC card from the website where you bought it than to configure it with Bolt Card's services.
⚡⚡ The second choice is pretty nice too, because you won't have to manage a VPS or deal with all the BTC Lightning node stuff; you'll use an external node. In the Bolt Card tutorial about Bolt Card Hub, they use a Lightning node from voltage.cloud, and I have to say that their services are impressive. In a few seconds you'll have your own Lightning node, and you'll be able to configure it in the Bolt Card Hub settings. PS: voltage.cloud offers 7 trial days / 20$, so don't hesitate to try it!
⚡⚡⚡ The third one is obviously a bit (way) more complex, because you'll have to provide a VPS plus a Bitcoin node and a Bitcoin Lightning node to be able to send and receive Lightning payments with your Bolt NFC Card. So you should already have configured everything by yourself to follow this tutorial. I will show what I did for my own installation; all my nodes (BTC & Lightning) are provided by my home Umbrel node (as I don't want to publish my nodes directly on the clearnet). We'll see how to connect to the Umbrel Lightning node later (spoiler: Tailscale).
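As a side note, if your VPS has already joined the same tailnet, you can also fetch the Umbrel node's Tailscale IP from the command line instead of the admin console (assuming the tailscale CLI is installed on both machines):

```console
$ tailscale status   # on the VPS: lists tailnet peers with their 100.x.y.z addresses
$ tailscale ip -4    # on the Umbrel node: prints its own Tailscale IPv4
```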
To summarize, in this tutorial I have:
- 1 Umbrel node (rpi4b) with BTC and Lightning with Tailscale installed.
- 1 VPS (Virtual Private Server) to publicly publish the Bolt Card LNDHub and Bolt Card containers, configured the same way as my other containers (with Nginx Proxy Manager)
Ready? Let's do it ! ⚡
Configuring Bolt Card & Bolt Card LNDHub
It's always good to begin by reading the boltcard-lndhub-docker GitHub repo. For a better understanding of all the components, you can check this schema:
We won't use it exactly as it is: we'll skip the Caddy part, because we already use Nginx Proxy Manager.
To begin, we'll clone all the required repositories:
```console
$ git clone https://github.com/boltcard/boltcard-lndhub-docker bolthub
$ cd bolthub
$ git clone https://github.com/boltcard/boltcard-lndhub BoltCardHub
$ git clone https://github.com/boltcard/boltcard.git
$ git clone https://github.com/boltcard/boltcard-groundcontrol.git GroundControl
```
PS: we won't cover configuring GroundControl yet. This article may be updated later.
We now need to modify the settings file with our own settings :
```console
$ mv .env.example .env
$ nano .env
```
You need to replace "your-lnd-node-rpc-address" by your Umbrel TAILSCALE ip address (you can find your Umbrel node IP from your Tailscale admin console):
```console
LND_IP=your-lnd-node-rpc-address # <- UMBREL TAILSCALE IP ADDRESS
LND_GRPC_PORT=10009
LND_CERT_FILE=tls.cert
LND_ADMIN_MACAROON_FILE=admin.macaroon
REDIS_PASSWORD=random-string
LND_PASSWORD=your-lnd-node-unlock-password

# docker-compose.yml only
GROUNDCONTROL=ground-control-url

# docker-compose-groundcontrol.yml only
FCM_SERVER_KEY=hex-encoded
APNS_P8=hex-encoded
APNS_P8_KID=issuer-key-which-is-key-ID-of-your-p8-file
APPLE_TEAM_ID=team-id-of-your-developer-account
BITCOIN_RPC=bitcoin-rpc-url
APNS_TOPIC=app-package-name
```
We now need to generate an AES key and insert it into the "settings.sql" file :
```console
$ hexdump -vn 16 -e '4/4 "%08x" 1 "\n"' /dev/random
19efdc45acec06ad8ebf4d6fe50412d0
$ nano settings.sql
```
- Insert the AES key between the quotes ('') after 'AES_DECRYPT_KEY'
- Insert your domain or subdomain (a subdomain in my case) between the quotes ('') after 'HOST_DOMAIN'
- Insert your Umbrel Tailscale IP between the quotes ('') after 'LN_HOST'
Be aware that this subdomain won't be the LNDHub container (boltcard_hub:9002) but the Boltcard container (boltcard_main:9000)
```
\c card_db;

DELETE FROM settings;

-- at a minimum, the settings marked 'set this' must be set for your system
-- an explanation for each of the bolt card server settings can be found here
-- https://github.com/boltcard/boltcard/blob/main/docs/SETTINGS.md

INSERT INTO settings (name, value) VALUES ('LOG_LEVEL', 'DEBUG');
INSERT INTO settings (name, value) VALUES ('AES_DECRYPT_KEY', '19efdc45acec06ad8ebf4d6fe50412d0'); -- set this
INSERT INTO settings (name, value) VALUES ('HOST_DOMAIN', 'sub.domain.tld'); -- set this
INSERT INTO settings (name, value) VALUES ('MIN_WITHDRAW_SATS', '1');
INSERT INTO settings (name, value) VALUES ('MAX_WITHDRAW_SATS', '1000000');
INSERT INTO settings (name, value) VALUES ('LN_HOST', ''); -- set this
INSERT INTO settings (name, value) VALUES ('LN_PORT', '10009');
INSERT INTO settings (name, value) VALUES ('LN_TLS_FILE', '/boltcard/tls.cert');
INSERT INTO settings (name, value) VALUES ('LN_MACAROON_FILE', '/boltcard/admin.macaroon');
INSERT INTO settings (name, value) VALUES ('FEE_LIMIT_SAT', '10');
INSERT INTO settings (name, value) VALUES ('FEE_LIMIT_PERCENT', '0.5');
INSERT INTO settings (name, value) VALUES ('LN_TESTNODE', '');
INSERT INTO settings (name, value) VALUES ('FUNCTION_LNURLW', 'ENABLE');
INSERT INTO settings (name, value) VALUES ('FUNCTION_LNURLP', 'ENABLE');
INSERT INTO settings (name, value) VALUES ('FUNCTION_EMAIL', 'DISABLE');
INSERT INTO settings (name, value) VALUES ('AWS_SES_ID', '');
INSERT INTO settings (name, value) VALUES ('AWS_SES_SECRET', '');
INSERT INTO settings (name, value) VALUES ('AWS_SES_EMAIL_FROM', '');
INSERT INTO settings (name, value) VALUES ('EMAIL_MAX_TXS', '');
INSERT INTO settings (name, value) VALUES ('FUNCTION_LNDHUB', 'ENABLE');
INSERT INTO settings (name, value) VALUES ('LNDHUB_URL', 'http://boltcard_hub:9002');
INSERT INTO settings (name, value) VALUES ('FUNCTION_INTERNAL_API', 'ENABLE');
```
You now need to get two files used by Bolt Card LNDHub: the admin.macaroon and tls.cert files from your Umbrel BTC Lightning node. You can find these files on your Umbrel node at the following locations:
```console
/home/umbrel/umbrel/app-data/lightning/data/lnd/tls.cert
/home/umbrel/umbrel/app-data/lightning/data/lnd/data/chain/bitcoin/mainnet/admin.macaroon
```
You can use either WinSCP, scp or ssh to copy these files to your local workstation and copy them again to your VPS to the root folder "bolthub".
You should have all these files in the bolthub directory:
```console
johndoe@yourvps:~/bolthub$ ls -al
total 68
drwxrwxr-x  6 johndoe johndoe 4096 Jul 30 00:06 .
drwxrwxr-x  3 johndoe johndoe 4096 Jul 22 00:52 ..
-rw-rw-r--  1 johndoe johndoe  482 Jul 29 23:48 .env
drwxrwxr-x  8 johndoe johndoe 4096 Jul 22 00:52 .git
-rw-rw-r--  1 johndoe johndoe   66 Jul 22 00:52 .gitignore
drwxrwxr-x 11 johndoe johndoe 4096 Jul 22 00:52 BoltCardHub
-rw-rw-r--  1 johndoe johndoe  113 Jul 22 00:52 Caddyfile
-rw-rw-r--  1 johndoe johndoe  173 Jul 22 00:52 CaddyfileGroundControl
drwxrwxr-x  6 johndoe johndoe 4096 Jul 22 00:52 GroundControl
-rw-rw-r--  1 johndoe johndoe  431 Jul 22 00:52 GroundControlDockerfile
-rw-rw-r--  1 johndoe johndoe 1913 Jul 22 00:52 README.md
-rw-rw-r--  1 johndoe johndoe  293 May  6 22:24 admin.macaroon
drwxrwxr-x 16 johndoe johndoe 4096 Jul 22 00:52 boltcard
-rw-rw-r--  1 johndoe johndoe 3866 Jul 22 00:52 docker-compose-groundcontrol.yml
-rw-rw-r--  1 johndoe johndoe 2985 Jul 22 00:57 docker-compose.yml
-rw-rw-r--  1 johndoe johndoe 1909 Jul 29 23:56 settings.sql
-rw-rw-r--  1 johndoe johndoe  802 May  6 22:21 tls.cert
```
We need to do a few last tasks to ensure that Bolt Card LNDHub will work perfectly.
It may already be the case on your VPS, but your user should be a member of the docker group. If not, you can add your user by doing :
```
sudo groupadd docker
sudo usermod -aG docker ${USER}
```
If you ran these commands, you need to log out and log in again.
We also need to create all the docker named volumes by doing :
```
docker volume create boltcard_hub_lnd
docker volume create boltcard_redis
```
Configuring Nginx Proxy Manager to proxify Bolt Card LNDHub & Boltcard
You need to have followed my previous blog post for the instructions below to work as expected.
As the Bolt Card LNDHub docker stack lives in a different directory than our other services and has its own docker-compose.yml file, we have to declare the docker network in the NPM (Nginx Proxy Manager) docker-compose.yml to allow NPM to communicate with the Bolt Card LNDHub & Boltcard containers.
To do this we need to add these lines to our external NPM docker-compose.yml (not the one located in the bolthub directory, but the one used for all your other containers) :
nano docker-compose.yml
```
networks:
  bolthub_boltnet:
    name: bolthub_boltnet
    external: true
```
Be careful, "bolthub" from "bolthub_boltnet" is based on the directory where Bolt Card LNDHub Docker docker-compose.yml file is located.
We also need to attach this network to the NPM container :
```
  nginxproxymanager:
    container_name: nginxproxymanager
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
    volumes:
      - ./nginxproxymanager/data:/data
      - ./nginxproxymanager/letsencrypt:/etc/letsencrypt
    networks:
      - fractalized
      - bolthub_boltnet
```
You can now recreate the NPM container to attach the network:
docker compose up -d
Now, you'll have to create 2 new Proxy Hosts in the NPM admin UI. The first one points your domain/subdomain to the Bolt Card LNDHub GUI (boltcard_hub:9002) :
And the second one for the Boltcard container (boltcard_main:9000).
In both Proxy Hosts I set all the SSL options and use my wildcard certificate, but you can generate one certificate for each Proxy Host, with Force SSL, HSTS, HTTP/2 Support and HSTS Subdomains enabled.
Starting Bolt Card LNDHub & BoltCard containers
Well done! Everything is set up, we can now start the Bolt Card LNDHub & Boltcard containers !
You need to go again to the root folder of the Bolt Card LNDHub project, "bolthub", and start the docker compose stack. We'll begin without "-d" to see if any issues come up during container creation :
docker compose up
I won't share my container logs to avoid disclosing any sensitive information about my Bolt Card LNDHub node, but you can see them in the Bolt Card LNDHub YouTube video (link with the exact timestamp where it's shown) :
If you have issues with the mounting of the admin.macaroon or tls.cert files because you started the docker compose stack the first time without those files present in the bolthub folder, do :
docker compose down && docker compose up
After waiting a few seconds/minutes you can go to your Bolt Card LNDHub Web UI domain/subdomain (created earlier in NPM) and you should see the Bolt Card LNDHub Web UI :
If everything is OK, you can now run the containers in detached mode :
docker compose up -d
Voilààààà ⚡
If you need to follow all the Bolt Card LNDHub logs, you can use :
docker compose logs -f --tail 30
You can now follow the video from Bolt Card to configure your Bolt Card NFC card and use your own Bolt Card LNDHub :
~~PS : there is currently a bug: when you click on "Connect Bolt Card" in the Bolt Card Wallet app, you might get this error message: "API error: updateboltcard: enable_pin is not a valid boolean (code 6)". It's a known issue and the Bolt Card team is currently working on it. You can find more information on their Telegram~~
Thanks to the Bolt Card team, the issue has been corrected : changelog
See you soon in another Fractalized story!
-
@ ee11a5df:b76c4e49
2023-07-29 03:27:23Gossip: The HTTP Fetcher
Gossip is a desktop nostr client. This post is about the code that fetches HTTP resources.
Gossip fetches HTTP resources. This includes images, videos, nip05 json files, etc. The part of gossip that does this is called the fetcher.
We have had a fetcher for some time, but it was poorly designed and had problems. For example, it was never expiring items in the cache.
We've made a lot of improvements to the fetcher recently. It's pretty good now, but there is still room for improvement.
Caching
Our fetcher caches data. Each URL that is fetched is hashed, and the content is stored under a file in the cache named by that hash.
If a request is in the cache, we don't do an HTTP request, we serve it directly from the cache.
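Schematically, the lookup is something like this (illustrative Rust; blake3 stands in for whatever hash is actually used):

```rust
use std::path::{Path, PathBuf};

/// Name the cache file by a hash of the URL. The hash function here
/// (blake3) is an illustrative choice, not necessarily the one in use.
fn cache_path(cache_dir: &Path, url: &str) -> PathBuf {
    cache_dir.join(blake3::hash(url.as_bytes()).to_hex().as_str())
}

/// Serve directly from the cache when the file exists.
fn cached_bytes(cache_dir: &Path, url: &str) -> Option<Vec<u8>> {
    std::fs::read(cache_path(cache_dir, url)).ok()
}
```

Hashing the URL gives a flat, fixed-length file name, which sidesteps escaping problems with arbitrary URLs.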
But cached data gets stale. Sometimes resources at a URL change. We generally check resources again after three days.
We save the server's ETag value for content, and when we check the content again we supply an If-None-Match header with the ETag so the server can respond with 304 Not Modified, in which case we don't need to download the resource again; we just bump the filetime to now.
In the event that our cache data is stale, but the server gives us an error, we serve up the stale data (stale is better than nothing).
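Putting the last three paragraphs together, the revalidation step looks roughly like this (a sketch using the reqwest crate; the enum and names are illustrative, not the exact code):

```rust
use reqwest::{header, Client, StatusCode};

/// What to do with a stale cache entry after asking the server again.
enum Revalidation {
    NotModified,    // 304: keep the cached bytes, just bump the file time
    Fresh(Vec<u8>), // 2xx: overwrite the cache file with the new content
    ServerError,    // error: serve the stale copy anyway
}

/// Conditional GET using the ETag stored alongside the cached content.
async fn revalidate(client: &Client, url: &str, etag: &str) -> reqwest::Result<Revalidation> {
    let resp = client
        .get(url)
        .header(header::IF_NONE_MATCH, etag)
        .send()
        .await?;
    Ok(match resp.status() {
        StatusCode::NOT_MODIFIED => Revalidation::NotModified,
        s if s.is_success() => Revalidation::Fresh(resp.bytes().await?.to_vec()),
        _ => Revalidation::ServerError,
    })
}
```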
Queueing
We used to fire off HTTP GET requests as soon as we knew that we needed a resource. This was not looked on too kindly by servers and CDNs who were giving us either 403 Forbidden or 429 Too Many Requests.
So we moved to a queue system. The host is extracted from each URL, and each host is only given up to 3 requests at a time. If we want 29 images from the same host, we only ask for three, and the remaining 26 stay in the queue for next time. When one of those requests completes, we decrement the host load so we know we can send that host another request later.
We process the queue in an infinite loop where we wait 1200 milliseconds between passes. Passes take time themselves and sometimes must wait for a timeout. Each pass fetches potentially multiple HTTP resources in parallel, asynchronously. If we have 300 resources at 100 different hosts, three per host, we could get them all in a single pass. More likely a bunch of resources are at the same host, and we make multiple passes at it.
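The per-host bookkeeping is roughly this (illustrative types, not the exact code):

```rust
use std::collections::{HashMap, VecDeque};

const MAX_PER_HOST: usize = 3;

/// Per-host state: how many requests are in flight, and what's waiting.
#[derive(Default)]
struct HostQueue {
    in_flight: usize,
    pending: VecDeque<String>, // URLs waiting their turn
}

struct Fetcher {
    hosts: HashMap<String, HostQueue>,
}

impl Fetcher {
    /// Called once per pass: drain up to the per-host budget.
    fn take_batch(&mut self) -> Vec<String> {
        let mut batch = Vec::new();
        for q in self.hosts.values_mut() {
            while q.in_flight < MAX_PER_HOST {
                match q.pending.pop_front() {
                    Some(url) => {
                        q.in_flight += 1;
                        batch.push(url);
                    }
                    None => break,
                }
            }
        }
        batch
    }

    /// Called when a request completes, decrementing the host load.
    fn complete(&mut self, host: &str) {
        if let Some(q) = self.hosts.get_mut(host) {
            q.in_flight = q.in_flight.saturating_sub(1);
        }
    }
}
```

With 300 resources across 100 hosts this drains in a single pass, as described above; a hot host just takes more passes.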
Timeouts
When we fetch URLs in parallel asynchronously, we wait until all of the fetches complete before waiting another 1200 ms and doing another loop. Sometimes one of the fetches times out. In order to keep things moving, we use short timeouts of 10 seconds for a connect, and 15 seconds for a response.
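Assuming a reqwest-style client (gossip is Rust, so this is the obvious choice), those limits are set on the client builder; note reqwest's timeout() is an overall request timeout, standing in here for the response timeout:

```rust
use std::time::Duration;

/// Short timeouts keep one slow host from stalling a whole pass.
/// Values are the ones given above.
fn build_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .connect_timeout(Duration::from_secs(10)) // connect timeout
        .timeout(Duration::from_secs(15))         // overall request timeout
        .build()
}
```

Ten and fifteen seconds would be aggressive for a browser, but they are fine for background resource fetching where retrying on the next pass is cheap.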
Handling Errors
Some kinds of errors are more serious than others. When we encounter these, we sin bin the server for a period of time where we don't try fetching from it until a specified period elapses.
-
@ 3c7dc2c5:805642a8
2024-10-23 06:09:57🧠Quote(s) of the week:
"At some point, people will realize that they have more to fear by not embracing this technology than by embracing it." - Michael Saylor
'The tail (Bitcoin) seems to be wagging the dog (ECB)... until the world realizes Bitcoin is the dog and central banks the tail.' -Tuur Demeester
🧡Bitcoin news🧡
Of course, I will start this week's Weekly Recap with a short recap of the new ECB paper, 'The distributional consequences of Bitcoin', by Jürgen Schaaf & Ulrich Bindseil, aka the ECB BTC-gaslighters.
Why do I say gaslighters? Those two gentlemen already wrote a paper in 2022 claiming "Bitcoin is basically dead". Just for your information, Bitcoin is worth 64% more today than when Jürgen & Uli made that statement. Ergo, ECB economists during the bear market: it's a worthless scam; ECB economists during the bull market: it's not fair.
Anyway, the new paper is a true declaration of war: the ECB claims that early Bitcoin adopters steal economic value from latecomers. “If the price of Bitcoin rises for good, the existence of Bitcoin impoverishes both non-holders and latecomers"
I quote Tuur Demeester: 'I strongly believe authorities will use this luddite argument to enact harsh taxes or bans'. A bit below in the Bitcoin segment you will already see some first steps from governments to impose harsh taxes on their (Bitcoin) citizens. Now back to the paper and my view on it. Let me start with the following: it's not our fault (ECB, no-coiners) that you didn't take the time to study Bitcoin; the signs were everywhere.
How freaking cool is it, and I know we are winning, when the ECB is quoting people like Marty Bent and Molly White, quoting from Bitcoin Magazine, excerpting Trump and RFK from Nashville, and matching price movements with electoral polls in its research papers. Madness!
This is one of the first times in history that those who weren't supposed to get wealthy did. Bitcoin is leveling the playing field for everyone. Lovely to see that the ECB, and in time the EU, will actively attack the only thing that can save this continent in decline.
Just to hit you with some logic. We, the public, need a PhD, some ECB economist, to tell us that investing early in stocks, a house, art, or whatever is no different from investing early in Bitcoin. All they say is that whoever enters an asset class earlier accrues more value than those who enter later, right? Again, like in every other asset class. For example, and I quote Ben Kaufmann: "If the price of Apple/Nvidia (you name it) stock rises for good, the existence of Apple/Nvidia (you name it) impoverishes both non-holders and latecomers." See how stupid that sounds? Can someone find any logic in this?
Bitcoin protects you from the fiat clowns at the ECB who inflate the currency into oblivion and then blame Bitcoin for people being poor. “protect yourself from currency debasement with Bitcoin.”
I truly hope the ECB is at its final stage of denial and is only one 'logical' leap away from realizing that they need Bitcoin on their balance sheet. To finish this bit, the ECB is doing a great job. Cumulative change has been just -40.57% in about 20 years.
The real thieves are the people who can create fiat money out of thin air and dilute entire populations behind their backs. Something the ECB and big banks perform as a primary function. Bitcoin simply allows anyone to protect themselves from fiat debasement and reliably improve their standard of living.
Sam Callahan: 'Holders of worse money will lose out to holders of better money. Welcome to Reverse Thiers’ Law. It’s not Bitcoin’s fault that these institutions destroyed their currencies. Fortunately, anyone can opt into BTC. The choice has been yours since 2009. Choose your money wisely.'
On the 14th of October:
➡️Publicly-listed German company Samara Asset Group to buy Bitcoin using a $33M bond.
➡️MicroStrategy stock has done over $3.2 billion in trading volume so far today.
➡️$1.45 trillion Deutsche Bank partnered with Keyrock for Crypto and FX services. It's a great thing that these huge legacy banks (even though DB is insolvent) will dump billions into the space and allow people with complex financial situations to own and utilize Bitcoin.
On the 15th of October:
➡️Fed’s Kashkari: Bitcoin remains worthless after twelve years. Almost the 6th largest monetary asset in the world...Blackrock and Fidelity beg to differ. Blackrock now holds 1.87% of the mined supply for its clients. Fidelity holds 0.93%. At a 1.35 trillion market cap 1.2 billion wallet addresses would say otherwise.
➡️Metaplanet purchases an additional 106.97 $BTC. They have purchased an additional 106.976 Bitcoin for ¥1 billion at an average price of ¥9,347,891 per Bitcoin. As of October 15, Metaplanet holds ~855.478 Bitcoin acquired for ¥7.965 billion at an average price of ¥9,310,061 per Bitcoin.
➡️Adam Back's company Blockstream raises $210 MILLION to buy more Bitcoin for its treasury and fund its other initiatives.
On the 16th of October:
➡️93% of the Bitcoin supply is in profit. We can HODL longer than the market can remain irrational.
➡️You know the day will come. Governments will use Bitcoin to get more taxes or do 'capital control' / 'exit control' / 'control of money'.
Italy's Deputy Finance Minister Maurizio Leo announced their plan to increase capital gains on Bitcoin from 26% to 42% because "the phenomenon is spreading." Immoral tax on regular people. Rich people will have a workaround.
The main question on this matter is how many other states follow this path before other nations realize the benefits of embracing Bitcoin. Work > Pay Income Tax > Invest your money > Take risk > Pay half your profits to them.
➡️Bitcoin miner Marathon Digital Holdings has obtained a $200M credit line from an unnamed lender.
The loan is secured by a portion of the company's Bitcoin holdings and will be used for strategic investments and general corporate purposes.
➡️ETFs have taken in $1.64 BILLION in just 4 days.
On the 17th of October:
➡️BlackRock's spot Bitcoin ETF bought $391.8 MILLION worth of Bitcoin today. Since launching in January, Bitcoin ETFs officially broke $20 BILLION in net inflows. This milestone took gold ETFs about 5 years to achieve. Total assets now stand at $65B, another all-time high.
Bitcoin ETFs have taken in $1.7 BILLION in October so far. • IBIT: $1.27b • FBTC: $278m • BITB: $126m • ARKB: $29m
On the 18th of October:
Samson Mow, JAN3 Founder, speaks at the German Bundestag telling MPs about Bitcoin adoption for nation-states.
➡️Whales have been buying Bitcoin aggressively at an unprecedented pace.
➡️'When Bitcoin was $68k in 2021 the hash rate was 160M TH/s. Now it’s 661M TH/s. The network is 4x more secure. Large players are suppressing the price to stack at low prices, but they can’t fake the electricity numbers. Something will happen soon to the price!' -Bitcoin for Freedom
➡️Morgan Stanley reveals $272.1M Bitcoin ETF holdings in SEC filing.
➡️Although I don't like the guy, Anthony Pompliano, because of his BlockFi 'rocketship' endorsement, last week he was spot on as he destroyed a Fox Host who says Bitcoin is "way too volatile" to be used for savings. Pomp: "Since 2016, the average cost of an American house has increased by over 50% in dollars but has dropped 99% when measured in Bitcoin."
➡️SEC approves NYSE options trading on spot Bitcoin ETFs.
On the 19th of October:
➡️Bitcoin balance on exchanges hits a 5-year low.
On the 21st of October:
➡️At the beginning of this Weekly Recap I already showed how central banks are losing their minds (the ECB in this case) as they lose their grip and control over money.
I don't wanna be doom and gloom, I'd rather write positive news... but sometimes I really think it is important to show you what on earth is going on.
Multiple different government entities suggested making Bitcoin illegal because people would rather buy Bitcoin and have the price go up than buy bonds and lose purchasing power to inflation. All I know is if the government doesn't want you to have something that's often a good reason to get some. Especially when it's something that simply stores value! I 100% get why governments are against it. It's a threat to their unchecked money printing!
On this day the following paper came out. A paper from the Minneapolis Federal Reserve states that "a legal prohibition against Bitcoin can restore unique implementation of permanent primary deficits." Without Bitcoin to contend with, they believe the debt can go up forever.
You can find the paper here: https://www.minneapolisfed.org/research/working-papers/unique-implementation-of-permanent-primary-deficits
Technically a much better paper than the ECB one. Unfortunately, I am not surprised about that.
This is a real piece of research that identifies the key issue: consumers are too smart to fund forever deficits.
All these papers coming out are too timely. They are coordinating something against Bitcoin...central banks are freaking out… can you smell the panic in the air?
ECB - Bitcoin is so much better than fiat it could go to $10 million and make fiat holders worse off (I don't make that up)
FED - It's useless but we have to ban or tax the shit out of it so we can run deficits indefinitely
➡️Japan’s Democratic Party of the People leader pledges to cut Bitcoin and crypto taxes if elected. They will cut the taxes from 55% to 20% if the Democratic Party of the People wins the election.
➡️BlackRock’s Bitcoin ETF is now the 3rd biggest ETF for 2024 inflows and the fastest-growing ETF ever!
➡️'October's Bitcoin mining revenue has plummeted by 70% from its March peak, with miners generating less than 9,000 BTC so far this month.' -Bitcoin News
💸Traditional Finance / Macro:
On the 18th of October:
👉🏽 Investors' allocation to stocks hit 61%, the highest level in at least 40 years. 'This share has ALMOST DOUBLED since 2009 and is in line with the 2000 Dot-Com Bubble levels.
The median value of US consumers’ stock holdings spiked to $250,000 in October, the most on record. Over the last 12 months, this amount has DOUBLED, according to the University of Michigan consumer survey. In 2010, Americans' investments in single stocks, mutual funds, and retirement accounts were worth just ~$50,000, or 5 times less.
Now, equities account for 48% of US households' net worth, the highest since the 2000 Dot-Com bubble peak.' -TKL
And this all is mostly with retirement money.
🏦Banks:
👉🏽no news
🌎Macro/Geopolitics:
On the 14th of October:
Google agrees to buy nuclear power from Small Modular Reactors to be built by Kairos Power.
ZeroHedge: "First it was Amazon, then Microsoft, now Google telegraphs why the "next AI trade" will generate obnoxious amounts of alpha in the coming years by sending the same message: i) it's all about how all those data centers will be powered, and ii) in the future a growing number of data centers will be powered by small modular nuclear reactors."
Senior director for energy and climate at Google, Michael Terrell, said on a call with reporters that "we believe that nuclear energy has a critical role to play in supporting our clean growth and helping to deliver on the progress of AI." Full article: https://www.zerohedge.com/markets/google-inks-deal-nuclear-small-modular-reactor-company-power-data-centers
As of 2024, there are three operational small modular reactors (SMRs) in the world, located in different countries:

- Russia - Akademik Lomonosov, a floating nuclear power plant operated by Rosatom. It is located in the Arctic town of Pevek, Chukotka. This is the world's first floating nuclear plant.
- China - Linglong One (ACP100), developed by the China National Nuclear Corporation (CNNC). This SMR is operational at the Changjiang Nuclear Power Plant in Hainan province, China.
- Argentina - CAREM-25, developed by the Argentine state-owned company CNEA (Comisión Nacional de Energía Atómica). This reactor is located in the Lima district, Buenos Aires province, near the Atucha Nuclear Power Plant.

These SMRs are being used to demonstrate the feasibility of small-scale nuclear power generation for diverse applications.
On the 13th of October:
👉🏽The US money supply hit $21.17 trillion in August, the highest level since January 2023.
This also marks a fifth consecutive monthly increase in the US money supply. Over the last 10 months, the amount of US Dollars in circulation has jumped by a MASSIVE $484 billion.
In effect, the money supply is now just $548 billion below a new all-time high. After a brief decline, the quantity of money in the financial system is surging again raising concerns about another inflation wave. - TKL
I'm not worried about a recession or anything like that; I'm more worried about the inflation and debasement part.
On the 16th of October:
👉🏽'Let's take a look at Europe's largest economies:
Germany - commits deindustrialization suicide
France - 6% fiscal deficit during good times despite the highest tax burden in Europe
Italy & Spain - collapsing demographics, pensions and healthcare systems to follow.' - Michael A. Arouet
👉🏽In the last couple of months I have shared how the U.S. government revised its job reports and other data points.
Now the FBI "revised" violent crime data, reporting that instead of a 2.1% drop in violent crime in 2022, it was actually a 4.5% increase. They missed 1,699 murders in 2022. What are a few murders among friends, innit? '6.6 points of net swing. So basically took them 18 months to realize they had missed 1 in 15 violent crimes in the country.
Conveniently released after all debates complete + no future ones agreed. Don't want to be this cynical, I really don't, but they aren't leaving me any room' - Pi Prime Pi
All government data is fudged it seems. From jobs to the economy, and inflation... It's all just made up.
"Oopsie whoopsie we made a fucky wucky! And people (politicians) wonder why trust in institutions is cratering...
Just to give you another example...
'The US Treasury is supposed to release its Monthly Treasury Statement at 2 pm on the 8th business day of each month. We are now on the 13th business day of October and have still not received last month's report. No explanation or estimation, complete radio silence.' - James Lavish
On the 17th of October:
👉🏽ECB cuts interest rates again by 0.25 percentage points.
'The ECB cut rates and signals it is getting hastier to bring rates back to more 'neutral' levels. It's funny how many investors argue that this time is different and that the ECB would take it easy or even refrain from rate cuts. Eurozone GDP growth is going nowhere, not in the near term and definitely not in the long term. Official inflation numbers in all major Eurozone economies are below 2%, and France is making headlines every day concerning debt sustainability. In the long term, interest rates will be low or negative when corrected for inflation, while bond volatility will increase.' - Jeroen Blokland
👉🏽Gold is back in record territory:
Even as markets price in a chance of a NO rate cut in November, gold just broke above $2700/oz.
The US Dollar has strengthened by 3% since September 30th and gold prices are STILL positive this month. Bank of America warns that gold may be the last safe haven as US Treasuries face risks from surging national debt. We all know that it is not the last and best safe haven, got Bitcoin?
'The higher gold goes in EUR, the greater the % of Eurozone gold reserves as a % of total EZ reserves goes (the EZ marks its gold to mkt quarterly, like China & Russia, but unlike the US.) In extremis, this gives the ECB option of “QE Heavy” (print EUR, bid for gold)' - Luke Gromen
Gold is up 31% YTD, the best year since 1979. Bitcoin is up 60% YTD, a casual year so far.
Gold for the central banks. Bitcoin for the citizens. Bitcoin wins and power returns to the people!
👉🏽 Half a trillion increase in US debt in a few days before the election. The fiscal year ends in September. So in October, the US treasury injects liquidity. Still, crazy work to increase the debt by half a trillion.
https://fiscaldata.treasury.gov/datasets/debt-to-the-penny/debt-to-the-penny
James Lavish: 'Pure unadulterated reckless government spending and so an exponential rise in debt, requiring more units of debt to drive each unit of GDP. Madness.'
On the 18th of October:
👉🏽Last week I mentioned that China's central bank launched a $70 billion funding program to stimulate the economy and the capital market.
But China is in a debt bubble: 'China’s debt-to-GDP ratio hit a jaw-dropping 366% in Q1 2024, a new all-time high. Since the 2008 Financial Crisis, the ratio has DOUBLED. To put this differently, for 1 unit of GDP, the Chinese economy has produced 3.66 units of debt burden. Non-financial corporates have the highest ratio, at 171%, followed by the government sector, at 86%. Households and financial entities' debt-to-GDP is 64% and 45%, respectively. Stimulus won't solve China's debt crisis.' - TKL.
On the 21st of October:
👉🏽The US just recorded the 3rd largest budget deficit in its entire history. The federal deficit hit a WHOPPING $1.83 trillion in Fiscal Year 2024 ended Sept. 30. In other words, the government borrowed a staggering $5 billion A DAY. This was only below the 2020 and 2021 pandemic years.
🎁If you have made it this far I would like to give you a little gift:
Technology Powered Freedom - Bitcoin, eCash & Nostr | Alex Gladstein x Peter McCormack:
https://www.youtube.com/watch?v=L5BVxfdYgNo
Credit: I have used multiple sources!
My savings account: Bitcoin
The tool I recommend for setting up a Bitcoin savings plan: @Relai 🇨🇭 especially suited for beginners or people who want to invest in Bitcoin with an automated investment plan once a week or monthly. (Please only use it till the 31st of October - after that full KYC)
Hence a DCA, Dollar Cost Average, strategy. Check out my tutorial post (Instagram) & video (YouTube) for more info.
Get your Bitcoin out of exchanges. Save them on a hardware wallet, run your own node...be your own bank. Not your keys, not your coins. It's that simple.
Do you think this post is helpful to you? If so, please share it and support my work with sats.
▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃
⭐ Many thanks⭐
Felipe - Bitcoin Friday!
▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃
-
@ ee11a5df:b76c4e49
2023-07-29 03:13:59Gossip: Switching to LMDB
Unlike a number of other nostr clients, Gossip has always cached events and related data in a local data store. Up until recently, SQLite3 has served this purpose.
SQLite3 offers a full ACID SQL relational database service.
Unfortunately, it has presented a number of downsides:
- It is not as parallel as you might think.
- It is not as fast as you might hope.
- If you want to preserve the benefit of using SQL and doing joins, then you must break your objects into columns, and map columns back into objects. The code that does this object-relational mapping (ORM) is not trivial and can be error prone. It is especially tricky when working with different types (Rust language types and SQLite3 types are not a 1:1 match).
- Because of the potential slowness, our UI has been forbidden from direct database access as that would make the UI unresponsive if a query took too long.
- Because of (4) we have been firing off separate threads to do the database actions, and storing the results into global variables that can be accessed by the interested code at a later time.
- Because of (4) we have been caching database data in memory, essentially coding for yet another storage layer that can (and often did) get out of sync with the database.
LMDB offers solutions:
- It is highly parallel.
- It is ridiculously fast when used appropriately.
- Because you cannot run arbitrary SQL, there is no need to represent the fields within your objects separately. You can serialize/deserialize entire objects into the database and the database doesn't care what is inside of the blob (yes, you can do that into an SQLite field, but if you did, you would lose the power of SQL).
- Because of the speed, the UI can look stuff up directly.
- We no longer need to fork separate threads for database actions.
- We no longer need in-memory caches of data. The LMDB data is already in-memory (it is memory mapped) so we just access it directly.
The one obvious downside is that we lose SQL. We lose the query planner. We cannot ask arbitrary questions and get answers. Instead, we have to pre-conceive of all the kinds of questions we want to ask, and we have to write code that answers them efficiently. Often this involves building and maintaining indices.
Indices
Let's say I want to look at fiatjaf's posts. How do I efficiently pull out just his recent feed-related events in reverse chronological order? It is easy if we first construct the following index
key: EventKind + PublicKey + ReverseTime
value: Event Id
In the above, '+' is just a concatenation operator, and ReverseTime is just some distant time minus the event's time so that it sorts backwards.
Now I just ask LMDB to start from (EventKind=1 + PublicKey=fiatjaf + now) and scan until either one of the first two fields changes, or the time field gets too old (e.g. one month ago). Then I do it again for the next event kind, etc.
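A sketch of building such a key (the field widths and types here are illustrative, not the exact layout):

```rust
/// Build the (EventKind + PublicKey + ReverseTime) index key described above.
fn feed_index_key(kind: u32, pubkey: &[u8; 32], created_at: u64) -> Vec<u8> {
    // Big-endian so that lexicographic byte order matches numeric order.
    let reverse_time = u64::MAX - created_at; // newer events sort first
    let mut key = Vec::with_capacity(4 + 32 + 8);
    key.extend_from_slice(&kind.to_be_bytes());
    key.extend_from_slice(pubkey);
    key.extend_from_slice(&reverse_time.to_be_bytes());
    key
}
```

Scanning then starts at feed_index_key(1, &pubkey, now) and walks forward; by construction, newer events come first.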
For a generalized feed, I have to scan a region for each person I follow.
Smarter indexes can be imagined. Since we often want only feed-related event kinds, that can be implicit in an index that only indexes those kinds of events.
You get the idea.
A Special Event Map
At first I had stored events into a K-V database under the Id of the event. Then I had indexes on events that output a set of Ids (as in the example above).
But when it comes to storing and retrieving events, we can go even faster than LMDB.
We can build an append-only memory map that is just a sequence of all the events we have, serialized, and in no particular order. Readers do not need a lock and multiple readers can read simultaneously. Writers will need to acquire a lock to append to the map and there may only be one writer at a time. However, readers can continue reading even while a writer is writing.
We can then have a K-V database that maps Id -> Offset. To get the event you just do a direct lookup in the event memory map at that offset.
The real benefit comes when we have other indexes that yield events, they can yield offsets instead of ids. Then we don't need to do a second lookup from the Id to the Event, we can just look directly at the offset.
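Schematically, the read path is simple arithmetic over the map. This sketch assumes each record carries a 4-byte little-endian length prefix (an assumption; the real record layout may differ) and uses the memmap2 crate:

```rust
use memmap2::Mmap;

/// Read one serialized event straight out of the append-only map.
/// Assumes a 4-byte little-endian length prefix per record.
fn event_at(map: &Mmap, offset: usize) -> &[u8] {
    let len = u32::from_le_bytes(map[offset..offset + 4].try_into().unwrap()) as usize;
    &map[offset + 4..offset + 4 + len]
}
```

Readers never copy the event; they borrow directly from the map.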
Avoiding deserialization
Deserialization has a price. Sometimes it requires memory allocation (if the object is not already linear, e.g. variable-length data like strings and vectors are allocated on the heap), which can be very expensive if you are trying to scan 150,000 or so events.
We serialize events (and other objects where we can) with a serialization library called speedy. It does its best to preserve the data much like it is represented in memory, but linearized. Because events start with fixed-length fields, we know the offset into the serialized event where these first fields occur and we can directly extract the value of those fields without deserializing the data before it.
This comes in useful whenever we need to scan a large number of events. Search is the one situation where I know that we must do this. We can search by matching against the content of every feed-related event without fully deserializing any of them.
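For illustration, here is what such a field extraction looks like; the 64-byte offset is hypothetical, and speedy's exact byte layout depends on how the types are defined:

```rust
/// Extract `created_at` from a serialized event without deserializing it.
/// The point is that fixed-length leading fields sit at known byte positions.
fn created_at_of(serialized: &[u8]) -> u64 {
    const OFFSET: usize = 64; // assumed: after two 32-byte fields (id, pubkey)
    u64::from_le_bytes(serialized[OFFSET..OFFSET + 8].try_into().unwrap())
}
```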
-
@ ee11a5df:b76c4e49
2023-07-29 02:52:13Gossip: Zaps
Gossip is a desktop nostr client. This post is about the code that lets users send lightning zaps to each other (NIP-57).
Gossip implemented Zaps initially on 20th of June, 2023.
Gossip maintains a state of where zapping is at, one of: None, CheckingLnurl, SeekingAmount, LoadingInvoice, and ReadyToPay.
When you click the zap lightning bolt icon, Gossip moves to the CheckingLnurl state while it looks up the LN URL of the user.
If this is successful, it moves to the SeekingAmount state and presents amount options to the user.
Once a user chooses an amount, it moves to the LoadingInvoice state where it interacts with the lightning node and receives and checks an invoice.
Once that is complete, it moves to the ReadyToPay state, where it presents the invoice as a QR code for the user to scan with their phone. There is also a copy button so they can pay it from their desktop computer too.
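The whole flow is a small state machine. Sketched as a Rust enum (state names as above; the payload each state would carry is my own illustration):

```rust
/// Zapping states, in the order described above.
enum ZapState {
    None,
    CheckingLnurl,                  // looking up the target user's LN URL
    SeekingAmount,                  // presenting amount options to the user
    LoadingInvoice,                 // fetching and checking the invoice
    ReadyToPay { invoice: String }, // shown as a QR code, with a copy button
}
```

Modeling it as an enum means the UI can simply render whatever state it finds, and illegal jumps (e.g. paying before an invoice exists) are unrepresentable.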
Gossip also loads zap receipt events and associates them with the event that was zapped, tallying a zap total on that event. Gossip is unfortunately not validating these receipts very well currently, so fake zap receipts can cause an incorrect total to show. This remains an open issue.
Another open issue is the implementation of NIP-46 Nostr Connect and NIP-47 Wallet Connect.
-
@ 9aa75e0d:40534393
2024-10-23 05:36:30Finnish Customs, working together with Swedish police, has successfully shut down the Sipulitie marketplace, a dark web platform used for illegal activities, including the anonymous sale of drugs. The site, active since February 2023, operated on the encrypted Tor network and was available in both Finnish and English.
Sipulitie, which provided a platform for illegal drug transactions under the protection of anonymity, generated a reported revenue of €1.3 million, according to the site’s administrator who posted the figure on public forums.
Sipulitie had a predecessor, Sipulimarket, which launched in April 2019. This Finnish-language platform also facilitated illegal drug and doping sales in an anonymous environment. Finnish Customs, with help from Polish authorities, shut down Sipulimarket in December 2020. It is believed that Sipulimarket had a turnover exceeding €2 million.
Read further on SmuggleWire
-
@ b12b632c:d9e1ff79
2023-07-21 19:45:20I love testing every new self hosted app, and I can say that the Nostr "world" is really good regarding self hosting stuff.
Today I tested a Nostr relay named Strfry.
Strfry is really simple to set up and supports a lot of Nostr NIPs.
Here is the list of what it is able to do :
- Supports most applicable NIPs: 1, 2, 4, 9, 11, 12, 15, 16, 20, 22, 28, 33, 40
- No external database required: All data is stored locally on the filesystem in LMDB
- Hot reloading of config file: No server restart needed for many config param changes
- Zero downtime restarts, for upgrading binary without impacting users
- Websocket compression: permessage-deflate with optional sliding window, when supported by clients
- Built-in support for real-time streaming (up/down/both) events from remote relays, and bulk import/export of events from/to jsonl files
- negentropy-based set reconciliation for efficient syncing with remote relays
Installation with docker compose (v2)
Spoiler : you need a computer with more than 1 (v)Core / 2GB of RAM to build the docker image locally. If not, the build below might crash your computer. You may need to use a prebuilt strfry docker image instead.
I assume you've read my first article on Managing domain with Nginx Proxy Manager because I will use the NPM docker compose stack to publish strfry Nostr relay. Without the initial NPM configuration done, it may not work as expected. I'll use the same docker-compose.yml file and folder.
Get back in the "npm-stack" folder :
cd npm-stack
Cloning the strfry github repo locally :
git clone https://github.com/hoytech/strfry.git
Modify the docker-compose file to locate the strfry configuration data outside of the repo directory, to avoid mistakes during future upgrades (CTRL + X, S & ENTER to quit and save modifications) :
nano docker-compose.yml
You don't have to insert the Nginx Proxy Manager part, you should already have it into the file. If not, check here. You should only have to add the strfry part.
```
version: '3.8'
services:
  # should already be present in the docker-compose.yml
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP
    # Uncomment the next line if you uncomment anything in the section
    # environment:
      # Uncomment this if you want to change the location of
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - ./nginxproxymanager/data:/data
      - ./nginxproxymanager/letsencrypt:/etc/letsencrypt

  strfry-nostr-relay:
    container_name: strfry
    build: ./strfry
    volumes:
      - ./strfry-data/strfry.conf:/etc/strfry.conf
      - ./strfry-data/strfry-db:/app/strfry-db
    # ports is commented out because NPM will access strfry through the
    # docker internal network; no need to expose the strfry port directly
    # to the internet
    # ports:
    #   - "7777:7777"
```
Before starting the container, we need to customize the strfry configuration file "strfry.conf". We'll copy the strfry configuration file and place it into the "strfry-data" folder to modify it with our own settings :
mkdir strfry-data && cp strfry/strfry.conf strfry-data/
And modify the strfry.conf file with your own settings :
nano strfry-data/strfry.conf
You can modify all the settings you need but the basic settings are :
- bind = "127.0.0.1" --> bind = "0.0.0.0" --> otherwise NPM won't be able to contact the strfry service
-
name = "strfry default" --> name of your nostr relay
-
description = "This is a strfry instance." --> your nostr relay description
-
pubkey = "" --> your pubkey in hex format. You can use the Damu's tool to generate your hex key from your npub key : https://damus.io/key/
-
contact = "" --> your email
```
relay {
    # Interface to listen on. Use 0.0.0.0 to listen on all interfaces (restart required)
    bind = "127.0.0.1"

    # Port to open for the nostr websocket protocol (restart required)
    port = 7777

    # Set OS-limit on maximum number of open files/sockets (if 0, don't attempt to set) (restart required)
    nofiles = 1000000

    # HTTP header that contains the client's real IP, before reverse proxying (ie x-real-ip) (MUST be all lower-case)
    realIpHeader = ""

    info {
        # NIP-11: Name of this server. Short/descriptive (< 30 characters)
        name = "strfry default"

        # NIP-11: Detailed information about relay, free-form
        description = "This is a strfry instance."

        # NIP-11: Administrative nostr pubkey, for contact purposes
        pubkey = ""

        # NIP-11: Alternative administrative contact (email, website, etc)
        contact = ""
    }
}
```
You can now start the strfry docker container :
docker compose up -d
This command will take a bit of time because it will build the strfry docker image locally before starting the container. If your VPS doesn't have lots of (v)CPU/RAM, it could fail (nothing happening during the docker image build). My VPS has 1 vCore / 2GB of RAM and died a few seconds after the build began.
If that's the case, you can use a prebuilt strfry docker image available on Docker Hub : https://hub.docker.com/search?q=strfry&sort=updated_at&order=desc
Otherwise, you should see this :
```
user@vps:~/npm-stack$ docker compose up -d
[+] Building 202.4s (15/15) FINISHED
 => [internal] load build definition from Dockerfile                            0.2s
 => => transferring dockerfile: 724B                                            0.0s
 => [internal] load .dockerignore                                               0.3s
 => => transferring context: 2B                                                 0.0s
 => [internal] load metadata for docker.io/library/ubuntu:jammy                 0.0s
 => [build 1/7] FROM docker.io/library/ubuntu:jammy                             0.4s
 => [internal] load build context                                               0.9s
 => => transferring context: 825.64kB                                           0.2s
 => [runner 2/4] WORKDIR /app                                                   1.3s
 => [build 2/7] WORKDIR /build                                                  1.5s
 => [runner 3/4] RUN apt update && apt install -y --no-install-recommends liblmdb0 libflatbuffers1 libsecp256k1-0 libb2-1 libzstd1 && rm -rf /var/lib/apt/lists/*  12.4s
 => [build 3/7] RUN apt update && apt install -y --no-install-recommends git g++ make pkg-config libtool ca-certificates libyaml-perl libtemplate-perl libregexp-grammars-perl libssl-dev zlib1g-dev l  55.5s
 => [build 4/7] COPY . .                                                        0.9s
 => [build 5/7] RUN git submodule update --init                                 2.6s
 => [build 6/7] RUN make setup-golpe                                           10.8s
 => [build 7/7] RUN make -j4                                                  126.8s
 => [runner 4/4] COPY --from=build /build/strfry strfry                         1.3s
 => exporting to image                                                          0.8s
 => => exporting layers                                                         0.8s
 => => writing image sha256:1d346bf343e3bb63da2e4c70521a8350b35a02742dd52b12b131557e96ca7d05  0.0s
 => => naming to docker.io/library/docker-compose_strfry-nostr-relay            0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

[+] Running 02/02
 ⠿ Container strfry           Started    11.0s
 ⠿ Container npm-stack-app-1  Running
```

You can check if everything is OK with the strfry container by checking the container logs :
```
user@vps:~/npm-stack$ docker logs strfry
date       time         ( uptime  ) [ thread name/id ]     v|
2023-07-21 19:26:58.514 (   0.039s) [main thread     ]INFO| arguments: /app/strfry relay
2023-07-21 19:26:58.514 (   0.039s) [main thread     ]INFO| Current dir: /app
2023-07-21 19:26:58.514 (   0.039s) [main thread     ]INFO| stderr verbosity: 0
2023-07-21 19:26:58.514 (   0.039s) [main thread     ]INFO| -----------------------------------
2023-07-21 19:26:58.514 (   0.039s) [main thread     ]INFO| CONFIG: Loading config from file: /etc/strfry.conf
2023-07-21 19:26:58.529 (   0.054s) [main thread     ]INFO| CONFIG: successfully installed
2023-07-21 19:26:58.533 (   0.058s) [Websocket       ]INFO| Started websocket server on 0.0.0.0:7777
```
Now, we have to create the subdomain where strfry Nostr relay will be accessible. You need to connect to your Nginx Proxy Manager admin UI and create a new proxy host with these settings :
"Details" tab (Websockets support is mandatory!, you can replace "strfry" by whatever you like, for instance : mybeautifulrelay.yourdomain.tld)
"Details" tab:
"SSL" tab:
And click on "Save"
If everything is OK, when you go to https://strfry.yourdomain.tld you should see :
To verify if strfry is working properly, you can test it with the (really useful!) website https://nostr.watch. You have to insert your relay URL into the nostr.watch URL like this : https://nostr.watch/relay/strfry.yourdomain.tld
You should see this :
If you see your server as online, readable and writable, you made it ! You can add your strfry server to your preferred Nostr relay list and begin to publish notes ! 🎇
Future work:
Once done, strfry will work like a charm, but you may need to do more work to update strfry in the near future. I'm currently working on a bash script that will :
- Update the "strfry" folder,
- Backup the "strfry.conf" file,
- Download the latest "strfry.conf" from strfry github repo,
- Inject old configuration settings into the new "strfry.conf" file,
- Compose again the stack (rebuilding the image to get the latest code updates),
- etc.
Tell me if you need the script!
Voilààààà
See you soon in another Fractalized story!
-
@ b12b632c:d9e1ff79
2023-07-21 14:19:38Self hosting web applications quickly comes with the need to deal with the HTTPS protocol and SSL certificates. The time when web applications were published over port 80/TCP without any encryption is totally over. Now we have Let's Encrypt and other free certificate authorities that let us publish web applications with, at least, the basic minimum security required.
The second part of web self hosting that is really useful is web proxification.
It's possible to have multiple web applications accessible through HTTPS, but as we can't all use the same port (spoiler: we can), we'd be forced to use ugly URLs like https://mybeautifuldomain.tld:8443.
This is where Nginx Proxy Manager (NPM) comes to help us.
NPM, as a gateway, will listen on the HTTPS port 443 and, based on the subdomain you want to reach, redirect the network flow to the different backend ports declared in NPM. NPM will also request HTTPS certificates for you and let you know when a certificate expires. Really useful.
We'll now install NPM with docker compose (v2) and you'll see, it's very easy.
You can find the official NPM setup instructions here.
But first, we absolutely need to do something. You need to connect to the registrar where you bought your domain name and go into the DNS zone section. You have to create an A record pointing to your VPS IP. That will allow NPM to request SSL certificates for your domain and subdomains.
Create a new folder for the NPM docker stack :
mkdir npm-stack && cd npm-stack
Create a new docker-compose.yml :
nano docker-compose.yml
Paste this content into it (CTRL + X ; Y & ENTER to save/quit) :
```
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP
    # Uncomment the next line if you uncomment anything in the section
    # environment:
      # Uncomment this if you want to change the location of
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - ./nginxproxymanager/data:/data
      - ./nginxproxymanager/letsencrypt:/etc/letsencrypt
```
You won't believe it, but that's it. The NPM docker compose configuration is done.
To start Nginx Proxy Manager with docker compose, you just have to :
docker compose up -d
You'll see :
```
user@vps:~/tutorials/npm-stack$ docker compose up -d
[+] Running 2/2
 ✔ Network npm-stack_default  Created
 ✔ Container npm-stack-app-1  Started
```
You can check if NPM container is started by doing this command :
docker ps
You'll see :
```
user@vps:~/tutorials/npm-stack$ docker ps
CONTAINER ID   IMAGE                             COMMAND   CREATED              STATUS              PORTS                                                                                  NAMES
7bc5ea8ac9c8   jc21/nginx-proxy-manager:latest   "/init"   About a minute ago   Up About a minute   0.0.0.0:80-81->80-81/tcp, :::80-81->80-81/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   npm-stack-app-1
```
If the command shows "Up X minutes" for npm-stack-app-1, you're good to go! You can access the NPM admin UI by going to http://YourIPAddress:81. You should see :
The default NPM login/password is admin@example.com / changeme. If the login succeeds, you should see a popup asking you to edit your user by changing your email address :
And your password :
Click on "Save" to finish the login. To verify if NPM is able to request SSL certificates for you, create first a subdomain for the NPM admin UI : Click on "Hosts" and "Proxy Hosts" :
Followed by "Add Proxy Host"
If you want to access the NPM admin UI at https://admin.yourdomain.tld, set all the parameters like this (I won't explain each parameter) :
Details tab :
SSL tab :
And click on "Save".
NPM will request the SSL certificate "admin.yourdomain.tld" for you.
If you get an "Internal Error" message, it's probably because your domain's DNS zone is not configured with an A record pointing to your VPS IP.
Otherwise you should see (my domain is hidden) :
Clicking on the "Source" URL link "admin.yourdomain.tld" will open a pop-up and, surprise, you should see the NPM admin UI with the URL "https://admin.yourdomain.tld" !
If yes, bravo, everything is OK ! 🎇
You know now how to have a subdomain of your domain redirecting to a container web app. In the next blog post, you'll see how to setup a Nostr relay with NPM ;)
Voilààààà
See you soon in another Fractalized story!
-
@ b12b632c:d9e1ff79
2023-07-19 00:17:02Welcome to a new Fractalized episode of "it works but I can do better"!
Original blog post from : https://fractalized.ovh/use-ghost-blog-to-serve-your-nostr-nip05-and-lnurl/
A few days ago, I wanted to set my Ghost blog (this one) as the root domain for Fractalized instead of the old basic Nginx container that served my NIP05. I succeeded in doing it with Nginx Proxy Manager (I need to write a blog post about it because NPM is really awesome) but my NIP05 was down.
Having a bad NIP05 on Amethyst is now in the top tier list of my worst nightmares.
As a reminder, to have a valid NIP05 on your preferred Nostr client, you need to have a file named "nostr.json" in a folder ".well-known" located in the web server root path (subdomain or domain). The URL should look like this : http://domain.tld/.well-known/nostr.json
PS : this doesn't work with Ghost as explained later. If you are here for Ghost, skip this and go directly to 1).
You should have this info inside the nostr.json file (JSON format) :
{ "names": { "YourUsername": "YourNPUBkey" } }
You can test it directly by going to the nostr.json URL, it should show you the JSON.
It was working like a charm on my previous Nginx and I needed to do it on my new Ghost. I saw on Google that I needed to have the .well-known folder in the Ghost theme root folder (in my case I chose the Solo theme). I created the folder, put my nostr.json file inside it and crossed my fingers. Result : 404 error.
I had to search a lot, but it seems that Ghost forbids serving JSON files from its theme folders. Maybe for security reasons, I don't know. Nevertheless, Ghost provides a method to do it (but it requires more than copy/pasting some files into some folders) => routes
At the same time as configuring my NIP05, I also wanted to set up an LNURL to be zapped at pastagringo@fractalized.ovh (configured in my Amethyst profile). If you want more information on LNURL, it's very well explained here.
Because I want to use my Wallet of Satoshi sats wallet "behind" my LNURL, you'll have to read carefully EzoFox's blog post that explains how to retrieve your info from Alby or WoS.
1) Configuring Ghost routes
Add the new route into routes.yaml (accessible from "Settings" > "Labs" > "Beta features" > "Routes" ; you need to download the file, modify it and upload it again) to let Ghost know where to look for the nostr.json file.
Here is the content of my routes.yaml file :
```
routes:
  /.well-known/nostr.json/:
    template: _nostr-json
    content_type: application/json
  /.well-known/lnurlp/YourUsername/:
    template: _lnurlp-YourUsername
    content_type: application/json

collections:
  /:
    permalink: /{slug}/
    template: index

taxonomies:
  tag: /tag/{slug}/
  author: /author/{slug}/
```
=> template : name of the file that Ghost will search for locally
=> content_type : file type (JSON is required for the NIP05)
The first route is about my NIP05 and the second route is about my LNURL.
2) Creating Ghost route .hbs files
To let Ghost serve our JSON content, I needed to create the two .hbs files declared in the routes.yaml file. These files need to be located in the root directory of the Ghost theme used. For me, it was Solo.
Here is the content of both files :
"_nostr-json.hbs"
{ "names": { "YourUsername": "YourNPUBkey" } }
"_lnurlp-pastagringo.hbs" (in your case, _lnurlp-YourUsername.hbs)
{ "callback":"https://livingroomofsatoshi.com/api/v1/lnurl/payreq/XXXXX", "maxSendable":100000000000, "minSendable":1000, "metadata":"[[\"text/plain\",\"Pay to Wallet of Satoshi user: burlyring39\"],[\"text/identifier\",\"YourUsername@walletofsatoshi.com\"]]", "commentAllowed":32, "tag":"payRequest", "allowsNostr":true, "nostrPubkey":"YourNPUBkey" }
After doing that, you're almost finished; you just have to restart your Ghost blog. As I run Ghost in a docker container, I just had to restart the container.
To verify that everything is working, go to the route URLs created earlier:
https://YourDomain.tld/.well-known/nostr.json
and
https://YourDomain.tld/.well-known/lnurlp/YourUsername
Both URLs should display the JSON content from the .hbs files created earlier.
If that's the case, you can add your NIP05 and LNURL to your Nostr profile and you should see a valid tick from your domain and be able to get zapped via your LNURL (with zaps going to your WoS or Alby backend wallet).
Voilààààà
See you soon in another Fractalized story!
-
@ b12b632c:d9e1ff79
2023-07-17 22:09:53Today I created a new item (NIP05) to sell in my BTCpayserver store and everything was OK. The only thing that annoyed me a bit is that when a user pays the invoice to get his NIP05 on my Nostr paid relay, I need to warn him to put his npub in the email field required by BTCpayserver before paying. Not so user friendly. Not at all.
So I checked if BTCpayserver is able to ask for custom inputs before paying the invoice and, to my big surprise, it's possible ! => release version v1.8
They even use my exact Nostr need, getting the user's npub, as an example.
So it's possible, yes, but only from version 1.8.0. I immediately checked my BTCpayserver version on my Umbrel node and... damn... I'm on 1.7.2. I need to update my local BTCpayserver through Umbrel to access my much-wanted BTCpayserver feature. I checked the local App Store but couldn't see any "Update" button or anything like it. I can see that the installed version is indeed v1.7.2, but when I check the latest version available from the online Umbrel App Store, I can see v1.9.2. Damn again.
I searched the Umbrel Community forum, the Telegram group, Google posts, asked the question on Nostr, and didn't get any clue.
I finally found the answer in the Umbrel App Framework documentation :
sudo ./scripts/repo checkout https://github.com/getumbrel/umbrel-apps.git
So I ran it, and it updated my local Umbrel App Store to the latest version available, along with my BTCpayserver.
After a few seconds, the BTCpayserver local URL was available and I was able to log in => everything is OK! \o/
See you soon in another Fractalized story!
-
@ 9349d012:d3e98946
2024-10-23 04:10:25Chef's notes
Ingredients
- 4 tablespoons (1/2 stick) butter
- 2 ounces thinly sliced prosciutto, cut into thin strips
- 1 1/4 cups orzo (about 8 ounces)
- 3 cups low-salt chicken broth
- 1/2 teaspoon (loosely packed) saffron threads, crushed
- 1 pound slender asparagus, trimmed, cut into 1/2-inch pieces
- 1/4 cup grated Parmesan cheese
- Parmesan cheese shavings
Preparation
- Melt 2 tablespoons butter in a large nonstick skillet over medium-high heat. Add prosciutto and sauté until almost crisp, about 3 minutes. Using a slotted spoon, transfer to paper towels to drain.
- Melt the remaining 2 tablespoons butter in the same skillet over high heat. Add orzo; stir 1 minute.
- Add broth and saffron; bring to a boil. Reduce heat to medium-low, cover, and simmer until the orzo begins to soften, stirring occasionally, about 8 minutes.
- Add asparagus; cover and simmer until tender, about 5 minutes.
- Uncover; simmer until almost all the liquid is absorbed, about 1 minute.
- Remove from heat. Mix in the prosciutto and 1/2 cup grated cheese. Season to taste with salt and pepper.
- Transfer to a large bowl. Garnish with Parmesan shavings. Makes 6 servings.
Details
- ⏲️ Prep time: 30 mins
- 🍳 Cook time: 30 mins
Ingredients
- See Chef’s Notes
Directions
- See Chef’s Notes
-
@ 9e69e420:d12360c2
2024-10-23 01:58:14
Amber: why, what, how?
Why?
Nostr is different from other social media apps. Instead of a password managed by a central entity that can recover it, your identity is secured cryptographically with a public and private key pair. This grants you a lot of freedom. It also burdens you with a lot of responsibility. If your private key gets lost, you lose access to your account forever. No one can help you. If your private key gets compromised, the attacker will have access to your account in perpetuity, able to impersonate you to all your contacts forever.
We need a way to keep our nsecs safe. We want to be able to try out all the latest Nostr apps without compromising our keys. We need a way to back up our keys offline.
What?
Enter Amber, by Greenart7c3. Amber allows you to leverage any Nostr app that utilizes NIP-46, which is rapidly growing in popularity. It's also relatively simple for developers to implement, so I expect it will become standard at some point. It's based on nsecBunker; the main difference is that instead of your keys sitting on a remote server, they stay on your phone. Basically, an app that supports this sends a signing request to Amber, Amber signs the event with your key, and the signed event is then published via whichever app you are using, with the communication happening through a Nostr relay.
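To make that flow concrete, here is a rough sketch of a NIP-46 signing round trip (illustrative only; in reality both payloads are encrypted and wrapped in relay events addressed between the app and Amber):

```
// Request from the client app to Amber, sent via a relay:
{
  "id": "<request id>",
  "method": "sign_event",
  "params": ["<the unsigned event, JSON-encoded>"]
}

// Response from Amber back to the client app:
{
  "id": "<request id>",
  "result": "<the signed event, JSON-encoded>"
}
```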
Having your keys on your phone does pose some security risks in and of itself. But I always recommend a strong PIN instead of face or fingerprint ID anyway, as well as being very selective with the apps you put on your phone and the permissions you give them. I personally use GrapheneOS because it gives me complete control over app permissions, including the otherwise mandatory Google Play and Android system apps. If your phone is wide open, first consider changing that. If you're unable to, then maybe look for another key management solution.
Amber also supports NIP-06, which allows for key generation with a mnemonic backup. For anyone who's ever had a Bitcoin wallet, it's the exact same thing; in fact, you could even use the same words for a separate Bitcoin wallet. This way you can store your key's backup safely offline as a hard copy. Write the words on paper, stamp them on stainless steel, or, for ultimate resilience, chisel them into a granite tablet.
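For reference, the derivation works just like a Bitcoin wallet's, only with a Nostr-specific path (a sketch of the scheme as specified in NIP-06):

```
BIP-39 mnemonic
  -> binary seed (BIP-39)
  -> BIP-32 derivation at path m/44'/1237'/<account>'/0/0
  -> Nostr private key (nsec)
```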
How?
If it's not displaying in your client, here's the link: Video. All zaps from this note are 100% split with greenart7c3, so show some love.
greenart7c3:
npub1w4uswmv6lu9yel005l3qgheysmr7tk9uvwluddznju3nuxalevvs2d0jr5
-
@ 4ba8e86d:89d32de4
2024-10-23 00:52:22
VLC is free and open-source software that allows users to play many types of media, including videos, music, and podcasts, on various operating systems, including Windows, macOS, and Linux. VLC is also available for mobile devices, including Android and iOS smartphones and tablets. The program was developed by the non-profit organization VideoLAN and is used by millions of people around the world.
Development of VLC began in 1996 as an academic project led by students at École Centrale Paris, in France. The goal was to create video-streaming software that could be used on a campus network. The project evolved over the years and became open-source software capable of playing many types of media. VLC was created by Jean-Baptiste Kempf and is developed by a team of volunteers around the world. Since its creation, the program has gone through several versions and continues to be updated regularly to ensure compatibility with the latest technologies and operating systems.
VLC is a media player that supports a wide variety of file formats, including MPEG-1, MPEG-2, MPEG-4, DivX, MP3, Ogg, and many more. The program can read local media files stored on your computer or mobile device, as well as play media from a local network or the internet.
VLC works as standalone software and does not require additional codecs to play media files. It ships with a wide variety of built-in codecs, which allow it to play many file formats without installing anything extra.
VLC solves several problems related to media playback. One of the main ones is file-format incompatibility. With VLC, you can play almost any type of media file without worrying about installing extra codecs or file-conversion software.
VLC also addresses security concerns. The program can play encrypted media files and supports several network security protocols, including HTTPS and RTSP over TLS.
VLC is a reliable and efficient option for media playback on many operating systems. It is free and open source, which means you can customize it to meet your specific needs and inspect its source code to verify its transparency and security.
VLC is a lightweight, easy-to-use program that offers several advanced features, including the ability to play networked media securely, an audio equalizer, subtitle support, and much more. VLC is also a great option for playing media on older devices, as it runs on older operating systems and on computers with limited resources.
VLC is also known for its stability and its ability to handle damaged or incomplete media files. It can repair corrupted media files and play incomplete ones, letting users watch videos or listen to music that would not work in other media players.
VLC is highly customizable, with a variety of configuration options available. You can customize the user interface, configure custom keyboard shortcuts, and much more. This makes VLC an ideal option for advanced users who want to tailor the program to their specific needs.
In short, VLC is a highly efficient and reliable media player that lets users play many types of media on multiple operating systems. It is free, open source, and highly customizable, making it popular with beginners and advanced users alike.
With its broad file compatibility, advanced features, and stability, VLC is an excellent choice for anyone looking for a simple, efficient media player.
https://www.videolan.org/vlc/
https://github.com/videolan
-
@ 75d12141:3458d1e2
2024-10-23 00:20:14
Chef's notes
A childhood favorite of mine! Just don't go too crazy with the scallions as you don't want to overpower the pork flavor.
Details
- ⏲️ Prep time: 15
- 🍳 Cook time: 15-20 mins
- 🍽️ Servings: 3-5
Ingredients
- 1 pound of lean ground pork
- 1 tablespoon of diced scallions
- Ground black pepper (optional)
Directions
- Lightly coat your palms with olive oil to prevent the meat from sticking to you and to assist in keeping its ball form
- Mix the ground pork and diced scallions in a large bowl
- Roll the pork into the preferred portion size until it feels like it won't fall apart
- Cook in a skillet for 15-20 mins
-
@ 4ba8e86d:89d32de4
2024-10-23 00:11:19
F-Droid lets users discover, download, and install apps on their Android devices without relying on the Google Play Store, Android's default app store.
History of F-Droid
F-Droid was launched in 2010 by a group of developers led by Ciaran Gultnieks. The idea behind the project was to create an open-source alternative to the Google Play Store, which is a centralized platform controlled by a single company. F-Droid was built as an app catalog that only includes open-source apps that can be downloaded and installed for free.
Since its launch, F-Droid has grown significantly and now offers more than 3,500 open-source apps for Android across a wide range of categories, including games, education, productivity, privacy, and security. The project is maintained by a community of volunteer developers and is run on a non-profit basis.
The problems F-Droid solves
F-Droid solves several problems associated with the Google Play Store and other app stores. The first is centralized control by a single company. The Google Play Store can remove apps that violate its policies, and developers can be barred from distributing apps through the store for arbitrary reasons. F-Droid, on the other hand, is run by a non-profit community, and there are no restrictions on which apps can be included in the catalog.
In addition, F-Droid offers an alternative for privacy-conscious users who don't want to depend on Google services. F-Droid is independent of Google Play Services and does not track users or collect personal information, making it an attractive choice for those seeking greater privacy.
Why use F-Droid?
There are several reasons why you might want to use F-Droid instead of the Google Play Store or other app stores. Here are a few:
- Privacy: As mentioned earlier, F-Droid is independent of Google Play Services and does not track users. This means you can download and install apps without worrying about your personal data being collected.
- Security: F-Droid only offers open-source and free-software apps, which means the community can examine the source code to make sure there are no vulnerabilities or malicious backdoors.
- Control: F-Droid lets users control their own Android devices and decide which apps get installed and updated.
- Choice: F-Droid offers a wide range of open-source and free-software apps.
It's also important to note that not all apps available on F-Droid are completely risk-free. It is therefore always advisable to read the app's information carefully and weigh the risks before downloading and installing it.
Here is a step-by-step guide to installing F-Droid on your Android device:
1. Visit the official F-Droid website at https://f-droid.org/ using a web browser on your Android device.
2. Verify the PGP signature of the APK file you are about to download. To do so, click the "PGP Signature" link below the "Download F-Droid" button on the site's home page. On the next page, download the "F-Droid.apk.asc" file by clicking the "Download" button. Also download the F-Droid developer's public key by clicking the "PGP key of F-Droid release signing key" link on the same page. It is important to verify the signature before proceeding with the installation (see the sketch after this list).
3. Click the "Download F-Droid" button to download the APK file.
4. Open the downloaded APK file on your Android device. If you don't know how to open it, go to your device's file manager and find the APK file you just downloaded. Tap the file and follow the on-screen instructions to install the app.
5. If a warning appears saying that installing apps from unknown sources is disabled, go to your device's security settings and enable the "Unknown sources" option. This allows you to install apps from outside the Google Play Store.
6. Tap "Install" and wait for the installation to finish.
7. After the installation, open the F-Droid app and wait for the app repository to initialize.
8. Done! You can now browse and download apps from F-Droid on your Android device. F-Droid offers a wide variety of open-source and free-software apps, all free to download and use. You can use F-Droid as an alternative to the Google Play Store if you care about the privacy, security, and control of your own Android devices.
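For step 2, the actual verification can be done with GnuPG (a minimal sketch; the key file name is illustrative — use whatever name the downloaded key file has):

```
# Import the F-Droid release signing key
gpg --import f-droid-release-key.asc

# Verify the detached signature against the APK
gpg --verify F-Droid.apk.asc F-Droid.apk
```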
https://github.com/f-droid
-
-
@ b5b32b81:b7383d03
2024-10-22 23:27:40
Just assume everyone is a spook, homie. Trust no one but the code, homie.
-
@ 4ba8e86d:89d32de4
2024-10-22 23:20:59
Using a password manager can solve several problems, including the difficulty of creating strong, unique passwords, the risk of password reuse, and the possibility of accounts being compromised because of weak or stolen passwords. With a password manager, you can create strong, unique passwords for every account without having to remember them all. A password manager can also simplify the login process, saving time and reducing frustration.
KeepassDX is a version of the Keepass password-management software originally developed by Dominik Reichl in 2003. Keepass is free, open-source desktop software that lets users store their passwords in a secure database encrypted with advanced algorithms. In 2017, the German developer Brian Pellmann created KeepassDX as a mobile version, with the goal of making it easier for users to manage their passwords on smartphones and tablets. KeepassDX was built from the Keepass source code and was designed to be easy to use, secure, and customizable. Pellmann added several new features to KeepassDX, including the ability to import passwords from other password managers, the organization of passwords into categories, and compatibility with cloud-storage services for syncing between devices.
Some of KeepassDX's features include the creation of strong, unique passwords, automatic password filling, two-factor authentication, and the organization of passwords into categories. KeepassDX also offers a tool for importing passwords from other password managers, which makes switching to KeepassDX easier.
Step-by-step installation of the KeepassDX app:
1. Download and install KeepassDX on your mobile device: https://play.google.com/store/apps/details?id=com.kunzisoft.keepass.free
2. Tap [CREATE NEW DATABASE]
3. Choose the file location; in my case it was "Downloads". Like KeePassXC, KeePassDX uses the .kdbx database format (for example, Passwords.kdbx) to store the passwords of important accounts. You should save this file in a location that is easy for you to access but hard to find for anyone you don't want viewing the information. In particular, avoid saving your database file to an online storage service connected to your accounts, where it could be accessed by others.
4. Enter a name for your database. In this example we call our database "urso", saved as "urso.kdbx"; you can choose any name you prefer. You can use a name that disguises the file and makes it less obvious to attackers who might try to access your phone and demand that you unlock the database. It is also possible to change the "extension" at the end of the file name. For example, you could name the password database "Revoltadeatlas.pdf" or "primainterior.jpg", and the operating system will usually give the file a "normal-looking" icon. Keep in mind that if you give your password database a name that doesn't end in ".kdbx", you won't be able to open the file directly. Instead, you will have to launch KeePassDX first and then open your database by tapping the "Open existing database" button. Fortunately, KeePassDX remembers the last database you opened, so you won't have to do this often.
https://github.com/Kunzisoft/KeePassDX
-
-
@ 4ba8e86d:89d32de4
2024-10-22 23:04:43
Tutanota's main goal is to offer a protected email service in which messages are end-to-end encrypted. This means messages are encoded on the sender's device and remain encrypted until they reach the recipient's device. Only the sender and the recipient hold the keys needed to decrypt the messages, guaranteeing the confidentiality of the communication.
Beyond end-to-end encryption, Tutanota offers other security measures. Emails are stored encrypted on the company's servers, which means that even if someone gains access to the servers, the emails remain unreadable without the user's decryption key. Tutanota also lets users protect their accounts with strong passwords.
Tutanota was founded in Germany in 2011 by Arne Möhle and Matthias Pfau, with the goal of offering a secure, private email service. They sought to provide an alternative to traditional services, emphasizing privacy protection and end-to-end encryption. Since then, Tutanota has expanded its functionality, adding features such as encrypted storage, a calendar, and a scheduler. The company is committed to protecting its users' data, is headquartered in Germany, and complies with the General Data Protection Regulation (GDPR). Tutanota continues to be recognized as a reference in digital privacy and maintains its commitment to the security and privacy of its users.
Security features:
1. End-to-end encryption: One of Tutanota's main characteristics is end-to-end encryption. Your messages are encoded on the sender's device and remain encrypted in transit until they reach the recipient's device. Only the sender and the recipient hold the keys to decrypt the messages, making it practically impossible for third parties to intercept them and access their content.
2. Encrypted storage: In addition to end-to-end encryption for messages in transit, Tutanota also stores your emails encrypted on its servers. Even if someone gains access to Tutanota's servers, the stored emails remain unreadable without the user's decryption key.
3. Password protection: Tutanota lets you set a strong password to protect your email account. This password is used to encrypt your stored data and must be kept secret to ensure the security of your account.
Other features and functionality:
1. Friendly interface: Tutanota offers an intuitive, easy-to-use interface, similar to conventional email services. This makes the transition to Tutanota smooth for users accustomed to other email platforms.
2. Mobile apps: Tutanota offers mobile apps for iOS and Android, letting you access your email account securely from your smartphone or tablet.
3. Storage and attachments: Tutanota offers 1 GB of free storage for your email account, letting you keep a secure message history. You can also send and receive attachments securely, as they are encrypted too.
4. Spam protection: Tutanota has an efficient spam filter that checks and filters unwanted messages, keeping your inbox free of spam and threats.
5. Calendar and contacts: Besides the email service, Tutanota also offers calendar and scheduling features, letting you organize your appointments securely and keep an encrypted contact list.
6. Customization: Tutanota lets you personalize your email account, such as choosing your own custom domain (for paying users) and creating email aliases for more flexible communication.
7. Commitment to privacy: Tutanota is known for its pro-privacy stance and its commitment to protecting its users' data. The company is based in Germany, where it is subject to strict data-protection laws, and complies with the European Union's General Data Protection Regulation (GDPR).
Tutanota is an excellent option for those who value privacy and want to protect their communications online. With end-to-end encryption, encrypted storage, and other security features, Tutanota ensures that your messages and personal data remain private and secure.
https://tutanota.com/pt_br/
https://github.com/tutao/tutanota
-
-
@ 4ba8e86d:89d32de4
2024-10-22 22:58:12
This aims to increase the privacy of the users involved, protect against financial surveillance, and increase the fungibility of the Bitcoin network as a whole.
Coinjoin is a powerful privacy technique for those with an intermediate understanding of Bitcoin. Learning what coinjoin is and how it works can be a valuable tool for your Bitcoin toolbox. Before we go into detail, it is essential to understand how Bitcoin addresses and UTXOs (Unspent Transaction Outputs) work, as well as the basics of privacy in Bitcoin.
To begin with, all standard Bitcoin transactions are recorded publicly on the blockchain. This means that if someone identifies an address as yours, they can watch that address's balance and follow how your bitcoin moves. However, by sending bitcoin through a coinjoin, you make it extremely difficult for anyone to keep associating specific UTXOs, or specific pieces of bitcoin, with your identity. In essence, coinjoins allow you to recover lost privacy and, to some extent, correct privacy mistakes made in the past.
Now let's explore how coinjoins work. The concept is relatively simple. A coinjoin is a trustless collaboration between several people to create a single transaction. In its most basic form, each person involved contributes an equal amount of bitcoin as an input to the transaction and then receives that same amount of bitcoin back as an output. To illustrate, consider this example:
Imagine a coinjoin involving you and four other participants. Each person takes 0.05 bitcoin from an address linked to their identity and pays a fee to contribute to the overall transaction. After the transaction is broadcast and confirmed, each person receives 0.05 bitcoin back as a new UTXO at a new wallet address they control. The five new addresses receiving the UTXOs should be anonymous, meaning that any blockchain observer will have no idea which one belongs to you. And of course, you can take part in multiple coinjoins, with a much larger number of participants.
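Schematically, the example above looks like this (a simplified sketch; the amounts are illustrative, with each input slightly above 0.05 to cover fees):

```
inputs:
  participant 1: 0.0505 BTC
  participant 2: 0.0505 BTC
  participant 3: 0.0505 BTC
  participant 4: 0.0505 BTC
  participant 5: 0.0505 BTC
outputs:
  0.05 BTC -> fresh address A
  0.05 BTC -> fresh address B
  0.05 BTC -> fresh address C
  0.05 BTC -> fresh address D
  0.05 BTC -> fresh address E

# All outputs are equal, so an outside observer cannot tell
# which output belongs to which input.
```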
Increase in the number of ways Whirlpool coinjoins can be interpreted on-chain:
- 5 inputs / 5 outputs = 1,496 interpretations (existing)
- 6 inputs / 6 outputs = 22,482 interpretations
- 7 inputs / 7 outputs = 426,833 interpretations
- 8 inputs / 8 outputs = 9,934,563 interpretations
Keep in mind that after putting bitcoin through a coinjoin, it can be linked to your identity again through subsequent actions. For example, if you send bitcoin to a financial service that knows your identity, that bitcoin will naturally be tied to you in the eyes of that service provider. #Bitcoin #privacidade
https://medium.com/oxt-research/understanding-bitcoin-privacy-with-oxt-part-1-4-8177a40a5923
https://medium.com/oxt-research/understanding-bitcoin-privacy-with-oxt-part-2-4-20010e0dab97
https://medium.com/oxt-research/understanding-bitcoin-privacy-with-oxt-part-3-4-9a1b2b572a8
https://medium.com/oxt-research/understanding-bitcoin-privacy-with-oxt-part-4-4-16cc0a8759d5
https://medium.com/samourai-wallet/introducing-whirlpool-surge-cycles-b5b484a1670f
https://medium.com/samourai-wallet/diving-head-first-into-whirlpool-anonymity-sets-4156a54b0bc7
https://blog.samourai.is/why-we-coinjoin/
https://docs.samourai.io/whirlpool
https://youtu.be/0YwiL978p1c?si=DRZNEH_aHKHjb2-m
-
@ 4ba8e86d:89d32de4
2024-10-22 22:48:44
uBlock Origin is a browser extension that blocks ads and other unwanted content on web pages. It uses filters to identify and block page elements such as banners, pop-ups, trackers, malicious scripts, and other kinds of content that could compromise the user's security or privacy. uBlock Origin is a useful tool for improving security and privacy online, giving users control over what kind of content is displayed on the pages they visit. It consumes fewer resources than other ad blockers, which means the browser performs better. In addition, uBlock Origin prevents companies from collecting information about your online behavior and blocks sites that carry malware or phishing.
uBlock Origin also uses optimization techniques to minimize its impact on browser performance, such as loading filter lists in the background and using a cache to avoid re-evaluating elements that have already been blocked.
The history of uBlock Origin goes back to 2014, when Raymond Hill — founder, original author, and lead developer — created the original uBlock extension. He started development by forking the codebase of HTTP Switchboard together with a separate blocking extension, uMatrix, which had previously been designed for advanced users. uBlock was created to make use of community-maintained block lists while adding extra features and raising the code quality to proper release standards. The extension was initially released in June 2014 as a Chrome and Opera exclusive. From 2015 onwards, the uBlock extension was expanded to other browsers under its current name, uBlock Origin. Demand for "pure" blockers able to operate outside the "acceptable ads" program used by AdBlock and other extensions in the industry drove rapid growth in uBlock Origin's popularity — over a 10-month period, the fastest growth of any publicly listed software in the industry at the time. As of 2023, uBlock Origin continues to be actively maintained and developed by founder and lead developer Raymond Hill.
Since its creation, uBlock Origin has been actively maintained and developed by its creator, Raymond Hill. The extension has been added to the repositories of Debian 9 and Ubuntu, and it received the "Pick of the Month" honor from Mozilla.
uBlock Origin solves several problems related to browsing the internet, such as:
- Ads: it blocks the ads, pop-ups, and banners that are often intrusive and get in the way of the browsing experience.
- Tracking: it prevents companies from collecting information about your online behavior, which increases your privacy and security.
- Malware: it blocks sites that carry malware, preventing you from infecting your device.
Try uBlock Origin and take more control over your browsing, with greater security and privacy. It is available for several browsers, including Google Chrome, Mozilla Firefox, Microsoft Edge, and Opera.
https://ublockorigin.com/
https://github.com/gorhill/uBlock
-
-
@ 23d1d973:903f137f
2024-10-22 21:42:47
--₿--
To most people, Nate Silver is synonymous with the election forecast model FiveThirtyEight. He first appeared broadly on the public's radar during the 2008 election, and for a decade and a half he has dominated the market for statistical-political fortune-telling — a self-described expert in psephology.
What fewer people know is that he's also a successful poker player and the author of 2015's The Signal and the Noise, a popular book on statistics (…and the backdrop to my first-ever article for AIER). Having left FiveThirtyEight — now run by ABC News — he spent the last few years writing the Substack "Silver Bulletin," now supplemented by a 500-page heavyweight of a book.
In On the Edge: The Art of Risking Everything, Silver has distilled even deeper meaning from thinking about the world through the lens of statistics and gambling. While the book is overwhelmingly about games, with the first two hundred pages exclusively dealing with poker, sports betting, the Vegas gambling scene, and the many colorful characters inhabiting these worlds, it's fundamentally about a major fault line in American society.
In one corner, we have Silver’s own analytical and cognitive team, the River — a “sprawling ecosystem of like-minded people…a way of thinking and a mode of life.” A Riverian is characteristically independent-minded, risk-tolerant, combative and competitive, skeptical and often hyperrational. Riverians care about principles and abstract thinking, and Silver mostly sees them in the tech world of Silicon Valley, in the hallowed gambling halls of Las Vegas, or the hedge funds on Wall Street.
In the other corner, we have the Village — Silver's term for high academia, most of the media, politics, and Washington, DC think tanks: "it's basically the liberal establishment," he writes in a Substack post introducing the book.
“Villagers,” writes Silver, “see themselves as being clearly right on the most important big-picture questions, from climate change to gay and trans rights” — so arguing over the principles or abstractions involved, as skeptical Riverians are wont to do, is somewhere between time-wasting and politically dangerous. Villagers think very lowly of the Riverians’ faux desire for competition since they so often see games being rigged in their favor, and not actually risking that much; Riverians benefit from existing social hierarchies and are too blind to their own privileges.
In counter-salvo, the Riverians think Villagers are captured by social and political fads, their “claims to academic, scientific, and journalistic expertise are becoming increasingly hard to separate from Democratic political partisanship.”
The distinction between the tribes isn’t primarily about educational credentials, and doesn’t entirely map onto the red-blue divide so all-encompassing in American life; still, college degrees are largely the admission fee to the Village, which tends to be about as left-leaning as Riverian thought tends to be rich and white.
The major difference in how Villagers and Riverians think comes down to (de)coupling, abstract logical thinking, and “the ability to block out context”: it’s about cognitive discipline and ability to keep two thoughts in your head at once — to decouple pieces of information that are independent. On the left in America, says Silver, “the tendency is to add context rather than remove it based on the identity of the speaker, the historical provenance of the idea.”
The collapsing trust in American institutions isn’t some moral failure on the part of a bigoted, unpatriotic underclass, but a “reasonable reaction” given what the Villagers have been up to recently — everything from identitarian politics to the COVID debacle. Still, Silver comes out fairly optimistic: he opens his final chapter by saying that “ever since 1776, we risk-takers have been winning.”
Silver, a Democrat himself but an avid Riverian, thinks he's in a good position to speak to both worlds and explain the virtues of one to skeptical members of the other, since he has spent the last decade-plus passing between them. Still, he opens the book by confessing, "I still feel more at home in a casino than at a political convention," and he often reminds the reader of how much of a Riverian he is.
The guiding principle underlying this book's investigation is the very Riverian concept of expected value (EV) — a probabilistic calculation familiar to any student of economics. The EV of an uncertain gamble is its payoffs weighted by their probabilities; a coin flip that pays you $100 for heads and −$20 for tails has an expected value of $40: ($100 × 0.5) + (−$20 × 0.5) = $40.
This, affectionately termed +EV or "edge," is what everyone in the River lives for — from poker players inching out an edge over their opponents, to sports bettors getting the better of betting sites, to venture capital investors riding the 100x returns of their successes to pay for their many duds.
With the Village-River map in mind and EV as a guiding light, Silver takes us on a journey from the strategies of poker to the ins and outs of the gambling industry, to moral philosophy, the venture capital scene, and survivorship bias in finance and poker alike. We get a fair intro to Bitcoin, but mostly as a setup to Sam Bankman-Fried and the collapse of the cryptocurrency exchange he ran.
We’re treated to a lengthy summary of x-risk debates (short for existential risks to humanity, ranging from conventional ones like nuclear war to newer concerns like climate change and runaway AI) and effective altruism — the philosophical movement that Bankman-Fried brought to widespread attention and simultaneously did so much to harm.
What feels strange is what all of this has to do with poker or sports betting. Fair enough, everyone obsessed with a game tends to see its workings replicated elsewhere. Christians see God everywhere, from Bitcoin to engineering. As an avid chess player, I often tend to see chess dynamics in life and economic affairs. Nate Silver, an on-and-off professional poker player, sees complicated philosophical arguments or behaviors in politics and technology through the lens of poker hands, relevant because it’s about “calculated risk-taking” and making calibrated predictions about future events.
As economists, we have little room to object; John von Neumann observed as early as 1944, in Theory of Games and Economic Behavior — the book that laid the foundations for the economics subfield of game theory — that human interactions can be thought of as games: full of moves, countermoves, bluffs, risk, psychology, and the assessment of variable payoffs.
One controversial conclusion that Silver takes away from his investigation is that society needs more risk, not less: “Society would be generally better off — I’ll confidently contend — if people understood the nature of expected value and specifically the importance of low-probability, high-impact events.” Whether poker hands, public policy, or how to estimate the risk of runaway AI, On the Edge is about how to think well about a problem. That’s a worthy exercise for all of us.
--₿--
Originally published at the American Institute for Economic Research (Daily Economy).
-
@ aa8de34f:a6ffe696
2024-10-22 19:36:06
The internet has a problem. Few people know that this problem exists, but hey, that's the nature of serious, non-obvious problems: they are invisible until they aren't. The problem with the internet is that information wants to be free. And if something wants to be FREE as in freedom, given enough time it will also end up free as in beer. Let me explain.
Air Pollution
Every day we consume staggering amounts of data. Every second of every minute, bits and bytes stream through the series of tubes we all know and love: the internet. We take it for granted, and most of us also take the current monetization model — along with all the evils that come with it — for granted. We rarely pause to think about the strange world of bits and bytes. How wonderful it all is, but also how alien. How it has already changed our lives, and how it will keep changing our future. Where do the zeros and ones come from? How does it all work? And above all: who pays for it? The bits and bytes racing through our fiber-optic cables are as invisible as the air we breathe. Not a bad metaphor, come to think of it. As long as we have no trouble breathing, we don't need to stop and examine every single molecule we inhale. Likewise, as long as we have no trouble creating and consuming digital content, we don't need to stop and examine all the various parts that keep our attention economy running.
Attention economy. What an apt description. As we should all know by now, what we consume is not free; we pay dearly for it: with our attention, among other things.
Paying Attention
In today's fast-paced world, to maximize profit you have to maximize attention. But it is a peculiar, shallow kind of attention. It is not the focused kind of attention that deep thinking and meaningful conversations would require. I believe this is at least part of the reason why so many things are so broken. Why our public discourse is so fragmented, our politics so polarized, we ourselves so paralyzed, and our analyses often as shallow as our desires.
The attention economy has sorted us neatly into echo chambers of personal truths. Ironically, the only truth worth pursuing in the attention economy is how to keep the maximum number of people maximally outraged for the maximum amount of time. All without the participants noticing that they are trapped in an algorithmic prison of their own choosing.
You Are the Product
The saying "if something is free, you are the product" cannot be repeated often enough. For one reason or another, we expect most things online to be "free." Of course, there is no such thing as a free lunch — nothing is free. With online services, your data is collected and sold to the highest bidder, which is usually an advertising agency or a government agency. Or both.
All the big-data companies not only spy on you but also use a host of dark patterns and unethical practices to squeeze the last drop of data out of your interactions. Whether it's the Facebook pixel, Google Analytics, or something else doesn't matter. You are tracked, monitored, and cataloged. What you see, for how long, at what times, how often, and what you will see next is carefully orchestrated by a profit-maximizing algorithm. Profit for the platform, not for you. Of course, the usual story is that everyone benefits: users, creators, advertisers, and the platforms alike. However, the evolutionary environment created by these incentive structures frequently ends up selecting for shallow, attention-grabbing, sensationalist snippets. At the time of this writing — block 716,025 — the epitome of such an environment is TikTok, a video-based dopamine machine that shows you the cinematic equivalent of heroin mixed with crack. Hard drugs for the mind, tailor-made to your particular preferences. A truly cursed app. Unfortunately, most of these platforms differ only in size, not in kind.
Permissible Opinions
"It's not that bad," we tell ourselves. "Look at all the useful information!" we think while scrolling through our feeds, unwittingly feeding the machine that supplies us with dopamine in return.
But make no mistake: the companies in charge are not in the business of supplying us with useful (or truthful) information. They are in the business of tricking us into feeding the machine.
How could it be otherwise? You are what you measure, and you become what you optimize for. From the platform's perspective, it's about clicks, not quality. At first glance, maximizing clicks and time-on-site might seem harmless enough. After all, you have to make money to survive. It's just an ad. How bad can it get?
Unfortunately, the problems that come with it are invisible at first. Just as cancer is invisible to the smoker who has just smoked his first cigarette, and cirrhosis is invisible to the drinker who has just downed his first drink, so deplatforming, censorship, polarization, and the manipulation of public opinion are invisible to the prosumer who has just seen his first ad on a walled-garden platform. We can probably agree that we are past the first inning (to borrow from baseball) on this one. Censorship is the norm, deplatforming is applauded, polarization is at an all-time high, and public opinion is manipulated manually and algorithmically like never before.
"The consensus is that you are too stupid to know what's good for you and that your public opinion is too outrageous to be uttered publicly. Worse, it shouldn't be your opinion in the first place. Here is why you are wrong. Here is a source pointing to a permissible opinion. Here are some experts who agree with us. Our smart and helpful algorithms have done all the thinking for you, and they are never wrong. Neither are the experts."
This is the world we already live in. You are not allowed to speak freely. You are not allowed to think freely. You are not allowed to express yourself freely. Your picture is offensive and must therefore be removed. Your meme is too close to the truth or too criminally funny; therefore we have to put you in Twitter jail for a week or two. You say something we disagree with; therefore we have to ban you for life — even if you are a sitting president, mind you. You said the wrong word in a video or played a copyrighted song in the background; therefore we have to take away your income. You posted a picture of yourself without a mask; therefore we have to ban you and report you to the authorities.[1]
The fact that the passage above is no longer confined to the realm of dystopian science fiction should worry everyone. Removed from cyberspace — for wanting to breathe freely. Strange times.
Evolutionary Pressure
How did it come to this? If I were forced to give a short answer, I would say this: we moved from protocols to platforms, and platforms are only as good as their incentives.
The incentive structure of the platforms we live on is the evolutionary environment that dictates survival. Anything that wants to survive has to adapt to it. This is true in all areas of the economy, of course. Take print magazines, for example. If your magazine doesn't have a pretty female face on the cover, it won't be bought as often as the magazines that do — for very human, evolutionary reasons. It therefore can't reproduce itself and will consequently die. Similarly, if an online news outlet fails to generate sufficient advertising revenue and is unable to reproduce itself, it will die. That is why every magazine has a pretty female face on the cover. And that is why every ad-funded online news source degenerates into clickbait.
One of these faces is not like the others
This is also why feed-based recommendation engines turn into slot machines for your dopamine receptors. The longer you are glued to your screen, the more ads you see, and the more revenue is generated for the platform. It is also why most YouTube channels degenerate into 7-to-15-minute short videos with thumbnails showing the face of someone who has just stepped on a piece of Lego. Short enough to convince you to watch, long enough to make you forget which video you actually wanted to watch. Like rats pressing buttons in hyper-personalized Skinner boxes, we are conditioned into addiction loops to maximize shareholder profits.
Profit Maximization
Platforms are companies, and companies have an incentive to maximize shareholder profits. There is nothing wrong with profits, and there is nothing wrong with shareholders. However, I believe the information revolution we are in has split the evolutionary landscape in two. Let's call these landscapes "broad" and "narrow."
To maximize profits through broad advertising, controversy and extreme opinions have to be kept to a minimum. In catering to the lowest common denominator, politics and censorship immediately come into play. Conversely, if profits are to be made through narrow, targeted advertising, controversy and extreme opinions have to be maximized. Merely by showing different information to different subgroups, polarization and fragmentation are continuously amplified.
Broad Cohesion vs. Algorithmic Division
These two extremes are two sides of the same coin. It may look like cable TV versus the algorithmic news feed, but in reality they are two different approaches to the same goal: keeping as many people as possible in front of the screen so that they see more ads. The first is a sedative, the second a stimulant.
Granted, the characterization above may be exaggerated, but the problem remains: if we don't pay for something directly, we pay for it indirectly, one way or another. Always.
The point is this: free-speech platforms cannot exist. There can only be free-speech protocols. If someone can control what is being said, someone will control what is being said. If you can monitor, filter, and censor content, you will monitor, filter, and censor content.
All platforms face this problem, no matter how spotless their intentions. Even if you initially position yourself as a free-speech platform, in the long run you will be forced to step in and censor. Ultimately, if the state can hold you liable for content you host or relay, the state will hold you liable for content you host or relay.
Self-Censorship
Yet long before state censorship rears its ugly head, the chilling effect of self-censorship sets in. When others are ostracized and demonized for voicing certain opinions, most people will be very careful about voicing those opinions themselves. Consciously and unconsciously, we slowly silence ourselves.
When it comes to self-censorship, advertising plays a role as well. After all, you wouldn't bite the hand that feeds you, would you? In the worst case, advertisers and executives tell you what may and may not be said. They will tell you which opinions lie inside the Overton window and which lie outside it. And if they don't, you will make an educated guess and adjust what you say accordingly.
A Problem and a Paradox
Back to the original problem: why can't we sell information like a normal commodity? Why does the simple approach — putting content behind a paywall — lead to such poor outcomes? I believe there are two reasons, which I'd like to call the "MTX problem" and the "DRM paradox."
The MTX problem, where MTX is short for "mental transaction," refers to the problem of the irreducible mental transaction costs inherent in every transaction. Every time you hit a paywall, you have to make a conscious decision: "Do I want to pay for this?"
As Szabo convincingly argues, in most cases — especially when the cost is small — the answer is no. The reasons for this are not technical but psychological. It turns out that the effort of figuring out whether a given transaction is worth it — a process that takes place in your head — is simply too great. If you have to think about a micro-purchase, the probability that you will actually make the purchase drops drastically. That is why flat rates and subscriptions are all the rage: you only have to think once.
For the smallest microtransactions, this holds even from a purely economic standpoint. Assuming an hourly wage of 20 USD, thinking for two seconds about "Is this worth 21 sats?" costs a little more than 1¢ ($20 × 2/3600 ≈ 1.1¢) — more than the price of the microtransaction in question.[2] This is unworkable both psychologically and economically. That, in short, is the MTX problem.
But that is not the only problem weighing on the monetization of digital content. As mentioned, there is also the DRM paradox. DRM, short for "Digital Rights Management," is a futile attempt to prevent the copying of information. It should go without saying that uncopyable information is an oxymoron, but in the age of NFTs and assorted other nonsense, this unfortunately has to be spelled out explicitly. So let me spell it out for you: you cannot create information that cannot be copied. Period. Or, in the words of Bruce Schneier: "Trying to make digital files uncopyable is like trying to make water not wet."
It lies in the nature of things that if information can be read, it can be copied — with perfect fidelity. No amount of trickery or artificial restriction will change that fact. This is why digital artifacts such as movies and music will always be available for free. It is trivial for someone with access to these artifacts to copy them — at near-zero marginal cost, mind you — and make them available to others. Given enough time and popularity, every movie, every song, and every document will therefore become freely available to the general public. The nature of information permits no other outcome. Hence the saying: information wants to be free.
Although the attempt to create something that cannot exist — information that cannot be copied — is paradoxical in itself, that is not what I mean by the DRM paradox. What I mean is something far funnier. It is, once again, psychological rather than technical in nature. The paradox is this: content will only stay behind a paywall if it is bad. If it is good, someone will set it free.
We all know how this goes. If an article is actually worth reading, someone behind the paywall will screenshot it and post it on social media. If a movie is worth watching, it will be available on various websites that have pirate ships as their logos. If a song is worth hearing, it will be freely available on streaming sites. Only the terrible articles, the most obscure movies, and the songs that make your ears bleed stay behind paywalls.
Hence the paradox: content only stays behind paywalls if it is bad. If it is good, it will be set free.
Personally, I believe the MTX problem is the bigger problem of the two. The traditional solution to the MTX problem is the subscription model, as with Netflix, Spotify, Amazon, and so on. The DRM paradox remains, but it turns out not to matter much if you make "legitimate" access to information convenient enough. The opportunity cost of downloading, storing, curating, and maintaining a private music collection is simply too high for most people. The more convenient solution is to pay for the damn Spotify subscription. That said, we can already see one of the problems with the subscription model. The following comic captures it well:
Comic by /u/Hoppy_Doodle
The proliferation of streaming platforms forces you into a Netflix subscription, an Amazon Prime subscription, a Hulu subscription, a Disney Plus subscription, a YouTube Premium subscription, and so on. And that was just video streaming. The same subscription zoo exists for music, books, games, newsletters, blog posts, and more.
So what is the solution?
Accept the Nature of Information
The solution begins with acceptance. Selling digital content in the conventional, transactional way does not work, or at least not very well. A transaction involving a digital photo of an apple is something entirely different from a transaction involving a physical apple.
George Bernard Shaw put it best: "If you have an apple and I have an apple and we exchange these apples, then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas." Since digital information behaves like an idea, there is no reason to make it artificially scarce. That holds not only philosophically but also technically. Computers are copying machines. They always have been, and they always will be. The only way to move information from one machine to another is to copy it. This alone should make the futility of treating information like physical objects obvious.
When it comes to monetizing information on the open web, we have to bring our thinking in line with the nature of information. As described above, information is not scarce, is easy to copy, is easy to modify, and wants to be free. I believe the right monetization model has to respect these values and share similar properties. It has to be open, transparent, extensible, and, not least, completely voluntary.
This model has a name: value-for-value.
Reviving Busking
The idea is simple but sounds radical: you make your content available for free, to everyone, without access restrictions. If people enjoy it, if they get value out of it, you make it easy for them to give something back. It may sound outrageous in this day and age, but this model has worked for thousands of years. It is the model of the street performer, the model of the busker, the model of voluntary giving. In cyberspace, however, we do not run into the physical limits of traditional busking. Digital content scales in a way that real-world performances never will.
The value-for-value model turns the traditional payment model on its head. Traditionally, enjoyment follows payment. With value-for-value, payment follows enjoyment, and it does so voluntarily. You are free to listen to the busker and walk on, but (and this is something the audience knows intuitively) if you want the music to continue, you should toss a few coins into the hat.
The beauty of this model is that it realigns incentives. You are not trying to maximize clicks, time-on-site, or any of the countless other metrics. You want to provide value to your audience, and that is all. And if the audience got value out of it, a certain percentage will give value back. All you have to do is ask.
A Valuable Alternative
We are only at the beginning of this monumental shift. My hope is that the value-for-value model will continue to establish itself as a viable alternative to advertising, censorship, deplatforming, and demonetization.
The value-for-value model takes the "they" out of the equation. They filter, they censor, they demonetize, they deplatform. It does not even matter who "they" are. If there is a "they", they will find a way to screw it up.
Value-for-value removes "they" and hands the responsibility over to you. You are the ruler of the realm of one, solely responsible for your thoughts and your speech. If we want liberation (and salvation) in cyberspace, we have to hand responsibility back to the individual. As always: freedom and independence require responsibility.
In the best of all worlds, creators have an incentive to do nothing but create. You serve only yourself and those who are interested in your works. No middlemen. Direct, human to human, value-for-value.
What Lies Ahead
Admittedly, self-hosting your infrastructure is not exactly easy today. Running your own node to receive payments in a self-sovereign way is intimidating. But not only is it getting easier, it is becoming increasingly necessary.
Besides making everything easier, we also have to keep the MTX problem described above in mind. Every step that manages to reduce mental transaction costs in the value-for-value ecosystem is a step in the right direction.
The value feature of Podcasting 2.0 is one such step. It enables and automates payments by the minute, without any additional intervention from the user. Once you are set up, your wallet makes the payments automatically. I believe further iterations of this idea can be built into every media type, be it audio, video, images, the written word, and so on. I believe we are on the verge of the protocol version of Patreon: all the benefits of reducing mental transaction costs to zero, without the friction and censorship inherent in a platform-based solution. Whether it will arrive in the form of BOLT12 recurring payments or something else entirely remains to be seen. I am confident, however, that it will arrive in due time.
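To make the mechanics concrete, here is a minimal sketch of the streaming-payments idea in Rust. Everything in it is illustrative: `send_streaming_payment` is a hypothetical stand-in for a real Lightning keysend call, and the rate is an arbitrary listener-chosen value, not anything prescribed by Podcasting 2.0:

```rust
use std::{thread, time::Duration};

// Hypothetical stand-in for a real Lightning keysend payment to the creator.
fn send_streaming_payment(sats: u64) {
    println!("streamed {sats} sats to the creator");
}

fn main() {
    let sats_per_minute: u64 = 10; // listener-chosen rate (arbitrary example)

    // While playback runs, fire a small payment once per elapsed minute,
    // with no further action required from the listener.
    for _minute in 0..3 {
        thread::sleep(Duration::from_secs(60));
        send_streaming_payment(sats_per_minute);
    }
}
```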
Conclusion
It is not just our fiat money that is broken; the monetization model of the internet is broken too. Today's ad-based platforms are optimized for engagement through division and polarization, exploiting dark patterns and addiction. Breaking out of the compulsion loops that have been set up for us will not be easy, but thanks to the self-sovereign tech stack that is currently emerging, there is a viable alternative: the value-for-value model.
The busking monetization model worked for many centuries in the past, and thanks to Bitcoin and the Lightning Network, I am confident that it will keep working for centuries to come. We are almost there. We just have to figure out how to position the hat properly on the ground, and where the best spots in town are to perform, so to speak.
Value-for-value solves the DRM paradox in its entirety and, with the right amount of automation and sensible defaults, will solve the MTX problem as well. If we get this right, we may be able to free ourselves from the evolutionary survival struggle of platforms and step into the quasi-immortal realm of protocols.
There is much to explore, many tools to build, and many preconceptions to destroy. A seismic shift is taking place right before our eyes, and I look forward to riding the waves with all of you. Onward!
This is a translated guest post by Gigi from his blog. The opinions expressed are solely his own and do not necessarily reflect those of the re-publisher or Aprycot Media. Source: https://aprycot.media/blog/freiheit-der-werte/#
-
@ 01aa7401:33696fab
2024-10-22 16:33:44Having a caravan is like having your own pass to adventure! It’s such a fun way to check out new spots and make awesome memories. But hey, just like any ride, your caravan needs a little TLC to keep it in good shape. Whether you're gearing up for a trip or just want it looking fresh and running smoothly, regular maintenance is key.
If you're in Doncaster and need some tips on your caravan, this guide's got all the info you need, served up nice and simple!
Looking for Caravan Repairs in Doncaster? Why Caravan Maintenance Matters
Caravans, much like cars, suffer wear and tear over time. Regular maintenance keeps your caravan in top condition, keeping you safe on the road and prolonging the life of the vehicle. Ignoring minor issues can lead to bigger problems and higher repair costs down the line. Here are a few reasons why caravan maintenance matters:
- Safety: Regular checks can prevent potential hazards while travelling.
- Longevity: Keeping your caravan in good condition will extend its lifespan.
- Resale value: A well-maintained caravan will hold its value better if you ever decide to sell it.
- Comfort: Timely repairs keep your caravan comfortable and functional for your trips.
Common Caravan Issues
Caravans are complex machines with various systems that need attention. Some common issues you might encounter include:
- Exterior Damage: Scratches, dents, and cracks in the bodywork can occur during travel. If left unrepaired, these issues can lead to rust or water leaks.
- Tyre Wear: Caravan tyres wear out over time, especially after long journeys. It's important to check for cracks, bulges, or uneven wear.
- Electrical Issues: Problems with the caravan's electrical systems, such as lighting or power outlets, are common and need professional attention.
- Water System Leaks: The water system in your caravan can develop leaks, leading to damp or water damage.
- Brake Issues: Just like any vehicle, a caravan's brakes need to be checked regularly for safety.
Caravan Repairs in Doncaster
If you live in Doncaster or the surrounding area, you'll find a variety of caravan repair services available. Whether you need a minor fix or a major overhaul, specialists can help keep your caravan roadworthy. Here's what you can expect:
- Bodywork Repairs: Specialists can handle everything from small scratches to large dents. They'll also make sure your caravan stays waterproof after the repairs.
- Tyre Replacements: Tyre repair and replacement services are widely available. Technicians will inspect your tyres for damage, check the air pressure, and recommend replacements if necessary.
- Electrical Repairs: Skilled technicians can diagnose and fix your caravan's electrical system. Whether it's faulty lighting, wiring issues, or problems with the power supply, professional repairs will get everything working again.
- Water System Repairs: Water leaks are a common problem in caravans, and professionals can quickly identify and fix them. They'll inspect your caravan for any signs of damp or water damage.
- Brake and Suspension Checks: Repair services in Doncaster offer thorough brake and suspension inspections, making sure your caravan is safe to tow on the road.
- Annual Servicing: To avoid unexpected breakdowns, many caravan repair shops in Doncaster offer regular servicing. An annual service typically includes a full inspection of the bodywork, electrical systems, tyres, brakes, and interior fittings.
What to Look for in a Caravan Repair Service in Doncaster
Finding the right caravan repair service is essential to ensure quality work. Here are a few tips on what to look for:
- Reputation: Check reviews and ratings online to see whether the repair shop has a good reputation. Positive customer feedback is always a good sign of quality service.
- Experience: Choose a repair shop with experienced technicians who are familiar with caravans. Experience often means better, quicker repairs.
- Specialised Services: Some repair shops specialise in specific areas, such as bodywork or electrical repairs. It's always a good idea to check whether the shop offers the services you need.
- Warranty: Ask whether the repairs come with a warranty. A warranty ensures you won't have to pay again if something goes wrong soon after the work is done.
- Location: Choosing a repair service close to Doncaster is convenient, as you can easily drop off and collect your caravan without travelling far.
DIY Caravan Repairs in Doncaster: When to Tackle It Yourself
Some caravan repairs are super simple to handle on your own, saving you some cash and time. But it's important to know what you can and can't do. If you're comfortable with basic tools, here are a few DIY repairs you might want to give a shot:
- Touching Up Small Scratches: You can usually buff out light scratches on the bodywork with a car scratch repair kit.
- Changing a Flat Tyre: If your caravan has a flat, swapping it out is basically the same as changing a tyre on your car. Make sure you have the right tools and a spare tyre to hand.
- Tightening Loose Fittings: Cupboards, drawers, and other fittings in your caravan can come loose during travel. A simple screwdriver can often fix these issues.
- Cleaning and Maintaining the Water System: Regular cleaning of your caravan's water system can prevent leaks and blockages. Use specialist caravan water-cleaning products to keep the system free of dirt and bacteria. Always remember that safety comes first: if you're unsure about a repair, or if it involves electrical work, it's best to call a professional.
How to Avoid Future Repairs
The best way to avoid costly repairs is regular maintenance. Here are a few tips to help keep your caravan in good shape:
- Regular Cleaning: Clean both the inside and outside of your caravan regularly. Dirt and debris cause wear and tear over time.
- Check Tyres Before Each Trip: Before you set off, inspect your tyres for any signs of damage and make sure the air pressure is correct to avoid blowouts.
- Inspect the Bodywork: After each trip, check your caravan's body for any signs of damage. Fixing minor scratches and dents early can prevent bigger problems later.
- Service the Brakes Annually: Brakes are a critical part of caravan safety. Make sure they're checked and serviced every year, especially if you travel long distances.
- Store Your Caravan Properly: When not in use, store your caravan in a dry, covered space. If possible, invest in a good-quality caravan cover to protect it from the elements.
Final Thoughts
Caravan repairs in Doncaster are an essential part of owning and maintaining a caravan. Whether you're dealing with tyre trouble, electrical issues, or bodywork damage, Doncaster has plenty of reliable services to help. Remember, regular maintenance can prevent costly repairs and keep your caravan road-ready for many adventures to come. By staying on top of basic repairs and maintenance, you'll be able to enjoy stress-free holidays and extend the life of your caravan. Whether you're a DIY enthusiast or prefer professional help, taking care of your caravan will ensure you get the most out of your investment for years to come!
-
@ a012dc82:6458a70d
2024-10-22 14:05:23Table of Contents
- The Background of Tesla's Bitcoin Investment
- Tesla Believes in the Long-Term Potential of Bitcoin
- Bitcoin Fits with Tesla's Clean Energy Vision
- Tesla's Investment in Bitcoin is a Hedge Against Inflation
- Bitcoin Offers Diversification for Tesla's Balance Sheet
- Conclusion
- FAQ
Tesla, the American electric vehicle and clean energy company, has drawn sustained attention for its Bitcoin investment strategy. The company made headlines earlier this year when it announced that it had invested $1.5 billion in the cryptocurrency, causing the price of Bitcoin to surge. However, in the first quarter of 2021, Tesla's Bitcoin strategy remained unchanged.
The Background of Tesla's Bitcoin Investment
Before we dive into why Tesla's Bitcoin strategy hasn't changed in the first quarter of 2021, let's take a brief look at the background of the company's investment. In February 2021, Tesla announced that it had invested $1.5 billion in Bitcoin and that it would begin accepting the cryptocurrency as a form of payment for its products. The announcement caused the price of Bitcoin to surge, with the cryptocurrency reaching an all-time high of over $60,000.
Tesla Believes in the Long-Term Potential of Bitcoin
One reason why Tesla's Bitcoin strategy has remained unchanged is that the company believes in the long-term potential of the cryptocurrency. In a tweet in March 2021, CEO Elon Musk said, "I am a supporter of Bitcoin, and I believe it has a promising future." Musk has also said that he thinks Bitcoin is a good thing and that it has a lot of potential.
Bitcoin Fits with Tesla's Clean Energy Vision
Another reason why Tesla's Bitcoin strategy hasn't changed is that the cryptocurrency fits with the company's clean energy vision. Tesla is committed to reducing its carbon footprint, and Bitcoin's decentralized nature makes it an attractive option for clean energy advocates. By using Bitcoin as a form of payment, Tesla can reduce its reliance on traditional payment methods, which often involve high energy consumption.
Tesla's Investment in Bitcoin is a Hedge Against Inflation
Tesla's Bitcoin investment is also a hedge against inflation. The company's decision to invest in Bitcoin was partly motivated by concerns about the value of the US dollar. In a filing with the US Securities and Exchange Commission, Tesla said that it had made the investment to "maximize returns on our cash." By investing in Bitcoin, Tesla is protecting its cash reserves against inflation.
Bitcoin Offers Diversification for Tesla's Balance Sheet
Finally, Bitcoin offers diversification for Tesla's balance sheet. The company's investment in the cryptocurrency is a way to diversify its assets and reduce its reliance on traditional forms of investment. Bitcoin is not correlated with other asset classes, which means that it can provide a hedge against market volatility.
Conclusion
Tesla's Bitcoin strategy remained unchanged in the first quarter of 2021. The company's investment in the cryptocurrency is driven by a belief in its long-term potential, a commitment to reducing its carbon footprint, and a need to diversify its assets. While Bitcoin, like any investment, carries some level of risk, Tesla's decision to invest in the cryptocurrency is a calculated risk that has so far been profitable. As Bitcoin continues to gain acceptance as a mainstream investment, it's likely that more companies will follow in Tesla's footsteps.
FAQ
Has Tesla sold any of its Bitcoin holdings?
No, Tesla has not sold any of its Bitcoin holdings in the first quarter of 2021.

Will Tesla continue to accept Bitcoin as a form of payment?
Yes, Tesla will continue to accept Bitcoin as a form of payment for its products.

Does Tesla plan to invest more in Bitcoin?
There is no official word from Tesla on whether the company plans to invest more in Bitcoin.

What impact does Tesla's Bitcoin strategy have on the wider cryptocurrency market?
Tesla's investment in Bitcoin and its decision to accept the cryptocurrency as a form of payment have brought increased attention to the cryptocurrency market. The company's endorsement of Bitcoin has helped to legitimize the cryptocurrency and has contributed to its growing acceptance as a mainstream investment.
That's all for today
If you want more, be sure to follow us on:
NOSTR: croxroad@getalby.com
Instagram: @croxroadnews.co
Youtube: @croxroadnews
Store: https://croxroad.store
Subscribe to CROX ROAD Bitcoin Only Daily Newsletter
https://www.croxroad.co/subscribe
DISCLAIMER: None of this is financial advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. Please be careful and do your own research.
-
-
@ 9e69e420:d12360c2
2024-10-22 11:49:49Nostr Tips for Beginners
Setting Up Your Account
- Choose a Nostr client (e.g., Damus for iOS, Amethyst for Android, noStrudel for web)
- Generate your public/private key pair (see the sketch after this list)
- Store your private key securely (a password manager, a browser extension, or, my personal recommendation, Amber)
- Set up a Lightning wallet for tipping and payments (Minibits recommended)
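For the technically curious, here is a rough sketch of what key generation looks like in code, using the Rust nostr-sdk crate; the method names follow its documentation, but the exact API is an assumption and may differ between crate versions:

```rust
// Minimal key-pair sketch with nostr-sdk (exact API assumed; check current docs).
use nostr_sdk::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Generate a fresh secp256k1 key pair.
    let keys = Keys::generate();

    // The npub is your public, shareable identity; the nsec must stay secret.
    println!("npub: {}", keys.public_key().to_bech32()?);
    println!("nsec: {}", keys.secret_key().to_bech32()?);
    Ok(())
}
```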
Customizing Your Profile
- Add a profile picture (use nostr.build or postimages.org)
- Complete your profile information
- Verify your account (use nostrplebs.com or nostrcheck.me for the easy route); there are also many guides for setting up a NIP-05 on a custom domain
Finding and Interacting with Others
- Search for friends using public keys or usernames
- Explore hashtags like #grownostr or #foodstr
- Follow interesting users and check their follower lists
- Interact through comments, likes, and reposts
Understanding Relays
- Connect to reliable relays for a better user experience (see the sketch after this list)
- Subscribe to specific relays for targeted content
- Consider using paid relays for improved performance
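As a sketch of what connecting to relays looks like in code (again using the Rust nostr-sdk crate, with the exact API treated as an assumption):

```rust
use nostr_sdk::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let keys = Keys::generate();
    let client = Client::new(keys);

    // A small, diverse relay set improves both reach and redundancy.
    client.add_relay("wss://relay.damus.io").await?;
    client.add_relay("wss://nos.lol").await?;
    client.connect().await;
    Ok(())
}
```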
Hashtags "#"
- Don't just "explore" hashtags: use them.
- Follow them.
- People will find you through them, and you will find people to follow through them.
Best Practices
- Use text replacement shortcuts for your public key
- Experiment with different Nostr clients
- Be patient and engage with the community
- Remember Nostr's censorship-resistant nature
- Keep your software and apps updated
Remember, Nostr is evolving, so stay curious and don't hesitate to ask for help from the community when needed.
-
@ 6bae33c8:607272e8
2024-10-22 09:29:13I watched the Cardinals-Chargers game first because I had Marvin Harrison and JK Dobbins going. Also one share of James Conner. Conner did okay, but the Cardinals just can’t complete a forward pass to save their lives. They want to get the ball to Harrison, but he’s never open, and Kyler Murray is too herky-jerky, too hectic to get the offense into any kind of flow.
The Chargers are even worse. Justin Herbert only throws to fifth-tier tight ends, and the team can’t run block. Imagine watching that whole game, hoping for fantasy production.
I actually watched the other game in its entirety too because I had Zay Flowers, Roquan Smith, Isaiah Likely, Chris Godwin and Bucky Irving going. Smith got 18 tackles, but that was about it. I had hoped for more from Godwin, especially after Mike Evans got hurt. I watched until the Bucs final TD to make it 41-31 because I knew the final score (which they annoyingly post in the upper corner of the Chargers-Cardinals game) and turned it off. Obviously nothing was going to happen with two minutes left and the score already final. So it wasn’t until an hour later while reading Twitter that I realized somehow the best player on my Primetime team, Godwin, is probably out for the year.
As I said, I can't even. And this after wasting almost two hours watching the edited versions, which always fucking replay every tackle behind the line and haven't figured out that when there's a false start and the announcer has said it, you don't have to cut to the ref and have him formally announce it. We can tell by the down and distance on the next play. Same with holding. Edit out the refs wherever possible.
- Kyler Murray has the fastest feet in the league. If only he could play quarterback.
- Marvin Harrison is long and lanky like A.J. Green or Randy Moss. He's not lightning quick or doing anything special on his routes. Accordingly, throw it deep to him and let him make a play. A 12-yard out on the sideline while covered ain't it.
- James Conner is a beast. He'll probably get hurt, but he runs hard every play and is competent as a pass catcher too.
- The Cardinals punted on 4th-and-2 from midfield at one point. I don't care what the math says, but when you see that, it's a sell for fantasy players. The Texans are like that too.
- Ladd McConkey might have been playing hurt, but they need to get the ball in his hands. He's quick and fast.
- Lamar Jackson and the Ravens offense is so effortless and smooth. It's like he's barely trying, and suddenly he has five TD passes. Derrick Henry too looks like he's moving in slow motion, and suddenly he's sprinting with gigantic strides, and no one can catch him. I'm ashamed to admit I passed up Jackson in one league and took Anthony Richardson a few picks later.
- Mark Andrews is all the way back to top-three TE status. It's him, George Kittle and Brock Bowers.
- Rashod Bateman is a good player for the Ravens even if he's unstartable due to target volatility. He stretches the field for them, gives them that dimension.
- Baker Mayfield is Brett Favre .90. Not 2.0, not 1.0, but .90, a slightly worse version willing to sling the ball into traffic and scramble badly. He's fucked with Godwin maybe done and Evans possibly missing some time.
- All the backs looked good to me, but Mayfield seems to trust the GigaRachaad the most. Cade Otton might be a top-seven TE with Godwin and maybe Evans out too, though it looked like he got concussed three times.
- Just a perfect cap to one of the worst weeks of fantasy football I've ever experienced.
-
-
@ b7338786:fdb5bff3
2024-10-22 09:20:23I.
Simplicity is essential
We always prefer the simple and short over the complex and long. When we have the choice between the simple and the concise, we choose the simpler solution as long as it brings us nearer to our goal. It is easy to pay lip service to "simplicity", but few have the courage to really embrace it, as we fear our fellow programmers' verdict. When you have the impression that an adequate solution to a problem is beyond your capabilities, simplify the problem. Complex code is not an achievement, nor does it make us better programmers. Code that cannot be thrown away is automatically technical debt and a burden. If implementation artifacts influence the external appearance or usage of a program, this is acceptable as long as it doesn't impede the program's usefulness. Don't underestimate the effort it takes to write simple code. Complex technology can never be simplified by adding more complex technology. Creating the computational equivalent of a Rube Goldberg device is something you should consider a practical joke or something to be ashamed of, never something to take pride in. If you can't explain the internal structure of a software system to someone in a day, then you have a complexity problem. Some of the complexity may be necessary, but it is still a serious defect.
II.
Solve problems instead of creating them
We want to solve concrete problems, not anticipate the tasks others might have in the future, so we create applications instead of frameworks. We write editors, not text-editing toolkits; games, not engines. We do not generalize unnecessarily, as we will never be able to fully comprehend how our code might be used. Moreover, we often evade our responsibility to solve actual problems by procrastinating and conceiving one-size-fits-all pseudo-solutions for future generations that will probably never use them. So, we instead identify a problem that might be addressed by a computerized solution and do nothing but work towards that solution. We do not create abstractions for abstraction's sake but to simplify our current task.
III.
We are not smarter than others; others are usually not smarter
We do not think of how others might judge our code: they struggle as much as we do, and they don't have any more answers to our problems than we have. Experience is important and necessary, but creativity is, too. Only beginners think they know everything. Experience can lead to cynicism, but cynicism leads to emptiness. Doing something for a long time does not automatically lead to experience, but failing often does. True mastery transcends problems and works by intuition trained on making mistakes. We accept that we will be journeymen eternally and that mastership is something that is given to us by grace, not effort. So we don't attempt to be wizards, we just try to solve problems. If a programmer seems to be 10x more productive than us, then perhaps it is because he or she masters the programming environment more than we do, doesn't bother with unnecessary things, or just has a different measure for productivity.
IV.
Do everything yourself
We do not use libraries, frameworks or third-party packages unless we are absolutely forced to do so. Code that we didn't write we do not understand. Code that we do not understand we cannot maintain. Code that we cannot maintain may change from version to version, or may not work with another version of a dependency of that code or of the underlying platform that it runs on. Other people's code is a liability, and we do not want to take responsibility for it unless we are certain that we could reimplement it ourselves, if required. Forcing others to upgrade a piece of software only to make our code work is insulting and disrespectful. If you need additional libraries beyond what is provided by default, then build them from sources.
V.
Strive for robustness
We follow the +/- 10 year rule: write your software so that it can be made to work on 10-year-old hardware and operating systems and on systems that will exist 10 years from now, with reasonable effort, unless your project is highly specific to brand-new or obsolescent hardware. That means we will not be able to ride the newest hype in programming languages, software tools or libraries. Be grateful for that. This also means you will have to write your code in C, or in something built on top of C. That's fine, because computers were designed to run C efficiently and will do so in the future (don't listen to the evangelists; they still use C, they just added restrictions and gave it a different name). Why should you let yourself be restrained in what you can express? Shouldn't you try to learn to be more thoughtful and disciplined instead? Designing robust software means you know what you are doing and are doing it in a responsible manner.
VI.
Do not think you can make computing "secure"
We are humble enough to understand that computing will never be fully secure. Buggy hardware, side-channel attacks and the $5 wrench will always be with us, so don't fool yourself that you can change that with clever programming. Avoid cryptography, if possible, as you should write all the code yourself, and doing your own cryptography is a well-known mistake. If you need privacy, do not use browsers. If you want to hide something, do not put it on the internet. We accept that we can never be sure a communication channel is safe. Beware of security consultants; their agenda is a different one. Most useful computing consists of handling key presses and mouse clicks that modify local state on your computer.
VII.
Use input devices when they make the most sense
The easiest and most efficient way of designating a point on the screen is by using the mouse. Use it. If changing between keyboard and mouse is annoying to you, get a smaller keyboard. Solely keyboard-driven user interfaces are often hardly distinguishable from ideology. Solely mouse-driven user interfaces can be awkward. Consider adjusting your mouse parameters, consider getting a better mouse, and use common sense. If a UI feature is hard to understand or cumbersome to use, it may be pointless, regardless of how sophisticated and aesthetically attractive it may seem. If a control in a user interface is neither obvious in its use nor clearly documented, then it is pointless and should not exist.
VIII.
Avoid all ornaments
We eschew all visual gimmicks, animations and eye candy. They do not increase the usefulness of our software; they just add complexity and show that we wasted our time and energy. In user interfaces, use the most basic defaults and stick to them. Use black text on a white background; it is easy to read and reduces the strain on the eyes in a well-lit environment. Others will try to convince you of the opposite, but they are just trying to rationalize their personal taste. Think of visually impaired users. Don't burden the user with needless configuration. The aesthetics of simplicity almost always turn out to be more pleasing than pretty graphics, animations or subtle color schemes. If you need graphics, you probably do not need a full graphical user interface toolkit. If you need a complex graphical user interface, then simplify it. User interfaces do not need to look nice; they just should be obvious and effortless.
IX.
Tools are just tools
We use tools, we replace them, we ignore them, and we shouldn't be dependent on them. They may become obsolete, non-portable, or broken, but we still have to go on. Careful thought and "printf" debugging are still the best way to find bugs. When you are really stuck, take a long walk and think about it. You will be surprised how often that works. We avoid all-powerful intermediate data formats, as plain text with a clear structure is universal, easy to debug, portable and extensible. If you think you need a database, consider first whether the file system is really not sufficient for your needs. The only truly crucial tool is between your ears.
X.
Be humble
We are not Google. We will never need the scalability that we so often think is what makes software "robust". Machines have been fast enough for quite some time now. Making outlandish demands for scalability or performance is often confused with professionalism. We measure before we optimize, and we never trust benchmarks that others have done. Hardware is cheaper than software. Algorithms of linear complexity, linked lists, statically allocated memory and global state each have their place and use. Think about how much of your decision to dismiss the straightforward approach is based on folklore, insecurity or delusion. Portability is overrated; we don't fool ourselves that we can manage to maintain our code on all possible platforms. We maintain only what we can test on real hardware. We are not tempted into thinking we will revolutionize the world of computing; we are just tool-making primates. The sharpened flint stone and the wheel probably had a bigger impact on civilization than our sophisticated compiler or overengineered high-performance database.
XI.
Don't work for free if you do not enjoy it
When we create software for others to use, we are doing them a service (unless we expect to be compensated in one form or another). When we provide a solution to a particular software problem, we are free to do it in the way we find adequate. If a proprietary software platform forces us to use broken software tools and programming interfaces, we should consider not writing software for that platform, unless we are employed by the vendor or compensated in another way. If the interface with which our software has to communicate is weak, then we should think hard before putting any effort into overcoming the obstacles. If the interface causes us mental or physical pain, we stop programming against it. Too many programmers have been worn out by bad languages, tools and APIs. How can we trust platforms that have sucked the life out of us and our fellow programmers, let alone enrich their ecosystems?
XII.
Do not listen to others
We never take a software methodology, school of programming or some random internet dude's "manifesto" at face value. Rules must be broken when necessary. You decide in the end what is right and what is wrong, depending on circumstances. Every rule has its exception. Cargo cults lurk everywhere, and every promise that something will make your life as a programmer easier, while not acknowledging that you will have to pay for it in one way or another, is a lie. "Computer science" is, for the most part, religion, propaganda and hype. Principles are important, but should only guide and not control you. Only psychopaths and monks are able to live up to them to their final consequences. "Best practices" just formalize mediocrity. Innovation means diverging from the mainstream. Art means creating something that didn't exist before.
-
@ b7338786:fdb5bff3
2024-10-22 09:16:59The quiet art of attention
There comes a moment in life, often in the quietest of hours, when one realizes that the world will continue on its wayward course, indifferent to our desires or frustrations. And it is then, perhaps, that a subtle truth begins to emerge: the only thing we truly possess, the only thing we might, with enough care, exert some mastery over, is our mind. It is not a realization of resignation, but rather of liberation. For if the mind can be ordered, if it can be made still in the midst of this restless life, then we have already discovered the key to a deeper kind of freedom.
But how does one begin? It is not with grand declarations or bold, sweeping changes. That would miss the point entirely. Rather, it is with a gentle attention to the present, a deliberate shift in the way we move through the world. We begin by paying attention to what our mind does—its wanderings, its anxieties, its compulsions. It is a garden untended, overgrown with concerns that may not even be our own. And the first step is simply to watch, to observe how the mind moves, without judgment, without rush.
In this quiet observation, we begin to see patterns. The mind leaps from one thing to another, rarely resting. It is caught in a web of habits, most of which we never consciously chose. But, once we notice this, a door opens. There is space, however small, between the thoughts. And in that space, if we are patient, we can decide how to respond rather than being dragged along by every impulse or fear. This is not about control in the traditional sense, but about clarity. To act, not from reflex, but from intent.
It is a simple beginning, but one of great consequence. For when we reclaim our attention, even in this small way, we are no longer mere passengers on the journey. We become, in a sense, our own guides.
As we grow in this practice of attention, something else becomes clear: much of what occupies our thoughts is unnecessary. The mind is cluttered, filled with concerns that seem urgent but, on closer inspection, do little to serve our deeper well-being. Simplification is not just a matter of decluttering our physical surroundings—it is a way of thinking, of living. As we quiet the noise within, we see more clearly what truly matters. We focus, not on everything, but on the essentials. We pare down, not by force, but by choice.
This process of simplification is not an escape from complexity. It is, in fact, a way of engaging with it more meaningfully. There are things in life that are intricate, yes, but not everything needs our attention at once. What truly requires our effort can be approached in small steps, in manageable pieces. The mind works best when it is focused on one thing at a time, when it is allowed to give itself fully to the task at hand. In this way, the most complex of undertakings becomes simple, not because it is easy, but because we have allowed it to unfold naturally, one step after the other.
It is tempting, in moments of ambition, to think that we must change everything all at once, that the path to mastery or peace requires a sudden, dramatic shift. But this is rarely the case. In truth, most lasting changes come from small, deliberate actions. It is in the repetition of these small actions, over time, that we build strength, that we build the habits of mind that lead to deeper clarity. Just as a mountain is climbed not in great leaps but in steady, measured steps, so too is the mind brought into alignment by daily, patient attention to the way we think.
But in this process, we must remember something important: life is not meant to be rushed through. It is not a race, nor is it a problem to be solved. It is an experience to be lived, and living well requires presence. To focus on one thing deeply, to give it your full attention, is to experience it fully. And when we do this, something remarkable happens. Time, which so often feels like it is slipping through our fingers, begins to slow. Moments become rich, textured. Even the simplest of tasks takes on a new significance when approached with care, with attention.
This is the quiet art of living well. It does not demand that we abandon the world, but that we engage with it more mindfully. It asks that we slow down, that we look more closely, that we listen more carefully. For in doing so, we discover that much of what we seek—clarity, peace, even strength—was always within reach. It was simply waiting for us to stop, to pay attention, and to begin again with intention.
The mind, like a garden, requires tending. It needs patience, a steady hand, and, above all, consistency. There will be days when it seems unruly, when old habits return, and when focus feels elusive. But these days, too, are part of the process. Each small effort, each moment of renewed attention, builds upon the last. Over time, these moments accumulate, and what was once difficult becomes second nature.
And so, the journey to mastery of the mind begins not with grand gestures but with the simplest of practices: the practice of paying attention. Attention to the present, attention to what truly matters, and attention to the quiet spaces in between. In this way, step by step, thought by thought, we move closer to that elusive state of clarity, of peace, and of freedom.
-
@ b7338786:fdb5bff3
2024-10-22 09:16:36The Collapse of Self-Worth in the Digital Age
When I was twelve, I used to roller-skate in circles for hours. I was at another new school, the odd man out, bullied by my desk mate. My problems were too complex and modern to explain. So I skated across parking lots, breezeways, and sidewalks, I listened to the vibration of my wheels on brick, I learned the names of flowers, I put deserted paths to use. I decided for myself each curve I took, and by the time I rolled home, I felt lighter. One Saturday, a friend invited me to roller-skate in the park. I can still picture her in green protective knee pads, flying past. I couldn’t catch up, I had no technique. There existed another scale to evaluate roller skating, beyond joy, and as Rollerbladers and cyclists overtook me, it eclipsed my own. Soon after, I stopped skating.
Years ago, I worked in the backroom of a Tower Records. Every few hours, my face-pierced, gunk-haired co-workers would line up by my workstation, waiting to clock in or out. When we typed in our staff number at 8:59 p.m., we were off time, returned to ourselves, free like smoke.
There are no words to describe the opposite sensations of being at-our-job and being not-at-our-job even if we know the feeling of crossing that threshold by heart. But the most essential quality that makes a job a job is that when we are at work, we surrender the power to decide the worth of what we do. At-job is where our labour is appraised by an external meter: the market. At-job, our labour is never a means to itself but a means to money; its value can be expressed only as a number—relative, fluctuating, out of our control. At-job, because an outside eye measures us, the workplace is a place of surveillance. It’s painful to have your sense of worth extracted. For Marx, the poet of economics, when a person’s innate value is replaced with exchange value, it is as if we’ve been reduced to “a mere jelly.”
Not-job, or whatever name you prefer—“quitting time,” “off duty,” “downtime”—is where we restore ourselves from a mere jelly, precisely by using our internal meter to determine the criteria for success or failure. Find the best route home—not the one that optimizes cost per minute but the one that offers time enough to hear an album from start to finish. Plant a window garden, and if the plants are half dead, try again. My brother-in-law found a toy loom in his neighbour’s garbage, and nightly he weaves tiny technicolour rugs. We do these activities for the sake of doing them, and their value can’t be arrived at through an outside, top-down measure. It would be nonsensical to treat them as comparable and rank them from one to five. We can assess them only by privately and carefully attending to what they contain and, on our own, concluding their merit.
And so artmaking—the cultural industries—occupies the middle of an uneasy Venn diagram. First, the value of an artwork is internal—how well does it fulfill the vision that inspired it? Second, a piece of art is its own end. Third, a piece of art is, by definition, rare, one of a kind, nonfungible.
Yet the end point for the working artist is to create an object for sale. Once the art object enters the market, art’s intrinsic value is emptied out, compacted by the market’s logic of ranking, until there’s only relational worth, no interior worth. Two novelists I know publish essays one week apart; in a grim coincidence, each writer recounts their own version of the same traumatic life event. Which essay is better, a friend asks. I explain they’re different; different life circumstances likely shaped separate approaches. Yes, she says, but which one is better?
I grew up a Catholic, a faithful, an anachronism to my friends. I carried my faith until my twenties, when it finally broke. Once I couldn't gain comfort from religion anymore, I got it from writing. Sitting and building stories, side by side with millions of other storytellers who have endeavoured since the dawn of existence to forge meaning even as reality proves endlessly senseless, is the nearest thing to what it felt like back when I was a believer.
I spent my thirties writing a novel and paying the bills as low-paid part-time faculty at three different colleges. I could’ve studied law or learned to code. Instead, I manufactured sentences. Looking back, it baffles me that I had the wherewithal to commit to a project with no guaranteed financial value, as if I was under an enchantment. Working on that novel was like visiting a little town every day for four years, a place so dear and sweet. Then I sold it.
As the publication date advanced, I was awash with extrinsic measures. Only twenty years ago, there was no public, complete data on book sales. Until the introduction of BookScan in the late ’90s, you just had to take an agent’s word for it. “The track record of an author was a contestable variable that was known to some, surmised by others, and always subject to exaggeration in the interests of inflating value,” says John B. Thompson in Merchants of Culture, his ethnography of contemporary publishing.
This is hard to imagine, now that we are inundated with cold, beautiful stats, some publicized by trade publications or broadcast by authors themselves on all socials. How many publishers bid? How big is the print run? How many stops on the tour? How many reviews on Goodreads? How many mentions on Bookstagram, BookTok? How many bloggers on the blog tour? How exponential is the growth in follower count? Preorders? How many printings? How many languages in translation? How many views on the unboxing? How many mentions on most-anticipated lists? I was glued to my numbers like a day trader.
I wanted to write my publicist to ask: Should I be worried my stats aren’t higher? The question blared constantly in my head: Did gambling years I could’ve been earning towards a house pay off? But I never did. I was too embarrassed. I had trained in the religion of art, and to pay mind to the reception of my work was to be a non-believer. During my fine arts degree, we heard again and again that the only gauge for art is your own measure, and when I started teaching writing, I’d preach the same thing. Ignore whatever publications or promotions friends gain; you’re on your own journey. It’s a purportedly anti-capitalist idea, but it repackages the artist’s concern for economic security as petty ego. My feelings—caring at all—broke code. Shame sublimated everything.
And when the reception started to roll in, I’d hear good news, but gratitude lasted moments before I wanted more. A starred review from Publisher’s Weekly, but I wasn’t in “Picks of the Week.” A mention from Entertainment Weekly, but last on a click-through list. Nothing was enough. Why? What had defined my adult existence was my ability to find worth within, to build to an internal schematic, which is what artists do. Now I was a stranger to myself. I tried to fix it with box breathing videos, podcasts, reading about Anna Karenina. My partner and I were trying for another baby, but cycles kept passing, my womb couldn’t grab the egg. A kind nurse at the walk-in said: Sometimes your body is saying the time’s not right. Mine was a bad place.
A few weeks after my book release, my friends and I and our little kids took a weekend vacation. They surprised me with a three-tiered cake matching my book cover, cradled on laps, from Toronto, through a five-hour traffic jam. In all the photos from that trip, I’m staring at my phone. I can hardly remember that summer.
My scale of worth had torn off, like a roof in a hurricane, replaced with an external one. An external scale is a relative scale; so of course, nothing’s enough. There is no top.
Then I was shortlisted for a major prize. It took me on a world tour, listed me alongside authors who are certifiable geniuses. I thought my endless accounting could stop, this had to be enough for me, I could get back to who I was. But I couldn’t. In London, I bought my two-year-old a bath toy, a little boat with a Beefeater. Today at bath time, the boat still gives me a sickly feeling, like it’s from the scene of a trauma. My centre was gone.
One of at-job’s defining qualities is how efficiently output is converted into a number. In 1994, Philip Agre described this as the “capture model,” or “the deliberate reorganization of industrial work activities to allow computers to track them in real time.” Gregory Sholette, the author of Dark Matter: Art and Politics in the Age of Enterprise Culture, describes how workers in a Pennsylvania factory spent their break covering a wall of the plant with “newspaper clippings, snapshots, spent soda cans, industrial debris, trashed food containers and similar bits and pieces.” They called it “Swampwall.” It reminds me of the sculpture on a high shelf in the back of a diner where I worked, composed of unusually shaped potatoes. Its form changed with each new tuber contributed by the cook on prep shift.
Such spontaneous projects are signs of life: physical evidence of the liberating fact that not all time at work can be measured or processed into productivity. Swampwall was inutile: a means to itself. It was allowed to flourish until the company was bought out by a global corporation, at which point the massive collaborative mural was “expunged.”
Thirty years after Agre coined the capture model, workforce management technology can track every moment at work as a production target. Amazon’s Units Per Hour score, Uber’s and Lyft’s (constantly shrivelling) base fares, and Domino’s Pizza Tracker have made it possible to time all time, even in the break room or toilet stall. These are extreme examples, but they’re echoed across the work world, with the datafication of parts of performance that used to be too baggy or obscure to crunch and so were ours to keep. “Wellness” apps provided as health benefits by corporate management that track fob swipes for office workers; case management software that counts advice by the piece for legal workers; shares, hover rate, and time on site that measure media workers; leaderboards for tech employees, ranking who worked longest.
There must exist professions that are free from capture, but I’m hard pressed to find them. Even non-remote jobs, where work cannot pursue the worker home, are dogged by digital tracking: a farmer says Instagram Story views directly correlate to farm subscriptions, a server tells me her manager won’t give her the Saturday-night money shift until she has more followers. Even religious guidance can be quantified by view counts for online church services, Yelp for spirituality. One priest told the Guardian , “you have this thing about how many followers have you . . . it hits at your gut, at your heart.”
But we know all this. What we hardly talk about is how we’ve reorganized not just industrial activity but any activity to be capturable by computer, a radical expansion of what can be mined. Friendship is ground zero for the metrics of the inner world, the first unquantifiable shorn into data points: Friendster testimonials, the MySpace Top 8, friending. Likewise, the search for romance has been refigured by dating apps that sell paid-for rankings and paid access to “quality” matches. Or, if there’s an off-duty pursuit you love—giving tarot readings, polishing beach rocks—it’s a great compliment to say: “You should do that for money.” Join the passion economy, give the market final say on the value of your delights. Even engaging with art—say, encountering some uncanny reflection of yourself in a novel, or having a transformative epiphany from listening, on repeat, to the way that singer’s voice breaks over the bridge—can be spat out as a figure, on Goodreads or your Spotify year in review.
And those ascetics who disavow all socials? They are still caught in the network. Acts of pure leisure—photographing a sidewalk cat with a camera app or watching a video on how to make a curry—are transmuted into data to grade how well the app or the creators’ deliverables are delivering. If we’re not being tallied, we affect the tally of others. We are all data workers.
Twenty years ago, anti-capitalist activists campaigned against ads posted in public bathroom stalls: too invasive, there needs to be a limit to capital’s reach. Now, ads by the toilet are quaint. Clocking out is obsolete when, in the deep quiet of our minds, we lack the pay grade to determine worth.
The internet is designed to stop us from ever switching it off. It moves at the speed of light, with constantly changing metrics, fuelled by “‘ludic loops’ or repeated cycles of uncertainty, anticipation and feedback”—in other words, it works exactly like a Jackpot 6000 slot machine. (On a basic level, social media apps like Instagram operate like phone games. They’ve replaced classics like Snake or Candy Crush, except the game is your sense of self.)
The effect of gamification on artmaking has been dramatic. In Rebecca Jennings’s Vox long read on the necessity of authorly self-promotion, she interviews William Deresiewicz, whose book The Death of the Artist breaks down the harsh conditions for artists seeking an income in the digital economy. Deresiewicz used to think “selling out”—using the most sacred parts of your life and values to shill for a brand—was “evil.” Yet this economy has made it so there’s “no choice” if you want a living. The very concept of selling out, he says, “has disappeared.” A few years ago, much was made of the fact that the novelist Sally Rooney had no Twitter account—this must explain her prolific output. But the logic is back to front: it’s only top-selling authors who can afford to forgo social media. Call it Deactivation Privilege.
It’s a privilege few of us can afford, if it’s the algorithm we need to impress rather than book reviewers of old. In a nightmarish dispatch in Esquire on how hard it is for authors to find readers, Kate Dwyer argues that all authors must function like influencers now, which means a fire sale on your “private” life. As internet theorist Kyle Chayka puts it to Dwyer: “Influencers get attention by exposing parts of their life that have nothing to do with the production of culture.”
The self is the work, just ask Flaubert. But data collection’s ability to reduce the self to a figure—batted about by the fluctuations of its stock—is newly unbearable. There’s no way around it, and this self being sold alongside the work can be as painful for a writer of autofiction as it is for me, a writer of speculative fiction who invented an imaginary world.
I tell you all this not because I think we should all be very concerned about artists, but because what happens to artists is happening to all of us. As data collection technology hollows out our inner worlds, all of us experience the working artist's plight: our lot is to numericize and monetize the most private and personal parts of our experience.
Certainly, smartphones could be too much technology for children, as Jonathan Haidt argues , and definitely, as Tim Wu says , attention is a commodity, but these ascendant theories of tech talk around the fact that something else deep inside, innermost, is being harvested too: our self-worth, or, rather, worthing.
We are not giving away our value, as a puritanical grandparent might scold; we are giving away our facility to value. We’ve been cored like apples, a dependency created, hooked on the public internet to tell us the worth.
Every notification ping holds the possibility we have merit. When we scroll, what are we looking for?
When my eldest child was in kindergarten, she loved to make art, but she detested the assignments that tried to make math fun by asking kids to draw. If I sat her down to complete one, she would stare rebelliously at her pencil or a strand of her hair rather than submit. Then one day, while drawing a group of five ants and a group of eight ants, my kindergartener started to sketch fast. She drew ants with bulbous limbs growing out of their bodies, like chains of sausages. “Bombombom!” she cried, flapping her arms up and down. “These are their muscles.” She continued to draw and mime pumping iron, giggling to herself, delighted to have planted something in her homework that couldn’t be accounted for in the metric of correct or incorrect. She had taken drawing back.
The ludic loop of the internet has automated our inner worlds: we don’t have to choose what we like, or even if we like it; the algorithm chooses for us. Take Shein, the fast fashion leviathan. While other fast fashion brands wait for high-end houses to produce designs they can replicate cheaply, Shein has completely eclipsed the runway, using AI to trawl social media for cues on what to produce next. Shein’s site operates like a casino game, using “dark patterns”—a countdown clock puts a timer on an offer, pop-ups say there’s only one item left in stock, and the scroll of outfits never ends—so you buy now, ask if you want it later.
Shein’s model is dystopic: countless reports detail how it puts its workers in obscene poverty in order to sell a reprieve to consumers who are also moneyless—a saturated plush world lasting as long as the seams in one of their dresses. Yet the day to day of Shein’s target shopper is so bleak, we strain our moral character to cosplay a life of plenty.
Automation isn’t forced upon us: we crave it, oblivion, thanks to the tech itself. As the ascendant apparatus of the labour market, it’s squeezed already dire working conditions to a suffocation point, until all we desire is the sweet fugue of scroll, our decision maker set to “off.”
After my novel came out, whenever I met an author, I would ask, with increasing frenzy, how they managed the grisly experience of work going to market. I was comforted and horrified when everyone agreed it could be dispossessing. Then they all said the same thing: “I kept writing and I felt better.” That was the advice: keep writing.
The market is the only mechanism for a piece of art to reach a pair of loving eyes. Even at a museum or library, the market had a hand in homing the item there. I didn’t understand that seeking a reader for my story meant handing over my work in the same way I sold my car on Craigslist: it’s gone from me, fully, bodily, finally. Or, as Marx says, alienated. I hated that advice to keep writing, because if I wrote another book, I’d have to go through the cycle again: slap my self on the scale like a pair of pork chops again. Now, I realize the authors I met meant something else. Yes, sell this part of your inner life but then go back in there and reinflate what’s been emptied. It’s a renewable resource.
When I grasp this, all of it becomes tolerable. It’s like letting out a line, then braiding more line. I can manage, because there’ll always be more line.
I will try to sell this essay to a publication, and if successful, the publication will try to sell it to readers. If you are reading this, it’s a commodity now, fluctuating and fungible, like so much digital dust.
Samia Madwar
Senior Editor, The Walrus -
@ b7338786:fdb5bff3
2024-10-22 09:13:50Kameo 🎬
What is Kameo
Kameo is a lightweight Rust library for building fault-tolerant, distributed, and asynchronous actors. It allows seamless communication between actors across nodes, providing scalability, backpressure, and panic recovery for robust distributed systems.
Feature Highlights
- Async Rust: Each actor runs as a separate Tokio task, making concurrency easy and efficient.
- Supervision: Link actors to create a fault-tolerant, self-healing actor hierarchy.
- Remote Messaging: Send messages to actors on different nodes seamlessly.
- Panic Safety: Panics are gracefully handled, allowing the system to recover and continue running.
- Backpressure Management: Supports both bounded and unbounded mpsc messaging for handling load effectively.
Use Cases
Kameo is versatile and can be applied in various domains, such as:
- Distributed Microservices: Build resilient microservices that communicate reliably over a distributed network.
- Real-Time Systems: Ideal for building chat systems, multiplayer games, or real-time monitoring dashboards where low-latency communication is essential.
- IoT Devices: Deploy lightweight actors on low-resource IoT devices for seamless networked communication.
Getting Started
Prerequisites
- Rust installed (check rustup for installation instructions)
- A basic understanding of asynchronous programming in Rust
Installation
Add `kameo` as a dependency in your `Cargo.toml` file:

```shell
cargo add kameo
```
Example: Defining an Actor
```rust
use kameo::Actor;
use kameo::message::{Context, Message};
use kameo::request::MessageSend;

// Define an actor that will keep a count
#[derive(Actor)]
struct Counter {
    count: i64,
}

// Define the message for incrementing the count
struct Inc {
    amount: i64,
}

// Implement how the actor will handle incoming messages
impl Message<Inc> for Counter {
    type Reply = i64;

    async fn handle(&mut self, msg: Inc, _ctx: Context<'_, Self, Self::Reply>) -> Self::Reply {
        self.count += msg.amount;
        self.count
    }
}
```
Spawn and message the actor.
```rust
// Spawn the actor and get a reference to it
let actor_ref = kameo::spawn(Counter { count: 0 });

// Use the actor reference to send a message and receive a reply
let count = actor_ref.ask(Inc { amount: 42 }).send().await?;
assert_eq!(count, 42);
```
Additional Resources
Contributing
Contributions are welcome! Whether you are a beginner or an experienced Rust developer, there are many ways to contribute:
- Report issues or bugs
- Improve documentation
- Submit pull requests for new features or bug fixes
- Suggest new ideas in discussions
Join our community on Discord to connect with fellow contributors!
-
@ b7338786:fdb5bff3
2024-10-22 09:11:48Hello, Perceptron: An introduction to artificial neural networks
Generative AI tools like ChatGPT and Midjourney are able to replicate (and often exceed) human-like performance on tasks like taking exams, generating text and making art.
Even to seasoned programmers, their abilities can seem magical.
But, obviously, there is no magic.
These things are “just” artificial neural networks – circuits inspired by the architecture of biological brains.
An AI-imagined image of a neural network (Midjourney)
In fact, much like real brains, when broken down to their building blocks, these systems can seem “impossibly simple” relative to what they achieve.
(Modern computing is also magical in that sense, in that all of what computers are able to do reduces to simple logical building blocks – gates that calculate basic operations with truth values, such as AND, OR and NOT.)
The purpose of this article is to give programmers without much exposure to machine learning an understanding of the key building block powering generative AI: the artificial neuron.
Toward that end, this article has three goals:
- to implement a perceptron – the simplest artificial neuron;
- to train perceptrons how to mimic AND, OR and NOT; and
- to describe the leap to full-fledged neural networks.
Outline
History makes for good pedagogy with neural networks.
The simplest possible artificial neural network contains just one very simple artificial neuron – Frank Rosenblatt’s original perceptron.
(Rosenblatt’s perceptron is in turn based on McCulloch and Pitt’s even-more-simplified artificial neuron, but we’ll skip over that, since the perceptron permits a simple training algorithm.)
We’ll create an artificial neural network that consists of a single perceptron.
We’ll demonstrate that a single perceptron can “learn” basic logical functions such as AND, OR and NOT.
As a result, neural networks inherit the computational power of digital logic circuits: suddenly, anything you can do with a logical circuit, you could also do with a neural network.
Once we’ve defined the perceptron, we’ll recreate the algorithm used to train it, a sort of “Hello World” exercise for machine learning.
This algorithm will consume examples of inputs and outputs to the perceptron, and it will figure out how to reconfigure the perceptron to mimic those examples.
The limits of this single-perceptron approach show up when trying to learn the Boolean function XOR.
This limitation in turn motivates the development of full-fledged artificial neural networks.
And, that development has three key conceptual parts:
- arranging multiple perceptrons in layers to improve expressiveness;
- realizing that the simple perceptron learning algorithm is now problematic; and then
- graduating to full artificial neurons to “simplify” learning.
The full technical treatment of these developments is reserved for future articles, but you will leave this article with a technical understanding of the fundamental computational abstraction driving generative AI.
More resources
If, after reading this article, you’re looking for a more comprehensive treatment, I recommend Artificial Intelligence: A Modern Approach:
This has been the default text since I was an undergraduate, yet it’s received continuous updates throughout the years, which means it covers the full breadth of classical and modern approaches to AI.
What’s a (biological) neuron?
Before we get to perceptrons and artificial neurons, it’s worth acknowledging biological neurons as their inspiration.
Biological neurons are cells that serve as the basic building block of information processing in the brain and in the nervous system more broadly.
From a computational perspective, a neuron is a transducer: a neuron transforms input signals from upstream neurons into an output signal for downstream neurons.
More specifically, a neuron tries to determine whether or not to activate its output signal (to “fire”) based on the upstream signals.
Depending on where incoming signals meet up with the neuron, some are pro-activation (excitatory) and some are anti-activation (inhibitory).
So, without going into too much detail:
- A neuron receives input signals from upstream neurons.
- The neuron combines input pro- and anti-activation signals together.
- When net “pro-activation” signal exceeds a threshold, the neuron “fires.”
- Downstream neurons receive the signal, and they repeat this process.
What’s a perceptron?
The perceptron, first introduced by Frank Rosenblatt in 1958, is the simplest form of an artificial neuron.
Much like a biological neuron, a perceptron acts like a computational transducer combining multiple inputs to produce a single output.
In the context of modern machine learning, a perceptron is a classifier.
What’s a classifier?
A classifier categorizes an input data point into one of several predefined classes.
For example, a classifier could categorize an email as `spam` or `not_spam`.

Or, a classifier might categorize an image as `dog`, `cat`, `bear` or `other`.
.If there are only two categories, it’s a binary classifier.
If there are more than two categories, it’s a multi-class classifier.
A single perceptron by itself is a binary classifier, and the raw output of a perceptron is 0 or 1.
Of course, you could write a classifier by hand.
Here’s a hand-written classifier that takes a single number and “classifies” it as nonnegative (returning 1) or negative (returning 0):
```python
def is_nonnegative(n):
    if n >= 0:
        return 1
    else:
        return 0
```
Machine learning often boils down to using lots of example input-output pairs to “train” these classifiers, so that they don’t have to be programmed by hand.
For this very simple classifier, here’s a table of inputs and outputs:

```
Input:  5,    Classification: 1
Input:  10,   Classification: 1
Input:  2.5,  Classification: 1
Input:  0.01, Classification: 1
Input:  0,    Classification: 1
Input: -3,    Classification: 0
Input: -7.8,  Classification: 0
```
Whether a given training algorithm can turn this into the “right” classifier – a close enough approximation of `is_nonnegative` – is a topic for a longer discussion.

But, that’s the idea – don’t code; train on data.
What’s a binary linear classifier?
More specifically, a perceptron can be thought of as a “binary linear classifier.”
The term linear has several related meanings in this context:
- A linear classifier is a type of classifier that makes its predictions based on a linear combination of the input features.
- And, for a linear classifier, the boundary separating the classes must be “linear” – it must be representable by a point (in one dimension), a straight line (in two dimensions), a plane (in three dimensions), or a hyperplane (in higher dimensions).
(All of this will make more sense once actual inputs are used.)
So, operationally, a perceptron treats an input as a vector of features (each represented by a number) and computes a weighted sum, before applying a step function to determine the output.
Because a perceptron classifies based on linear boundaries, classes that are not “linearly separable” can’t be modeled using just one perceptron.
Overcoming this limitation later motivates the development of full artificial neural networks.
The perceptron’s simplicity makes it an excellent starting point for understanding the mechanics of artificial neural networks.
The anatomy of a perceptron
An individual perceptron is defined by three elements:
- the number of inputs it takes, n;
- a list of n weights, one for each input; and
- a threshold to determine whether it should fire based on the input.
The operation of a perceptron has two phases:
- multiplying the inputs by the weights and summing the results; and
- checking for activation:
  - If the sum is greater than or equal to a threshold, the perceptron outputs 1.
  - If the sum is less than the threshold, the perceptron outputs 0.
It’s straightforward to implement this in Python:
```python
def perceptron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0
```
And, then, we could re-implement `is_nonnegative` as a binary linear classifier:

```python
def is_nonnegative(x):
    return perceptron([x], [1], 0)
```
Using this definition, we can also get a perceptron to simulate logical NOT:
```python
def not_function(x):
    weight = -1
    threshold = -0.5
    return perceptron([x], [weight], threshold)

print("NOT(0):", not_function(0))  # Outputs: 1
print("NOT(1):", not_function(1))  # Outputs: 0
```
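To see how far hand-tuning can go, here is one hand-picked configuration that behaves like AND (an illustrative choice not in the original; many other weight and threshold combinations work just as well):

```python
def and_by_hand(a, b):
    # Both inputs must be 1 for the weighted sum (a + b) to reach 1.5
    return perceptron([a, b], [1, 1], 1.5)

print("AND(0, 0):", and_by_hand(0, 0))  # Outputs: 0
print("AND(0, 1):", and_by_hand(0, 1))  # Outputs: 0
print("AND(1, 0):", and_by_hand(1, 0))  # Outputs: 0
print("AND(1, 1):", and_by_hand(1, 1))  # Outputs: 1
```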
Learning: From examples to code
Tweaking weights by hand is an inefficient way to program perceptrons.
So, suppose now that instead of picking weights and thresholds by hand, we want to find the weights and threshold that correctly classify some example input-output data automatically.
That is, suppose we want to “train” a perceptron based on examples of inputs and desired outputs.
In particular, let’s take a look at the truth table for AND encoded as a list of input-output pairs:
```python
and_data = [
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]
```
Can we “train” a perceptron to act like this function?
Because the input points that output `0` are linearly separable from the input points that output `1`, yes, we can!

Graphically, we can draw a line that separates (0,0), (0,1) and (1,0) from (1,1).
To find such a line automatically, we’ll implement the perceptron learning algorithm.
The perceptron learning algorithm
The perceptron learning algorithm is an iterative process that adjusts the weights and threshold of the perceptron based on how close it’s getting to the training data.
Here’s a high-level overview of the perceptron learning algorithm:
1. Initialize the weights and threshold with random values.
2. For each input-output pair in the training data:
   - Compute the perceptron’s output using the current weights and threshold.
   - Update the weights and threshold based on the difference between the desired output and the perceptron’s output – the error.
3. Repeat steps 2 and 3 until the perceptron classifies all input-output pairs correctly, or a specified number of iterations have been completed.
The update rule for the weights and threshold is simple:
- If the perceptron’s output is correct, do not change the weights or threshold.
- If the perceptron’s output is too low, increase the weights and decrease the threshold.
- If the perceptron’s output is too high, decrease the weights and increase the threshold.
To update the weights and threshold, we use a learning rate, which is a small positive constant that determines the step size of the updates.
A smaller learning rate results in smaller updates and slower convergence, while a larger learning rate results in larger updates and potentially faster convergence, but also the risk of overshooting the optimal values.
For the sake of this implementation, let’s assume that the training data comes as a list of pairs: each pair is the input (a tuple of numbers) paired with its desired output (0 or 1).
Now, let’s implement the perceptron learning algorithm in Python:
```python
import random

def train_perceptron(data, learning_rate=0.1, max_iter=1000):
    # max_iter is the maximum number of training cycles to attempt
    # until stopping, in case training never converges.

    # Find the number of inputs to the perceptron by looking at
    # the size of the first input tuple in the training data:
    first_pair = data[0]
    num_inputs = len(first_pair[0])

    # Initialize the vector of weights and the threshold:
    weights = [random.random() for _ in range(num_inputs)]
    threshold = random.random()

    # Try at most max_iter cycles of training:
    for _ in range(max_iter):
        # Track how many inputs were wrong this time:
        num_errors = 0

        # Loop over all the training examples:
        for inputs, desired_output in data:
            output = perceptron(inputs, weights, threshold)
            error = desired_output - output
            if error != 0:
                num_errors += 1
                for i in range(num_inputs):
                    weights[i] += learning_rate * error * inputs[i]
                threshold -= learning_rate * error

        if num_errors == 0:
            break

    return weights, threshold
```
Now, let’s train the perceptron on the `and_data`:

```python
and_weights, and_threshold = train_perceptron(and_data)
print("Weights:", and_weights)
print("Threshold:", and_threshold)
```
This will output weights and threshold values that allow the perceptron to behave like the AND function.
The values may not be unique, as there could be multiple sets of weights and threshold values that result in the same classification.
So, if you train the perceptron twice, you may get different results.
To verify that the trained perceptron works as expected, we can test it on all possible inputs:
```python
print(perceptron((0, 0), and_weights, and_threshold))  # prints 0
print(perceptron((0, 1), and_weights, and_threshold))  # prints 0
print(perceptron((1, 0), and_weights, and_threshold))  # prints 0
print(perceptron((1, 1), and_weights, and_threshold))  # prints 1
```
Learning the OR Function
Now that we’ve successfully trained the perceptron for the AND function, let’s do the same for the OR function. We’ll start by encoding the truth table for OR as input-output pairs:
```python
or_data = [
    ((0, 0), 0),
    ((0, 1), 1),
    ((1, 0), 1),
    ((1, 1), 1),
]
```
Just like with the AND function, the data points for the OR function are also linearly separable, which means that a single perceptron should be able to learn this function.
Let’s train the perceptron on the `or_data`:

```python
or_weights, or_threshold = train_perceptron(or_data)
print("Weights:", or_weights)
print("Threshold:", or_threshold)
```
This will output weights and threshold values that allow the perceptron to behave like the OR function. As before, the values may not be unique, and there could be multiple sets of weights and threshold values that result in the same classification.
Once again, we can test it on all possible inputs:
```python
print(perceptron((0, 0), or_weights, or_threshold))  # prints 0
print(perceptron((0, 1), or_weights, or_threshold))  # prints 1
print(perceptron((1, 0), or_weights, or_threshold))  # prints 1
print(perceptron((1, 1), or_weights, or_threshold))  # prints 1
```
Limits of a single perceptron: XOR
Having trained the perceptron for the AND and OR functions, let’s attempt to train it for the XOR function.
The XOR function returns true if exactly one of its inputs is true, and false otherwise. First, we’ll encode the truth table for XOR as input-output pairs:
```python
xor_data = [
    ((0, 0), 0),
    ((0, 1), 1),
    ((1, 0), 1),
    ((1, 1), 0),
]
```
Now let’s try to train the perceptron on the `xor_data`:

```python
xor_weights, xor_threshold = train_perceptron(xor_data, max_iter=10000)
print("Weights:", xor_weights)
print("Threshold:", xor_threshold)
```
For my run, I got:
```
Weights: [-0.19425288088361953, -0.07246046028471387]
Threshold: -0.09448636811679267
```
Despite increasing the maximum number of iterations to 10,000, we will find that the perceptron is unable to learn the XOR function:
```python
print(perceptron((0, 0), xor_weights, xor_threshold))  # prints 0
print(perceptron((0, 1), xor_weights, xor_threshold))  # prints 1
print(perceptron((1, 0), xor_weights, xor_threshold))  # prints 1
print(perceptron((1, 1), xor_weights, xor_threshold))  # prints 1!!
```
The reason for this failure is that the XOR function is not linearly separable.
Visually, this means that there is no straight line that can separate the points (0,1) and (1,0) from (0,0) and (1,1).
Try it yourself: draw a square, and then see if you can draw a single line that separates the upper left and lower right corners away from the other two.
In other words, because perceptrons are binary linear classifiers, a single perceptron is incapable of learning the XOR function.
From a perceptron to full artificial neural nets
In the previous sections, we demonstrated how a single perceptron could learn basic Boolean functions like AND, OR and NOT.
However, we also showed that a single perceptron is limited when it comes to non-linearly separable functions, like the XOR function.
To overcome these limitations and tackle more complex problems, researchers developed modern artificial neural networks (ANNs).
In this section, we will briefly discuss the key changes to the perceptron model and the learning algorithm that enable the transition to ANNs.
Multilayer Perceptron Networks
The first major change is the introduction of multiple layers of perceptrons, also known as Multilayer Perceptron (MLP) networks. MLP networks consist of an input layer, one or more hidden layers, and an output layer.
Each layer contains multiple perceptrons (also referred to as neurons or nodes). The input layer receives the input data, and the output layer produces the final result or classification.
In an MLP network, the output of a neuron in one layer becomes the input for the neurons in the next layer. The layers between the input and output layers are called hidden layers, as they do not directly interact with the input data or the final output.
By adding hidden layers, MLP networks can model more complex, non-linear relationships between inputs and outputs, effectively overcoming the limitations of single perceptrons.
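As a concrete illustration of this idea (a sketch not in the original article), the perceptrons trained above can be wired into a tiny two-layer network that computes XOR as AND(OR(a, b), NOT(AND(a, b))), assuming the AND and OR training runs above converged:

```python
def xor(a, b):
    # First layer: compute OR and NAND of the raw inputs
    or_out = perceptron((a, b), or_weights, or_threshold)
    nand_out = not_function(perceptron((a, b), and_weights, and_threshold))
    # Second layer: AND the first layer's outputs together
    return perceptron((or_out, nand_out), and_weights, and_threshold)

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a},{b}) =", xor(a, b))  # prints 0, 1, 1, 0
```

The second layer never sees the raw inputs, only the first layer’s outputs, and in that transformed space the classes become linearly separable.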
Activation Functions
While the original perceptron model used a simple step function as the activation function, modern ANNs use different activation functions that allow for better learning capabilities and improved modeling of complex relationships.
Some popular activation functions include the sigmoid function, hyperbolic tangent (tanh) function, and Rectified Linear Unit (ReLU) function.
These activation functions introduce non-linearity to the neural network, which enables the network to learn and approximate non-linear functions.
In addition, they provide “differentiability” (in the sense of calculus), a critical property for training neural networks using gradient-based optimization algorithms.
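For reference, here is what these activation functions look like in code (a brief sketch to accompany the descriptions above):

```python
import math

def sigmoid(x):
    # Smoothly squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Smoothly squashes any input into (-1, 1)
    return math.tanh(x)

def relu(x):
    # Passes positive inputs through unchanged; clips negatives to 0
    return max(0.0, x)
```

Unlike the step function, each of these has a well-defined derivative almost everywhere, which is what gradient-based training relies on.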
Backpropagation and gradient descent
The perceptron learning algorithm is insufficient for training MLP networks, as it is a simple update rule designed for single-layer networks.
Instead, modern ANNs use the backpropagation algorithm in conjunction with gradient descent or its variants for training.
Backpropagation is an efficient method for computing the gradient of the error with respect to each weight in the network. The gradient indicates the direction and magnitude of the change in the weights needed to minimize the error.
Backpropagation works by calculating the error at the output layer and then propagating the error backward through the network, updating the weights in each layer along the way.
Gradient descent is an optimization algorithm that uses the computed gradients to update the weights and biases of the network. It adjusts the weights and biases iteratively, taking small steps in the direction of the negative gradient, aiming to minimize the error function.
Variants of gradient descent, like stochastic gradient descent (SGD) and mini-batch gradient descent, improve the convergence speed and stability of the learning process.
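To make the idea concrete, here is a minimal sketch (not from the original) of gradient descent on a single parameter, assuming the gradient is available; in a real network, backpropagation is what supplies it:

```python
def gradient_descent_step(weight, gradient, learning_rate=0.1):
    # Take a small step against the gradient to reduce the error
    return weight - learning_rate * gradient

# Example: minimize error(w) = (w - 3)**2, whose gradient is 2 * (w - 3)
w = 0.0
for _ in range(100):
    w = gradient_descent_step(w, 2 * (w - 3))

print(w)  # converges toward 3.0
```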
Onward
In short, the transition from single perceptrons to full artificial neural networks involves three key changes:
- Arranging multiple perceptrons in layers to improve expressiveness and model non-linear relationships.
- Introducing different activation functions that provide non-linearity and differentiability.
- Implementing the backpropagation algorithm and gradient descent for efficient and effective learning in multilayer networks.
With these changes, ANNs become capable of learning complex, non-linear functions and solving a wide range of problems, ultimately leading to the development of the powerful generative AI models we see today.
Future articles will delve deeper into each of these topics, exploring their theoretical foundations and practical implementations.
-
@ b7338786:fdb5bff3
2024-10-22 09:10:07Apple releases Depth Pro, an AI model that rewrites the rules of 3D vision
Apple’s AI research team has developed a new model that could significantly advance how machines perceive depth, potentially transforming industries ranging from augmented reality to autonomous vehicles.
The system, called Depth Pro , is able to generate detailed 3D depth maps from single 2D images in a fraction of a second—without relying on the camera data traditionally needed to make such predictions.
The technology, detailed in a research paper titled “Depth Pro: Sharp Monocular Metric Depth in Less Than a Second,” is a major leap forward in the field of monocular depth estimation, a process that uses just one image to infer depth.
This could have far-reaching applications across sectors where real-time spatial awareness is key. The model’s creators, led by Aleksei Bochkovskii and Vladlen Koltun, describe Depth Pro as one of the fastest and most accurate systems of its kind.
A comparison of depth maps from Apple’s Depth Pro, Marigold, Depth Anything v2, and Metric3D v2. Depth Pro excels in capturing fine details like fur and birdcage wires, producing sharp, high-resolution depth maps in just 0.3 seconds, outperforming other models in accuracy and detail. (credit: arxiv.org)
Speed and precision, without the metadata
Monocular depth estimation has long been a challenging task, requiring either multiple images or metadata like focal lengths to accurately gauge depth.
But Depth Pro bypasses these requirements, producing high-resolution depth maps in just 0.3 seconds on a standard GPU. The model can create 2.25-megapixel maps with exceptional sharpness, capturing even minute details like hair and vegetation that are often overlooked by other methods.
“These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction,” the researchers explain in their paper. This architecture allows the model to process both the overall context of an image and its finer details simultaneously—an enormous leap from slower, less precise models that came before it.
A comparison of depth maps from Apple’s Depth Pro, Depth Anything v2, Marigold, and Metric3D v2. Depth Pro excels in capturing fine details like the deer’s fur, windmill blades, and zebra’s stripes, delivering sharp, high-resolution depth maps in 0.3 seconds. (credit: arxiv.org)
Metric depth, zero-shot learning
What truly sets Depth Pro apart is its ability to estimate both relative and absolute depth, a capability called “metric depth.”
This means that the model can provide real-world measurements, which is essential for applications like augmented reality (AR), where virtual objects need to be placed in precise locations within physical spaces.
And Depth Pro doesn’t require extensive training on domain-specific datasets to make accurate predictions—a feature known as “zero-shot learning.” This makes the model highly versatile. It can be applied to a wide range of images, without the need for the camera-specific data usually required in depth estimation models.
“Depth Pro produces metric depth maps with absolute scale on arbitrary images ‘in the wild’ without requiring metadata such as camera intrinsics,” the authors explain. This flexibility opens up a world of possibilities, from enhancing AR experiences to improving autonomous vehicles’ ability to detect and navigate obstacles.
For those curious to experience Depth Pro firsthand, a live demo is available on the Hugging Face platform.
A comparison of depth estimation models across multiple datasets. Apple’s Depth Pro ranks highest overall with an average rank of 2.5, outperforming models like Depth Anything v2 and Metric3D in accuracy across diverse scenarios. (credit: arxiv.org)
Real-world applications: From e-commerce to autonomous vehicles
This versatility has significant implications for various industries. In e-commerce, for example, Depth Pro could allow consumers to see how furniture fits in their home by simply pointing their phone’s camera at the room. In the automotive industry, the ability to generate real-time, high-resolution depth maps from a single camera could improve how self-driving cars perceive their environment, boosting navigation and safety.
“The method should ideally produce metric depth maps in this zero-shot regime to accurately reproduce object shapes, scene layouts, and absolute scales,” the researchers write, emphasizing the model’s potential to reduce the time and cost associated with training more conventional AI models.
Tackling the challenges of depth estimation
One of the toughest challenges in depth estimation is handling what are known as “flying pixels”—pixels that appear to float in mid-air due to errors in depth mapping. Depth Pro tackles this issue head-on, making it particularly effective for applications like 3D reconstruction and virtual environments, where accuracy is paramount.
Additionally, Depth Pro excels in boundary tracing, outperforming previous models in sharply delineating objects and their edges. The researchers claim it surpasses other systems “by a multiplicative factor in boundary accuracy,” which is key for applications that require precise object segmentation, such as image matting and medical imaging.
Open-source and ready to scale
In a move that could accelerate its adoption, Apple has made Depth Pro open-source. The code, along with pre-trained model weights, is available on GitHub, allowing developers and researchers to experiment with and further refine the technology. The repository includes everything from the model’s architecture to pretrained checkpoints, making it easy for others to build on Apple’s work.
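As a rough sketch of what using the model might look like, based on the usage pattern documented in that repository (the function names and return keys here are assumptions drawn from its README, not a verified API):

```python
import depth_pro  # package from the ml-depth-pro repository (assumed import name)

# Load the pretrained model and its preprocessing transform
model, transform = depth_pro.create_model_and_transforms()
model.eval()

# Load an RGB image; the loader also returns the focal length in pixels
# ("example.jpg" is a placeholder path)
image, _, f_px = depth_pro.load_rgb("example.jpg")
image = transform(image)

# Run inference: the prediction includes a metric depth map in meters
prediction = model.infer(image, f_px=f_px)
depth_meters = prediction["depth"]
```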
The research team is also encouraging further exploration of Depth Pro’s potential in fields like robotics, manufacturing, and healthcare. “We release code and weights at https://github.com/apple/ml-depth-pro ,” the authors write, signaling this as just the beginning for the model.
What’s next for AI depth perception
As artificial intelligence continues to push the boundaries of what’s possible, Depth Pro sets a new standard in speed and accuracy for monocular depth estimation. Its ability to generate high-quality, real-time depth maps from a single image could have wide-ranging effects across industries that rely on spatial awareness.
In a world where AI is increasingly central to decision-making and product development, Depth Pro exemplifies how cutting-edge research can translate into practical, real-world solutions. Whether it’s improving how machines perceive their surroundings or enhancing consumer experiences, the potential uses for Depth Pro are broad and varied.
As the researchers conclude, “Depth Pro dramatically outperforms all prior work in sharp delineation of object boundaries, including fine structures such as hair, fur, and vegetation.” With its open-source release, Depth Pro could soon become integral to industries ranging from autonomous driving to augmented reality—transforming how machines and people interact with 3D environments.
-
@ b7338786:fdb5bff3
2024-10-22 09:08:47Free-form Floor Plan Design using Differentiable Voronoi Diagram
Xuanyu Wu, Kenji Tojo, Nobuyuki Umetani, "Free-form Floor Plan Design using Differentiable Voronoi Diagram," In Proceedings of Pacific Graphics 2024
Abstract
Designing floor plans is difficult because various constraints must be satisfied by the layouts of the internal walls. This paper presents a novel shape representation and optimization method for designing floor plans based on the Voronoi diagrams. Our Voronoi diagram implicitly specifies the shape of the room using the distance from the Voronoi sites, thus facilitating the topological changes in the wall layout by moving these sites. Since the differentiation of the explicit wall representation is readily available, our method can incorporate various constraints, such as room areas and room connectivity, into the optimization. We demonstrate that our method can generate various floor plans while allowing users to interactively change the constraints.
How to run
The demos are written in `Rust`. If you don't have Rust on your computer, please install the Rust development environment. Here is the list of commands that generate GIF animations of convergence.

The command `cargo run --example 0_shapeA --release` results in the following animations (left: random seed = 0, right: random seed = 1).

The command `cargo run --example 1_shapeB --release` results in the following animations (left: random seed = 0, right: random seed = 1).

The command `cargo run --example 2_shapeC --release` results in the following animations (left: random seed = 4, right: random seed = 7).

The command `cargo run --example 3_duck --release` results in the following animations (left: random seed = 0, right: random seed = 5).

-
@ b7338786:fdb5bff3
2024-10-22 09:05:06Solar power from space?
Like nuclear fusion, the idea of space-based solar power has always seemed like a futuristic technology with an actual deployment into communities ever remaining a couple of decades away.
The concept of harvesting solar power continuously from large satellites in space—where there are no nights, no clouds, and no atmosphere to interfere with the collection of photons—is fairly simple. Large solar arrays in geostationary orbit collect solar energy and beam it back to Earth via microwaves as a continuous source of clean energy.
However, implementing this technology is not so simple. In recent years, in search of long-term power solutions and concerned about climate change, the European Space Agency has been studying space-based solar power. Some initial studies found that a plan to meet one-third of Europe's energy needs would require massive amounts of infrastructure and cost hundreds of billions of dollars. At best, such a system of very large satellites in geostationary space might come online by the middle of this century.
In short, the plan would require massive up-front costs, with no guarantee that it all would work out in the end.
A physicist with a plan
So when a physicist with a background in financial services told me he wanted to reinvent the idea of space-based solar power beginning with a relatively small investment—likely a bit north of $10 million—it's fair to say I was a bit skeptical.
That physicist is Baiju Bhatt, who co-founded the electronic trading platform Robinhood in 2013. Bhatt served as co-CEO of the company until November 2020, when he became chief creative officer. Robinhood has, at times, courted controversy, particularly during the GameStop frenzy in 2021. Bhatt left the company in the spring of this year to focus on his new space solar startup. Space, Bhatt said, has always been his true passion.
The company is called Aetherflux, and it is starting small, with only about 10 employees at present.
"It's beyond my creativity how to bootstrap something that's going to be the size of a small city in geostationary space, and I think that's one of the reasons why the idea has died on the vine," he told Ars. "I also think it's one of the reasons why there is skepticism about the idea of space-based solar power. Our approach is very different."
That approach starts in low-Earth orbit rather than 36,000 km away from the surface of the Earth. Aetherflux plans to begin with a single satellite, launching into an orbit about 500 km above the planet on a SpaceX transporter mission about 12 to 15 months from now.
This initial satellite will be based on a commercially available bus from Apex, which will produce, on average, about 1 kilowatt of power. It's a modest amount, enough electricity to power a dishwasher. This satellite will also include a high-powered infrared laser to transmit this power back to Earth. A mobile ground station, about 10 meters across, will receive the energy.
With a single satellite in low-Earth orbit, power beaming will only be available for any location on Earth for a few minutes as the spacecraft passes from horizon to horizon.
"We've spent a lot of time over the last year with folks within Department of Defense, and with some of the folks within DARPA," Bhatt said. "The idea is like a demonstration mission, which kind of establishes the core functionality."
Where is all this headed?
One of the key aspects of the test is to determine both the safety and efficiency of collecting solar energy in space, transmitting it through the atmosphere, and then producing a usable source of power on the ground.
If the demo mission works, Aetherflux plans to develop a constellation of satellites in low-Earth orbit that could provide power continuously and at greater amounts. Initially, the company seeks to deliver power in remote locations, such as disaster relief areas, off-the-grid mining operations, or forward operating bases for the military.
"If we can make that business model work, that's kind of the jumping-off point to being able to say, hey, could we put this on things like freight shipping?" Bhatt said. "Could we meaningfully address the ability to do freight shipping across large bodies of water with renewable energy?"
In the long term, there's the potential to provide a base load of power to augment the intermittent availability of terrestrial wind and solar energy—a key need if the world is to de-carbonize its electricity generation.
But that's probably putting the cart before the horse. One of the biggest challenges of space-based solar power is that it has always been theoretical. It should work. But will it work? Trying out a low-cost demonstrator mission in the next couple of years is a fine way of finally putting that question to rest.
-
@ 01aa7401:33696fab
New Jersey is home to a variety of rehab centers that offer crucial support for individuals struggling with substance use disorders. With a focus on comprehensive treatment approaches, these facilities aim to help clients achieve lasting recovery through personalized care, evidence-based therapies, and holistic practices. This article explores the types of rehab centers available in New Jersey, their treatment methodologies, and resources for those seeking assistance.

The Need for Rehab Centers in New Jersey
The state of New Jersey has faced significant challenges related to substance abuse, particularly with the opioid epidemic and alcohol-related issues. As a result, rehab centers have become essential in providing structured support and treatment options to help individuals overcome addiction and rebuild their lives.

Types of Rehab Centers in New Jersey
- Inpatient Rehab Centers: Inpatient facilities provide a highly structured environment where individuals stay at the center for a designated period, usually between 30 and 90 days. This immersive experience offers 24/7 support, medical supervision, and intensive therapy, making it ideal for those with severe addictions or co-occurring mental health disorders.
- Outpatient Rehab Centers: Outpatient programs allow individuals to receive treatment while living at home. Patients attend therapy sessions, support groups, and counseling several times a week, providing flexibility for those with work or family obligations. This option is often suitable for individuals with milder addictions or those transitioning from inpatient care.
- Detox Centers: Many rehab facilities in New Jersey offer detoxification services to help individuals safely manage withdrawal symptoms when quitting drugs or alcohol. Medical supervision during detox is crucial to ensure safety and comfort during this challenging process.
- Long-Term Rehab Centers: Some rehab centers provide long-term residential treatment, offering extended care for individuals who require more time to address their addiction. These programs focus on life skills development and building a supportive community to foster recovery.
- Specialized Rehab Centers: New Jersey also features rehab centers that cater to specific populations, such as women, adolescents, and individuals with dual diagnoses (co-occurring mental health and substance use disorders). These specialized programs tailor treatment plans to meet the unique needs of different groups.
Treatment Approaches in New Jersey Rehab Centers
- Evidence-Based Therapies: Most rehab centers employ evidence-based practices such as Cognitive Behavioral Therapy (CBT) and Motivational Interviewing (MI). These therapies help individuals identify the root causes of their addiction and develop effective coping strategies.
- Holistic Therapies: Many facilities incorporate holistic approaches, such as yoga, mindfulness, and art therapy, to promote emotional healing and overall well-being. These therapies encourage self-exploration and help individuals build resilience in recovery.
- Family Involvement: Family therapy is often a vital component of the treatment process. Involving family members helps rebuild relationships and creates a supportive environment that fosters recovery.
- Aftercare Planning: Successful recovery requires ongoing support. Many rehab centers develop comprehensive aftercare plans, including outpatient therapy, support groups, and community resources to help individuals maintain their sobriety after treatment.
Resources for Finding a Rehab Center in New Jersey
- New Jersey Department of Human Services (DHS): The DHS provides a directory of licensed rehab facilities and information on state-funded treatment options, making it easier for individuals to find help.
- Substance Abuse and Mental Health Services Administration (SAMHSA): SAMHSA offers a national helpline and treatment locator to help individuals find nearby rehab centers that suit their needs.
- Local Support Groups: Organizations such as Alcoholics Anonymous (AA) and Narcotics Anonymous (NA) provide essential support and community connections for individuals in recovery.
Taking the First Step Toward Recovery
If you or someone you know is struggling with addiction, seeking help from a rehab center can be a transformative decision. New Jersey offers a range of rehab options designed to provide the necessary support and resources for individuals to overcome their substance use disorders.

Conclusion
New Jersey rehab centers play a vital role in addressing the challenges of addiction. With a variety of treatment options, evidence-based practices, and a focus on holistic well-being, these centers empower individuals to embark on their recovery journey. If you’re ready to take that first step, reach out to a local rehab center to explore your options and begin the path to healing.
-
@ 623ed218:fa549249
2024-10-22 02:04:59So... I did a thing...
...Woof!
Satoshi Vault: An Accessible Multi-Signature Wallet Solution for Enhanced Self-Custody
Abstract
Satoshi Vault aims to provide a simple, open-source, multi-platform application designed to simplify the setup and management of multi-signature (multi-sig) wallets. The application specifically targets new Bitcoin users, enhancing the security of their holdings without requiring additional hardware. By creating an accessible, intuitive interface and aligning with Web of Trust (WOT) principles, Satoshi Vault addresses a significant gap in the Bitcoin ecosystem, promoting higher security with minimal user friction.
1. Introduction
Bitcoin wallets typically take one of two forms: single-signature (single-sig) and multi-signature (multi-sig). Single-sig wallets are the default, and they are most commonly used by those new to self-custody as they are simple to set up and manage. However, single-sig wallets present a security risk as only one key is needed to access and move funds, leaving them vulnerable to theft or compromise.
Multi-sig wallets require multiple keys to authorize transactions, significantly enhancing security by distributing custody across multiple signatories. Multi-sig should be the standard for all Bitcoin self-custody setups, particularly for key recovery and backup processes, as an improvement over traditional paper seed backups.
This white paper proposes Satoshi Vault: an open-source, multi-platform application that simplifies multi-sig wallet creation and operation, making enhanced security accessible to all Bitcoin users without the need for dedicated hardware.
2. System Overview
2.1 Wallet Creation Process
Satoshi Vault provides users with a simple, guided process for creating a secure multi-sig wallet:
- Users begin by selecting the option to create a new "vault" (multi-signature wallet). The default configuration is a 2/3 multi-sig setup, although users can access advanced settings to adjust the number of signatories and the threshold required for authorizing transactions.
- The application generates a new Bitcoin key, securely encrypting it on the user’s device as a software key.
- It is important to tailor multi-sig setups to the user’s situation, skill level, and the size of their holdings. For smaller amounts (e.g., 100,000 sats), a 2/3 service-based multi-sig involving keys held on the user’s phone, a spouse’s or trusted contact's device, and the exchange where one purchases Bitcoin or a vetted third-party key holder may be appropriate, providing a balance of convenience and security. For larger holdings, users may opt for a more secure configuration with a higher number of signatories and a higher threshold for transaction authorization to enhance protection and mitigate risks.
2.2 Integration with Trusted Contacts
- Users are prompted to select a trusted contact who can generate a "Social Recovery Key." The public key from this newly generated keypair, created through a simple option on the app’s welcome page, is shared back with the user for use in the configuration of their multi-sig vault.
- An "affiliate + key assistance" system may also be used, where the additional key could be managed by the person who onboarded the user (e.g., a mentor or trusted friend), reinforcing usability and trust.
2.3 Third-Party Collaborative Custody
- Satoshi Vault offers flexibility for the third key in the multi-sig setup. While users can choose to partner with an exchange, they may also select a vetted partner of their choice. Satoshi Vault will build partnerships with respected Bitcoin support and mentorship services that offer third-party key custody as a service. These vetted partners may charge a fee for advanced support or assistance if required.
- Alternatively, if the user prefers, they can choose another trusted contact to create a second Social Recovery Key, allowing for a more private Vault configuration.
2.4 Vault Creation and Management
- With the user’s, trusted contact’s, and third-party public keys in place, Satoshi Vault creates a multi-signature vault and shares descriptor files securely with the trusted contact and the exchange.
- The app houses configuration files for each multi-sig wallet and enforces security by rejecting any attempt to add additional keys from the same multi-sig setup, ensuring that a quorum of keys cannot be consolidated on a single device.
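For illustration, the descriptor for a default 2-of-3 vault could look roughly like the following, using Bitcoin Core's output descriptor syntax (the fingerprints, xpub names, and derivation paths below are placeholders for illustration, not values the app produces):

```
wsh(sortedmulti(2,
  [11111111/48h/0h/0h/2h]xpubUserDevicePlaceholder/0/*,
  [22222222/48h/0h/0h/2h]xpubTrustedContactPlaceholder/0/*,
  [33333333/48h/0h/0h/2h]xpubThirdPartyPlaceholder/0/*
))
```

Any two of the three keys can sign to move funds, so the vault survives the loss or compromise of any single key.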
2.5 Encrypted Cloud Backup
- To enhance redundancy, Satoshi Vault offers users the option to encrypt and store their key material in their preferred cloud service (e.g., iCloud, Google Drive). This approach leverages familiar systems providing a seamless way for users to store recovery keys.
- This minimizes the risk of loss of key material in the event of device loss or app data deletion.
3. Technical Implementation
3.1 Security Architecture
- Encryption: All communication between the app, users, trusted contacts, and exchanges is encrypted using AES-256 and other state-of-the-art encryption protocols to ensure privacy and prevent interception or tampering.
- Cross-Platform Support: Satoshi Vault is designed to function across multiple platforms (iOS, Android, Windows, Linux) to ensure broad accessibility.
- Public Key Transmission: Secure communication protocols such as Fedi, Matrix, or Nostr Gift-Wrapped Messages may be used to transmit public keys, descriptor files, and signatures, safeguarding user information.
3.2 User Experience Design
- Onboarding and Education: A setup wizard guides users through wallet creation, using simple language and visual aids to explain multi-sig benefits, vault concepts, and the roles of trusted contacts and exchanges.
- Customization: Users can access an advanced “Pro Mode” for configuring signatories beyond the default 2/3 setup, allowing for custom thresholds and advanced wallet configurations.
3.3 Node Connectivity
To maintain decentralization and minimize the risk associated with a single point of failure, Satoshi Vault will not rely on a central node for its operations. Instead, the application will offer multiple options for users:
- Custom Node Connections: Users can connect Satoshi Vault to their own Bitcoin nodes, providing maximum control and security. This approach aligns with the Bitcoin ethos of self-sovereignty and reduces reliance on third-party infrastructure.
- Public Node Selection: For users without their own nodes, the application will provide a selection of vetted, community-operated public nodes to connect with. This offers convenience while distributing trust across multiple independent entities, minimizing the risk of centralization.
- Tor Integration: To enhance privacy, Satoshi Vault will support Tor integration, enabling users to connect to nodes anonymously and reducing the risk of tracking and monitoring.
- Automatic Node Load Balancing: The app may implement a feature that automatically rotates between multiple trusted public nodes, ensuring that no single node becomes a point of dependency.
4. Security and Privacy Considerations
Satoshi Vault prioritizes security and privacy:
- Multi-Factor Authentication (MFA): To protect app access, users may set up MFA as an additional security layer.
- Local and Cloud Encryption Standards: The app uses AES-256 for local storage of keys and end-to-end encryption for cloud backups and transmission of public keys and descriptors.
- Key Recovery and Redundancy: Cloud services are employed for encrypted backups, providing redundancy while ensuring that users retain control over their key material.
5. Integration and Collaboration Opportunities
5.1 Wallet Providers
- Satoshi Vault could partner with existing open-source wallet providers, such as Nunchuk, to integrate its multi-sig functionality and expand their security offerings.
5.2 Exchange Collaboration
- By collaborating with exchanges, Satoshi Vault can provide seamless integration for obtaining Exchange Public Keys, building trust and simplifying the user experience for those securing their funds.
6. Future Development and Monetization
6.1 Feature Expansion
- Hardware Wallet Integration: Future updates will include compatibility with hardware wallets, providing additional security options for advanced users who wish to incorporate dedicated signing devices into their multi-sig setup.
- Automatic Backup Options: Users will have more automated and secure options for backing up their encrypted keys, including automated cloud backups and integration with secure storage solutions.
- Transaction Monitoring: The app will incorporate transaction monitoring services, enabling users to receive alerts and detailed information about incoming and outgoing transactions.
- Guided Key Rotation/Replacement: To maintain long-term security and minimize risks from potential key compromises, Satoshi Vault will offer a guided process for rotating or replacing keys within a multi-sig setup. The feature will provide users with step-by-step instructions on how to update their vault’s keys, ensuring a secure and seamless transition without risking access to funds.
6.2 Premium Services
- Potential premium services include advanced support for users configuring complex multi-sig setups. These services may be facilitated through strategic partnerships with community-vetted organizations such as Bitcoin Mentor or The Bitcoin Way, ensuring trusted and expert guidance for users.
7. Market Considerations and Demand
While specialized hardware signing devices will continue to cater to those with heightened security demands (i.e., “paranoid crypto anarchists”), Satoshi Vault fills a gap for new entrants to the space who may not yet be ready or willing to purchase dedicated hardware. Providing robust, easy-to-use software solutions ensures that the demand for secure self-custody is met for all levels of users.
8. Conclusion
Satoshi Vault is a critical step forward in improving Bitcoin self-custody security. By offering an open-source, user-friendly application for multi-signature wallet setup and management, Satoshi Vault empowers users to protect their holdings without the need for additional hardware. Every self-custody setup should incorporate multi-sig, and Satoshi Vault provides an adaptable, secure solution to meet this need, ensuring that new users have access to robust security models that align with the ethos of Bitcoin’s financial sovereignty. By paving a path to accessible and secure Bitcoin self-custody for the coming waves of new Bitcoin participants, Satoshi Vault aims to uphold and strengthen the principles of financial freedom and autonomy in the Bitcoin ecosystem.
9. Contributors and Acknowledgements
Satoshi Vault was developed with insights and support from several key contributors in the Bitcoin community:
- Guy Swann, founder of Bitcoin Audible, provided valuable perspectives on multi-signature setups and emphasized the importance of using multi-sig as the standard for Bitcoin self-custody and recovery processes.
- We also acknowledge the efforts of the many builders and developers who have worked tirelessly to bring Bitcoin self-custody to its present stage, creating the tools and upholding the principles that Satoshi Vault hopes to further.
- Additional input and feedback were provided by members of the Bitcoin community who share a commitment to enhancing security and accessibility for Bitcoin newcomers.
View the full repo:
https://github.com/bitcoinpup/Satoshi-Vault
originally posted at https://stacker.news/items/734826
-
@ 2348ca50:32a055a8
2024-10-22 02:02:33Thinking about getting a clean, modern vibe in your space while saving some room? Mounting your TV on drywall is a solid way to do it. Just keep in mind that drywall isn’t as sturdy as wood or concrete, so you need to take some precautions. Otherwise, your TV might come crashing down, which could mess up your wall or, even worse, hurt someone.
To make sure your TV is securely mounted on drywall, check out these simple safety tips. Let’s dive in! (Looking for TV installation Sunshine Coast? Click here.)

**1. Pick the Right Wall Mount**
Before you begin drilling holes into your wall, it's vital to pick a wall mount that suits your TV's size and weight. Wall mounts come in three main kinds: fixed, tilting, and full-motion. Check your TV's weight in the manual and make sure the wall mount can support it.

**How to choose the right mount:**

• Fixed mounts hold the TV flush against the wall.

• Tilting mounts let you adjust the TV's angle slightly.

• Full-motion mounts give you the most flexibility, allowing the TV to swivel.

Once you've chosen a mount, make sure it is rated for your TV's size and weight. Using a mount that is too small or weak can be dangerous.
2. Find the Studs in Your Wall
Drywall alone isn't strong enough to hold the weight of a TV, so it's essential to attach your wall mount to the wooden studs behind the drywall. Studs are the wooden beams that support your walls. You can't see them, but you can find them using a stud finder.
**How to find studs:**

• Run a stud finder along the wall where you want to mount your TV. The stud finder will beep or light up when it detects a stud.

• Once you've found a stud, mark it with a pencil.

• Typically, studs are spaced 16 or 24 inches apart. Measure to find the next stud if you need more than one. Always ensure that at least two screws go into studs to safely support the weight of your TV.
3. Avoid Electrical Wires and Pipes

Avoid hitting electrical wires and pipes when drilling into drywall, as doing so can be dangerous. Drilling into a wire may cause a short circuit or fire.
**How to avoid hitting wires or pipes:**

• Use a stud finder with an integrated wire and pipe detector. This will warn you if there are any hazards behind the wall.

• Try not to mount your TV directly above or below electrical outlets or light switches, as this is where wires are most likely to be.

• If you're unsure about what's behind the drywall, consult a professional before proceeding.
4. Use the Right Tools and Anchors
For TV installation Sunshine Coast, using the right tools is essential. You'll need a drill, screws, and anchors to hold the mount securely. Drywall anchors are especially important if you're not drilling into studs, as they help distribute the load across the wall.
**Types of drywall anchors: **
• Toggle bolts: These are metal bolts that expand behind the drywall, offering strong support. They are a great option if you can’t find a stud.
• Molly bolts: These anchors also expand once inserted and are good for medium-weight TVs.
• Plastic anchors: These are not recommended for heavy TVs as they don’t provide as much support.
When possible, always try to secure your TV mount to studs for the best support. But if you need to mount on drywall alone, using heavy-duty anchors is a must.
5. Ensure the Mount is Level
Once you've found the studs and are ready to drill, it's important to make sure the wall mount is level. A tilted TV can look odd and may put uneven strain on the mount, causing it to loosen over time.
**How to ensure the mount is level:**
• Use a level to make sure the mount is perfectly straight before drilling.
• Hold the mount against the wall and check the level's bubble to confirm that it's centered.
• Mark the spots where you will drill with a pencil before making any holes.
Taking the time to get your mount perfectly level will save you from having to redo the installation later.
6. Drill Carefully and Secure the Mount
Once you're confident in the placement and level of your mount, you can start drilling. Make sure to use the right drill bit size for the screws or anchors you're using. Drilling holes that are too large or too small can lead to poor stability.
Drilling tips:
• Begin by drilling pilot holes into the drywall and studs. Pilot holes are smaller holes that make it easier to insert screws.
• After the pilot holes are drilled, attach the mount to the wall using the provided screws. Make sure the screws are firmly secured into the studs or anchors.
• Check that the mount is still level once it's fully attached to the wall.
7. Hang the TV on the Mount
After securing the wall mount, it's time to hang the TV. Most mounts have brackets that you attach to the back of the TV, which then clip or slide onto the wall mount.
Steps to hang the TV:
• Attach the brackets to the back of the TV using the screws supplied with the mount.
• With the help of another person, lift the TV and carefully align the brackets with the mount on the wall.
• Once the TV is on the mount, make sure it's securely attached by gently pulling on it. If it feels loose, check that the brackets are properly seated in the mount.
8. Manage the Cables Safely
Now that your TV installation Sunshine Coast is done, you'll need to organize the cables. Dangling cables not only look messy but also pose a tripping hazard.
Cable management tips:
• Use cable clips or a cable management system to run the cords neatly down the wall.
• For a cleaner look, consider installing a cable pass-through plate that hides the cables behind the drywall.
• Avoid pinching or bending the cables, as this can damage them and create a hazard.
9. Check the Stability Regularly
Even after your TV installation Sunshine Coast is complete, it's important to check the stability of the mount every few months. Over time, the screws may loosen, especially if the TV is on a full-motion mount that you adjust frequently.
How to maintain the mount:
• Periodically check the screws to make sure they are still tight.
• If you notice any wobbling or looseness, remove the TV and tighten the screws.
• If you used drywall anchors, make sure they are still secure and haven't started to pull out of the wall.
Conclusion
Installing a TV wall mount on drywall can be a simple and safe process when done correctly. By following these safety tips, such as finding the studs, avoiding wires and pipes, using the right anchors, and regularly checking the mount's stability, you can make sure your TV installation Sunshine Coast is done securely and your home stays safe. Remember, taking the time to do it right the first time will save you from potential accidents or damage later on.
-
@ 2263d024:51b17ece
2024-10-22 00:02:21"NIRVANA, SAMADHI, nihilistic contemplation, identification with the pantheist world, 'feeling God in all things,' and any other form of participation in the Plan of The One ultimately entail PHAGOCYTOSIS into its Exalted Craw and the death of individual consciousness. The Vril, by contrast, is the sole possibility of being and is, at the same time, pure possibility."
"Historia Secreta de la Thulegesellchaft", Nimrod de Rosario
-
@ c73818cc:ccd5c890
2024-10-21 22:13:09We are excited to launch the Bitget Telegram App Center, a platform with over 600 apps and bots built on the Telegram Open Network (TON). You can now explore blockchain services, innovative games, and Web3 tools directly from Telegram!
✨ The first 1,000 lucky users to take part can share 2,000 USDT! 🎉 📅 Promotion period: October 14, 2024, 2:00 PM (UTC+2) – October 28, 2024, 1:00 PM (UTC+1).
💡 Sign up with our referral to get:
✅ Exclusive bonuses 🔖 A 20% lifetime discount on fees 📚 Access to the private group with a complete course on Bitcoin and trading!
🔗 Don't miss this opportunity: join now and discover the future on Telegram! 💥
Join via this link! https://t.me/tg_app_center_bot/appcenter?startapp=O9oLXzD6OX
Or enter this invite code: O9oLXzD6OX
Bitget referral: https://bonus.bitget.com/U1JNEK
BitcoinReportItalia #Bitget #Telegram #dApp #TON #Crypto #Web3 #USDT #Trading
-
@ 3cd2ea88:bafdaceb
2024-10-21 22:09:24We are excited to launch the Bitget Telegram App Center, a platform with over 600 apps and bots built on the Telegram Open Network (TON). You can now explore blockchain services, innovative games, and Web3 tools directly from Telegram!
✨ The first 1,000 lucky users to take part can share 2,000 USDT! 🎉 📅 Promotion period: October 14, 2024, 2:00 PM (UTC+2) – October 28, 2024, 1:00 PM (UTC+1).
💡 Sign up with our referral to get:
✅ Exclusive bonuses 🔖 A 20% lifetime discount on fees 📚 Access to the private group with a complete course on Bitcoin and trading!
🔗 Don't miss this opportunity: join now and discover the future on Telegram! 💥
Join via this link! https://t.me/tg_app_center_bot/appcenter?startapp=O9oLXzD6OX
Or enter this invite code: O9oLXzD6OX
Bitget referral: https://bonus.bitget.com/U1JNEK
-
@ 9e69e420:d12360c2
2024-10-21 20:26:54Your first NOSTR note.
Some of you may be lurking, wondering how, why, or what to post for your first note here on nostr. Here's some advice from a pseudonymous douchebag. To start with, your first note should come out swinging. Make lots of friends, maybe even some enemies. Most people get a lot of love with their first introductions post.
Make it short.
Your first note will probably (and probably should) be a kind 1 event. The note you're reading is a kind 30023 event, used for long-form notes (more on this some other time). Keep it short and concise. You could probably publish a book in a kind 1 note, but I really don't think you should, and I'm sure many relays would reject it. It would also appear without formatting, which just looks ugly for anything other than a microblog.
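For the curious, here is a minimal sketch of what those two kinds look like as events, per NIP-01 and NIP-23. These are illustrative Python dictionaries, not real notes: the field values are placeholders, and the id and sig fields (normally computed and signed by your client) are omitted.

```python
# A kind 1 (short text note) versus a kind 30023 (long-form) event skeleton.
# Placeholder values; id and sig are normally added by the signing client.
kind1_note = {
    "pubkey": "<your-hex-pubkey>",
    "created_at": 1729500000,
    "kind": 1,            # short, unformatted microblog note
    "tags": [],
    "content": "Hello nostr! #introductions",
}

long_form_note = {
    "pubkey": "<your-hex-pubkey>",
    "created_at": 1729500000,
    "kind": 30023,        # long-form content, rendered as markdown
    "tags": [["d", "my-first-article"], ["title", "Your first NOSTR note."]],
    "content": "A full markdown article goes here...",
}
```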
Introduce yourself.
Just as you would in an IRL interaction. What's your name? But more importantly, what do you want us to call you? What brought you here? What are your interests? It's important to be real here even if you decide to be pseudonymous. Chances are your tribe is already here. If you're the first, then that makes you king of your little corner of the nostr. Which will likely pay dividends in and of itself.
Make it a banger.
Here's my first note (I should disclose this is just the first note from this npub And I have already been here for over a year at this point):
note1pnufmzwy2gj4p5qza6jk7k40nns0p3k2w0fsrqq568utjlnyphdqyg8cpq
Have fun
Don't take this or anything else too seriously. We are all the same here because we are just as unique as each other. Be as weird as you want.
** & Don't forget to use the "#introductions" hashtag **
-
@ 2348ca50:32a055a8
2024-10-21 16:58:08Filing your taxes can feel a bit daunting, but dodging a few common slip-ups can help you avoid losing cash, getting hit with penalties, or waiting ages for your refund. Whether you're tackling your taxes solo or getting some expert assistance, small mistakes can really mess things up. In this usataxsettlement guide, we’ll go over the usual tax blunders people make and how to steer clear of them so you can hang on to more of your hard-earned cash.
- Filing Late or Missing the Deadline
One of the simplest mistakes to avoid is missing the tax filing deadline. Every year, millions of taxpayers end up paying unnecessary penalties simply because they didn't file on time.
• Filing deadline: The tax filing deadline is usually around mid-April (April 15th for most years). Filing after this date without an extension can result in late fees.
• Penalty: The IRS charges a penalty of 5% of unpaid taxes for every month your return is late, up to 25%. This can add up quickly, reducing your refund or increasing the amount you owe.
How to Avoid:
• Set reminders a few weeks in advance.
• File for an extension if you can't finish your taxes by the deadline. You'll get six extra months, but keep in mind that an extension to file is not an extension to pay.
Question: Have you marked the tax filing deadline in your calendar? Don't let this easy mistake cost you extra!
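To see how quickly that penalty compounds, here is a tiny sketch (my own illustration, using the 5%-per-month, 25%-cap figures above):

```python
# Failure-to-file penalty: 5% of unpaid tax per month late, capped at 25%.
def late_filing_penalty(unpaid_tax: float, months_late: int) -> float:
    return unpaid_tax * min(0.05 * months_late, 0.25)

print(late_filing_penalty(2000, 3))  # 3 months late on $2,000 owed -> $300.0
print(late_filing_penalty(2000, 8))  # capped at 25% of $2,000 -> $500.0
```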
- Not Organizing Your Paperwork
If you're rushing to file your taxes and haven't kept your documents organized, mistakes are almost guaranteed. Proper record-keeping is essential when it comes to filing a complete and accurate tax return.
Common Documents You Might Need:
• W-2 forms: For employees to report their income.
• 1099 forms: For freelancers, contractors, or those with investment income.
• Receipts for deductions: Charitable donations, medical expenses, and business costs.
• Tax forms for mortgage interest, student loans, etc.: These can help reduce your taxable income.
How to avoid:
• Create a tax folder (physical or digital) and gather records throughout the year. Keep receipts, bank statements, and tax-related forms organized so you're ready when it's time to file.
- Overlooking Deductions and Credits
One of the biggest mistakes people make is not taking advantage of the deductions and tax credits available to them. Deductions lower your taxable income, while credits reduce the amount of tax you owe, dollar for dollar.
Common Deductions:
• Charitable contributions: Donations to qualified charities can reduce your taxable income.
• Mortgage interest: Homeowners can deduct interest paid on home loans.
• Medical expenses: If your medical costs exceed 7.5% of your income, you may be able to deduct them.
Common Credits:
• Earned Income Tax Credit (EITC): For low-to-moderate-income workers, this credit can result in significant savings.
• Child Tax Credit: Parents can get a credit for each qualifying child.
• Education credits: Credits like the American Opportunity Credit can help if you paid for college or other educational expenses.
How to Avoid:
• Review your situation and check which deductions or credits you qualify for.
• Use tax software or a professional to make sure you claim everything you're entitled to.
Tip: Missing a deduction or credit could mean leaving money on the table!
- Incorrect Filing Status
Choosing the wrong filing status is another common tax mistake that can affect your refund or tax bill. There are five main filing statuses: single, married filing jointly, married filing separately, head of household, and qualifying widow(er).
How to Avoid:
• Review each filing status to determine which one applies to you. For instance, if you're a single parent providing most of the support for your household, you may qualify for head of household status, which offers a lower tax rate than filing as single.
• Don't make assumptions: Double-check with a tax professional if you're unsure which status fits your situation.
- Math Errors and Typos
Simple math errors or typos on your tax return are surprisingly common and can create significant problems, such as delayed refunds or incorrect tax amounts owed.
Common Errors:
• Math mistakes: Adding, subtracting, or transcribing numbers incorrectly.
• Typos in personal information: Errors in your Social Security number, address, or banking details can cause delays.
How to Avoid:
• Double-check your math and personal information before submitting your return.
• If using tax software, let it calculate your numbers to reduce the risk of error.
- Not Signing Your Return
It might sound basic, but forgetting to sign your tax return is one of the most common mistakes. Without a signature, your return is incomplete and won't be processed.
How to Avoid:
• Check the signature section before submitting, whether you're filing electronically or by mail.
• If filing jointly, both spouses must sign the return.
- Not Filing at All If You Owe
Some people skip filing taxes altogether because they owe money and can't pay it right away. However, not filing can lead to much bigger problems, such as added penalties and interest.
Penalty:
• Failure-to-file penalty: 5% per month on unpaid taxes, up to 25%.
• Interest: The IRS charges interest on unpaid taxes, so the longer you wait, the more you'll owe.
How to Avoid:
• File your return even if you can't pay the full amount.
• Consider setting up a payment plan with the IRS. Filing on time and setting up a plan can prevent additional penalties.
- Forgetting About State Taxes
While you may be focused on filing your federal taxes, don't forget about your state taxes if you live in a state that requires them. Many people overlook this and end up missing the state tax deadline.
How to Avoid:
• Use tax software that includes state filing or consult a tax professional to make sure you're meeting both federal and state requirements.
Question: Have you checked your state tax requirements? Ignoring them could lead to penalties, even if your federal taxes are in order.
Table: Common Tax Do's and Don'ts

| Do's | Don'ts |
|------|--------|
| Do file on time: Submit your tax return by the deadline. | Don't file late: Avoid late penalties by submitting on time. |
| Do keep your documents organized: Have all necessary forms ready before filing. | Don't lose receipts: Keep everything you might need for deductions or credits. |
| Do claim every eligible deduction/credit: Maximize your refund. | Don't ignore deductions/credits: They can significantly lower your tax bill. |
| Do double-check your return: Make sure all your information is correct. | Don't guess on numbers: Always use accurate figures. |
| Do file even if you owe: Filing late can add penalties. | Don't avoid filing: You can set up a payment plan if needed. |
Wrap-Up: Stay Ahead to Dodge Tax Blunders
Tackling your taxes doesn’t have to be a headache, but steering clear of common mistakes is super important if you want to save some cash and keep your stress levels down. By filing on time, keeping your papers organized, and taking advantage of all those deductions and credits, you can either score a bigger refund or cut down what you owe.
Just make sure to double-check everything before you hit "submit," and if you’re ever feeling stuck, don’t hesitate to reach out to a tax pro. Following these tips will help you breeze through tax season without losing any of your hard-earned cash!
So, are you all set to file your taxes? Getting a jump on things can really help you avoid those costly blunders!
-
@ 48d40b7e:1fb4749c
2024-10-21 16:39:12This podcast, Advent of Computing, is a pretty amazing deep dive into the history of computing.
This particular episode is about the advent of "higher level" languages like Fortran and eventually C.
Programmers of the day wrote assembly language and the experts enjoyed the granular level of control of computer hardware, fearing that the binary machine code generated by these new languages would be inefficient at best and horribly wrong in the worst of situations.
One Dr. even mailed in a complaint to the ACM about the pitfalls of Fortran
Funny how the more things change, the more they stay the same
Advent of Computing - Episode 121 - Arguments Against Programming
-
@ 9aa75e0d:40534393
2024-10-21 16:34:02During a routine inspection of a flight from Suriname on September 20, Dutch customs authorities intercepted 2 kilograms of cocaine, concealed within the batteries of a mobility scooter. A passenger traveling from Suriname to the Netherlands, along with their checked-in mobility scooter, was subjected to further examination.
During the inspection, customs officers noticed traces of glue on the batteries of the mobility scooter. This unusual detail prompted them to open one of the batteries, where they immediately found a smaller battery and a package. Upon further investigation, the package was found to contain cocaine. Each battery contained approximately 1 kilogram of the drug, bringing the total to around 2 kilograms of cocaine.
Read the full article on SmuggleWire
-
@ e97aaffa:2ebd765d
2024-10-21 15:46:17I am regularly asked to make a long-term price prediction for Bitcoin. I always avoid answering; I don't want to create false expectations.
First of all, whoever wants to get into Bitcoin should not do so expecting to get rich quickly. Whoever enters with that mindset will lose a lot of money, will not withstand the pressure, and will sell everything on the first 20% drop. Bitcoin is an ethical currency, a philosophy, a great deal of study, and, finally, long-term savings. To reap rewards in the long run, you have to be mentally prepared to endure and get through hundreds of 20% drops and a few dozen drops of more than 50%.
The predictions people give, usually of very high values, create false expectations, leading people to get into Bitcoin with the sole goal of getting rich, without studying it at all. Predictions of high values generally imply an enormous expansion of the monetary base, but people have great difficulty understanding inflation. For that reason I avoid making predictions, and when I try to explain, I like to give examples of inflation.
I believe bitcoin will surpass 1 million dollars, but is that synonymous with being rich? One million is a lot of money today, but 10 or 20 years from now it will not be that much. The currency loses a lot of purchasing power, and this is essential to understanding price predictions. Whenever I make predictions, I like to give this example: today 1 million buys 4 new houses; 20 years from now, it buys only 1 of similar size and location. So that one-million prediction sounds like a lot of money, but since these are very long-term predictions, it is not that much. The inflation factor is essential whenever we discuss or analyze Bitcoin price predictions.
We must not forget that the price of bitcoin mirrors the physics concept of relative velocity.
Relative velocity: when two objects move in opposite directions, the relative velocity between them is the sum of their individual velocities.
In other words, the price of Bitcoin is the sum of its adoption and the dollar's loss of purchasing power. They are two variables moving in opposite directions, which is why bitcoin is gaining so much purchasing power.
Distribution of global wealth
Today, global wealth (the total value of all assets) is estimated at 900 trillion, divided among the following classes:
- Real estate: 330 trillion
- Bonds: 300 trillion
- Money: 120 trillion
- Equities: 115 trillion
- Art: 18 trillion
- Gold: 16 trillion
- Cars and collectibles: 6 trillion
- Bitcoin: 1 trillion
Michael Saylor's prediction
This weekend, Michael Saylor published a new prediction:
I like models like this that lay out 3 possible scenarios: a conservative one (Bear), a base case (Base), and an optimistic one (Bull). But it omits the total global value of assets; admittedly that is easy to calculate, by dividing the indicated market cap by the asset's percentage share.
So, according to the prediction, the total global value of assets in 2045 would be:
- 2024: 900 trillion
- Bear: 3,400 trillion
- Base: 4,000 trillion
- Bull: 4,600 trillion
That means an increase of roughly 270% (Bear), 340% (Base), and 410% (Bull). It is true that over these 21 years the real wealth across all assets will grow, but that appreciation will come mostly from the expansion of the dollar's monetary base.
Debasement
Saylor is betting on a strong debasement of the dollar. Is such a debasement plausible?
Over the last 15 years, the Federal Reserve's (US) balance sheet went through 3 large monetary expansions, each close to 100%; something similar also happened in Europe.
- 2008/01 to 2008/12:
- 0.9 trillion -> 2.2 trillion
- an increase of 144%
- 2010/09 to 2014/11:
- 2.3 trillion -> 4.5 trillion
- an increase of 95%
- 2019/08 to 2022/03:
- 3.7 trillion -> 8.9 trillion
- an increase of 140%
Over the last 16 years (2008-2024), the balance sheet grew 7-fold.
Over the same period, global M2 (EU + US + Japan + China + UK) almost tripled, from 34.4 to 91.7 trillion.
If the currency was debased several times over the last 16 years, it is likely to happen again over the next 21 years, and perhaps even more severely.
Conclusion
Back to Saylor's prediction.
So the 3 million dollars per Bitcoin in 2045 of the prediction's Bear scenario corresponds to about 860 thousand dollars at today's purchasing power, with 3 million dollars (Base) and 9.4 million dollars (Bull) likewise.
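As a rough sanity check of that deflation logic, here is a small sketch (my own, not Saylor's exact model), deflating a nominal 2045 price by the growth of the total global asset base. The scenario totals come from the figures above; the results land close to, though not exactly on, the rounded numbers in the text.

```python
# Deflate a nominal 2045 bitcoin price by the growth of total global assets.
# Totals in trillions of dollars, taken from the scenario figures above.
TOTAL_2024 = 900

def todays_purchasing_power(nominal_2045: float, total_2045: float) -> float:
    return nominal_2045 * (TOTAL_2024 / total_2045)

print(todays_purchasing_power(3_000_000, 3400))   # Bear: ~$794k today
print(todays_purchasing_power(49_000_000, 4600))  # Bull: ~$9.6M today
```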
860 thousand dollars is a very nice sum for the cost of living in Europe or the US, but it is far from rich. Someone with no notion of inflation who reads this prediction comes away with the false expectation that 1 bitcoin will make them a millionaire (3 million dollars) in 2045, but that is not true.
I believe the Bear scenario can be reached, but the Bull is too much: for the price to hit 49 million, the debasement would have to be far greater. It is an unlikely scenario within this time frame.
-
@ 79be667e:16f81798
2024-10-21 15:31:39 -
@ 9aa75e0d:40534393
2024-10-21 15:22:19A large-scale cocaine-smuggling network has been dismantled by authorities in Italy, Albania, Poland, and Switzerland in a joint operation coordinated by Eurojust. The criminal organization, which had been operating for at least four years, primarily focused on selling cocaine in northern Italy, particularly around the city of Brescia.
Coordinated Action Across Europe
The international operation, targeting an Albanian organized crime group (OCG), culminated in the arrest of 66 suspects. On the main action day, 45 individuals were apprehended, most of them in Italy. Prior to this, 21 additional suspects had already been arrested in connection with cocaine distribution in the Brescia area.
Eurojust, based in The Hague, played a key role in facilitating and coordinating the efforts of law enforcement agencies across the involved countries. Over 400 officers were deployed in Italy alone to execute the operation. Eurojust set up a coordination center to support the implementation of European Arrest Warrants and Mutual Legal Assistance requests, particularly towards Albania and Switzerland. Europol also provided critical support by managing the exchange of information and offering on-the-ground analytical assistance, including deploying a mobile office to Italy.
Read the full article on SmuggleWire
-
@ da23fb98:b98edc3e
2024-10-21 14:25:57New technologies emerge that enable communication and information sharing without the need for central servers. More efficient, better privacy, peer-to-peer.
Currently, we are accustomed to communicating and sharing information through centralized social networks such as Facebook, Twitter and WhatsApp. These platforms require large server farms that are financed through the sale of user data and advertisements. This centralization leads to privacy concerns, as our data is stored on third-party servers. In addition, these platforms can apply censorship and filtering to the content we share.
New technologies, such as Nostr (with apps like primal) Keet and the peer-to-peer network of Pears, enable communication and information sharing without central servers.
Nostr is a protocol that uses relays, small servers that distribute messages between users. It is an open-source protocol with low costs, giving users more control over their data. Nostr is still vulnerable to takedowns of relays, which can be a single point of failure.
Keet, an application built on the peer-to-peer network of Pears, enables video and chat communication without any server. This results in high quality of service, as there is no intermediate server that can reduce the quality. Keet is fully encrypted and uses key pairs for security, eliminating the need for a central server for authentication.
Pears is a peer-to-peer network that enables a decentralized internet where users can communicate directly with each other without the intervention of servers. It is open-source and user-friendly, with automatic software distribution and no investment in central servers. Pears uses 'Pear runtime', a container that ensures that applications can run on different devices. The network benefits from a growing number of peers, as this improves performance through distributed information sharing. It uses a technology called Holepunch. You can see this as a modern version of BitTorrent.
Advantages
These new technologies offer several advantages over centralized social networks:
- Privacy: Users retain control over their data as there is no central server where it is stored.
- Resistance to censorship: The absence of central servers makes censorship and filtering much more difficult.
- Higher quality of service: Peer-to-peer communication eliminates the limitations imposed by central servers, resulting in better performance and quality.
- Lower costs: Without the need for large server farms, costs are significantly reduced.
- More opportunities for developers: Open-source platforms allow developers to build innovative applications without the limitations of centralized platforms.
These emerging technologies have the potential to fundamentally change the way we communicate and share information. They offer a future where users have more control over their data, are free from censorship and can benefit from a more efficient and reliable internet.
Creation...
By the way..... this document is generated using #AI with my explanation videos on the subject. Using Google notebooklm https://notebooklm.google.com/
- Decentral Social Networks compared (mastodon nostr keet) https://youtu.be/3oxGLPU_WNAh
- Pears: A peer to peer internet, used in keet https://youtu.be/1pl7SQy93G4
-
@ 94a90518:2698612b
2024-10-21 14:01:16"The distributional consequences of Bitcoin" https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4985877
The original promise of Nakamoto (2008) to provide the world with a better global means of payment has not materialized.
Let's see. Satoshi released the Bitcoin whitepaper on Oct 31, 2008, when the world was going through the global financial crisis: https://www.bitcoin.com/bitcoin.pdf
✅ a purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution ✅ a solution to the double-spending problem using a peer-to-peer network ✅ a record that cannot be changed without redoing the proof-of-work
That's just the foundation on top of which "a better global means of payment" are being implemented: * https://fedimint.org/ * https://cashu.space/ * https://lightning.network/
If these tools (when production-ready) "has not materialized" in a given jurisdiction – that's most likely due to the legal obstacles. The institution just states the enforced outcome in its claim.
most economists argue that the Bitcoin boom is a speculative bubble that will eventually burst
Maybe. But will it burst before the "most economists" do?
the wealth effects on consumption of early Bitcoin holders can only come at the expense of consumption of the rest of society
How is this different from "the wealth effects on consumption" of the ownership class? "400 richest Americans own about $3200B" – 5 times more than the total realized cap of Bitcoin to date which is $620B: - https://chainexposed.com/RealizedCap.html - https://mkorostoff.github.io/1-pixel-wealth/
If the price of Bitcoin rises for good, the existence of Bitcoin impoverishes both non-holders and latecomers.
Unless this is not a zero-sum game, and the Bitcoin would at least raise awareness of impoverishment via currency debasement. - https://www.lynalden.com/broken-money/
But even if it's zero-sum, again – doesn't seem much different from how people are being born into debt.
- https://youtu.be/JIc0i7CvD7k
Imagine you are preparing for a running race you've been preparing your entire life for, like 20 years, and then the day has come. You are at the start line, and you're getting ready. You're waiting for the start signal, but then instead of the start signal you hear an announcement of the winners. The winners are those who started 30 years before you. That is the current situation. When Boomers were on average 30 years old, they had already by this age accumulated almost 20% of the entire wealth in the United States. When our generation, Millennials, were on average 30 years old around 5 years ago, we only owned 3%.
Last but not least, some words of wisdom on page 18 :)
the people never holding Bitcoin would be even worse off compared to the Latecomers
- https://youtu.be/JIc0i7CvD7k
-
@ 9e69e420:d12360c2
Florida's Amendment 3: A Step Towards Personal Freedom and Economic Growth
Florida voters have a historic opportunity to embrace personal liberty and economic prosperity by supporting Amendment 3 in the upcoming election. This initiative, which would legalize recreational marijuana for adults 21 and older, represents a significant stride towards individual freedom and limited government intervention.
Benefits from a Libertarian Perspective
Personal Freedom
Amendment 3 aligns with the core libertarian principle of personal autonomy. It recognizes that adults should have the right to make their own decisions about marijuana consumption without government interference. This shift away from prohibition respects individual liberty and reduces unnecessary government control over personal choices.
Economic Opportunities
Legalizing recreational marijuana would create a new, thriving industry in Florida. It's projected to generate up to $430 million annually in tax revenue. This influx of funds could potentially lead to lower taxes in other areas or improved public services without increasing the tax burden on citizens.
Criminal Justice Reform
The amendment would significantly reduce arrests for marijuana possession, particularly benefiting minorities who have been disproportionately affected by current laws. This change would allow law enforcement to focus on more serious crimes and reduce the strain on our criminal justice system.
Market Competition
While some argue that Amendment 3 could lead to monopolization, the initiative actually allows for an expanded marketplace and broader licensing. This potential for increased competition aligns with libertarian free-market principles and could lead to better products and lower prices for consumers.
A Stepping Stone to Further Reform
The Libertarian Party of Florida has endorsed Amendment 3, recognizing it as a significant step forward. While the amendment doesn't go as far as many libertarians would like – for instance, it doesn't allow for home cultivation – it's an important milestone in the journey towards full cannabis freedom.
I understand few of you here on NOSTR live in Florida, and probably only a portion of you are planning to vote. But as I am NOSTR-only, this will be my only outreach outside of my IRL friends and family. So if you live in Florida and are planning to vote, keep an eye out for this box.
Remember, this is just the beginning. As we've seen in other states, initial legalization often paves the way for further reforms. Your support for Amendment 3 could be the first step towards a truly free market for cannabis in Florida, with fewer regulations and more opportunities for entrepreneurs and consumers alike.
Vote yes on Amendment 3 and be part of this historic move towards greater freedom in the Sunshine State.
-
@ 6bae33c8:607272e8
2024-10-21 11:57:34Pending Monday Night, where I have a few players going, Week 7 might have been the worst fantasy one I’ve ever had in 30 years. I say this because the three NFFC teams that put up bad scores were in first, first and second place, respectively. These aren’t bad teams having bad days, but good ones.
On one of the BCLs, I had Rhamondre Stevenson, Stefon Diggs, Malik Nabers, Younghoe Koo and Anthony Richardson active. That was the one that performed best, thanks to the Bengals defense, Brock Bowers, Ja’Marr Chase and Javonte Williams, but it’s still very likely to lose.
In the other I put in Kirk Cousins over Joe Burrow, had Jordan Mason, Diggs, Dalton Kincaid, Koo and DeVonta Smith. Drake London, Chase and the Bengals defense will put it around 100 points IF James Conner does something tonight.
But the pièce de résistance is my Primetime team which stacked Texans: CJ Stroud, Diggs, Tank Dell and Dalton Schultz (Dallas Goedert was hurt, Jonnu Smith on the bench). Those four Texans combined for 19.3 points. What’s worse is my opponent had Joe Mixon (26.4 points). I also had the Packers defense (five points) in that game. So basically Mixon outscored half my team 26.4 to 24.3 in that one game.
Based Harrison Butker had only four points, and the only reason I’m at 58.3 pending Monday night is Jahmyr Gibbs’ 32. I have Chris Godwin, J.K. Dobbins and Marvin Harrison still going, but they’ll need 20-plus each to salvage it. At least I left JSN (5.6) and Mason (8.9) on the bench for Harrison. He might do worse, but you’d obviously rather have it be this way than the reverse — as of now.
I went 4-1 ATS in Circa at least, but I’m drawing dead for Q2 and probably the contest too. My only loss was the Packers -2.5 (they won by two). It was honestly one of the more annoying and disastrous games I’ve ever watched.
Seslowsky and I moved on in Survivor, but so did everyone. We had the Bills who were down 10-0, but it was never in doubt. Survivor was impossible for a month, and now it’s so easy.
-
I always get the Jaguars wrong. When I think they’ll bounce back they get blown out by the Bears. When I think they’ll collapse, they blow out the Pats. I didn’t watch the game because I was at Sasha’s basketball game, but I’m sure glad I don’t have any Travis Etienne or Christian Kirk. Both have been supplanted.
-
I had the first pick in the RotoWire Dynasty League this year, took Marvin Harrison, dealt him for Malik Nabers and a pick I eventually used on Bucky Irving. But with my second pick I wanted Brian Thomas, got swiped by the guy one slot ahead of me and had to settle for the ironically named Xavier Worthy. Worthy went higher in re-drafts this year, so it was a bit of a surprise. No one cares, but I often think about how sick of a rebuild it would have been to get both Nabers AND Thomas in one draft.
-
It seems like Drake Maye is a player, but I didn’t watch the game.
-
I thought the Falcons would shoot it out with the Seahawks, which is why I used Cousins over Burrow (who faced a weaker opponent.)
-
Bijan Robinson (and Breece Hall) are starting to earn their status as early first-round picks. Robinson passed the eye test for me too. But the Falcons couldn’t sustain drives. At least London got his as he always does.
-
Jaxon Smith-Njigba got nine yards on six targets. Maybe he’ll catch a pass more than five yards down the field if DK Metcalf misses time with a knee injury. But he’s not usable otherwise.
-
The Mason Rudolph-Calvin Ridley-DeAndre Hopkins Titans are the most pointless team of all time.
-
Amari Cooper saw only five targets and caught a TD, but I imagine he’ll start getting closer to 10 once he gets properly integrated. Keon Coleman might get freed up for big plays. Maybe Kincaid too.
-
Deshaun Watson likely tore his Achilles tendon, which solves that problem. People don’t like Watson for obvious reasons, but I didn’t see as much joy about it in my timeline as I did when the same thing happened to Aaron Rodgers in Week 1 of 2023. I guess Watson was just accused of 20-plus sexual assaults, while Rodgers made them feel like idiots for being so credulous and compliant. Personally, I hope he’s okay so I can make more massage jokes. Aaron Hernandez can’t be the only go-to in this newsletter.
-
Nick Chubb achieved penetration but otherwise had a flaccid performance. Maybe Jameis Winston can bring some Viagra to this offense. Extra strength for Jerry Jeudy who is arguably having a worse career than Henry Ruggs.
-
I know the Browns didn’t mount much of an offense, but would it kill the Bengals to give the best receiver in the NFL more than six targets per game?
-
Tank Dell (zero points) dropped a perfectly thrown TD on the Texans’ first drive. That was 12 points left on the table right away and set the stage for the rest. The Texans run the ball way too much and also played for the field goal at the end which predictably wound up costing them the game. Their pace is glacial too. In DeMeco Ryans’ defense, the offensive line couldn’t pass-block to save its life. CJ Stroud attempted only 21 passes, completing 10 for 86 yards.
-
Jordan Love spread the ball around, but he’s not targeting Christian Watson or Jayden Reed. Reed (who was my biggest regret for not drafting) has been playing through an ankle injury, so maybe that explains it somewhat. But Romeo Doubs and Dontayvion Wicks out-targeted them 16 to six. Maybe Reed’s a buy-low.
-
To understand how bad Stroud’s game was, consider even Anthony Richardson had 129 yards, and he rushed for 56. Richardson is basically a poor man’s Justin Fields right now, and Fields lost his job.
-
I picked up Jonnu Smith in a couple leagues this week, but didn’t have the foresight to start him. With Tua Tagovailoa coming back, I expect to use Smith regularly as he’s the No. 3 option.
-
Tua is just a league-average QB, but apparently there is no one in the NFL more important to his team.
-
For whatever reason Sam LaPorta is just not an important part of this offense, and it’s not because of Jameson Williams.
-
There were only eight incomplete passes in the Lions-Vikings.
-
Saquon Barkley annihilated the Giants, and they deserve it. Of course, Jalen Hurts stole two goal-line TDs. DeVonta Smith should have just sat this game out.
-
The Giants offensive line without Andrew Thomas is the nut low, and Daniel Jones is probably the last QB you’d pick to handle that. I’ve seen deer navigate highway headlights more proficiently. The only thing the Giants do well is rush the passer.
-
I can’t lie — I turned off the late games at halftime. The Indigenous Peoples were up 27-0, the Chiefs-Niners was an ugly slog, and the other game was Raiders-Rams.
-
The Chiefs are so ugly and so good. They just get the first down when they need to, no matter whether it’s the carcass of Kareem Hunt, Mecole Hardman, Noah Gray, or Patrick Mahomes scrambling. It’s barely watchable.
-
George Kittle is the TE1, but now Brandon Aiyuk is probably out for the year, no one seems to know if or when Christian McCaffrey is back, oft-injured Deebo Samuel left the game with an illness and even Mason is playing through a separated shoulder.
-
Jayden Daniels left early with a rib injury, meaning one of 12 teams in my league did ever-so-slightly worse than my CJ Stroud ones over a full game. It sounds like the injury isn’t serious, but that doesn’t mean he’ll be back next week. If there were ever a team against which to lose your star QB and have it not matter, it’s the 2024 Panthers.
-
But for Brock Bowers and Maxx Crosby, the Raiders would be as pointless as the Titans.
-
The Jets-Steelers was a weird game. The Jets had it under total control, up 15-6 late in the first half, Aaron Rodgers throws a ball off Garrett Wilson’s chest that gets picked, the Steelers score and dominate the rest of the game.
-
Davante Adams wasn’t a big factor — he turns 32 in December, keep in mind — and the Jets are now 2-5.
-
The Jets for God knows what reason take shotgun snaps on 3rd- and 4th-and-1. They have to complete a seven-yard pass to get one yard. It’s the opposite of the Eagles’ ass-smash which makes the game so much easier.
-
Russell Wilson got booed early, but turned in a nice game with deep touch throws to George Pickens and Pat Freiermuth. Najee Harris had another good game, but Jaylen Warren now looks viable too, especially now that someone can throw short passes with a modicum of touch.
-
I’m not a fan of two Monday Night games. Just overkill.
-
@ 6bf975a0:65de1244
2024-10-21 11:56:28We are creating small newsrooms (4 people each) covering the news of December 2040 in a small Russian town (a real town, the one you are from, your choice).
On its first working day, each newsroom reports on what its town has become by 2040.
The outlet runs for two weeks and publishes multimedia news daily, from text to video.
Every day each newsroom publishes 2 news articles with illustrations and one video report. Weekly: one short audio podcast.
Technical basis of the project
- Publishing platform for each newsroom's content: a purpose-made profile on Nostr;
- Tools for creating the town's backstory: generative text models;
- Tools for creating media content: neural networks for generating images, video, music, and sound.
Project working environment
- Docs.iris.to: tables with content plans for each project will be set up there to track and adjust progress;
- Comments and project discussions are also kept in docs.iris.to.
Criteria for a successfully completed project
- Variety of multimedia content: all available formats are used (sound, text, images, video);
- Quality of the content produced: texts are proofread and edited; images are screened for plausibility (people with the standard number of limbs, no zombies), etc.;
- Regularity of publication: mass-media newsrooms publish news daily.
Content rules
- Generated content must not violate local law. The content is published in profiles created by the authors, so the authors bear responsibility for it.
- Generated content must not call for violent or aggressive action against anyone, nor contain demeaning or insulting judgments of any social group on any grounds.
- 2040 represents our hopes for the future, the possibilities we believe will be realized. In that future we overcome environmental and economic problems.
Leave links to your projects in the content-plan tables
-
@ 6bf975a0:65de1244
2024-10-21 10:06:18- What ethical problems could arise from the widespread adoption of VR/AR technologies? (For example: privacy, addiction, manipulation, effects on social interaction.) In its day, it was precisely ethical problems that kept Google Glass from going mainstream.
- How realistic is it to expect VR/AR to replace traditional ways of consuming media? What factors could help or hinder this? (For example: price, availability, ease of use, social acceptance.)
- How likely is it that the metaverse becomes "another layer of the internet" rather than a full replacement for it? (Consider current trends in internet development and technological constraints.)
- What ethical dilemmas are involved in using AI for targeted advertising and content personalization? (For example: manipulation of consumer choice, discrimination, spread of disinformation.)
- Could hyper-personalization lead to "informational isolationism" and reduced social cohesion? How can this be avoided? (Consider the importance of "weak ties" per Mark Granovetter.)
- How will platform unification and cloud technologies affect the media industry?
- How can we counter the "anti-social" tendencies of today's social media and create a more inclusive and useful digital environment?
- Decentralization and modern social media: can new Web 3.0 platforms challenge the popularity of today's big social networks, or will the incumbents adopt the new technologies and crush new projects in the cradle?
-
@ ec42c765:328c0600
2024-10-21 07:42:48March 2024
A trip to Cebu Island in the Philippines. My first time abroad.
When I posted about it on Nostr, I got replies like these:
nostr:nevent1qqsff87kdxh6szf9pe3egtruwfz2uw09rzwr6zwpe7nxwtngmagrhhqc2qwq5
nostr:nevent1qqs9c8fcsw0mcrfuwuzceeq9jqg4exuncvhas5lhrvzpedeqhh30qkcstfluj
(I had been planning this as an ordinary trip, nothing to do with Bitcoin. Not that I think about Bitcoin all the time...)
Come to think of it, how many places in the Philippines accept Bitcoin payments?
Paying with Bitcoin abroad sounds kind of cool!
I want to do it!
Let's try paying with Bitcoin! in Cebu
I searched BTC Map for places that accept Bitcoin payments
The real hotspot is apparently Boracay Island, the so-called Bitcoin Island, but
Cebu had a fair number too!
I just wanted to pay with Bitcoin somewhere, anywhere, so I headed for a nearby shop with something easy to buy
Off to a bubble tea shop!
There's a proper Bitcoin sticker on display!
In broken English, with help from Google Translate, I asked the clerk whether I could pay with Bitcoin, and...
Clerk: "We don't accept Bitcoin payments."
(Whaaat... why? The sticker's right there...)
Well, for whatever reason, they couldn't take it.
I wanted to ask the clerk all sorts of questions, but I didn't have the English for it, so I gave up.
In the end, having come all the way to the shop, I just bought a bubble milk tea with cash
Bubble milk tea
I had never had one, I wasn't interested even when it was all the rage, so this was my first bubble milk tea
It tasted like fiat.
Anywhere will do, anything will do,
I just want to make a Bitcoin payment abroad
Let me pay with Bitcoin! on Boracay Island
Boracay, known as Bitcoin Island, supposedly has tons of places that accept Bitcoin
Though apparently many shops have since stopped
Still, if there were once 300, a few must surely remain!
nostr:nevent1qqsw0n6utldy6y970wcmc6tymk20fdjxt6055890nh8sfjzt64989cslrvd9l
Nothing for it but to go!
To Bitcoin Island
A Philippine domestic flight!
``` Route: Mactan-Cebu International Airport ↓ plane Godofredo P. Ramos Airport (Caticlan International Airport, Boracay Airport) ↓ bus etc. Caticlan ferry terminal ↓ boat Boracay Island
Cost: Flight (with checked baggage), round trip: about 21,000 yen. Airport to hotel on Boracay (bus, boat, fees), round trip: about 3,300 yen (booked Southwest Tours via klook)
This page has plenty of details https://smaryu.com/column/d/91761/ ```
After landing, board the Southwest bus
If you booked online in advance, go to window No. 5
The port!
The boat! (really fast)
Arrived at Boracay!
Getting around Boracay
In Cebu you can use Grab taxis, but Boracay has none.
Searching online, the recommended option is a three-wheeled taxi called a tricycle.
(Tricycle: open-air, and the breeze feels great)
The downside of tricycles is that they overcharge, so you have to haggle.
They open at around 300 PHP and, depending on the destination, can be talked down to about 150 PHP.
The haggling is fun in its own way, but personally I found the bus more relaxing than the tricycles.
Hop On Hop Off bus:
https://www.hohoboracay.com/pass.php
An all-day pass is 250 PHP, so once you factor in round trips and stopovers it's good value.
The bus doesn't take cash, so you buy a card somewhere in advance or on board.
I got on without knowing any of this and bought a card from the attendant on board with cash.
A few buses loop around the small island, so one comes roughly every 20-30 minutes.
Tricycles, by contrast, don't make you wait: flag one down and you're off. That may be their advantage.
Reality
Boracay Island BTC Map
Loads of places supposedly accept BTC
Straight to a shop!
I spot the "bitcoin accepted here" sticker!
I ask the clerk if I can pay with Bitcoin!
They say no!
I try another shop
I spot the "bitcoin accepted here" sticker
I ask the clerk if I can pay with Bitcoin
They say no
I visited about 5 shops
None of them take it!
Heartbreaking
So I searched online for Bitcoin Island and
found a video uploaded about a month before my trip, so I watched it
Summary - Bitcoin payments were rolled out to shops on Boracay by a startup called pouch.ph - The thinking was that becoming Bitcoin Island would boost tourism 10-30%, i.e. hundreds to a thousand Bitcoin users would come - In reality it was 3 to 5 people - So although 200 shops adopted Bitcoin payments, only a handful were ever used - With Bitcoin payments so rare, the clerks forgot how to process them - Shops lost interest and deleted the pouch app
https://youtu.be/uaqx6794ipc?si=Afq58BowY1ZrkwaQ
I see...
Oh well, can't be helped
Pilgrimage
The video showed what used to be pouch's office
This image is apparently from more than six months ago
Now the office is closed and the Bitcoin sign has faded
That sounded fun, so let's go see it!
So I went
The sign's color has faded even more, hasn't it!?
Commemorative photo
This was fun in its own way
The place is around here
https://maps.app.goo.gl/WhpEV35xjmUw367A8
A pretty nice spot right in central Boracay
Everyone, let's make the pilgrimage to the Bitcoin holy site (or what's left of it)!
The last shop
A tip came in from Natto-san
Hmm, there doesn't seem to be much information online from this year either... https://t.co/hiO2R28sfO
— Natto (@madeofsoya) March 22, 2024
This one is relatively recent...? https://t.co/CHLGZuUz04 This was my last hope, so I went expecting nothing. Some kind of Asian restaurant?
The "bitcoin accepted here" sign, by now carrying zero credibility
Can I pay with Bitcoin?
Clerk: "Yes, you can"
What? Really? Bitcoin works?
Clerk: "It works"
It works!!!!
Apparently it actually works.
I ordered something at random and
was shown a printed QR code, so I scanned it
I'd love to say the payment went smoothly from there, but I panicked quite a bit
I don't speak English, and the clerk doesn't know Bitcoin
On top of that, I had made exactly one Bitcoin payment before, in Japan
It turned out to be a Lightning address
The amount to send had to be entered on my side
The clerk would only tell me the price in Philippine pesos
I had no idea how many sats to send
I got really flustered at this point
But then I realized I just had to change my wallet settings
All I had to do was switch the fiat display currency from yen to Philippine pesos
After changing the setting, I entered the amount the clerk quoted and hit send
The payment completed in 2-3 seconds
I did it!
I made a Bitcoin payment abroad!
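For anyone curious, here is a rough sketch of what the wallet was doing behind the scenes with that Lightning address (the LUD-16 convention): the address maps to an HTTPS endpoint that returns the parameters for building the invoice. The address below is made up, not the actual shop's.

```python
# Resolving a Lightning address (LUD-16): name@domain maps to an HTTPS endpoint.
# "shop@example.com" is a hypothetical address for illustration.
import json
import urllib.request

def lnurlp_endpoint(lightning_address: str) -> str:
    name, domain = lightning_address.split("@")
    return f"https://{domain}/.well-known/lnurlp/{name}"

def fetch_pay_params(lightning_address: str) -> dict:
    # Returns the pay parameters: callback URL, minSendable/maxSendable (msat), etc.
    with urllib.request.urlopen(lnurlp_endpoint(lightning_address)) as resp:
        return json.load(resp)

print(lnurlp_endpoint("shop@example.com"))
# -> https://example.com/.well-known/lnurlp/shop
```

The wallet fetches those parameters, asks the callback for an invoice of the amount you entered, and pays it, which is why all I had to do was type in the peso amount.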
Log
I bought something called a PORK CHAR SIU BUN
It was genuinely delicious
Being able to pay with Bitcoin startled me so much that I panicked and ordered only one item, but I should have ordered more
This is the place. Everyone, please go.
Bunbun Boracay
https://maps.app.goo.gl/DX8UWM8Y6sEtzYyK6
And they lived happily ever after
Below, ordinary travel photos
Cebu Island
Swam with whale sharks
Snorkeling at Sumilon Island
Nervously walked through a bit of a downtown? slum? in the back alleys of the market
Bohol Island
A strange-looking mountain
Tarsiers
Local kids showed off their diving for us
Boracay Island
Beach
Sunset
Algae
Boracay has several beaches; White Beach in the southwest, near most of the accommodation, had a lot of algae (it may depend on the season)
Puka Shell Beach on the north side had no algae at all, with beautifully clear water; it was fantastic
Puka Shell Beach
The end!
-
@ 6bcc27d2:b67d296e
2024-10-21 03:54:32This is yugo. This post is the October 19 entry of the "Nostrasia2024 reverse advent calendar." I watched the Nostrasia talks live as they streamed. One talk argued that we should use Nostr to reinvent applications, which got me thinking about what I would want to build; I did a little research and experimenting, so here is a record of that. I also built what is probably the world's first visionOS-compatible Nostr client, a very simple one, which I introduce toward the end.
The talk about reinventing applications was kaiji's presentation, titled "What is Nostr Other Stuff?".
The gist was that by reinventing existing applications on top of the Nostr protocol, we can encourage gradual decentralization without hurting the user experience, and Nostr as a protocol grows in the process.
I had never built anything on Nostr, so starting with almost no knowledge of the specs needed for implementation, I thought about what kind of application I would like to make.
The first thing that came to mind was a networked knowledge base like Scrapbox. I have been running a visionOS study group lately and was considering adopting Scrapbox as the way to share knowledge there.
The Nostr community does have a volunteer-run Scrapbox, but my guess was that people would rather use a Nostr client if one existed, and that no client with comparable utility exists yet.
Could it be built by combining long-form posts, public chat, and the like? Just as I was wondering about that, I learned there is a spec called NIP-54, Wiki.
https://github.com/nostr-protocol/nips/blob/master/54.md
I have not read it properly yet, but since Scrapbox is wiki software too, it looks like a useful reference. It does not seem to have been merged into the formal specs, and the only client I found adopting it is wikistr, fiatjaf's reference implementation(?).
If there were a Nostr client aiming at a Scrapbox-like knowledge base, the visionOS client described below also exists, so being able to reuse the same account would be great. If anyone knows of similar services, please let me know.
We are also currently building a tool of our own to support collaborative work such as study groups, workshops, and hackathons. It runs on visionOS, the platform on Apple Vision Pro.
https://image.nostr.build/14f0c1b8fbe5ce7754825c01b09280a4c22f87bbf3c2fa6d60dd724f98919c34.png
On this screen you choose the space you want to join and start the shared experience.
By syncing content such as slides and each person's avatar, you can share the same space with remote participants as if you were together in person.
https://image.nostr.build/cfb75d3db2a9b9cd39f502d6426d5ef4f264b3d5d693b6fc9762735d2922b85c.jpg
With all that in mind, I hastily built a visionOS client. Searching turned up no existing examples, so it is probably the first such app in the world.
That said, despite calling it a client, it has no real features yet: it is read-only and just fetches data from relays.
https://image.nostr.build/96e088cc6a082528682989ccc12b4312f9cb6277656e491578e32a0851ce50fe.png
In the image it is fetching my profile data from a relay.
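Not the app's actual Swift code, but here is a protocol-level sketch in Python of the same read-only fetch: open a WebSocket to a relay, send a NIP-01 REQ for the kind 0 (profile metadata) event, and read the reply. The relay URL and pubkey are placeholders, and the third-party websockets package is assumed.

```python
# Read-only profile fetch over NIP-01. Placeholder relay and pubkey.
import asyncio
import json
import websockets  # pip install websockets

async def fetch_profile(relay: str, pubkey_hex: str) -> None:
    async with websockets.connect(relay) as ws:
        req = ["REQ", "profile", {"kinds": [0], "authors": [pubkey_hex], "limit": 1}]
        await ws.send(json.dumps(req))
        while True:
            msg = json.loads(await ws.recv())
            if msg[0] == "EVENT":
                # kind 0 content is a JSON string: name, about, picture, ...
                print(json.loads(msg[2]["content"]))
            elif msg[0] == "EOSE":  # end of stored events
                break

asyncio.run(fetch_profile("wss://relay.damus.io", "<hex-pubkey>"))
```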
None of the libraries supported visionOS yet, so there was some struggle, but it was a good way to learn the specs.
One catch: visionOS apps, like iOS apps, cannot use NIP-07, so the app has to hold the secret key itself, and I am not sure yet how best to handle that. I plan to look into it bit by bit, but there do not seem to be many resources on key handling in native apps. If anyone knows this area well, I would be glad to hear from you.
I hope to publish the code once it is ready.
I want to keep playing with Nostr while gradually implementing all kinds of features!
-
@ 9e69e420:d12360c2
2024-10-21 02:44:56long form note
just a test
using habla
I guess I can just copy & paste markdown. Here is a meme to test photos insertion ![[https://i.nostr.build/ob0weHDqLkzAxrcR.jpg]]
What else can this do? Links? Let's try.... https://i.nostr.build/ob0weHDqLkzAxrcR.jpg
And a list:
* Item 1
* Item 2
* Item 3
-
@ 59df1288:92e1744f
2024-10-21 02:37:05