-
@ b99efe77:f3de3616
2025-05-04 18:43:28
TEST SERGEY
TEST SERGEY TEST SERGEY TEST SERGEY TEST SERGEY TEST SERGEY TEST SERGEY
Places & Transitions
- Places:
-
Bla bla bla: some text
-
Transitions:
- start: Initializes the system.
- logTask: bla bla bla.
petrinet
;startDay () -> working
;stopDay working -> ()
;startPause working -> paused
;endPause paused -> working
;goSmoke working -> smoking
;endSmoke smoking -> working
;startEating working -> eating
;stopEating eating -> working
;startCall working -> onCall
;endCall onCall -> working
;startMeeting working -> inMeeting
;endMeeting inMeeting -> working
;logTask working -> working
-
@ c9badfea:610f861a
2025-05-04 18:39:06
- Install Kiwix (it's free and open source)
- Download ZIM files from the Kiwix Library (you will find complete offline versions of Wikipedia, Stack Overflow, Bitcoin Wiki, DevDocs and many more)
- Open the downloaded ZIM files within the Kiwix app
ℹ️ You can also package any website using either Kiwix Zimit (online tool) or the Zimit Docker Container (for technical users)
ℹ️ .zim is the file format used for packaged websites
-
@ 7ef5f1b1:0e0fcd27
2025-05-04 18:28:05
A monthly newsletter by The 256 Foundation
May 2025
Introduction:
Welcome to the fifth newsletter produced by The 256 Foundation! April was a jam-packed month for the Foundation with events ranging from launching three grant projects to the first official Ember One release. The 256 Foundation has been laser focused on our mission to dismantle the proprietary mining empire, signing off on a productive month with the one-finger salute to the incumbent mining cartel.
[IMG-001] Hilarious meme from @CincoDoggos
Dive in to catch up on the latest news, mining industry developments, progress updates on grant projects, Actionable Advice on helping test Hydra Pool, and the current state of the Bitcoin network.
Definitions:
DOJ = Department of Justice
SDNY = Southern District of New York
BTC = Bitcoin
SD = Secure Digital
Th/s = Terahash per second
OSMU = Open Source Miners United
tx = transaction
PSBT = Partially Signed Bitcoin Transaction
FIFO = First In First Out
PPLNS = Pay Per Last N Shares
GB = Gigabyte
RAM = Random Access Memory
ASIC = Application Specific Integrated Circuit
Eh/s = Exahash per second
Ph/s = Petahash per second
News:
April 7: in the first of a few notable news items that relate to the Samourai Wallet case, the US Deputy Attorney General, Todd Blanche, issued a memorandum titled “Ending Regulation By Prosecution”. The memo makes the DOJ’s position on the matter crystal clear, stating: “Specifically, the Department will no longer target virtual currency exchanges, mixing and tumbling services, and offline wallets for the acts of their end users or unwitting violations of regulations…”. However, despite the clarity from the DOJ, the SDNY (sometimes referred to as the “Sovereign District” for its history of acting independently of the DOJ) has yet to budge on dropping the charges against the Samourai Wallet developers. Many are baffled at the SDNY’s continued defiance of the Trump Administration’s directives, especially in light of the recent suspensions and resignations that swept through the SDNY office in the wake of several attorneys refusing to comply with the DOJ’s directive to drop the charges against New York City Mayor, Eric Adams. There is speculation that the missing piece was Trump’s pick to take the helm at the SDNY, Jay Clayton, who was yet to receive his Senate confirmation and didn’t officially start in his new role until April 22. In light of the Blanche Memo, on April 29 the prosecution and defense jointly filed a letter requesting additional time for the prosecution to determine its position on the matter and decide if they are going to do the right thing, comply with the DOJ, and drop the charges. Catch up on what’s at stake in this case with an appearance by Diverter on the Unbounded Podcast from April 24, the one-year anniversary of the Samourai Wallet developers’ arrest. This is the most important case facing Bitcoiners, as the precedent set in this matter will have ripple effects that touch all areas of the ecosystem. The logic used by SDNY prosecutors argues that non-custodial wallet developers transfer money in the same way a frying pan transfers heat but does not “control” the heat. In essence, they argue that facilitating the transfer of funds on behalf of the public by any means constitutes money transmission and thus requires a money transmitter license. All non-custodial wallets (software or hardware), node operators, and even miners would fall neatly into these dangerously generalized and vague definitions. If the SDNY wins this case, all Bitcoiners lose. Make a contribution to the defense fund here.
April 11: a solo miner with ~230Th/s solves Block #891952 on Solo CK Pool, bagging 3.11 BTC in the process. It will never not be exciting to see a regular person with a modest amount of hashrate risk it all and reap the entire mining reward. The more solo miners there are out there, the more often this should occur.
April 15: B10C publishes a new article on mining centralization. The article analyzes the hashrate share of the five biggest pools and presents a Mining Centralization Index. The results demonstrate that only six pools are mining more than 95% of the blocks on the Bitcoin Network. The article goes on to explain that during the period between 2019 and 2022, the top two pools had ~35% of the network hashrate and the top six pools had ~75%. By December 2023 those numbers grew to the top two pools having 55% of the network hashrate and the top six having ~90%. Currently, the top six pools are mining ~95% of the blocks.
[IMG-002] Mining Centralization Index by @0xB10C
B10C concludes the article with a solution that is worth highlighting: “More individuals home-mining with small miners help too, however, the home-mining hashrate is currently still negligible compared to the industrial hashrate.”
April 15: As if miner centralization and proprietary hardware weren’t reason enough to focus on open-source mining solutions, leave it to Bitmain to release an S21+ firmware update that blocks connections to OCEAN and Braiins pools. This is the latest known sketchy development from Bitmain following years of shady behavior like Antbleed, where miners would phone home; Covert ASIC Boost, where miners could use a cryptographic trick to increase efficiency; the infamous Fork Wars; mining empty blocks; and removing the SD card slots. For a mining business to build its entire operation on a fragile foundation like the closed and proprietary Bitmain hardware is asking for trouble. Bitcoin miners need to remain flexible and agile and they need to be able to adapt to changes instantly – the sort of freedoms that only open-source Bitcoin mining solutions are bringing to the table.
Free & Open Mining Industry Developments:
The development will not stop until Bitcoin mining is free and open… and then it will get even better. Innovators did not disappoint in April; here are nine noteworthy events:
April 5: 256 Foundation officially launches three more grant projects. These will be covered in detail in the Grant Project Updates section, but April 5 was a symbolic day to mark the official start because of the 6102 anniversary, a reminder of the asymmetric advantage that freedom tech like Bitcoin gives individuals to protect their rights and freedoms, with open-source development being central to those ends.
April 5: Low-profile ICE Tower+ for the Bitaxe Gamma 601 introduced by @Pleb_Style, featuring four heat pipes, two copper shims, and a 60mm Noctua fan, enabling up to 2Th/s. European customers can pick up the complete upgrade kit from the Pleb Style online store for $93.00.
[IMG-003] Pleb Style ICE Tower+ upgrade kit
April 8: Solo Satoshi spells out issues with Bitaxe knockoffs, like Lucky Miner, in a detailed article titled The Hidden Cost of Bitaxe Clones. This concept can be confusing for some people initially: Bitaxe is open-source, right? So anyone can do whatever they want… right? Based on the specific open-source licenses of the Bitaxe hardware (CERN-OHL-S) and firmware (GPLv3), derivative works are supposed to make their source available. Respecting the license creates a feedback loop where those who benefit from the open-source work of those who came before them contribute back their own modifications and source files to the open-source community so that others can benefit from the new developments. Unfortunately, when the license is disrespected, manufacturers make undocumented changes to the components in the hardware and firmware, which yields unexpected results and creates a number of issues like the Bitaxe overheating, not connecting to WiFi, or outright failure. This issue gets further compounded when the people who purchased the knockoffs go to a community support forum, like OSMU, for help. There, a number of people rack their brains and spend their valuable time trying to replicate the issues, only to find that they cannot, since the person who purchased the knockoff has something different from the known Bitaxe model and the distributor who sold the knockoff did not document those changes. The open-source licenses maintain the end-users’ freedom to do what they want, but if the license is disrespected then that freedom vanishes along with details about whatever was changed. There is a list maintained on the Bitaxe website of legitimate distributors who uphold the open-source licenses; if you want to buy a Bitaxe, use this list to ensure the open-source community is being supported instead of leeched off of.
April 8: The Mempool Open Source Project v3.2.0 launches with a number of highlights including a new UTXO bubble chart, address poisoning detection, and a tx/PSBT preview feature. The GitHub repo can be found here if you want to self-host an instance from your own node or you can access the website here. The Mempool Open Source Project is a great blockchain explorer with a rich feature set and helpful visualization tools.
[IMG-004] Address poisoning example
April 8: @k1ix publishes bitaxe-raw, a firmware for the ESP32S3 found on Bitaxes which enables the user to send and receive raw bytes over USB serial to and from the Bitaxe. This is a helpful tool for research and development and a tool that is being leveraged at The 256 Foundation for helping with the Mujina miner firmware development. The bitaxe-raw GitHub repo can be found here.
April 14: Rev.Hodl compiles many of his homestead-meets-mining adaptations including how he cooks meat sous-vide style, heats his tap water to 150°F, runs a hashing space heater, and how he upgraded his clothes dryer to use Bitcoin miners. If you are interested in seeing some creative and resourceful home mining integrations, look no further. The fact that Rev.Hodl was able to do all this with closed-source proprietary Bitcoin mining hardware makes a very bullish case for the innovations coming down the pike once the hardware and firmware are open-source and people can gain full control over their mining appliances.
April 21: Hashpool explained on The Home Mining Podcast, an innovative Bitcoin mining pool development that trades mining shares for ecash tokens. The pool issues an “ehash” token for every submitted share, the pool uses ecash epochs to approximate the age of those shares in a FIFO order as they accrue value, a rotating key set is used to eventually expire them, and finally the pool publishes verification proofs for each epoch and each solved block. The ehash is provably not inflatable and payouts are similar to the PPLNS model. In addition to the maturity window where ehash tokens are accruing value, there is also a redemption window where the ehash tokens can be traded in to the mint for bitcoin. There is also a bitcoin++ presentation from earlier this year where @vnprc explains the architecture.
April 26: Boerst adds a new page on stratum.work for block template details: you can click on any mining pool and see the extended details and visualization of its current block template. Updates happen in real time. The page displays all available template data, including the OP_RETURN field, and if the pool is merge mining, like with RSK, then that will be displayed too. Stratum dot work is a great project that offers helpful mining insights; be sure to bookmark it if you haven’t already.
[IMG-005] New stratum.work live template page
April 27: Public Pool patches a Nerdminer exploit that made it possible to create the impression that a user’s Nerdminer was hashing many times more than it actually was. This exploit was used by scammers trying to convince people that they had a special firmware for the Nerdminer that would make it hash much better. In actuality, Public Pool just wasn’t checking whether submitted shares were duplicates. The scammers would simply tweak the Nerdminer firmware so that valid shares were submitted five times, creating the impression that the miner was hashing at five times the actual hashrate. Thankfully this was uncovered by the open-source community and Public Pool quickly addressed it on their end.
Grant Project Updates:
Three grant projects were launched on April 5, Mujina Mining Firmware, Hydra Pool, and Libre Board. Ember One was the first fully funded grant and launched in November 2024 for a six month duration.
Ember One:
@skot9000 is the lead engineer on the Ember One, and April 30 marked the conclusion of the first grant cycle after six months of development, culminating in a standardized hashboard featuring ~100W power consumption, a 12-24V input voltage range, USB-C data communication, on-board temperature sensors, and a 125mm x 125mm form factor. There are several Ember One versions on the roadmap, each with a different kind of ASIC chip but staying true to the standardized features listed above. The first Ember One, the 00 version, was built with the Bitmain BM1362 ASIC chips. The first official release of the Ember One, v3, is available here. v4 is already being worked on and will incorporate a few circuit safety mechanisms that are pretty exciting, like protecting the ASIC chips in the event of a power supply failure. The firmware for the USB adaptor is available here. Initial testing firmware for the Ember One 00 can be found here, and full firmware support will be coming soon with Mujina. The Ember One does not have an on-board controller, so a separate, USB-connected control board is required. Control board support is coming soon with the Libre Board. There is an in-depth schematic review that was recorded with Skot and Ryan, the lead developer for Mujina; you can see that video here. Timing for starting the second Ember One cycle is to be determined, but the next version of the Ember One is planned to have the Intel BZM2 ASICs. Learn more at emberone.org
Mujina Mining Firmware:
@ryankuester is the lead developer for the Mujina firmware project and since the project launched on April 5, he has been working diligently to build this firmware from scratch in Rust. By using the bitaxe-raw firmware mentioned above, over the last month Ryan has been able to use a Bitaxe to simulate an Ember One so that he can start building the necessary interfaces to communicate with the range of sensors, ASICs, work handling, and API requests that will be necessary. For example, using a logic analyzer, this is what the first signs of life look like when communicating with an ASIC chip, the orange trace is a message being sent to the ASIC and the red trace below it is the ASIC responding [IMG-006]. The next step is to see if work can be sent to the ASIC and results returned. The GitHub repo for Mujina is currently set to private until a solid foundation has been built. Learn more at mujina.org
[IMG-006] First signs of life from an ASIC
Libre Board:
@Schnitzel is the lead engineer for the Libre Board project and over the last month has been modifying the Raspberry Pi Compute Module I/O Board open-source design to fit the requirements for this project. For example, removing one of the two HDMI ports, adding the 40-pin header, and adapting the voltage regulator circuit so that it can accept the same 12-24vdc range as the Ember One hashboards. The GitHub repo can be found here, although there isn’t much to look at yet as the designs are still in the works. If you have feature requests, creating an issue in the GitHub repo would be a good place to start. Learn more at libreboard.org
Hydra Pool:
@jungly is the lead developer for Hydra Pool, and over the last month he has developed a working early version of Hydra Pool specifically for the upcoming Telehash #2. Forked from CK Pool, this early version has been modified so that the payout goes to the 256 Foundation bitcoin address automatically. This way, users who are supporting the fundraiser with their hashrate do not need to copy/paste in the bitcoin address; they can just use any vanity username they want. Jungly was also able to get a great looking statistics dashboard forked from CKstats and modify it so that the data is populated from the Hydra Pool server instead of website crawling. After the Telehash, the next steps will be setting up deployment scripts for running Hydra Pool on a cloud server, support for storing shares in a database, and adding PPLNS support. The 256 Foundation is only running a publicly accessible server for the Telehash, and the long term goal for Hydra Pool is that users host their own instances. The 256 Foundation has no plans on becoming a mining pool operator. The following Actionable Advice column shows you how you can help test Hydra Pool. The GitHub repo for Hydra Pool can be found here. Learn more at hydrapool.org
Actionable Advice:
The 256 Foundation is looking for testers to help try out Hydra Pool. The current instance is on a hosted bare metal server in Florida and features 64 cores and 128 GB of RAM. One tester in Europe shared that they were only experiencing ~70ms of latency which is good. If you want to help test Hydra Pool out and give any feedback, you can follow the directions below and join The 256 Foundation public forum on Telegram here.
The first step is to configure your miner so that it is pointed to the Hydra Pool server. This can look different depending on your specific miner but generally speaking, from the settings page you can add the following URL:
stratum+tcp://stratum.hydrapool.org:3333
On some miners, you don’t need the “stratum+tcp://” part or the port, “:3333”, in the URL dialog box and there may be separate dialog boxes for the port.
Use any vanity username you want, no need to add a BTC address. The test iteration of Hydra Pool is configured to payout to the 256 Foundation BTC address.
If your miner has a password field, you can just put “x” or “1234”, it doesn’t matter and this field is ignored.
Then save your changes and restart your miner. Here are two examples of what this can look like using a Futurebit Apollo and a Bitaxe:
[IMG-007] Apollo configured to Hydra Pool
[IMG-008] Bitaxe Configured to Hydra Pool
Once you get started, be sure to check stats.hydrapool.org to monitor the solo pool statistics.
[IMG-009] Ember One hashing to Hydra Pool
At the last Telehash there were over 350 entities pointing as much as 1.12Eh/s at the fundraiser at the peak. At the time the block was found there was closer to 800 Ph/s of hashrate. At this next Telehash, The 256 Foundation is looking to beat the previous records across the board. You can find all the Telehash details on the Meetup page here.
State of the Network:
Hashrate on the 14-day MA according to mempool.space increased from ~826 Eh/s to a peak of ~907 Eh/s on April 16 before cooling off and finishing the month at ~841 Eh/s, marking ~1.8% growth for the month.
[IMG-010] 2025 hashrate/difficulty chart from mempool.space
Difficulty was 113.76T at its lowest in April and 123.23T at its highest, an 8.3% increase for the month. But difficulty dropped with Epoch #444 just after the end of the month on May 3, bringing a -3.3% downward adjustment. Altogether for 2025, up to Epoch #444, difficulty has gone up ~8.5%.
According to the Hashrate Index, ASIC prices have flat-lined over the last month. The more efficient miners like the <19 J/Th models are fetching $17.29 per terahash, models between 19J/Th – 25J/Th are selling for $11.05 per terahash, and models >25J/Th are selling for $3.20 per terahash. You can expect to pay roughly $4,000 for a new-gen miner with 230+ Th/s.
[IMG-011] Miner Prices from Luxor’s Hashrate Index
Hashvalue over the month of April dropped from ~56,000 sats/Ph per day to ~52,000 sats/Ph per day, according to the new and improved Braiins Insights dashboard [IMG-012]. Hashprice started out at $46.00/Ph per day at the beginning of April and climbed to $49.00/Ph per day by the end of the month.
[IMG-012] Hashprice/Hashvalue from Braiins Insights
The next halving will occur at block height 1,050,000 which should be in roughly 1,063 days or in other words ~154,650 blocks from time of publishing this newsletter.
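As a quick sanity check, the countdown can be derived from the block count alone. The sketch below assumes the ideal 10-minute block interval, so it lands slightly above the newsletter's ~1,063-day figure, which reflects the somewhat faster block times observed in practice:

```python
# Rough halving countdown, assuming the ideal 10-minute block interval
blocks_remaining = 154_650
days = blocks_remaining * 10 / (60 * 24)
print(f"~{days:.0f} days")  # ~1074; actual intervals tend to run slightly under 10 minutes
```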
Conclusion:
Thank you for reading the fifth 256 Foundation newsletter. Keep an eye out for more newsletters on a monthly basis in your email inbox by subscribing at 256foundation.org. Or you can download .pdf versions of the newsletters from there as well. You can also find these newsletters published in article form on Nostr.
If you haven’t done so already, be sure to RSVP for the Texas Energy & Mining Summit (“TEMS”) in Austin, Texas on May 6 & 7 for two days of the highest Bitcoin mining and energy signal in the industry, set in the intimate Bitcoin Commons, so you can meet and mingle with the best and brightest movers and shakers in the space.
[IMG-013] TEMS 2025 flyer
While you’re at it, extend your stay and spend Cinco De Mayo with The 256 Foundation at our second fundraiser, Telehash #2. Everything is bigger in Texas, so set your expectations high for this one. All of the lead developers from the grant projects will be present to talk first-hand about how to dismantle the proprietary mining empire.
If you are interested in helping The 256 Foundation test Hydra Pool, then hopefully you found all the information you need to configure your miner in this issue.
[IMG-014] FREE SAMOURAI
If you want to continue seeing developers build free and open solutions be sure to support the Samourai Wallet developers by making a tax-deductible contribution to their legal defense fund here. The first step in ensuring a future of free and open Bitcoin development starts with freeing these developers.
Live Free or Die,
-econoalchemist
-
@ 51faaa77:2c26615b
2025-05-04 17:52:33
There has been a lot of debate about a recent discussion on the mailing list and a pull request on the Bitcoin Core repository. The main two points are about whether a mempool policy regarding OP_RETURN outputs should be changed, and whether there should be a configuration option for node operators to set their own limit. There has been some controversy about the background and context of these topics and people are looking for more information. Please ask short (preferably one sentence) questions as top comments in this topic. @Murch, and maybe others, will try to answer them in a couple sentences. @Murch and myself have collected a few questions that we have seen being asked to start us off, but please add more as you see fit.
originally posted at https://stacker.news/items/971277
-
@ 90c656ff:9383fd4e
2025-05-04 17:48:58
The Bitcoin network was designed to be secure, decentralized, and resistant to censorship. However, as its usage grows, an important challenge arises: scalability. This term refers to the network's ability to manage an increasing number of transactions without affecting performance or security. This challenge has sparked the speed dilemma, which involves balancing transaction speed with the preservation of decentralization and security that the blockchain or timechain provides.
Scalability is the ability of a system to increase its performance to meet higher demands. In the case of Bitcoin, this means processing a greater number of transactions per second (TPS) without compromising the network's core principles.
Currently, the Bitcoin network processes about 7 transactions per second, a number considered low compared to traditional systems, such as credit card networks, which can process thousands of transactions per second. This limit is directly due to the fixed block size (1 MB) and the average 10-minute interval for creating a new block in the blockchain or timechain.
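The ~7 TPS figure falls out of simple arithmetic. Here is a quick sketch, assuming roughly 1 MB of transaction data per block and a ~250-byte average transaction (both are assumptions for illustration, since real transaction sizes vary widely):

```python
# Back-of-the-envelope Bitcoin throughput estimate
block_bytes = 1_000_000      # ~1 MB of transaction data per block (assumption)
avg_tx_bytes = 250           # average transaction size (assumption)
block_interval_s = 600       # one block every ~10 minutes

tps = (block_bytes / avg_tx_bytes) / block_interval_s
print(f"~{tps:.1f} transactions per second")  # ~6.7, consistent with the ~7 TPS figure
```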
The speed dilemma arises from the need to balance three essential elements: decentralization, security, and speed.
The Timechain/"Blockchain" Trilemma:
01 - Decentralization: The Bitcoin network is composed of thousands of independent nodes that verify and validate transactions. Increasing the block size or making them faster could raise computational requirements, making it harder for smaller nodes to participate and affecting decentralization.
02 - Security: Security comes from the mining process and block validation. Increasing transaction speed could compromise security, as it would reduce the time needed to verify each block, making the network more vulnerable to attacks.
03 - Speed: The need to confirm transactions quickly is crucial for Bitcoin to be used as a payment method in everyday life. However, prioritizing speed could affect both security and decentralization.
This dilemma requires balanced solutions to expand the network without sacrificing its core features.
Solutions to the Scalability Problem
Several solutions have been suggested to address the scalability and speed challenges in the Bitcoin network.
- On-Chain Optimization
01 - Segregated Witness (SegWit): Implemented in 2017, SegWit separates signature data from transactions, allowing more efficient use of space in blocks and increasing capacity without changing the block size.
02 - Increasing Block Size: Some proposals have suggested increasing the block size to allow more transactions per block. However, this could make the system more centralized as it would require greater computational power.
- Off-Chain Solutions
01 - Lightning Network: A second-layer solution that enables fast and low-cost transactions off the main blockchain or timechain. These transactions are later settled on the main network, maintaining security and decentralization.
02 - Payment Channels: Allow direct transactions between two users without the need to record every action on the network, reducing congestion.
03 - Sidechains: Proposals that create parallel networks connected to the main blockchain or timechain, providing more flexibility and processing capacity.
While these solutions bring significant improvements, they also present issues. For example, the Lightning Network depends on payment channels that require initial liquidity, limiting its widespread adoption. Increasing block size could make the system more susceptible to centralization, impacting network security.
Additionally, second-layer solutions may require extra trust between participants, which could weaken the decentralization and resistance to censorship principles that Bitcoin advocates.
Another important point is the need for large-scale adoption. Even with technological advancements, solutions will only be effective if they are widely used and accepted by users and developers.
In summary, scalability and the speed dilemma represent one of the greatest technical challenges for the Bitcoin network. While security and decentralization are essential to maintaining the system's original principles, the need for fast and efficient transactions makes scalability an urgent issue.
Solutions like SegWit and the Lightning Network have shown promising progress, but still face technical and adoption barriers. The balance between speed, security, and decentralization remains a central goal for Bitcoin’s future.
Thus, the continuous pursuit of innovation and improvement is essential for Bitcoin to maintain its relevance as a reliable and efficient network, capable of supporting global growth and adoption without compromising its core values.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ a5ee4475:2ca75401
2025-05-04 17:22:36
clients #list #descentralismo #english #article #finalversion
*These clients are generally applications on the Nostr network that allow you to use the same account, regardless of the app used, keeping your messages and profile intact.
**However, some clients may have specific access requirements or rely on particular account NIPs, so check these beforehand to make sure you can access them securely and use their features correctly.
CLIENTS
Twitter like
- Nostrmo - [source] 🌐🤖🍎💻(🐧🪟🍎)
- Coracle - Super App [source] 🌐
- Amethyst - Super App with note edit, delete and other stuff with Tor [source] 🤖
- Primal - Social and wallet [source] 🌐🤖🍎
- Iris - [source] 🌐🤖🍎
- Current - [source] 🤖🍎
- FreeFrom 🤖🍎
- Openvibe - Nostr and others (new Plebstr) [source] 🤖🍎
- Snort 🌐(🤖[early access]) [source]
- Damus 🍎 [source]
- Nos 🍎 [source]
- Nostur 🍎 [source]
- NostrBand 🌐 [info] [source]
- Yana 🤖🍎🌐💻(🐧) [source]
- Nostribe [on development] 🌐 [source]
- Lume 💻(🐧🪟🍎) [info] [source]
- Gossip - [source] 💻(🐧🪟🍎)
- Camelus [early access] 🤖 [source]
Communities
- noStrudel - Gamified Experience [info] 🌐
- Nostr Kiwi [creator] 🌐
- Satellite [info] 🌐
- Flotilla - [source] 🌐🐧
- Chachi - [source] 🌐
- Futr - Coded in haskell [source] 🐧 (others soon)
- Soapbox - Community server [info] [source] 🌐
- Ditto - Soapbox community server [source] 🌐
- Cobrafuma - Nostr brazilian community on Ditto [info] 🌐
- Zapddit - Reddit like [source] 🌐
- Voyage (Reddit like) [on development] 🤖
Wiki
Search
- Advanced nostr search - Advanced note search by isolated terms related to a npub profile [source] 🌐
- Nos Today - Global note search by isolated terms [info] [source] 🌐
- Nostr Search Engine - API for Nostr clients [source]
Website
App Store
ZapStore - Permitionless App Store [source]
Audio and Video Transmission
- Nostr Nests - Audio Chats 🌐 [info]
- Fountain - Podcast 🤖🍎 [info]
- ZapStream - Live streaming 🌐 [info]
- Corny Chat - Audio Chat 🌐 [info]
Video Streaming
Music
- Tidal - Music Streaming [source] [about] [info] 🤖🍎🌐
- Wavlake - Music Streaming [source] 🌐(🤖🍎 [early access])
- Tunestr - Musical Events [source] [about] 🌐
- Stemstr - Musical Colab (paid to post) [source] [about] 🌐
Images
- Pinstr - Pinterest like [source] 🌐
- Slidestr - DeviantArt like [source] 🌐
- Memestr - ifunny like [source] 🌐
Download and Upload
Documents, graphics and tables
- Mindstr - Mind maps [source] 🌐
- Docstr - Share Docs [info] [source] 🌐
- Formstr - Share Forms [info] 🌐
- Sheetstr - Share Spreadsheets [source] 🌐
- Slide Maker - Share slides 🌐 (advice: https://zaplinks.lol/ and https://zaplinks.lol/slides/ sites are down)
Health
- Sobrkey - Sobriety and mental health [source] 🌐
- NosFabrica - Finding ways for your health data 🌐
- LazerEyes - Eye prescription by DM [source] 🌐
Forum
- OddBean - Hacker News like [info] [source] 🌐
- LowEnt - Forum [info] 🌐
- Swarmstr - Q&A / FAQ [info] 🌐
- Staker News - Hacker News like 🌐 [info]
Direct Messenges (DM)
- 0xchat 🤖🍎 [source]
- Nostr Chat 🌐🍎 [source]
- Blowater 🌐 [source]
- Anigma (new nostrgram) - Telegram based [on development] [source]
- Keychat - Signal based [🤖🍎 on development] [source]
Reading
- Highlighter - Insights with a highlighted read 🌐 [info]
- Zephyr - Calming to Read 🌐 [info]
- Flycat - Clean and Healthy Feed 🌐 [info]
- Nosta - Check Profiles [on development] 🌐 [info]
- Alexandria - e-Reader and Nostr Knowledge Base (NKB) [source]
Writing
Lists
- Following - Users list [source] 🌐
- Listr - Lists [source] 🌐
- Nostr potatoes - Movies List source 💻(numpy)
Market and Jobs
- Shopstr - Buy and Sell [source] 🌐
- Nostr Market - Buy and Sell 🌐
- Plebeian Market - Buy and Sell [source] 🌐
- Ostrich Work - Jobs [source] 🌐
- Nostrocket - Jobs [source] 🌐
Data Vending Machines - DVM (NIP90)
(Data-processing tools)
AI
Games
- Chesstr - Chess 🌐 [source]
- Jestr - Chess [source] 🌐
- Snakestr - Snake game [source] 🌐
- DEG Mods - Decentralized Game Mods [info] [source] 🌐
Customization
Like other Services
- Olas - Instagram like [source] 🤖🍎🌐
- Nostree - Linktree like 🌐
- Rabbit - TweetDeck like [info] 🌐
- Zaplinks - Nostr links 🌐
- Omeglestr - Omegle-like Random Chats [source] 🌐
General Uses
- Njump - HTML text gateway source 🌐
- Filestr - HTML midia gateway [source] 🌐
- W3 - Nostr URL shortener [source] 🌐
- Playground - Test Nostr filters [source] 🌐
- Spring - Browser 🌐
Places
- Wherostr - Travel and show where you are
- Arc Map (Mapstr) - Bitcoin Map [info]
Driver and Delivery
- RoadRunner - Uber like [on development] ⏱️
- Arcade City - Uber like [on development] ⏱️ [info]
- Nostrlivery - iFood like [on development] ⏱️
OTHER STUFF
Lightning Wallets (zap)
- Alby - Native and extension [info] 🌐
- ZBD - Gaming and Social [info] 🤖🍎
- Wallet of Satoshi [info] 🤖🍎
- Minibits - Cashu mobile wallet [info] 🤖
- Blink - Opensource custodial wallet (KYC over 1000 usd) [source] 🤖🍎
- LNbits - App and extesion [source] 🤖🍎💻
- Zeus - [info] [source] 🤖🍎
Exchange
Media Server (Upload Links)
audio, image and video
- Nostr Build - [source] 🌐
- Nostr Check - [info] [source] 🌐
- NostPic - [source] 🌐
- Sovbit 🌐
- Voidcat - [source] 🌐
Without NIP:
- Pomf - Upload larger videos [source]
- Catbox - [source]
- x0 - [source]
Donation and payments
- Zapper - Easy Zaps [source] 🌐
- Autozap [source] 🌐
- Zapmeacoffee 🌐
- Nostr Zap 💻(numpy)
- Creatr - Creators subscription 🌐
- Geyzer - Crowdfunding [info] [source] 🌐
- Heya! - Crowdfunding [source]
Security
- Secret Border - Generate offline keys 💻(java)
- Umbrel - Your private relay [source] 🌐
Extensions
- Nos2x - Account access keys 🌐
- Nsec.app 🌐 [info]
- Lume - [info] [source] 🐧🪟🍎
- Satcom - Share files to discuss - [info] 🌐
- KeysBand - Multi-key signing [source] 🌐
Code
- Nostrify - Share Nostr Frameworks 🌐
- Git Workshop (github like) [experimental] 🌐
- Gitstr (github like) [on development] ⏱️
- Osty [on development] [info] 🌐
- Python Nostr - Python Library for Nostr
Relay Check and Cloud
- Nostr Watch - See your relay speed 🌐
- NosDrive - Nostr Relay that saves to Google Drive
Bridges and Gateways
- Matrixtr Bridge - Between Matrix & Nostr
- Mostr - Between Nostr & Fediverse
- Nostrss - RSS to Nostr
- Rsslay - Optimized RSS to Nostr [source]
- Atomstr - RSS/Atom to Nostr [source]
NOT RELATED TO NOSTR
Android Keyboards
Personal notes and texts
Front-ends
- Nitter - Twitter / X without your data [source]
- NewPipe - Youtube, Peertube and others, without account & your data [source] 🤖
- Piped - Youtube web without you data [source] 🌐
Other Services
- Brave - Browser [source]
- DuckDuckGo - Search [source]
- LLaMA - Meta's open source AI [source]
- DuckDuckGo AI Chat - Famous AIs without Login [source]
- Proton Mail - Mail [source]
Other open source index: Degoogled Apps
Some other Nostr index on:
-
@ c7aa97dc:0d12c810
2025-05-04 17:06:47
COLDCARD's new Co-Sign feature lets you use a multisig (2 of N) wallet where the second key (policy key) lives inside the same COLDCARD and signs only when a transaction meets the rules you set, for example:
- Maximum amount per send (e.g. 500k Sats)
- Wait time between sends, (e.g 144 blocks = 1 day)
- Only send to approved addresses,
- Only send after you provide a 2FA code
If a payment follows the rules, COLDCARD automatically signs the transaction with 2 keys, which makes it feel like a single-sig wallet.
Break a rule and the device only signs with 1 key, so nothing moves unless you sign the transaction with a separate off-site recovery key.
It’s the convenience of single-sig with the guard-rails of multisig.
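Conceptually, the policy key's decision boils down to a simple rule check. The sketch below is illustrative only, not COLDCARD's actual firmware logic; the function name, parameters, and thresholds are all assumptions made up for the example:

```python
def cosign_decision(amount_sats, blocks_since_last_send, destination,
                    approved_addresses, max_send=500_000, wait_blocks=144):
    """Illustrative policy check; not COLDCARD's real implementation."""
    follows_rules = (
        amount_sats <= max_send
        and blocks_since_last_send >= wait_blocks
        and destination in approved_addresses
    )
    # Two signatures complete the 2-of-N multisig; one signature leaves the
    # transaction unspendable without the off-site recovery key.
    return 2 if follows_rules else 1
```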
Use Cases Unlocked
Below you will find an overview of usecases unlocked by this security enhancing feature for everyday bitcoiners, families, and small businesses.
1. Travel Lock-Down Mode
Before you leave, set the wait-time to match the duration of your trip—say 14 days—and cap each spend at 50k sats. If someone finds the COLDCARD while you’re away, they can take only one 50k-sat nibble and then must wait the full two weeks—long after you’re back—to try again. When you notice your device is gone you can quickly restore your wallet with your backup seeds (not in your house of course) and move all the funds to a new wallet.
2. Shared-Safety Wallet for Parents or Friends
Help your parents or friends set up a COLDCARD with Co-Sign, cap each spend at 500,000 sats, and enforce a 7-day gap between transactions. Everyday spending sails through; anything larger waits for your co-signature from your key. A thief can’t steal more than the capped amount per week, and your parents retain full sovereignty: if you disappear, they still hold two backup seeds and can either withdraw slowly under the limits or import those seeds into another signer and move everything at once.
3. My First COLDCARD Wallet
Give your kid a COLDCARD, but whitelist only their own addresses and set a 100k sat ceiling. They learn self-custody, yet external spends still need you to co-sign.
4. Weekend-Only Spending Wallet
Cap each withdrawal (e.g., 500k sats) and require a 72-hour gap between sends. You can still top-up Lightning channels or pay bills weekly, but attackers that have access to your device + pin will not be able to drain it immediately.
5. DIY Business Treasury
Finance staff use the COLDCARD to pay routine invoices under 0.1 BTC. Anything larger needs the co-founder’s off-site backup key.
6. Donation / Grant Disbursement Wallet
Publish the deposit address publicly, but allow outgoing payments only to a fixed list of beneficiary addresses. Even if attackers get the device, they can’t redirect funds to themselves—the policy key refuses to sign.
7. Phoenix Lightning Wallet Top-Up
Add a Phoenix Lightning wallet's on-chain deposit addresses to the whitelist. The COLDCARD will co-sign only when you’re refilling channels. This is of course not limited to the Phoenix wallet and can be used for any Lightning node.
8. Deep Cold-Storage Bridge
Whitelist one or more addresses from your bitcoin vault. Day-to-day you sweep incoming hot-wallet funds (from a webshop or Lightning node) into the COLDCARD, then push funds onward to deep cold storage. If the device is compromised, coins can only land safely in the vault.
9. Company Treasury → Payroll Wallets
List each employee’s salary wallet on the whitelist (watch out for address re-use) and cap the amount per send. Routine payroll runs smoothly, while attackers or rogue insiders can’t reroute funds elsewhere.
10. Phone Spending-Wallet Refills
Whitelist only some deposit addresses of your mobile wallet and set a small per-send cap. You can top up anytime, but an attacker with the device and PIN can’t drain more than the refill limit—and only to your own phone.
I hope these use cases are helpful, and I'm curious to hear what other use cases you think are possible with this co-signing feature.
For deeper technical details on how Co-Sign works, refer to the official documentation on the Coldcard website. https://coldcard.com/docs/coldcard-cosigning/
You can also watch their Video https://www.youtube.com/watch?v=MjMPDUWWegw
coldcard #coinkite #bitcoin #selfcustody #multisig #mk4 #ccq
nostr:npub1az9xj85cmxv8e9j9y80lvqp97crsqdu2fpu3srwthd99qfu9qsgstam8y8 nostr:npub12ctjk5lhxp6sks8x83gpk9sx3hvk5fz70uz4ze6uplkfs9lwjmsq2rc5ky
-
@ 90c656ff:9383fd4e
2025-05-04 17:06:06
In the Bitcoin system, the protection and ownership of funds are ensured by a cryptographic model that uses private and public keys. These components are fundamental to digital security, allowing users to manage and safeguard their assets in a decentralized way. This process removes the need for intermediaries, ensuring that only the legitimate owner has access to the balance linked to a specific address on the blockchain or timechain.
Private and public keys are part of an asymmetric cryptographic system, where two distinct but mathematically linked codes are used to guarantee the security and authenticity of transactions.
Private Key = A secret code, usually represented as a long string of numbers and letters.
Functions like a password that gives the owner control over the bitcoins tied to a specific address.
Must be kept completely secret, as anyone with access to it can move the corresponding funds.
Public Key = Mathematically derived from the private key, but it cannot be used to uncover the private key.
Functions as a digital address, similar to a bank account number, and can be freely shared to receive payments.
Used to verify the authenticity of signatures generated with the private key.
Together, these keys ensure that transactions are secure and verifiable, eliminating the need for intermediaries.
The functioning of private and public keys is based on elliptic curve cryptography. When a user wants to send bitcoins, they use their private key to digitally sign the transaction. This signature is unique for each operation and proves that the sender possesses the private key linked to the sending address.
Bitcoin network nodes check this signature using the corresponding public key to ensure that:
01 - The signature is valid.
02 - The transaction has not been altered since it was signed.
03 - The sender is the legitimate owner of the funds.
If the signature is valid, the transaction is recorded on the blockchain or timechain and becomes irreversible. This process protects funds against fraud and double-spending.
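For illustration, here is a minimal sketch of that sign-and-verify flow using the Python `ecdsa` package on secp256k1, the curve Bitcoin uses. The message is a stand-in: real Bitcoin transactions are signed over serialized transaction data, not free text.

```python
from ecdsa import SigningKey, SECP256k1  # assumes the `ecdsa` package is installed

sk = SigningKey.generate(curve=SECP256k1)  # private key: must stay secret
vk = sk.get_verifying_key()                # public key: derived from it, safe to share

message = b"placeholder for serialized transaction data"
signature = sk.sign(message)               # only the private-key holder can produce this

# Anyone holding the public key can check the signature; verify() raises
# BadSignatureError if the message or signature was altered.
assert vk.verify(signature, message)
```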
The security of private keys is one of the most critical aspects of the Bitcoin system. Losing this key means permanently losing access to the funds, as there is no central authority capable of recovering it.
- Best practices for protecting private keys include:
01 - Offline storage: Keep them away from internet-connected networks to reduce the risk of cyberattacks.
02 - Hardware wallets: Physical devices dedicated to securely storing private keys.
03 - Backups and redundancy: Maintain backup copies in safe and separate locations.
04 - Additional encryption: Protect digital files containing private keys with strong passwords and encryption.
- Common threats include:
01 - Phishing and malware: Attacks that attempt to trick users into revealing their keys.
02 - Physical theft: If keys are stored on physical devices.
03 - Loss of passwords and backups: Which can lead to permanent loss of funds.
Using private and public keys gives the owner full control over their funds, eliminating intermediaries such as banks or governments. This model places the responsibility of protection on the user, which represents both freedom and risk.
Unlike traditional financial systems, where institutions can reverse transactions or freeze accounts, in the Bitcoin system, possession of the private key is the only proof of ownership. This principle is often summarized by the phrase: "Not your keys, not your coins."
This approach strengthens financial sovereignty, allowing individuals to store and move value independently and without censorship.
Despite its security, the key-based system also carries risks. If a private key is lost or forgotten, there is no way to recover the associated funds. This has already led to the permanent loss of millions of bitcoins over the years.
To reduce this risk, many users rely on seed phrases, which are a list of words used to recover wallets and private keys. These phrases must be guarded just as carefully, as they can also grant access to funds.
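As an illustration, BIP-39 seed phrases can be generated with the Python `mnemonic` package; this is a sketch for understanding the mechanism, not a recommendation to generate real keys on an internet-connected machine:

```python
from mnemonic import Mnemonic  # assumes the `mnemonic` (BIP-39) package is installed

mnemo = Mnemonic("english")
phrase = mnemo.generate(strength=128)        # 128 bits of entropy -> 12 words
seed = mnemo.to_seed(phrase, passphrase="")  # wallets derive private keys from this seed
print(phrase)
```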
In summary, private and public keys are the foundation of security and ownership in the Bitcoin system. They ensure that only rightful owners can move their funds, enabling a decentralized, secure, and censorship-resistant financial system.
However, this freedom comes with great responsibility, requiring users to adopt strict practices to protect their private keys. Loss or compromise of these keys can lead to irreversible consequences, highlighting the importance of education and preparation when using Bitcoin.
Thus, the cryptographic key model not only enhances security but also represents the essence of the financial independence that Bitcoin enables.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ b99efe77:f3de3616
2025-05-04 16:59:35
fafasdf
asdfasfasf
Places & Transitions
- Places:
-
Bla bla bla: some text
-
Transitions:
- start: Initializes the system.
- logTask: bla bla bla.
petrinet
;startDay () -> working
;stopDay working -> ()
;startPause working -> paused
;endPause paused -> working
;goSmoke working -> smoking
;endSmoke smoking -> working
;startEating working -> eating
;stopEating eating -> working
;startCall working -> onCall
;endCall onCall -> working
;startMeeting working -> inMeeting
;endMeeting inMeeting -> working
;logTask working -> working
-
@ 90c656ff:9383fd4e
2025-05-04 16:49:19
The Bitcoin network is built on a decentralized infrastructure made up of devices called nodes. These nodes play a crucial role in validating, verifying, and maintaining the system, ensuring the security and integrity of the blockchain or timechain. Unlike traditional systems where a central authority controls operations, the Bitcoin network relies on the collaboration of thousands of nodes around the world, promoting decentralization and transparency.
In the Bitcoin network, a node is any computer connected to the system that participates in storing, validating, or distributing information. These devices run Bitcoin software and can operate at different levels of participation, from basic data transmission to full validation of transactions and blocks.
There are two main types of nodes:
- Full Nodes:
01 - Store a complete copy of the blockchain or timechain.
02 - Validate and verify all transactions and blocks according to the protocol rules.
03 - Ensure network security by rejecting invalid transactions or fraudulent attempts.
- Light Nodes:
01 - Store only parts of the blockchain or timechain, not the full structure.
02 - Rely on full nodes to access transaction history data.
03 - Are faster and less resource-intensive but depend on third parties for full validation.
Nodes check whether submitted transactions comply with protocol rules, such as valid digital signatures and the absence of double spending.
Only valid transactions are forwarded to other nodes and included in the next block.
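A toy sketch can make those checks concrete. The structure below is purely illustrative and vastly simpler than real node software, which also runs script execution and many other consensus rules; all names here are invented for the example:

```python
def validate_transaction(inputs, utxo_set, signatures_valid):
    """Toy model of a full node's transaction checks."""
    seen = set()
    for outpoint in inputs:
        if outpoint not in utxo_set:   # unknown or already-spent output
            return False
        if outpoint in seen:           # double spend within the same transaction
            return False
        seen.add(outpoint)
    return signatures_valid            # every input must carry a valid signature

utxos = {("abc123", 0), ("def456", 1)}
print(validate_transaction([("abc123", 0)], utxos, signatures_valid=True))           # True
print(validate_transaction([("abc123", 0), ("abc123", 0)], utxos, signatures_valid=True))  # False
```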
Full nodes maintain an up-to-date copy of the network's entire transaction history, ensuring integrity and transparency. In case of discrepancies, nodes follow the longest and most valid chain, preventing manipulation.
Nodes transmit transaction and block data to other nodes on the network. This process ensures all participants are synchronized and up to date.
Since the Bitcoin network consists of thousands of independent nodes, it is nearly impossible for a single agent to control or alter the system.
Nodes also protect against attacks by validating information and blocking fraudulent attempts.
Full nodes are particularly important, as they act as independent auditors. They do not need to rely on third parties and can verify the entire transaction history directly.
By maintaining a full copy of the blockchain or timechain, these nodes allow anyone to validate transactions without intermediaries, promoting transparency and financial freedom.
- In addition, full nodes:
01 - Reinforce censorship resistance: No government or entity can delete or alter data recorded on the system.
02 - Preserve decentralization: The more full nodes that exist, the stronger and more secure the network becomes.
03 - Increase trust in the system: Users can independently confirm whether the rules are being followed.
Despite their value, operating a full node can be challenging, as it requires storage space, processing power, and bandwidth. As the blockchain or timechain grows, technical requirements increase, which can make participation harder for regular users.
To address this, the community continuously works on solutions, such as software improvements and scalability enhancements, to make network access easier without compromising security.
In summary, nodes are the backbone of the Bitcoin network, performing essential functions in transaction validation, verification, and distribution. They ensure the decentralization and security of the system, allowing participants to operate reliably without relying on intermediaries.
Full nodes, in particular, play a critical role in preserving the integrity of the blockchain or timechain, making the Bitcoin network resistant to censorship and manipulation.
While running a node may require technical resources, its impact on preserving financial freedom and system trust is invaluable. As such, nodes remain essential elements for the success and longevity of Bitcoin.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ 40b9c85f:5e61b451
2025-04-24 15:27:02
Introduction
Data Vending Machines (DVMs) have emerged as a crucial component of the Nostr ecosystem, offering specialized computational services to clients across the network. As defined in NIP-90, DVMs operate on an apparently simple principle: "data in, data out." They provide a marketplace for data processing where users request specific jobs (like text translation, content recommendation, or AI text generation).
While DVMs have gained significant traction, the current specification faces challenges that hinder widespread adoption and consistent implementation. This article explores some ideas on how we can apply the reflection pattern, a well established approach in RPC systems, to address these challenges and improve the DVM ecosystem's clarity, consistency, and usability.
The Current State of DVMs: Challenges and Limitations
The NIP-90 specification provides a broad framework for DVMs, but this flexibility has led to several issues:
1. Inconsistent Implementation
As noted by hzrd149 in "DVMs were a mistake", every DVM implementation tends to expect inputs in slightly different formats, even while ostensibly following the same specification. For example, a translation request DVM might expect an event ID in one particular format, while an LLM service could expect a "prompt" input that's not even specified in NIP-90.
2. Fragmented Specifications
The DVM specification reserves a range of event kinds (5000-6000), each meant for different types of computational jobs. While creating sub-specifications for each job type is being explored as a possible solution for clarity, in a decentralized and permissionless landscape like Nostr, relying solely on specification enforcement won't be effective for creating a healthy ecosystem. A more comprehensible approach is needed that works with, rather than against, the open nature of the protocol.
3. Ambiguous API Interfaces
There's no standardized way for clients to discover what parameters a specific DVM accepts, which are required versus optional, or what output format to expect. This creates uncertainty and forces developers to rely on documentation outside the protocol itself, if such documentation exists at all.
The Reflection Pattern: A Solution from RPC Systems
The reflection pattern in RPC systems offers a compelling solution to many of these challenges. At its core, reflection enables servers to provide metadata about their available services, methods, and data types at runtime, allowing clients to dynamically discover and interact with the server's API.
In established RPC frameworks like gRPC, reflection serves as a self-describing mechanism where services expose their interface definitions and requirements. In MCP, reflection is used to expose the capabilities of the server, such as tools, resources, and prompts. Clients can learn about available capabilities without prior knowledge, and systems can adapt to changes without requiring rebuilds or redeployments. This standardized introspection creates a unified way to query service metadata, making tools like grpcurl possible without requiring precompiled stubs.
How Reflection Could Transform the DVM Specification
By incorporating reflection principles into the DVM specification, we could create a more coherent and predictable ecosystem. DVMs already implement some sort of reflection through the use of 'nip90params', which allow clients to discover some parameters, constraints, and features of the DVMs, such as whether they accept encryption, nutzaps, etc. However, this approach could be expanded to provide more comprehensive self-description capabilities.
1. Defined Lifecycle Phases
Similar to the Model Context Protocol (MCP), DVMs could benefit from a clear lifecycle consisting of an initialization phase and an operation phase. During initialization, the client and DVM would negotiate capabilities and exchange metadata, with the DVM providing a JSON schema containing its input requirements. nip-89 (or other) announcements can be used to bootstrap the discovery and negotiation process by providing the input schema directly. Then, during the operation phase, the client would interact with the DVM according to the negotiated schema and parameters.
2. Schema-Based Interactions
Rather than relying on rigid specifications for each job type, DVMs could self-advertise their schemas. This would allow clients to understand which parameters are required versus optional, what type validation should occur for inputs, what output formats to expect, and what payment flows are supported. By internalizing the input schema of the DVMs they wish to consume, clients gain clarity on how to interact effectively.
3. Capability Negotiation
Capability negotiation would enable DVMs to advertise their supported features, such as encryption methods, payment options, or specialized functionalities. This would allow clients to adjust their interaction approach based on the specific capabilities of each DVM they encounter.
Implementation Approach
While building DVMCP, I realized that the RPC reflection pattern used there could be beneficial for constructing DVMs in general. Since DVMs already follow an RPC style for their operation, and reflection is a natural extension of this approach, it could significantly enhance and clarify the DVM specification.
A reflection-enhanced DVM protocol could work as follows (a minimal sketch of the validation step follows below):
1. Discovery: Clients discover DVMs through existing NIP-89 application handlers; input schemas could also be advertised in NIP-89 announcements, making the second step unnecessary.
2. Schema Request: Clients request the DVM's input schema for the specific job type they're interested in.
3. Validation: Clients validate their request against the provided schema before submission.
4. Operation: The job proceeds through the standard NIP-90 flow, but with clearer expectations on both sides.
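To make the validation step concrete, here is a minimal client-side sketch using Python's `jsonschema` package. The schema itself is hypothetical: neither its field names nor a schema-advertisement event exist in NIP-90 today.

```python
from jsonschema import validate, ValidationError

# Hypothetical input schema a translation DVM might advertise
schema = {
    "type": "object",
    "properties": {
        "input": {"type": "string", "description": "event id or text to translate"},
        "target_lang": {"type": "string", "enum": ["en", "es", "pt"]},
    },
    "required": ["input", "target_lang"],
}

request = {"input": "nevent1...", "target_lang": "en"}

try:
    validate(instance=request, schema=schema)  # client-side check before submitting the job
    print("request matches the DVM's advertised schema")
except ValidationError as err:
    print("invalid request:", err.message)
```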
Parallels with Other Protocols
This approach has proven successful in other contexts. The Model Context Protocol (MCP) implements a similar lifecycle with capability negotiation during initialization, allowing any client to communicate with any server as long as they adhere to the base protocol. MCP and DVM protocols share fundamental similarities, both aim to expose and consume computational resources through a JSON-RPC-like interface, albeit with specific differences.
gRPC's reflection service similarly allows clients to discover service definitions at runtime, enabling generic tools to work with any gRPC service without prior knowledge. In the REST API world, OpenAPI/Swagger specifications document interfaces in a way that makes them discoverable and testable.
DVMs would benefit from adopting these patterns while maintaining the decentralized, permissionless nature of Nostr.
Conclusion
I am not attempting to rewrite the DVM specification; rather, I am exploring some ideas that could help the ecosystem improve incrementally, reducing fragmentation and making it more comprehensible. By allowing DVMs to self-describe their interfaces, we could maintain the flexibility that makes Nostr powerful while providing the structure needed for interoperability.
For developers building DVM clients or libraries, this approach would simplify consumption by providing clear expectations about inputs and outputs. For DVM operators, it would establish a standard way to communicate their service's requirements without relying on external documentation.
I am currently developing DVMCP following these patterns. Of course, DVMs and MCP servers have different details; MCP includes capabilities such as tools, resources, and prompts on the server side, as well as 'roots' and 'sampling' on the client side, creating a bidirectional way to consume capabilities. In contrast, DVMs typically function similarly to MCP tools, where you call a DVM with an input and receive an output, with each job type representing a different categorization of the work performed.
In closing, I hope this article has provided some insight into the potential benefits of applying the reflection pattern to the DVM specification.
-
@ efc19139:a370b6a8
2025-05-04 16:42:24Bitcoin has a controversial reputation, but in this essay, I argue that Bitcoin is actually a pretty cool thing; it could even be described as the hippie movement of the digital generations.
Mainstream media often portrays Bitcoin purely as speculation, with headlines focusing on price fluctuations or painting it as an environmental disaster. It has frequently been declared dead and buried, only to rise again—each time, it's labeled as highly risky and suspicious as a whole. Then there are those who find blockchain fascinating in general but dismiss Bitcoin as outdated, claiming it will soon be replaced by a new cryptocurrency (often one controlled by the very author making the argument). Let’s take a moment to consider why Bitcoin is interesting and how it can drive broad societal change, much like the hippie movement once did. Bitcoin is a global decentralized monetary system operating on a peer-to-peer network. Since nearly all of humanity lives within an economic system based on money, it’s easy to see how an overhaul of the financial system could have a profound impact across different aspects of society. Bitcoin differs from traditional money through several unique characteristics: it is scarce, neutral, decentralized, and completely permissionless. There is no central entity—such as a company—that develops and markets Bitcoin, meaning it cannot be corrupted.
Bitcoin is an open digital network, much like the internet. Due to its lack of a central governing entity and its organic origin, Bitcoin can be considered a commodity, whereas other cryptocurrencies resemble securities, comparable to stocks. Bitcoin’s decentralized nature makes it geopolitically neutral. Instead of being controlled by a central authority, it operates under predefined, unchangeable rules. No single entity in the world has the ability to arbitrarily influence decision-making within the Bitcoin network. This characteristic is particularly beneficial in today’s political climate, where global uncertainty is heightened by unpredictable leaders of major powers. The permissionless nature of Bitcoin and its built-in resistance to censorship are crucial for individuals living under unstable conditions. Bitcoin is used to raise funds for politically persecuted activists and for charitable purposes in regions where financial systems have been weaponized against political opponents or used to restrict people's ability to flee a country. These are factors that may not immediately come to mind in Western nations, where such challenges are not commonly faced. Additionally, according to the World Bank, an estimated 1.5 billion people worldwide still lack access to any form of banking services.
Mining is the only way to ensure that no one can seize control of the Bitcoin network or gain a privileged position within it. This keeps Bitcoin neutral as a protocol, meaning a set of rules without leaders. It is not governed in the same way a company is, where ownership of shares dictates control. Miners earn the right to record transactions in Bitcoin’s ledger by continuously proving that they have performed work to obtain that right. This proof-of-work algorithm is also one reason why Bitcoin has spread so organically. If recording new transactions were free, we would face a problem similar to spam: there would be an endless number of competing transactions, making it impossible to reach consensus on which should officially become part of the decentralized ledger. Mining can be seen as an auction for adding the next set of transactions, where the price is the amount of energy expended. Using energy for this purpose is the only way to ensure that mining remains globally decentralized while keeping the system open and permissionless—free from human interference. Bitcoin’s initial distribution was driven by random tech enthusiasts around the world who mined it as a hobby, using student electricity from their bedrooms. This is why Bitcoin’s spread can be considered organic, in contrast to a scenario where it was created by a precisely organized inner circle that typically would have granted itself advantages before the launch.
If energy consumption is considered concerning, the best regulatory approach would be to create optimal conditions for mining in Finland, where over half of energy production already comes from renewable sources. Modern miners are essentially datacenters, but they have a unique characteristic: they can adjust their electricity consumption seamlessly and instantly without delay. This creates synergy with renewable energy production, which often experiences fluctuations in supply. The demand flexibility offered by miners provides strong incentives to invest increasingly in renewable energy facilities. Miners can commit to long-term projects as last-resort consumers, making investments in renewables more predictable and profitable. Additionally, like other datacenters, miners produce heat as a byproduct. As a thought experiment, they could also be considered heating plants, with a secondary function of securing the Bitcoin network. In Finland, heat is naturally needed year-round. This combination of grid balancing and waste heat recovery would be key to Europe's energy self-sufficiency. Wouldn't it be great if the need to bow to fossil fuel powers for energy could be eliminated? Unfortunately, the current government has demonstrated a lack of understanding of these positive externalities by proposing tax increases on electricity. The so-called fiat monetary system also deserves criticism in Western nations, even though its flaws are not as immediately obvious as elsewhere. It is the current financial system in which certain privileged entities control the issuance of money as if by divine decree, which is what the term fiat (command) refers to. The system subtly creates and maintains inequality.
The Cantillon effect is an economic phenomenon in which entities closer to newly created money benefit at the expense of those farther away. Access to the money creation process is determined by credit ratings and loan terms, as fiat money is always debt. The Cantillon effect is a distorted version of the trickle-down theory, where the loss of purchasing power in a common currency gradually moves downward. Due to inflation, hard assets such as real estate, precious metals, and stocks become more expensive, just as food prices rise in stores. This process further enriches the wealthy while deepening poverty. The entire wealth of lower-income individuals is often held in cash or savings, which are eroded by inflation much like a borrowed bottle of Leijona liquor left out too long. Inflation is usually attributed to a specific crisis, but over the long term (spanning decades), monetary inflation—the expansion of the money supply—plays a significant role. Nobel Prize-winning economist Paul Krugman, known for his work on currencies, describes inflation in his book The Accidental Theorist as follows, loosely quoted: "It is really, really difficult to cut nominal wages. Even with low inflation, making labor cheaper would require a large portion of workers to accept wage cuts. Therefore, higher inflation leads to higher employment." Since no one wants to voluntarily give up their salary in nominal terms, the value of wages must be lowered in real terms by weakening the currency in which they are paid. Inflation effectively cuts wages—or, in other words, makes labor cheaper. This is one of the primary reasons why inflation is often said to have a "stimulating" effect on the economy.
It does seem somewhat unfair that employees effectively subsidize their employers’ labor costs to facilitate new hires, doesn’t it? Not to mention the inequities faced by the Global South in the form of neocolonialism, where Cantillon advantages are weaponized through reserve currencies like the US dollar or the French franc. This follows the exact same pattern, just on a larger scale. The Human Rights Foundation (hrf.org) has explored the interconnection between the fiat monetary system and neocolonialism in its publications, advocating for Bitcoin as part of the solution. Inflation can also be criticized from an environmental perspective. Since it raises time preference, it encourages people to make purchases sooner rather than delay them. As Krugman put it in the same book, “Extra money burns in your pocket.” Inflation thus drives consumption while reducing deliberation—it’s the fuel of the economy. If the goal from an environmental standpoint is to moderate economic activity, the first step should be to stop adding fuel to the fire. The impact of inflation on intergenerational inequality and the economic uncertainty faced by younger generations is rarely discussed. Boomers have benefited from the positive effects of the trend sparked by the Nixon shock in 1971, such as wealth accumulation in real estate and inflation-driven economic booms. Zoomers, meanwhile, are left to either fix the problems of the current system or find themselves searching for a lifeboat.
Bitcoin emerged as part of a long developmental continuum within the discussion forums of rebellious programmers known as cypherpunks, or encryption activists. It is an integral part of internet history and specifically a counterculture movement. Around Bitcoin, grassroots activists and self-organized communities still thrive, fostering an atmosphere that is welcoming, inspiring, and—above all—hopeful, which feels rare in today’s world. Although the rush of suits and traditional financial giants into Bitcoin through ETF funds a year ago may have painted it as opportunistic and dull in the headlines, delving into its history and culture reveals ever-fascinating angles and new layers within the Bitcoin sphere. Yet, at its core, Bitcoin is simply money. It possesses all seven characteristics required to meet the definition of money: it is easily divisible, transferable, recognizable, durable, fungible, uniform, and straightforward to receive. It serves as a foundation on which coders, startup enthusiasts, politicians, financial executives, activists, and anarchists alike can build. The only truly common denominator among the broad spectrum of Bitcoin users is curiosity—openness to new ideas. It merely requires the ability to recognize potential in an alternative system and a willingness to embrace fundamental change. Bitcoin itself is the most inclusive system in the world, as it is literally impossible to marginalize or exclude its users. It is a tool for peaceful and voluntary collaboration, designed so that violence and manipulation are rendered impossible in its code.
Pretty punk in the middle of an era of polarization and division, wouldn’t you say?
The original author (not me) is the organizer of the Bitcoin conference held in Helsinki, as well as a founding member and vice chairman of the Finnish Bitcoin Association. More information about the event can be found at: https://btchel.com and https://njump.me/nprofile1qqs89v5v46jcd8uzv3f7dudsvpt8ntdm3927eqypyjy37yx5l6a30fcknw5z5 P.S. Zaps and sats will be forwarded to the author!
originally posted at https://stacker.news/items/971219
-
@ 90c656ff:9383fd4e
2025-05-04 16:36:21Bitcoin mining is a crucial process for the operation and security of the network. It plays an important role in validating transactions and generating new bitcoins, ensuring the integrity of the blockchain or timechain-based system. This process involves solving complex mathematical calculations and requires significant computational power. Additionally, mining has economic, environmental, and technological effects that must be carefully analyzed.
Bitcoin mining is the procedure through which new units of the currency are created and added to the network. It is also responsible for verifying and recording transactions on the blockchain or timechain. This system was designed to be decentralized, eliminating the need for a central authority to control issuance or validate operations.
Participants in the process, called miners, compete to solve difficult mathematical problems. Whoever finds the solution first earns the right to add a new block to the blockchain or timechain and receives a reward in bitcoins, along with the transaction fees included in that block. This mechanism is known as Proof of Work (PoW).
The mining process is highly technical and follows a series of steps:
Transaction grouping: Transactions sent by users are collected into a pending block that awaits validation.
Solving mathematical problems: Miners must find a specific number, called a nonce, which, when combined with the block’s data, generates a cryptographic hash that meets certain required conditions. This process involves trial and error and consumes a great deal of computational power (a simplified code sketch follows this list).
Block validation: When a miner finds the correct solution, the block is validated and added to the blockchain or timechain. All network nodes verify the block’s authenticity before accepting it.
Reward: The winning miner receives a bitcoin reward, in addition to the fees paid for the transactions included in the block. This reward decreases over time in an event called halving, which happens approximately every four years.
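A heavily simplified sketch of the nonce search described above; real Bitcoin mining double-SHA-256-hashes an 80-byte binary block header and compares the result against a compact difficulty target rather than a string prefix:

```typescript
import { createHash } from "node:crypto";

// Toy proof-of-work: find a nonce whose double SHA-256 hash starts with
// the given prefix. A "0000" prefix needs roughly 65,000 attempts on average.
function mine(blockData: string, difficultyPrefix: string): { nonce: number; hash: string } {
  let nonce = 0;
  for (;;) {
    const inner = createHash("sha256").update(blockData + nonce).digest();
    const hash = createHash("sha256").update(inner).digest("hex");
    if (hash.startsWith(difficultyPrefix)) return { nonce, hash };
    nonce++;
  }
}

console.log(mine("block header bytes", "0000"));
```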
Bitcoin mining has a significant economic impact, as it creates income opportunities for individuals and companies. It also drives the development of new technologies such as specialized processors (ASICs) and modern cooling systems.
Moreover, mining supports financial inclusion by maintaining a decentralized network, enabling fast and secure global transactions. In regions with unstable economies, Bitcoin provides a viable alternative for value preservation and financial transfers.
Despite its economic benefits, Bitcoin mining is often criticized for its environmental impact. The proof-of-work process consumes large amounts of electricity, especially in areas where the energy grid relies on fossil fuels.
It’s estimated that Bitcoin mining uses as much energy as some entire countries, raising concerns about its sustainability. However, there are ongoing efforts to reduce these impacts, such as the increasing use of renewable energy sources and the exploration of alternative systems like Proof of Stake (PoS) in other decentralized networks.
Mining also faces challenges related to scalability and the concentration of computational power. Large companies and mining pools dominate the sector, which can affect the network’s decentralization.
Another challenge is the growing complexity of the mathematical problems, which requires more advanced hardware and consumes more energy over time. To address these issues, researchers are studying solutions that optimize resource use and keep the network sustainable in the long term.
In summary, Bitcoin mining is an essential process for maintaining the network and creating new units of the currency. It ensures security, transparency, and decentralization, supporting the operation of the blockchain or timechain.
However, mining also brings challenges such as high energy consumption and the concentration of resources in large pools. Even so, the pursuit of sustainable solutions and technological innovations points to a promising future, where Bitcoin continues to play a central role in the digital economy.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ 700c6cbf:a92816fd
2025-05-04 16:34:01Technically speaking, I should say blooms because not all of my pictures are of flowers, a lot of them, probably most, are blooming trees - but who cares, right?
It is that time of the year when every timeline on every social media platform is flooded with blooms. At least in the Northern Hemisphere. I thought that this year I wouldn't partake, but here I am; I just can't resist the lure of blooms when I'm out walking the neighborhood.
Spring has sprung - aaaachoo, sorry, allergies suck! - and the blooms are beautiful.
Yesterday, we had the warmest day of the year to date. I went for an early morning walk before breakfast. Beautiful blue skies, no clouds, sunshine and a breeze. Most people turned on their aircons. We did not. We are rebels - hah!
We also had breakfast on the deck which I really enjoy during the weekend. Later I had my first session of the year painting on the deck while listening/watching @thegrinder streaming. Good times.
Today, the weather changed. Last night, we had heavy thunderstorms and rain. This morning, it is overcast with the occasional sunray peaking through or, as it is right now, raindrops falling.
We'll see what the day will bring. For me, it will definitely be: Back to painting. Maybe I'll even share some here later. But for now - this is a photo post, and here are the photos. I hope you enjoy as much as I enjoyed yesterday's walk!
Cheers, OceanBee
![image](https://cdn.satellite.earth/cc3fb0fa757c88a6a89823585badf7d67e32dee72b6d4de5dff58acd06d0aa36.jpg)
![image](https://cdn.satellite.earth/7fe93c27c3bf858202185cb7f42b294b152013ba3c859544950e6c1932ede4d3.jpg)
![image](https://cdn.satellite.earth/6cbd9fba435dbe3e6732d9a5d1f5ff0403935a4ac9d0d83f6e1d729985220e87.jpg)
![image](https://cdn.satellite.earth/df94d95381f058860392737d71c62cd9689c45b2ace1c8fc29d108625aabf5d5.jpg)
![image](https://cdn.satellite.earth/e483e65c3ee451977277e0cfa891ec6b93b39c7c4ea843329db7354fba255e64.jpg)
![image](https://cdn.satellite.earth/a98fe8e1e0577e3f8218af31f2499c3390ba04dced14c2ae13f7d7435b4000d7.jpg)
![image](https://cdn.satellite.earth/d83b01915a23eb95c3d12c644713ac47233ce6e022c5df1eeba5ff8952b99d67.jpg)
![image](https://cdn.satellite.earth/9ee3256882e363680d8ea9bb6ed3baa5979c950cdb6e62b9850a4baea46721f3.jpg)
![image](https://cdn.satellite.earth/201a036d52f37390d11b76101862a082febb869c8d0e58d6aafe93c72919f578.jpg)
![image](https://cdn.satellite.earth/cd516d89591a4cf474689b4eb6a67db842991c4bf5987c219fb9083f741ce871.jpg)
-
@ a39d19ec:3d88f61e
2025-04-22 12:44:42The debate over migration, border security, and deportations in Germany is usually conducted emotionally. Anyone who demands that illegal immigrants be deported is quickly met with the accusation of racism. Yet this accusation is not only factually unfounded; it turns reality on its head: it is precisely those who suspect a racist motive behind every demand for legal certainty who judge first and foremost by skin color, origin, or nationality.
The law stands above emotions
Germany is a constitutional state. That means rules cannot be bent to gut feeling or the political mood of the day; they must rest on clear legal foundations. One of these principles is anchored in Article 16a of the Basic Law (Grundgesetz), which states:
"Paragraph 1 [the right to asylum] may not be invoked by anyone who enters from a member state of the European Communities or from another third country in which the application of the Convention Relating to the Status of Refugees and of the European Convention on Human Rights is assured."
This means that anyone who enters Germany via safe third countries has no claim to asylum. Whoever stays nonetheless is in the country illegally and is subject to the applicable rules on repatriation. Demanding deportations is therefore nothing more than demanding that law and statute be upheld.
The inversion of the racism accusation
Whoever argues that German asylum and residence law should be strictly enforced, while drawing no distinction by origin or skin color, is acting neutrally. Those who detect a racist undertone in such a demand for the rule of law are projecting their own thought patterns onto others: they assume the debate is being conducted solely along ethnic, racial, or national lines, and precisely that is a racist way of thinking.
Someone who criticizes illegal immigration does so not because the origin of the people concerned interests him, but because he respects the constitutional state. By contrast, someone who smells racism behind this criticism evidently perceives first and foremost the "race" or origin of the persons concerned and reduces them to it.
Financial burden instead of ideological debate
Beyond the legal dimension there is also an economic one. The German welfare state rests on a principle of solidarity: citizens pay into the system in order to support one another in hard times. This prosperity was built up over generations by those who have long lived here. The priority is therefore to distribute the available means first among those who sustain the system through taxes, social contributions, and work, not among those who enter it through illegal entry and without economic contribution of their own.
This is not an ideological question but a purely economic calculation. A social system can only function sustainably if it is not burdened without limit. If Germany had no clear rules on immigration and deportation, the welfare state would inevitably be overwhelmed, with negative consequences for everyone.
Social patriotism
Another important aspect is protecting the labor of the generations who painstakingly rebuilt Germany after the Second World War. While it is often stressed that Germans may claim no moral inheritance from before 1945, apart from responsibility for the Holocaust, it is all the more important to respect the new inheritance earned after 1945, built on diligence, discipline, and hard work. Reconstruction was a collective achievement of the German people, and its fruits must not be handed out carelessly; they should benefit first those who helped lay that foundation or have carried it across the generations.
The rule of law is not negotiable
Whoever advocates a consistent deportation practice does so not out of racist motives but out of respect for the rule of law and for the country's economic foundations. The accusation of racism in this context is therefore not only false; it exposes a selective, racially framed perception on the part of those who raise it.
-
@ a39d19ec:3d88f61e
2025-03-18 17:16:50Now that the German federal regime has decided on Germany's ruin, which will very likely be "financed" with the tool of money printing, so many thoughts on the expansion of the money supply came to me that for once I have written them down.
From a classical economic standpoint, expanding the money supply always leads to price increases, because more money in circulation meets a limited quantity of goods. This can be analyzed in several steps:
1. The quantity theory of money
The classical equation of the quantity theory of money is:
M • V = P • Y
where:
- M is the money supply,
- V is the velocity of money,
- P is the price level,
- Y is real economic output (GDP).

If M rises while V and Y remain constant, P must rise; that is, inflation arises.
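As a quick worked example (the numbers here are my own, for illustration only): hold V and Y fixed and let the money supply grow by 10%; the equation then forces the price level up by 10% as well.

```latex
P = \frac{M\,V}{Y}
\qquad\Longrightarrow\qquad
P' = \frac{(1.1\,M)\,V}{Y} = 1.1\,P
```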
2. The supply of goods remains limited
The quantity of goods and services actually produced usually grows only slowly compared with the expansion of the money supply. If the money supply grows faster than the output of goods, more money chases the same quantity of wares, and prices rise.
3. Expectation effects and speculation
If firms and households expect more money to be in circulation, because central planning willed it so, they can anticipate rising prices. Firms raise their prices in advance, and employees demand higher wages. This can set off a self-reinforcing spiral.
4. The international perspective
An enlarged money supply can devalue the currency if other countries keep their monetary policy stable. A weaker currency makes imports more expensive, which in turn drives prices up.
5. Criticism of the pure money-supply theory
For completeness it must be said that most modern economists, arguing on the state's behalf, hold that inflation depends not only on the money supply but also on the demand for money (for example, in an economic crisis). Nevertheless, historical experience shows that an uncontrolled expansion of the money supply always leads to price increases in the long run, as in the hyperinflation of the Weimar Republic or in Zimbabwe.
-
@ 90c656ff:9383fd4e
2025-05-04 16:24:21Blockchain or timechain is a new technology that has changed the way data and transactions are recorded and stored. Its decentralized and highly secure structure provides transparency and trust, making it a widely used system for digital operations. This technology is essential for creating financial systems and digital records that cannot be altered.
What is blockchain or timechain?
Blockchain or timechain is essentially a distributed digital ledger designed to record transactions in a sequential and unchangeable manner. It is made up of blocks linked in a chain, each containing a set of information such as transactions, timestamps, and a unique identifier called a hash.
These blocks are organized in chronological order, ensuring the integrity of records over time. The term timechain, used synonymously, emphasizes this temporal aspect of the system, where each block is linked to the previous one, forming a chain of events that cannot be tampered with.
The validation of blocks in blockchain or timechain is carried out through a process called mining. Network participants, known as miners, use powerful computers to solve complex mathematical problems. This process, known as proof of work, is necessary to validate transactions and add a new block to the chain.
Each block contains:
Verified Transactions – A set of operations approved by the network.
Previous Block Hash – A unique code that connects the new block to the previous one, ensuring continuity and security.
Nonce – A number used in the mining process to generate the block's hash.
Once a block is validated, it is permanently added to the blockchain or timechain, and all nodes (participating computers) in the network update their copies of this ledger.
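A minimal sketch of the block fields just listed; the field names are illustrative, and a real Bitcoin block header also carries a version, a Merkle root, a timestamp, and the difficulty target:

```typescript
interface Block {
  transactions: string[]; // the verified transactions included in this block
  previousHash: string;   // links this block to its predecessor in the chain
  nonce: number;          // varied by miners until the block's hash meets the target
  hash: string;           // this block's own identifier, derived from the fields above
}
```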
One of the main benefits of blockchain or timechain is the security provided by its decentralized model. Unlike traditional systems that rely on central servers, it distributes its data across thousands of computers around the world.
Immutability is guaranteed by cryptographic techniques and the chained structure of blocks. Any attempt to alter a block would require modifying all subsequent blocks, which is virtually impossible due to the massive computational power required.
Additionally, the use of cryptographic algorithms makes the system resistant to fraud and manipulation. This model enables trust, even in environments without intermediaries or central authorities.
Blockchain or timechain is transparent, as anyone can access the full history of transactions recorded on the network. This creates a system that is auditable and reliable.
However, the privacy of participants is protected, since transactions are recorded through anonymous digital addresses without revealing personal identities. This balance between transparency and privacy makes the system secure and flexible.
The use of blockchain or timechain goes beyond financial transactions. It is useful in areas such as smart contracts, asset registration, supply chains, and online voting. Its ability to create permanent and verifiable records enables innovative solutions across various industries.
For example, in product tracking systems, blockchain or timechain ensures data authenticity by recording each stage of the production and distribution process. This reduces fraud and increases operational efficiency.
Advantages and Challenges
Among the main advantages of blockchain or timechain, we can highlight:
Decentralization – Elimination of intermediaries, reducing costs and increasing efficiency.
Security – Protection against fraud and digital attacks.
Transparency – Public and verifiable record of all transactions.
Immutability – Assurance that data cannot be modified after being recorded.
However, there are still challenges to be addressed, such as scalability, as the continuous growth of the network may require greater storage and processing capacity. Additionally, regulatory issues and widespread adoption demand ongoing improvements.
In summary, blockchain or timechain is an innovative technology that changes the way data and transactions are stored, ensuring security, transparency, and efficiency. Its decentralization removes the dependency on intermediaries, making it a trustworthy and tamper-resistant system.
Despite technical and regulatory challenges, blockchain or timechain continues to evolve, demonstrating its potential in various areas beyond the financial sector. Its promise of transparency and immutability is already shaping the future of digital systems, establishing itself as a fundamental base for the modern economy and digital trust.
Thank you very much for reading this far. I hope everything is well with you, and sending a big hug from your favorite Bitcoiner maximalist from Madeira. Long live freedom!
-
@ 502ab02a:a2860397
2025-05-04 15:48:26Sunday, and there are quite a few new friends here, enough that an introduction is in order: the Health Incubator (Rong Bom Sukkhaphap). Let me tell its story through the song "Bom" ("Incubate"), so this can double as a #ตัวหนังสือมีเสียง article.
Healthy Hut, the Health Incubator, is a gathering of people with content and expertise in different fields, running activities in many formats: health camps, rest for body and mind, learning how to rebalance the body, and the basics of nutrition. Have a look at past work on the page; the Incubator will keep organizing activities everyone can join, so go like the page so you don't miss the news.
The Incubator consists of: 👨🏻‍⚕️ Dr. Pop of DietDoctor Thailand, whom we all know well, a teacher to us all; 🏋🏻‍♀️ 🏅 P'Nueng, who went from "Nueng Keto Daddy" to Nueng The One; I would venture he was the first Thai to publish Thai-language keto content with a public record you can still study; 👨🏻‍⚕️ Dr. Ek, "the plump doctor of the weight-loss world," with his IFF way of living, a fasting practitioner who stresses that the other chamber of fasting's heart is Feeding, the first button of fasting that people tend to forget; 🧘🏻‍♀️ Kru Bom, another of Thailand's yoga masters, teaching Anusara Yoga, the only one in the country; 🧗‍♂️ Coach Matt, a living health encyclopedia, quiet but razor-sharp; if this were a game he would be the sniper: few shots, all head shots; 🧔🏻 Add Nuad, nicknamed "Salt Daddy," lord of electrolytes in the IF-Mix Fasting Diet Thailand group (Keto # Low Carb # Plant Base # High Fat); 👭👩🏻‍🤝‍👨🏼👩🏻‍🤝‍👨🏼 plus a team of health kalyanamitra (good companions) with deep knowledge and long experience; 🧑🏻‍🍳 and I, together with Tamrap Ae (Ae's Recipes), am delighted to join this health team as well.
As you can see, each member of the Incubator was forged from difference, and deliberately so. From the very beginning we held that life, like biology, should be diverse, and that a health foundation should be built for "each person," not one-size-fits-all. I have posted a record of the Incubator's activities here: https://www.facebook.com/share/p/19DUibHrbw/
That is what inspired me to write the song "Bom" as the theme song. The song has only a few cores. I started from the word "difference," because in any year, in any era, the health world will always have the "I'm right, you're wrong" mindset; it is human nature, and every one of us has an ego. Accepting difference, melting differences together: I think that is a kind of "incubation" that lets us ripen.
Whenever I write material like this, I cannot help thinking of the band I have always loved, Chaliang. My ability is worlds away from theirs, but I have wanted to do something in the Chaliang style at least once (or many times) in my life, so I chose jazz swing with horns and the clarinet in the lead.
So the first verse begins: "Different in thought, different in dreams, different tribes, different breeds, however different can we be? One seed from many fruits; each person is a blend. So why must we all go one way only?" It primes the listener for what the song wants to say next.
The refrain grew out of the Buddha's teaching on "think, see, become, go," which I actually wrote about in an earlier post before I started the #ตัวหนังสือมีเสียง column, so I won't repeat it here; read it at: https://www.facebook.com/share/p/18tFCFaRLn/ The refrain goes: "Bom bom... incubate the thinking. Bom bom... incubate the seeing. Bom bom... incubate the being. Bom bom... incubate the going, going on to be your own self," sung with a rock-and-roll playfulness mixed with a little jazz, trading lines with the chorus the way bands did in the black-and-white TV era (haha).
This verse tells you what the Incubator is and does. We do not treat, and we do not force any diet on anyone. We open your eyes to look, your ears to listen, and your mouth to ask about the nature inside you, then incubate it so it blossoms along each person's own path.
Then I tried once more to make the acceptance of difference stick in the ear, especially the phrase I say so often, "biology is life," which became this bridge: "What is not alike is not therefore bad, for biology is life itself. Incubate it well and beauty will appear, as it should be seen, as it should be."
You can listen to the full song on every music platform: YouTube Music, Spotify, Apple Music, vinyl, TikTok ⌨️. Just search "Heretong Teera Siri".
📺 YouTube link: https://youtu.be/BvIsTAsG00E?si=MzA-WfCTNQnWy6b1 📻 Spotify link: https://open.spotify.com/album/08HydgrXmUAew6dgXIDNTf?si=7flQOqDAQbGe2bC0hx3T2A
A little secret: the song actually has three versions. Open it in Spotify and you will see the whole album.
📀 Lyrics: "Bom" ("Incubate") 📀 song by : HereTong Teera Siri Different in thought, different in dreams, different tribes, different breeds, however different can we be?
One seed from many fruits; each person is a blend. So why must we go one way only?
Bom bom... incubate the thinking. Bom bom... incubate the seeing. Bom bom... incubate the being. Bom bom... incubate the going, going on to be your own self.
Different in tasting, different in sensing, different in the angles we see from, and plenty of them. One life holds meanings as varied as they are unalike. So why must anyone be just like everyone else?
Bom bom... incubate the thinking. Bom bom... incubate the seeing. Bom bom... incubate the being. Bom bom... incubate the going, going along the path you choose.
Because life is diversity, and this world has never lacked it. What is not alike is not therefore bad, for biology is life itself. Incubate it well and beauty will appear, as it should be seen, as it should be.
Bom bom... incubate the thinking. Bom bom... incubate the seeing. Bom bom... incubate the being. Bom bom... incubate the going, going along the path you choose.
For biology is life itself. Incubate it well and beauty will appear, as it should be seen, as it should be. Incubate so you delight, incubate so you enjoy; a life made pleasant, just as you wished. Incubate so it lasts.
ตัวหนังสือมีเสียง #pirateketo
โรงบ่มสุขภาพ #siamstr
-
@ 0d97beae:c5274a14
2025-01-11 16:52:08This article hopes to complement the article by Lyn Alden on YouTube: https://www.youtube.com/watch?v=jk_HWmmwiAs
The reason why we have broken money
Before the invention of key technologies such as the printing press and electronic communications (even ones as early as Morse code transmitters), gold had won the competition for best medium of money around the world.
In fact, it was not just gold by itself that became money; rulers and world leaders developed coins in order to help the economy grow. Gold nuggets were not as easy to transact with as coins with specific imprints and denominated sizes.
However, these modern technologies allowed us to communicate and perform services far more efficiently and much faster, yet the medium of money could not benefit from these advancements. Gold was heavy, slow and expensive to move globally, even though requesting and performing services globally no longer had this limitation.
Banks took initiative and created derivatives of gold: paper and electronic money. These new currencies allowed the economy to continue to grow and evolve, but it was not without its dark side. Today, no currency is denominated in gold at all; money is backed by nothing, and its inherent value, the paper it is printed on, is worthless too.
Banks and governments eventually transitioned from a money derivative to a system of debt that could be co-opted and controlled for political and personal reasons. Our money today is broken and is the cause of more expensive, poorer quality goods in the economy, a larger and ever growing wealth gap, and many of the follow-on problems that have come with it.
Bitcoin overcomes the "transfer of hard money" problem
Just like gold coins were created by man, Bitcoin too is a technology created by man. Bitcoin, however is a much more profound invention, possibly more of a discovery than an invention in fact. Bitcoin has proven to be unbreakable, incorruptible and has upheld its ability to keep its units scarce, inalienable and counterfeit proof through the nature of its own design.
Since Bitcoin is a digital technology, it can be transferred across international borders almost as quickly as information itself. It therefore severely reduces the need for a derivative to be used to represent money to facilitate digital trade. This means that as the currency we use today continues to fare poorly for many people, bitcoin will continue to stand out as hard money, that just so happens to work as well, functionally, along side it.
Bitcoin will also always be available to anyone who wishes to earn it directly; even China is unable to restrict its citizens from accessing it. The dollar has traditionally become the currency for people who discover that their local currency is unsustainable. Even when the dollar has become illegal to use, it is simply used privately and unofficially. However, because bitcoin does not require you to trade it at a bank in order to use it across borders and across the web, Bitcoin will continue to be a viable escape hatch until we one day hit some critical mass where the world has simply adopted Bitcoin globally and everyone else must adopt it to survive.
Bitcoin has not yet proven that it can support the world at scale. However it can only be tested through real adoption, and just as gold coins were developed to help gold scale, tools will be developed to help overcome problems as they arise; ideally without the need for another derivative, but if necessary, hopefully with one that is more neutral and less corruptible than the derivatives used to represent gold.
Bitcoin blurs the line between commodity and technology
Bitcoin is a technology, it is a tool that requires human involvement to function, however it surprisingly does not allow for any concentration of power. Anyone can help to facilitate Bitcoin's operations, but no one can take control of its behaviour, its reach, or its prioritisation, as it operates autonomously based on a pre-determined, neutral set of rules.
At the same time, its built-in incentive mechanism ensures that people do not have to operate bitcoin out of the goodness of their hearts. Even though the system cannot be co-opted holistically, it will not stop operating while there are people motivated to trade their time and resources to keep it running and earn from others' transaction fees. Although it requires humans to operate it, it remains both neutral and sustainable.
Never before have we developed or discovered a technology that could not be co-opted and used by one person or faction against another. Due to this nature, Bitcoin's units are often described as a commodity; they cannot be usurped or virtually cloned, and they cannot be affected by political biases.
The dangers of derivatives
A derivative is something created, designed or developed to represent another thing in order to solve a particular complication or problem. For example, paper and electronic money was once a derivative of gold.
In the case of Bitcoin, if you cannot link your units of bitcoin to an "address" that you personally hold a cryptographically secure key to, then you very likely have a derivative of bitcoin, not bitcoin itself. If you buy bitcoin on an online exchange and do not withdraw the bitcoin to a wallet that you control, then you legally own an electronic derivative of bitcoin.
Bitcoin is a new technology. It will have a learning curve and it will take time for humanity to learn how to comprehend, authenticate and take control of bitcoin collectively. Having said that, many people all over the world are already using and relying on Bitcoin natively. For many, it will require for people to find the need or a desire for a neutral money like bitcoin, and to have been burned by derivatives of it, before they start to understand the difference between the two. Eventually, it will become an essential part of what we regard as common sense.
Learn for yourself
If you wish to learn more about how to handle bitcoin and avoid derivatives, you can start by searching online for tutorials about "Bitcoin self custody".
There are many options available, some more practical for you, and some more practical for others. Don't spend too much time trying to find the perfect solution; practice and learn. You may make mistakes along the way, so be careful not to experiment with large amounts of your bitcoin as you explore new ideas and technologies along the way. This is similar to learning anything, like riding a bicycle; you are sure to fall a few times, scuff the frame, so don't buy a high performance racing bike while you're still learning to balance.
-
@ a5ee4475:2ca75401
2025-05-04 15:45:12lists #descentralismo #compilation #english
*Some of these lists are still being updated, so the latest versions of them will only be visible in Amethyst.
nostr:naddr1qq245dz5tqe8w46swpphgmr4f3047s6629t45qg4waehxw309aex2mrp0yhxgctdw4eju6t09upzpf0wg36k3g3hygndv3cp8f2j284v0hfh4dqgqjj3yxnreck2w4qpqvzqqqr4guxde6sl
nostr:nevent1qqsxdpwadkswdrc602m6qdhyq7n33lf3wpjtdjq2adkw4y3h38mjcrqpr9mhxue69uhkxmmzwfskvatdvyhxxmmd9aex2mrp0ypzpf0wg36k3g3hygndv3cp8f2j284v0hfh4dqgqjj3yxnreck2w4qpqvzqqqqqqysn06gs
nostr:nevent1qqs0swpxdqfknups697205qg5mpw2e370g5vet07gkexe9n0k05h5qspz4mhxue69uhhyetvv9ujumn0wd68ytnzvuhsyg99aez8269zxu3zd4j8qya92fg7437ax745pqz22ys6v08zef65qypsgqqqqqqshr37wh
Markdown Uses for Some Clients
nostr:nevent1qqsv54qfgtme38r2tl9v6ghwfj09gdjukealstkzc77mwujr56tgfwsppemhxue69uhkummn9ekx7mp0qgsq37tg2603tu0cqdrxs30e2n5t8p87uenf4fvfepdcvr7nllje5zgrqsqqqqqpkdvta4
Other Links
nostr:nevent1qqsrm6ywny5r7ajakpppp0lt525n0s33x6tyn6pz0n8ws8k2tqpqracpzpmhxue69uhkummnw3ezumt0d5hsygp6e5ns0nv3dun430jky25y4pku6ylz68rz6zs7khv29q6rj5peespsgqqqqqqsmfwa78
-
@ a5ee4475:2ca75401
2025-05-04 15:14:32lista #descentralismo #compilado #portugues
*Some of these lists are still being updated and will be translated into Portuguese, so their most recent versions will only be visible in Amethyst.
Nostr Clients and Other Things
nostr:naddr1qq245dz5tqe8w46swpphgmr4f3047s6629t45qg4waehxw309aex2mrp0yhxgctdw4eju6t09upzpf0wg36k3g3hygndv3cp8f2j284v0hfh4dqgqjj3yxnreck2w4qpqvzqqqr4guxde6sl
AI Models and Tools
nostr:nevent1qqsxdpwadkswdrc602m6qdhyq7n33lf3wpjtdjq2adkw4y3h38mjcrqpr9mhxue69uhkxmmzwfskvatdvyhxxmmd9aex2mrp0ypzpf0wg36k3g3hygndv3cp8f2j284v0hfh4dqgqjj3yxnreck2w4qpqvzqqqqqqysn06gs
Portuguese-Speaking Bitcoin Communities
nostr:nevent1qqsqnmtverj2fetqwhsv9n2ny8h9ujhyqqrk0fsn4a02w8sf4cqddzqpzamhxue69uhhyetvv9ujumn0wd68ytnzv9hxgtczyzj7u3r4dz3rwg3x6erszwj4y502clwn026qsp99zgdx8n3v5a2qzqcyqqqqqqgypv6z5
Brazilian Professionals on Nostr
nostr:nevent1qqsvqnlx7sqeczv5r7pmmd6zzca3l0ru4856n3j7lhjfv3atq40lfdcpr9mhxue69uhkxmmzwfskvatdvyhxxmmd9aex2mrp0ypzpf0wg36k3g3hygndv3cp8f2j284v0hfh4dqgqjj3yxnreck2w4qpqvzqqqqqqylf6kr4
Portuguese-Language Communities on Nostr
nostr:nevent1qqsy47d6z0qzshfqt5sgvtck8jmhdjnsfkyacsvnqe8m7euuvp4nm0gpzpmhxue69uhkummnw3ezumt0d5hsyg99aez8269zxu3zd4j8qya92fg7437ax745pqz22ys6v08zef65qypsgqqqqqqsw4vudx
Portuguese-Language Groups on Nostr
nostr:nevent1qqs98kldepjmlxngupsyth40n0h5lw7z5ut5w4scvh27alc0w86tevcpzpmhxue69uhkummnw3ezumt0d5hsygy7fff8g6l23gp5uqtuyqwkqvucx6mhe7r9h7v6wyzzj0v6lrztcspsgqqqqqqs3ndneh
Open-Source Games
nostr:nevent1qqs0swpxdqfknups697205qg5mpw2e370g5vet07gkexe9n0k05h5qspz4mhxue69uhhyetvv9ujumn0wd68ytnzvuhsyg99aez8269zxu3zd4j8qya92fg7437ax745pqz22ys6v08zef65qypsgqqqqqqshr37wh
Text Formatting in Amethyst
nostr:nevent1qqs0vquevt0pe9h5a2dh8csufdksazp6czz3vjk3wfspp68uqdez00cprpmhxue69uhkummnw3ezuendwsh8w6t69e3xj730qgs2tmjyw452ydezymtywqf625j3atra6datgzqy55fp5c7w9jn4gqgrqsqqqqqpd658r3
Other Links
nostr:nevent1qqsrm6ywny5r7ajakpppp0lt525n0s33x6tyn6pz0n8ws8k2tqpqracpzpmhxue69uhkummnw3ezumt0d5hsygp6e5ns0nv3dun430jky25y4pku6ylz68rz6zs7khv29q6rj5peespsgqqqqqqsmfwa78
-
@ 37fe9853:bcd1b039
2025-01-11 15:04:40yoyoaa
-
@ 62033ff8:e4471203
2025-01-11 15:00:24Of the content I have indexed, the kind=1 portion is, frankly, not of high quality. So I added kind=30023 long-form articles, but they are updated far too rarely, and even across multiple relays there are not many long-form posts.
If Nostr search is to produce real value, it needs high-quality articles and news. Right now a flood of bot-generated posts does nothing but waste space; they serve no other purpose.
https://www.duozhutuan.com currently hosts raw material for search engines to index. There is no UI for human browsing, so it looks rough. I have no plans to build a web client for posting microblogs; there are already far too many of those.
I think what the Nostr community still needs to solve is applications. If it stays at microblogging alone, that feels like a tough sell.
Fortunately there are projects like npub.pro for site building, which I find quite interesting.
Yakihonne's smart widgets are interesting too.
TaskQ5, which I built, I am now using myself: a distributed task system, and it works rather well.
-
@ c4b5369a:b812dbd6
2025-04-15 07:26:16Offline transactions with Cashu
Over the past few weeks, I've been busy implementing offline capabilities into nutstash. I think this is one of the key value propositions of ecash: being a bearer instrument that can be used without internet access.
It does, however, come with limitations, which can lead to a bit of confusion. I hope this article will clear some of these questions up for you!
What is ecash/Cashu?
Ecash is the first cryptocurrency ever invented. It was created by David Chaum in 1983. It uses a blind signature scheme, which allows users to prove ownership of a token without revealing a link to its origin. These tokens are what we call ecash. They are bearer instruments, meaning that anyone who possesses a copy of them, is considered the owner.
Cashu is an implementation of ecash, built to tightly interact with Bitcoin, more specifically the Bitcoin Lightning network. In the Cashu ecosystem, mints are the gateway to the Lightning network. They provide the infrastructure to access the Lightning network, pay invoices and receive payments. Instead of relying on a traditional ledger scheme like other custodians do, the mint issues ecash tokens to represent the value held by the users.
How do normal Cashu transactions work?
A Cashu transaction happens when the sender gives a copy of his ecash token to the receiver. This can happen by any means imaginable. You could send the token through email, messenger, or even by pigeon. One of the common ways to transfer ecash is via QR code.
The transaction is, however, not finalized just yet! In order to make sure the sender cannot double-spend their copy of the token, the receiver must do what we call a swap. A swap is essentially exchanging an ecash token for a new one at the mint, invalidating the old token in the process. This ensures that the sender can no longer use the same token to spend elsewhere, and the value has been transferred to the receiver.
What about offline transactions?
Sending offline
Sending offline is very simple. The ecash tokens are stored on your device. Thus, no internet connection is required to access them. You can literally just take them and give them to someone. The most convenient way is usually through a local transmission protocol, like NFC, QR code, Bluetooth, etc.
The one thing to consider when sending offline is that ecash tokens come in the form of "coins" or "notes". The technical term we use in Cashu is proof. It "proves" to the mint that you own a certain amount of value. Since these proofs have a fixed value attached to them, much like UTXOs in Bitcoin do, you would need proofs with values that add up to the amount you want to send. You can mix and match multiple proofs together to create a token that matches that amount. But if you don't have proofs that match the amount, you would need to go online and swap for the needed proofs at the mint.
Another limitation is that you cannot create custom proofs offline. For example, if you wanted to lock the ecash to a certain pubkey, or add a timelock to the proof, you would need to go online and create a new custom proof at the mint.
Receiving offline
You might think: well, if I trust the sender, I don't need to be swapping the token right away!
You're absolutely correct. If you trust the sender, you can simply accept their ecash token without needing to swap it immediately.
This is already really useful, since it gives you a way to receive a payment from a friend or close acquaintance without having to worry about connectivity. It's almost just like physical cash!
It does not, however, work if the sender is untrusted. We have to use a different scheme to be able to receive payments from someone we don't trust.
Receiving offline from an untrusted sender
To be able to receive payments from an untrusted sender, we need the sender to create a custom proof for us. As we've seen before, this requires the sender to go online.
The sender needs to create a token with the following properties, so that the receiver can verify it offline:
- It must be locked to ONLY the receiver's public key
- It must include an offline signature proof (DLEQ proof)
- If it contains a timelock & refund clause, it must be set to a time in the future that is acceptable for the receiver
- It cannot contain duplicate proofs (double-spend)
- It cannot contain proofs that the receiver has already received before (double-spend)
If all of these conditions are met, then the receiver can verify the proof offline and accept the payment. This allows us to receive payments from anyone, even if we don't trust them.
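A sketch of those checks in code; the Proof shape and the helper functions below are simplified assumptions loosely modeled on Cashu's P2PK and DLEQ concepts, not the actual cashu-ts API:

```typescript
// Assumed, simplified proof shape; not the real cashu-ts types.
interface Proof {
  secret: string;   // for P2PK proofs, encodes the receiver lock and optional locktime
  dleq?: object;    // the mint's offline signature (DLEQ) proof
  amount: number;
}

// Hypothetical helpers, named for illustration only.
declare function isLockedOnlyTo(proof: Proof, pubkey: string): boolean;
declare function refundLocktime(proof: Proof): number | undefined;

function canAcceptOffline(
  proofs: Proof[],
  myPubkey: string,
  alreadySeen: Set<string>, // secrets received in earlier payments
  now: number,
): boolean {
  const seenInToken = new Set<string>();
  for (const proof of proofs) {
    if (!proof.dleq) return false;                      // no offline signature proof
    if (!isLockedOnlyTo(proof, myPubkey)) return false; // not locked to ONLY my key
    const locktime = refundLocktime(proof);
    if (locktime !== undefined && locktime <= now) return false; // refund window too soon
    if (seenInToken.has(proof.secret)) return false;    // duplicate proof in this token
    if (alreadySeen.has(proof.secret)) return false;    // proof received before
    seenInToken.add(proof.secret);
  }
  return proofs.length > 0;
}
```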
At first glance, this scheme seems kinda useless. It requires the sender to go online, which defeats the purpose of having an offline payment system.
I believe there are a couple of ways this scheme might be useful nonetheless:
- Offline vending machines: Imagine you have an offline vending machine that accepts payments from anyone. The vending machine could use this scheme to verify payments without needing to go online itself. We can assume that the sender is able to go online and create a valid token, but the receiver doesn't need to be online to verify it.
- Offline marketplaces: Imagine you have an offline marketplace where buyers and sellers can trade goods and services. Before going to the marketplace, the sender already knows where he will be spending the money. The sender could create a valid token before going to the marketplace, using the merchant's public key as a lock, and adding a refund clause to redeem any unspent ecash after it expires. In this case, neither the sender nor the receiver needs to go online to complete the transaction.
How to use this
Pretty much all Cashu wallets allow you to send tokens offline. This is because all the wallet needs to do is check whether it can create the desired amount from the proofs stored locally. If it can, it will automatically create the token offline.
Receiving offline tokens is currently only supported by nutstash (experimental).
To create an offline receivable token, the sender needs to lock it to the receiver's public key. Currently there is no refund clause! So be careful that you don't get accidentally locked out of your funds!
The receiver can then inspect the token and decide if it is safe to accept without a swap. If all checks are green, they can accept the token offline without trusting the sender.
The receiver will see the unswapped tokens on the wallet homescreen. They will need to manually swap them later when they are online again.
Later when the receiver is online again, they can swap the token for a fresh one.
Summary
We learned that offline transactions are possible with ecash, but there are some limitations. They either require trusting the sender, or they rely on the sender or the receiver being online at some point, either to verify the tokens or to create tokens that can be verified offline by the receiver.
I hope this short article was helpful in understanding how ecash works and its potential for offline transactions.
Cheers,
Gandlaf
-
@ 6e0ea5d6:0327f353
2025-05-04 14:53:42Amico mio, ascolta bene!
Without hesitation, the woman you attract with lies is not truly yours. Davvero, she is the temporary property of the illusion you’ve built to seduce her. And every illusion, sooner or later, crumbles.
Weak men sell inflated versions of themselves. They talk about what they don’t have, promise what they can’t sustain, adorn their empty selves with words that are nothing more than a coat of paint. And they do this thinking that, later, they’ll be able to "show who they really are." Fatal mistake, cazzo!
The truth, amico mio, is not something that appears at the end. It is what holds up the whole beginning.
The woman who approaches a lie may smile at first — but she is smiling at the theater, not at the actor. When the curtains fall, what she will see is not a man. It will be a character tired of performing, begging for love from a self-serving audience in the front row.
That’s why I always point out that lying to win a woman’s heart is the same as sabotaging your own nature. The woman who comes through an invented version of you will be the first to leave when the veil of lies tears apart. Not out of cruelty, but out of consistency with her own interest. Fine... She didn’t leave you, but rather, that version of yourself never truly existed to be left behind.
A worthy man presents himself without deceptive adornments. And those who stay, stay because they know exactly who they are choosing as a man. That’s what differentiates forged seduction from the convenience of love built on honor, loyalty, and respect.
Ah, amico mio, I remember well. It was lunch on an autumn day in Catania. Mediterranean heat, and the Nero D'Avola wine from midday clinging to the lips like dried blood. Sitting in the shade of a lemon tree planted right by my grandfather's vineyard entrance, my uncle — the oldest of my father’s brothers — spoke little, but when he called us to sit by his side, all the nephews would quiet down to listen. And in my youth, he told me something that has never left my mind.
“In Sicily, the woman who endures the silence of a man about his business is more loyal than the one who is enchanted by speeches about what he does or how much he earns. Perchè, figlio mio, the first one has seen the truth. The second one, only a false shine.”
Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ 23b0e2f8:d8af76fc
2025-01-08 18:17:52What you'll need
- An Android phone you no longer use (the camera must still work).
- A microSD card (optional, used only once).
- A device for watching your funds (you probably already have one).
A few things you should know
- The device will serve as a signer. No funds will move until it has signed the transaction.
- The microSD card is used to transfer the Electrum APK and to guarantee the device has no contact with external data sources after it is wiped. A USB cable can serve the same purpose, however.
- The idea is to keep your private key on an offline device that stays powered off 99% of the time. You can watch your funds from another, internet-connected device, such as your phone or personal computer.
The tutorial is split into two modules:
- Module 1 - Creating a cold wallet/signer.
- Module 2 - Setting up a device to watch your funds and sign transactions with the signer.
By the end, we will have:
- A cold wallet that also serves as a signer.
- A device for watching the wallet's funds.
Module 1 - Creating a cold wallet/signer
- Download the Electrum APK from the downloads tab at https://electrum.org/. Feel free to verify the software's signatures to guarantee its authenticity.
- Format the microSD card and copy the Electrum APK onto it. If you don't have a microSD card, skip this step.
- Remove the SIM card and any accessories from the device that will be used as the signer, factory-reset it, and wait for it to boot.
- During setup, skip the Wi-Fi connection step and reject every connection request. After that, you can uninstall unnecessary apps, since you will only need Electrum. Make sure Wi-Fi, Bluetooth, and mobile data are turned off. You can also enable airplane mode. (Fun fact: some people choose to open the device and disable the Wi-Fi/Bluetooth antenna, making those features physically impossible.)
- Insert the microSD card with the Electrum APK into the device and install it. You will need to allow installation from unofficial sources.
- In Electrum, create a standard wallet and generate your seed words. Write them down somewhere safe. If anything happens to your signer, those words will give you access to your funds again. (This is where your personal backup method comes in.)
Module 2 - Setting up a device to watch your funds and sign transactions with the signer
- Creating a watch-only wallet on another device, such as your phone or personal computer, is a very simple step. For this tutorial we will use another Android smartphone running Electrum. Install Electrum from the downloads tab at https://electrum.org/ or from the Play Store itself. (WARNING: Electrum does not officially exist for iPhone. Be suspicious if you find one.)
- After installing Electrum, create a standard wallet, but this time choose the option Use a master key.
- Now, on the signer we created in the first module, export your public key: go to Wallet > Wallet details > Share master public key.
- Scan the generated public-key QR code with the watching device. It will then be able to track your funds, but without permission to move them.
- To receive funds, send Bitcoin to one of the addresses generated by your wallet: Wallet > Addresses/Coins.
- To move funds, create a transaction on the watching device. Since it does not hold the private key, the transaction must be signed by the signer device.
- On the signer, scan the unsigned transaction, confirm the details, sign it, and share. Another QR code will be generated, this time containing the signed transaction.
- On the watching device, scan the signed-transaction QR code and broadcast it to the network.
Conclusion
Upsides of this setup:
- Simplicity: All it takes is an old Android device.
- Flexibility: It works as a great cold wallet, ideal for holders.
Downsides of this setup:
- Standardization: It does not use BIP-39-standard seeds, so you will always need to use Electrum.
- Interface: Electrum's appearance may look dated to some users.
At this point we have a cold wallet that also signs transactions. The signing flow becomes: Generate an unsigned transaction > Scan the QR code of the unsigned transaction > Review and sign the transaction on the signer > Generate the QR code of the signed transaction > Scan the signed transaction with any other device that can broadcast it to the network.
As some of you may know, a signed Bitcoin transaction is practically impossible to forge. In a catastrophic scenario, even without internet access, you can pass that signed transaction along to someone who can reach the network through any means of communication. Even though we hope that day never comes, this setup makes that practice possible.
-
@ 207ad2a0:e7cca7b0
2025-01-07 03:46:04Quick context: I wanted to check out Nostr's longform posts and this blog post seemed like a good one to try and mirror. It's originally from my free to read/share attempt to write a novel, but this post here is completely standalone - just describing how I used AI image generation to make a small piece of the work.
Hold on, put your pitchforks down - outside of using Grammarly & Emacs for grammatical corrections - not a single character was generated or modified by computers; a non-insignificant portion of my first draft originated on pen & paper. No AI is ~~weird and crazy~~ imaginative enough to write like I do. The only successful AI contribution you'll find is a single image, the map, which I heavily edited. This post will go over how I generated and modified an image using AI, which I believe brought some value to the work, and cover a few quick thoughts about AI towards the end.
Let's be clear, I can't draw, but I wanted a map which I believed would improve the story I was working on. After getting abysmal results by prompting AI with text only, I decided to use "Diffuse the Rest," a Stable Diffusion tool that allows you to provide a reference image + description to fine tune what you're looking for. I gave it this Microsoft Paint-looking drawing:
and after a number of outputs, selected this one to work on:
The image is way better than the one I provided, but had I used it as is, I still feel it would have decreased the quality of my work instead of increasing it. After firing up Gimp I cropped out the top and bottom, expanded the ocean and separated the landmasses, then copied the top right corner of the large landmass to replace the bottom left that got cut off. Now we've got something that looks like concept art: not horrible, and gets the basic idea across, but it's still due for a lot more detail.
The next thing I did was add some texture to make it look more map like. I duplicated the layer in Gimp and applied the "Cartoon" filter to both for some texture. The top layer had a much lower effect strength to give it a more textured look, while the lower layer had a higher effect strength that looked a lot like mountains or other terrain features. Creating a layer mask allowed me to brush over spots to display the lower layer in certain areas, giving it some much needed features.
At this point I'd made it to where I felt it may improve the work instead of detracting from it - at least after labels and borders were added, but the colors seemed artificial and out of place. Luckily, however, this is when PhotoFunia could step in and apply a sketch effect to the image.
At this point I was pretty happy with how it was looking, it was close to what I envisioned and looked very visually appealing while still being a good way to portray information. All that was left was to make the white background transparent, add some minor details, and add the labels and borders. Below is the exact image I wound up using:
Overall, I'm very satisfied with how it turned out, and if you're working on a creative project, I'd recommend attempting something like this. It's not a central part of the work, but it improved the chapter a fair bit, and was doable despite lacking the talent and not intending to allocate a budget to my making of a free to read and share story.
The AI Generated Elephant in the Room
If you've read my non-fiction writing before, you'll know that I think AI will find its place around the skill floor as opposed to the skill ceiling. As you saw with my input, I have absolutely zero drawing talent, but with some elbow grease and an existing creative direction before and after generating an image I was able to get something well above what I could have otherwise accomplished. Outside of the lowest common denominators like stock photos for the sole purpose of a link preview being eye-catching, however, I doubt AI will be wholesale replacing most creative works anytime soon. I can assure you that I tried numerous times to describe the map without providing a reference image, and if I used one of those outputs (or even just the unedited output after providing the reference image) it would have decreased the quality of my work instead of improving it.
I'm going to go out on a limb and expect that AI image, text, and video is all going to find its place in slop & generic content (such as AI generated slop replacing article spinners and stock photos respectively) and otherwise be used in a supporting role for various creative endeavors. For people working on projects like I'm working on (e.g. intended budget $0) it's helpful to have an AI capable of doing legwork - enabling projects to exist or be improved in ways they otherwise wouldn't have. I'm also guessing it'll find its way into more professional settings for grunt work - think a picture frame or fake TV show that would exist in the background of an animated project - likely a detail most people probably wouldn't notice, but that would save the creators time and money and/or allow them to focus more on the essential aspects of said work. Beyond that, as I've predicted before: I expect plenty of emails will be generated from a short list of bullet points, only to be summarized by the recipient's AI back into bullet points.
I will also make a prediction counter to what seems mainstream: AI is about to peak for a while. The start of AI image generation was with Google's DeepDream in 2015 - image recognition software that could be run in reverse to "recognize" patterns where there were none, effectively generating an image from digital noise or an unrelated image. While I'm not an expert by any means, I don't think we're too far off from that a decade later, just using very fine tuned tools that develop more coherent images. I guess that we're close to maxing out how efficiently we're able to generate images and video in that manner, and the hard caps on how much creative direction we can have when using AI - as well as the limits to how long we can keep it coherent (e.g. long videos or a chronologically consistent set of images) - will prevent AI from progressing too far beyond what it is currently unless/until another breakthrough occurs.
-
@ 24a587f0:edc58e00
The 7FF platform has been gaining prominence in the online entertainment scene by offering a complete, secure, and exciting experience for Brazilian players. With a modern look, intuitive navigation, and efficient technical support, 7FF provides everything users need to have fun and test their luck in a practical, reliable way.
From the very first visit, it is easy to see that 7FF was built with attention to detail. The site is optimized to work flawlessly on both computers and mobile devices, letting users play wherever they are. The friendly interface and well-organized menus make navigation easy, even for those with little familiarity with online gaming platforms.
Another major strength of 7FF is security. The company invests in state-of-the-art encryption technology to protect players' personal and financial data, guaranteeing a trustworthy, protected environment. The platform also offers quick registration and simple verification, which speeds up onboarding for new users.
A Varied and Exciting Game Catalog
7FF offers a wide variety of games that cater to every player profile, from the most experienced to beginners. Among the main categories available are card games, themed slots, virtual roulette, and live experiences with real dealers. The titles are supplied by internationally recognized developers, which ensures high-quality graphics, smooth gameplay, and excellent interactive features.
For those who enjoy games with a strong visual and thematic component, the slots are a great choice. At 7FF you can find options with themes ranging from mythology to the Wild West, from space adventures to Eastern cultures. For fans of strategy and quick decision-making, card games such as poker and blackjack guarantee exciting, challenging rounds.
Player Experience: Fun, Bonuses, and Support
At 7FF, the player experience is treated as a priority. The platform offers regular promotions and attractive bonuses for new users as well as frequent players. These include welcome bonuses, free spins, cashback in certain game modes, and prizes in seasonal events. These rewards make sessions even more exciting and increase the chances of significant winnings.
Customer support is also one of 7FF's strong points. The support team is available 24 hours a day, seven days a week, offering fast, efficient help via live chat, email, and a detailed FAQ. This attention to the player contributes to a smooth, satisfying experience, even in situations that require technical support.
In addition, 7FF values responsible gaming and provides tools that help users set personal limits, promoting a balanced, healthy environment for everyone.
Conclusion
7FF has established itself as one of the best digital entertainment options in Brazil. With a modern platform, varied games, and quality support, the player experience is taken seriously in every respect. If you are looking for excitement, fun, and chances to win real prizes, 7FF is the ideal place to start your journey.
-
@ e6817453:b0ac3c39
2025-01-05 14:29:17The Rise of Graph RAGs and the Quest for Data Quality
As we enter a new year, it’s impossible to ignore the boom of retrieval-augmented generation (RAG) systems, particularly those leveraging graph-based approaches. The previous year saw a surge in advancements and discussions about Graph RAGs, driven by their potential to enhance large language models (LLMs), reduce hallucinations, and deliver more reliable outputs. Let’s dive into the trends, challenges, and strategies for making the most of Graph RAGs in artificial intelligence.
Booming Interest in Graph RAGs
Graph RAGs have dominated the conversation in AI circles. With new research papers and innovations emerging weekly, it’s clear that this approach is reshaping the landscape. These systems, especially those developed by tech giants like Microsoft, demonstrate how graphs can:
- Enhance LLM Outputs: By grounding responses in structured knowledge, graphs significantly reduce hallucinations.
- Support Complex Queries: Graphs excel at managing linked and connected data, making them ideal for intricate problem-solving.
Conferences on linked and connected data have increasingly focused on Graph RAGs, underscoring their central role in modern AI systems. However, the excitement around this technology has brought critical questions to the forefront: How do we ensure the quality of the graphs we’re building, and are they genuinely aligned with our needs?
Data Quality: The Foundation of Effective Graphs
A high-quality graph is the backbone of any successful RAG system. Constructing these graphs from unstructured data requires attention to detail and rigorous processes. Here’s why:
- Richness of Entities: Effective retrieval depends on graphs populated with rich, detailed entities.
- Freedom from Hallucinations: Poorly constructed graphs amplify inaccuracies rather than mitigating them.
Without robust data quality, even the most sophisticated Graph RAGs become ineffective. As a result, the focus must shift to refining the graph construction process. Improving data strategy and ensuring meticulous data preparation is essential to unlock the full potential of Graph RAGs.
Hybrid Graph RAGs and Variations
While standard Graph RAGs are already transformative, hybrid models offer additional flexibility and power. Hybrid RAGs combine structured graph data with other retrieval mechanisms, creating systems that:
- Handle diverse data sources with ease.
- Offer improved adaptability to complex queries.
Exploring these variations can open new avenues for AI systems, particularly in domains requiring structured and unstructured data processing.
Ontology: The Key to Graph Construction Quality
Ontology — defining how concepts relate within a knowledge domain — is critical for building effective graphs. While this might sound abstract, it’s a well-established field blending philosophy, engineering, and art. Ontology engineering provides the framework for:
- Defining Relationships: Clarifying how concepts connect within a domain.
- Validating Graph Structures: Ensuring constructed graphs are logically sound and align with domain-specific realities.
Traditionally, ontologists — experts in this discipline — have been integral to large enterprises and research teams. However, not every team has access to dedicated ontologists, leading to a significant challenge: How can teams without such expertise ensure the quality of their graphs?
How to Build Ontology Expertise in a Startup Team
For startups and smaller teams, developing ontology expertise may seem daunting, but it is achievable with the right approach:
- Assign a Knowledge Champion: Identify a team member with a strong analytical mindset and give them time and resources to learn ontology engineering.
- Provide Training: Invest in courses, workshops, or certifications in knowledge graph and ontology creation.
- Leverage Partnerships: Collaborate with academic institutions, domain experts, or consultants to build initial frameworks.
- Utilize Tools: Introduce ontology development tools like Protégé, OWL, or SHACL to simplify the creation and validation process.
- Iterate with Feedback: Continuously refine ontologies through collaboration with domain experts and iterative testing.
So it is not always affordable for a startup to have a dedicated ontologist or knowledge engineer on the team, but you could involve consultants or grow barefoot experts.
You could read about barefoot experts in my article:
Even startups can achieve robust and domain-specific ontology frameworks by fostering in-house expertise.
How to Find or Create Ontologies
For teams venturing into Graph RAGs, several strategies can help address the ontology gap:
- Leverage Existing Ontologies: Many industries and domains already have open ontologies. For instance:
  - Public Knowledge Graphs: Resources like Wikipedia’s graph offer a wealth of structured knowledge.
  - Industry Standards: Enterprises such as Siemens have invested in creating and sharing ontologies specific to their fields.
  - Business Framework Ontology (BFO): A valuable resource for enterprises looking to define business processes and structures.
- Build In-House Expertise: If budgets allow, consider hiring knowledge engineers or providing team members with the resources and time to develop expertise in ontology creation.
- Utilize LLMs for Ontology Construction: Interestingly, LLMs themselves can act as a starting point for ontology development:
  - Prompt-Based Extraction: LLMs can generate draft ontologies by leveraging their extensive training on graph data.
  - Domain Expert Refinement: Combine LLM-generated structures with insights from domain experts to create tailored ontologies.
Parallel Ontology and Graph Extraction
An emerging approach involves extracting ontologies and graphs in parallel. While this can streamline the process, it presents challenges such as:
- Detecting Hallucinations: Differentiating between genuine insights and AI-generated inaccuracies.
- Ensuring Completeness: Ensuring no critical concepts are overlooked during extraction.
Teams must carefully validate outputs to ensure reliability and accuracy when employing this parallel method.
LLMs as Ontologists
While traditionally dependent on human expertise, ontology creation is increasingly supported by LLMs. These models, trained on vast amounts of data, possess inherent knowledge of many open ontologies and taxonomies. Teams can use LLMs to:
- Generate Skeleton Ontologies: Prompt LLMs with domain-specific information to draft initial ontology structures.
- Validate and Refine Ontologies: Collaborate with domain experts to refine these drafts, ensuring accuracy and relevance.
However, for validation and graph construction, formal tools such as OWL, SHACL, and RDF should be prioritized over LLMs to minimize hallucinations and ensure robust outcomes.
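To make this concrete, here is a minimal validation sketch in Python. It is an illustration only: the `ex:` namespace and the shape are hypothetical, and it assumes the `rdflib` and `pyshacl` packages.

```python
from rdflib import Graph
from pyshacl import validate  # pip install rdflib pyshacl

# Toy data graph: a company with no name.
data = Graph().parse(data="""
    @prefix ex: <http://example.org/> .
    ex:acme a ex:Company .
""", format="turtle")

# A shape requiring every ex:Company to carry at least one ex:name.
shapes = Graph().parse(data="""
    @prefix ex: <http://example.org/> .
    @prefix sh: <http://www.w3.org/ns/shacl#> .
    ex:CompanyShape a sh:NodeShape ;
        sh:targetClass ex:Company ;
        sh:property [ sh:path ex:name ; sh:minCount 1 ] .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False: ex:acme is missing the required ex:name
print(report)
```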
Final Thoughts: Unlocking the Power of Graph RAGs
The rise of Graph RAGs underscores a simple but crucial correlation: improving graph construction and data quality directly enhances retrieval systems. To truly harness this power, teams must invest in understanding ontologies, building quality graphs, and leveraging both human expertise and advanced AI tools.
As we move forward, the interplay between Graph RAGs and ontology engineering will continue to shape the future of AI. Whether through adopting existing frameworks or exploring innovative uses of LLMs, the path to success lies in a deep commitment to data quality and domain understanding.
Have you explored these technologies in your work? Share your experiences and insights — and stay tuned for more discussions on ontology extraction and its role in AI advancements. Cheers to a year of innovation!
-
@ 24a587f0:edc58e00
The online gaming landscape is constantly evolving, and the CR7 platform emerges as a benchmark of innovation and high-quality entertainment for players seeking excitement, security, and memorable experiences. With a modern interface, professional support, and a wide variety of games, CR7 offers much more than simple fun: it provides a complete digital entertainment journey.
Meet the CR7 Platform
The CR7 platform was developed with the goal of delivering a superior online gaming experience. From the very first visit, users notice the smooth navigation, fast page loading, and how easy it is to find their favorite games. Compatible with smartphones, tablets, and computers, CR7 lets you play from wherever you are, with total security and convenience.
Beyond the intuitive design, CR7 features a simple, fast registration system and efficient payment methods, offering a variety of options for deposits and withdrawals. Security is one of the platform's pillars, with encryption and verification technologies that guarantee full protection of user data.
Diverse Games for Every Style
One of CR7's greatest strengths is its variety of games. The platform features hundreds of options that please everyone from the most traditional players to those who prefer modern, highly interactive games. The main genres available include:
Online Slots: Spinning machines with impressive graphics, immersive sound effects, and varied themes, ranging from Ancient Egypt to futuristic adventures.
Table Games: For fans of the classics, CR7 offers digital versions of roulette, blackjack, poker, and more, all with excellent gameplay and interactive features.
Live Games: One of the platform's biggest attractions, live games allow real-time interaction with hosts and other players, making the experience even more engaging.
Minigames and Instant Games: For those who prefer quick, dynamic rounds, CR7 provides several minigames with simple rules and attractive rewards.
Each game is developed by internationally recognized providers, guaranteeing high-quality graphics, fair gameplay, and an appropriate return to the player.
Player Experience: Fun with Support and Rewards
CR7 does not limit itself to offering games: it provides a true entertainment experience. Players enjoy exclusive promotions, welcome bonuses, and loyalty programs that make the journey even more rewarding.
Customer support is available 24 hours a day, every day of the week, with service in Portuguese through channels such as live chat, email, and WhatsApp. This ensures that any question or problem is resolved quickly and efficiently.
Another highlight is the active community of players the platform fosters. Through tournaments, challenges, and rankings, users get the chance to compete, stand out, and win extra prizes, as well as interact with other participants in real time.
Conclusion
The CR7 platform represents a new era in the world of online gaming. With cutting-edge technology, constant support, an impressive variety of games, and real rewards, it has established itself as a smart choice for anyone seeking complete, safe, and exciting entertainment.
If you are ready to elevate your digital gaming experience, CR7 is the right destination. Sign up, explore, play, and discover why so many Brazilians have already chosen CR7 as their trusted platform.
-
@ a4a6b584:1e05b95b
2025-01-02 18:13:31The Four-Layer Framework
Layer 1: Zoom Out
Start by looking at the big picture. What’s the subject about, and why does it matter? Focus on the overarching ideas and how they fit together. Think of this as the 30,000-foot view—it’s about understanding the "why" and "how" before diving into the "what."
Example: If you’re learning programming, start by understanding that it’s about giving logical instructions to computers to solve problems.
- Tip: Keep it simple. Summarize the subject in one or two sentences and avoid getting bogged down in specifics at this stage.
Once you have the big picture in mind, it’s time to start breaking it down.
Layer 2: Categorize and Connect
Now it’s time to break the subject into categories—like creating branches on a tree. This helps your brain organize information logically and see connections between ideas.
Example: Studying biology? Group concepts into categories like cells, genetics, and ecosystems.
- Tip: Use headings or labels to group similar ideas. Jot these down in a list or simple diagram to keep track.
With your categories in place, you’re ready to dive into the details that bring them to life.
Layer 3: Master the Details
Once you’ve mapped out the main categories, you’re ready to dive deeper. This is where you learn the nuts and bolts—like formulas, specific techniques, or key terminology. These details make the subject practical and actionable.
Example: In programming, this might mean learning the syntax for loops, conditionals, or functions in your chosen language.
- Tip: Focus on details that clarify the categories from Layer 2. Skip anything that doesn’t add to your understanding.
Now that you’ve mastered the essentials, you can expand your knowledge to include extra material.
Layer 4: Expand Your Horizons
Finally, move on to the extra material—less critical facts, trivia, or edge cases. While these aren’t essential to mastering the subject, they can be useful in specialized discussions or exams.
Example: Learn about rare programming quirks or historical trivia about a language’s development.
- Tip: Spend minimal time here unless it’s necessary for your goals. It’s okay to skim if you’re short on time.
Pro Tips for Better Learning
1. Use Active Recall and Spaced Repetition
Test yourself without looking at notes. Review what you’ve learned at increasing intervals—like after a day, a week, and a month. This strengthens memory by forcing your brain to actively retrieve information.
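As a toy illustration of that cadence, a sketch in Python (fixed day/week/month intervals only, not a full spaced-repetition algorithm):

```python
from datetime import date, timedelta

# Fixed intervals as suggested above; real systems adapt these per item.
def review_dates(learned_on: date, intervals=(1, 7, 30)):
    return [learned_on + timedelta(days=d) for d in intervals]

print(review_dates(date(2025, 1, 2)))
# [datetime.date(2025, 1, 3), datetime.date(2025, 1, 9), datetime.date(2025, 2, 1)]
```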
2. Map It Out
Create visual aids like diagrams or concept maps to clarify relationships between ideas. These are particularly helpful for organizing categories in Layer 2.
3. Teach What You Learn
Explain the subject to someone else as if they’re hearing it for the first time. Teaching exposes any gaps in your understanding and helps reinforce the material.
4. Engage with LLMs and Discuss Concepts
Take advantage of tools like ChatGPT or similar large language models to explore your topic in greater depth. Use these tools to:
- Ask specific questions to clarify confusing points.
- Engage in discussions to simulate real-world applications of the subject.
- Generate examples or analogies that deepen your understanding.

Tip: Use LLMs as a study partner, but don’t rely solely on them. Combine these insights with your own critical thinking to develop a well-rounded perspective.
Get Started
Ready to try the Four-Layer Method? Take 15 minutes today to map out the big picture of a topic you’re curious about—what’s it all about, and why does it matter? By building your understanding step by step, you’ll master the subject with less stress and more confidence.
-
@ 005bc4de:ef11e1a2
2025-05-04 12:01:42OSU commencement speech revisited 1 year later
One year ago, May 5, 2024, the commencement speaker at Ohio State University was Chris Pan. He got booed for mentioning bitcoin. There were some other things involved, but the bitcoin part is what caught my ear.
Here's an article about the speech and a video clip with the bitcoin mention. The quote that I feel is especially pertinent is this, '“The mechanics of investing are actually easy, but it comes down to mindset,” Pan said. “The most common barriers are fear, laziness and closed-mindedness.”'
Last year, I wrote this and had it sent as a reminder to myself (I received the reminder yesterday after totally forgetting about this):
Ohio State commencement speaker mentions bitcoin and got booed.
I wondered what would've happened if they'd taken his advice to heart and bought bitcoin that day. Linked article: https://www.businessinsider.com/osu-commencement-speaker-ayahuasca-praises-bitcoin-booed-viral-2024-5
Nat Brunell interviewed him on her Coin Stories podcast shortly after his speech: https://www.youtube.com/watch?v=LRqKxKqlbcI
BTC on 5/5/2024 day of speech: about $64,047 (chart below)
If any of those now wise old 23-year-olds remember the advice they were given, bitcoin is currently at $95,476. Those who took Pan's advice achieved a 49% gain in one year. Those who did not lost about 2.7% of their buying power to inflation.
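For the curious, the quoted figures check out (a quick sketch using the post's rounded prices):

```python
# Prices in USD, rounded, as quoted above.
price_2024_05_05 = 64_047
price_now = 95_476

gain = price_now / price_2024_05_05 - 1
print(f"{gain:.1%}")  # 49.1%, the ~49% gain mentioned above
```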
For bitcoiners, think about how far we've come. May of 2024 was still the waning days of the "War on Crypto": bitcoin was boiling the oceans, and if you held, used, or liked bitcoin you were evil. Those were dark days, and days I'm glad are behind us.
Here is the full commencement speech. The bitcoin part is around the 5 or 6 minute mark: https://m.youtube.com/watch?v=lcH-iL_FdYo
-
@ 90de72b7:8f68fdc0
2025-05-04 10:23:52Custom Traffic Light Control System
This Petri net represents a traffic control protocol ensuring that two traffic lights alternate safely and are never both green at the same time.
Places & Transitions
- Places:
- greenLight1: Indicates that the first traffic light is green.
- greenLight2: Indicates that the second traffic light is green.
- redLight1: Indicates that the first traffic light is red.
- redLight2: Indicates that the second traffic light is red.
- queue: Acts as a synchronization mechanism ensuring controlled alternation between the two traffic lights.
- Transitions:
- start: Initializes the system by placing tokens in greenLight1 and redLight2.
- toRed1: Moves a token from greenLight1 to redLight1, while placing a token in queue.
- toGreen2: Moves a token from redLight2 to greenLight2, requiring queue.
- toGreen1: Moves a token from queue and redLight1 to greenLight1.
- toRed2: Moves a token from greenLight2 to redLight2, placing a token back into queue.
- stop: Terminates the system by removing tokens from redLight1, queue, and redLight2, representing the system's end state.
petrinet ;start () -> greenLight1 redLight2 ;toRed1 greenLight1 -> queue redLight1 ;toGreen2 redLight2 queue -> greenLight2 ;toGreen1 queue redLight1 -> greenLight1 ;toRed2 greenLight2 -> redLight2 queue ;stop redLight1 queue redLight2 -> ()
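For anyone who wants to play the token game by hand, here is a minimal simulator sketch in Python. Assumptions of mine, not part of the net definition: single tokens per place and a manually chosen firing order, with transition names mirroring the block above.

```python
# Transition name -> (input places, output places), as defined above.
transitions = {
    "start":    ([], ["greenLight1", "redLight2"]),
    "toRed1":   (["greenLight1"], ["queue", "redLight1"]),
    "toGreen2": (["redLight2", "queue"], ["greenLight2"]),
    "toGreen1": (["queue", "redLight1"], ["greenLight1"]),
    "toRed2":   (["greenLight2"], ["redLight2", "queue"]),
    "stop":     (["redLight1", "queue", "redLight2"], []),
}

marking = {}  # place -> token count

def fire(name):
    inputs, outputs = transitions[name]
    if any(marking.get(p, 0) < 1 for p in inputs):
        raise RuntimeError(f"transition {name} is not enabled")
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] = marking.get(p, 0) + 1

# One full alternation cycle, then shutdown; at no step are both
# greenLight1 and greenLight2 marked simultaneously.
for t in ["start", "toRed1", "toGreen2", "toRed2", "toGreen1", "toRed1", "stop"]:
    fire(t)
    print(t, "->", {p: n for p, n in marking.items() if n})
```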
-
@ fe32298e:20516265
2024-12-16 20:59:13Today I learned how to install NVapi to monitor my GPUs in Home Assistant.
NVApi is a lightweight API designed for monitoring NVIDIA GPU utilization and enabling automated power management. It provides real-time GPU metrics, supports integration with tools like Home Assistant, and offers flexible power management and PCIe link speed management based on workload and thermal conditions.
- GPU Utilization Monitoring: Utilization, memory usage, temperature, fan speed, and power consumption.
- Automated Power Limiting: Adjusts power limits dynamically based on temperature thresholds and total power caps, configurable per GPU or globally.
- Cross-GPU Coordination: Total power budget applies across multiple GPUs in the same system.
- PCIe Link Speed Management: Controls minimum and maximum PCIe link speeds with idle thresholds for power optimization.
- Home Assistant Integration: Uses the built-in RESTful platform and template sensors.
Getting the Data
```
sudo apt install golang-go
git clone https://github.com/sammcj/NVApi.git
cd NVapi
go run main.go -port 9999 -rate 1
curl http://localhost:9999/gpu
```
Response for a single GPU:
[ { "index": 0, "name": "NVIDIA GeForce RTX 4090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 16, "power_limit_watts": 450, "memory_total_gb": 23.99, "memory_used_gb": 0.46, "memory_free_gb": 23.52, "memory_usage_percent": 2, "temperature": 38, "processes": [], "pcie_link_state": "not managed" } ]
Response for multiple GPUs:
[ { "index": 0, "name": "NVIDIA GeForce RTX 3090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 14, "power_limit_watts": 350, "memory_total_gb": 24, "memory_used_gb": 0.43, "memory_free_gb": 23.57, "memory_usage_percent": 2, "temperature": 36, "processes": [], "pcie_link_state": "not managed" }, { "index": 1, "name": "NVIDIA RTX A4000", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 10, "power_limit_watts": 140, "memory_total_gb": 15.99, "memory_used_gb": 0.56, "memory_free_gb": 15.43, "memory_usage_percent": 3, "temperature": 41, "processes": [], "pcie_link_state": "not managed" } ]
Start at Boot
Create `/etc/systemd/system/nvapi.service`:

```
[Unit]
Description=Run NVapi
After=network.target

[Service]
Type=simple
Environment="GOPATH=/home/ansible/go"
WorkingDirectory=/home/ansible/NVapi
ExecStart=/usr/bin/go run main.go -port 9999 -rate 1
Restart=always
User=ansible

# Environment="GPU_TEMP_CHECK_INTERVAL=5"
# Environment="GPU_TOTAL_POWER_CAP=400"
# Environment="GPU_0_LOW_TEMP=40"
# Environment="GPU_0_MEDIUM_TEMP=70"
# Environment="GPU_0_LOW_TEMP_LIMIT=135"
# Environment="GPU_0_MEDIUM_TEMP_LIMIT=120"
# Environment="GPU_0_HIGH_TEMP_LIMIT=100"
# Environment="GPU_1_LOW_TEMP=45"
# Environment="GPU_1_MEDIUM_TEMP=75"
# Environment="GPU_1_LOW_TEMP_LIMIT=140"
# Environment="GPU_1_MEDIUM_TEMP_LIMIT=125"
# Environment="GPU_1_HIGH_TEMP_LIMIT=110"

[Install]
WantedBy=multi-user.target
```
Home Assistant
Add to Home Assistant `configuration.yaml` and restart HA (completely).

For a single GPU, this works:

```
sensor:
  - platform: rest
    name: MYPC GPU Information
    resource: http://mypc:9999
    method: GET
    headers:
      Content-Type: application/json
    value_template: "{{ value_json[0].index }}"
    json_attributes:
      - name
      - gpu_utilisation
      - memory_utilisation
      - power_watts
      - power_limit_watts
      - memory_total_gb
      - memory_used_gb
      - memory_free_gb
      - memory_usage_percent
      - temperature
    scan_interval: 1 # seconds

  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'temperature') }}"
        unit_of_measurement: "°C"
```
For multiple GPUs:

```
rest:
  scan_interval: 1
  resource: http://mypc:9999
  sensor:
    - name: "MYPC GPU0 Information"
      value_template: "{{ value_json[0].index }}"
      json_attributes_path: "$.0"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature
    - name: "MYPC GPU1 Information"
      value_template: "{{ value_json[1].index }}"
      json_attributes_path: "$.1"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature

sensor:
  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC GPU0 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC GPU0 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC GPU0 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC GPU0 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC GPU0 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'temperature') }}"
        unit_of_measurement: "C"
  - platform: template
    sensors:
      mypc_gpu_1_gpu:
        friendly_name: "MYPC GPU1 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_memory:
        friendly_name: "MYPC GPU1 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_power:
        friendly_name: "MYPC GPU1 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_power_limit:
        friendly_name: "MYPC GPU1 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_temperature:
        friendly_name: "MYPC GPU1 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'temperature') }}"
        unit_of_measurement: "C"
```
Basic entity card:
```
type: entities
entities:
  - entity: sensor.mypc_gpu_0_gpu
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_memory
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power_limit
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_temperature
    secondary_info: last-updated
```
Ansible Role
```
- name: install go
  become: true
  package:
    name: golang-go
    state: present

- name: git clone
  git:
    repo: "https://github.com/sammcj/NVApi.git"
    dest: "/home/ansible/NVapi"
    update: yes
    force: true

# go run main.go -port 9999 -rate 1

- name: install systemd service
  become: true
  copy:
    src: nvapi.service
    dest: /etc/systemd/system/nvapi.service

- name: Reload systemd daemons, enable, and restart nvapi
  become: true
  systemd:
    name: nvapi
    daemon_reload: yes
    enabled: yes
    state: restarted
```
-
@ 6f6b50bb:a848e5a1
2024-12-15 15:09:52What would it mean to treat AI as a tool instead of a person?
Since the launch of ChatGPT, explorations in two directions have accelerated.
The first direction concerns technical capabilities. How large a model can we train? How well can it answer SAT questions? How efficiently can we deploy it?
The second direction concerns interaction design. How do we communicate with a model? How can we use it for useful work? What metaphor do we use to reason about it?
The first direction is widely pursued and enormously funded, and for good reason: progress in technical capabilities underpins every possible application. But the second is just as crucial to the field, and it holds enormous unknowns. We are only a few years into the era of large models. What are the odds that we have already figured out the best ways to use them?
I propose a new mode of interaction, in which models play the role of computer applications (for example, phone apps): providing a graphical interface, interpreting user input, and updating their state. In this mode, instead of being an "agent" that uses a computer on a human's behalf, the AI can provide a richer and more powerful computing environment for us to use.
Metaphors for interaction
At the heart of an interaction is a metaphor that guides a user's expectations of a system. The early days of computing took metaphors like "desktops", "typewriters", "spreadsheets", and "letters" and turned them into digital equivalents, allowing the user to reason about their behavior. You can leave something on your desk and come back for it; you need an address to send a letter. As we developed cultural knowledge of these devices, the need for these particular metaphors faded, and with them the skeuomorphic interface designs that reinforced them. Like a trash can or a pencil, a computer is now a metaphor for itself.
The dominant metaphor for large models today is model-as-person. This is an effective metaphor because people have extensive capabilities that we know intuitively. It implies that we can have a conversation with a model and ask it questions; that the model can collaborate with us on a document or a piece of code; that we can give it a task to carry out on its own, and that it will come back when it is finished.
However, treating a model as a person profoundly limits how we think about interacting with it. Human interactions are inherently slow and linear, limited by bandwidth and by the turn-taking nature of verbal communication. As we have all experienced, communicating complex ideas in conversation is difficult and lossy. When we want precision, we turn to tools instead, using direct manipulation and high-bandwidth visual interfaces to make diagrams, write code, and design CAD models. Because we conceive of models as people, we use them through slow conversations, even though they are perfectly capable of accepting fast, direct input and producing visual results. The metaphors we use constrain the experiences we build, and the model-as-person metaphor keeps us from exploring the full potential of large models.
For many use cases, and especially for productive work, I believe the future lies in another metaphor: model-as-computer.
Using an AI like a computer
Under the model-as-computer metaphor, we will interact with large models following the intuitions we have about computer applications (whether on desktop, tablet, or phone). Note that this does not mean the model will be a traditional app any more than the Windows desktop was a literal desk. "Computer application" will be a way for a model to represent itself to us. Instead of acting like a person, the model will act like a computer.
Acting like a computer means producing a graphical interface. In place of the teletype-style linear stream of text that ChatGPT provides, a model-as-computer system will generate something that resembles the interface of a modern application: buttons, sliders, tabs, images, charts, and all the rest. This addresses key limitations of the standard model-as-person chat interface:
- Discovery. A good tool suggests its uses. When the only interface is an empty text box, it is up to the user to figure out what to do and to understand the system's limits. The Edit sidebar in Lightroom is a great way to learn photo editing because it does not just tell you what this application can do with a photo, but what you might want to do. Similarly, a model-as-computer interface for DALL-E could show new possibilities for your image generations.
- Efficiency. Direct manipulation is faster than describing a request in words. To continue the Lightroom example, it would be unthinkable to edit a photo by telling a person which sliders to move and by how much. It would take a whole day to ask for slightly lower exposure and slightly higher vibrance, just to see how it would look. In the model-as-computer metaphor, the model can create tools that let you communicate what you want more efficiently, and therefore get things done faster.
Unlike a traditional app, this graphical interface is generated by the model on demand. This means that every part of the interface you see is relevant to what you are doing in that moment, including the specific contents of your work. It also means that if you want a larger or different interface, you can simply ask for it. You could ask DALL-E to produce some editable presets for its settings inspired by famous sketch artists. When you click the Leonardo da Vinci preset, it sets the sliders for highly detailed perspective drawings in black ink. If you click Charles Schulz, it selects low-detail 2D technicolor comics.
A protean bicycle for the mind
The model-as-person metaphor has a curious tendency to create distance between the user and the model, mirroring the communication gap between two people, which can be narrowed but never fully closed. Because communicating in words is difficult and costly, people tend to split tasks between themselves into chunks that are large and as independent as possible. Model-as-person interfaces follow this pattern: it is not worth telling a model to add a return statement to your function when it is faster to write it yourself. With the overhead of communication, model-as-person systems are most useful when they can do an entire chunk of work on their own. They do things for you.
This contrasts with how we interact with computers or other tools. Tools produce real-time visual feedback and are controlled through direct manipulation. They have such low communication overhead that there is no need to carve out an independent chunk of work. It makes more sense to keep the human in the loop and direct the tool moment by moment. Like seven-league boots, tools let you go farther with each step, but you are still the one doing the work. They let you do things faster.
Consider the task of building a website using a large model. With today's interfaces, you might treat the model as a contractor or a collaborator. You would try to write out in words as much as possible about how you want the site to look, what you want it to say, and what features you want it to have. The model would generate a first draft, you would run it, and then you would provide feedback. "Make the logo a bit bigger", you would say, and "center that first hero image", and "there needs to be a login button in the header". To get exactly what you want, you will send a very long list of increasingly minute requests.
An alternative model-as-computer interaction would be different: instead of building the website, the model would generate an interface for you to build it, where every user input to that interface queries the large model under the hood. Perhaps, when you describe your needs, it would create an interface with a sidebar and a preview window. At first the sidebar contains only a few layout sketches you can choose as a starting point. You can click on each one, and the model writes the HTML for a web page using that layout and displays it in the preview window. Now that you have a page to work with, the sidebar gains additional options that affect the page globally, such as font pairings and color schemes. The preview acts as a WYSIWYG editor, letting you grab elements and move them, edit their contents, and so on. Supporting all of this is the model, which sees these user actions and rewrites the page to match the changes made. Because the model can generate an interface to help the two of you communicate more efficiently, you can exercise more control over the final product in less time.
The model-as-computer metaphor encourages us to think of the model as a tool to interact with in real time rather than a collaborator to hand tasks to. Instead of replacing an intern or a tutor, it can be a kind of protean bicycle for the mind, one that is always custom-built exactly for you and for the terrain you intend to cross.
A new paradigm for computing?
Models that can generate interfaces on demand are an entirely new frontier in computing. They may be a whole new paradigm, given how they short-circuit the existing application model. Giving end users the power to create and modify apps on the fly fundamentally changes how we interact with computers. In place of a single static application built by a developer, a model will generate an application tailored to the user and their immediate needs. In place of business logic implemented in code, the model will interpret the user's inputs and update the user interface. It is even possible that this kind of generative interface will replace the operating system entirely, generating and managing interfaces and windows on the fly as needed.
At first, generative interfaces will be a toy, useful only for creative exploration and a few other niche applications. After all, nobody would want an email app that occasionally sends emails to your ex and lies about your inbox. But gradually the models will improve. Even as they push further into the space of entirely new experiences, they will slowly become reliable enough to be used for real work.
Small pieces of this future already exist. Years ago Jonas Degrave showed that ChatGPT could do a good simulation of a Linux command line. Similarly, websim.ai uses an LLM to generate websites on demand as you browse them. Oasis, GameNGen, and DIAMOND train action-conditioned video models on individual video games, letting you play, say, Doom inside a large model. And Genie 2 generates playable video games from text prompts. Generative interfaces may still seem like a crazy idea, but they are not that crazy.
There are huge open questions about what all of this will look like. Where will generative interfaces first be useful? How will we share and distribute the experiences we create by collaborating with the model, if they exist only as the context of a large model? Would we even want to? What new kinds of experiences will be possible? How will all of this work in practice? Will models generate interfaces as code, or produce raw pixels directly?
I don't know these answers yet. We'll have to experiment and find out!
Dall’avvio di ChatGPT, le esplorazioni in due direzioni hanno preso velocità.
La prima direzione riguarda le capacità tecniche. Quanto grande possiamo addestrare un modello? Quanto bene può rispondere alle domande del SAT? Con quanta efficienza possiamo distribuirlo?
La seconda direzione riguarda il design dell’interazione. Come comunichiamo con un modello? Come possiamo usarlo per un lavoro utile? Quale metafora usiamo per ragionare su di esso?
La prima direzione è ampiamente seguita e enormemente finanziata, e per una buona ragione: i progressi nelle capacità tecniche sono alla base di ogni possibile applicazione. Ma la seconda è altrettanto cruciale per il campo e ha enormi incognite. Siamo solo a pochi anni dall’inizio dell’era dei grandi modelli. Quali sono le probabilità che abbiamo già capito i modi migliori per usarli?
Propongo una nuova modalità di interazione, in cui i modelli svolgano il ruolo di applicazioni informatiche (ad esempio app per telefoni): fornendo un’interfaccia grafica, interpretando gli input degli utenti e aggiornando il loro stato. In questa modalità, invece di essere un “agente” che utilizza un computer per conto dell’essere umano, l’IA può fornire un ambiente informatico più ricco e potente che possiamo utilizzare.
Metafore per l’interazione
Al centro di un’interazione c’è una metafora che guida le aspettative di un utente su un sistema. I primi giorni dell’informatica hanno preso metafore come “scrivanie”, “macchine da scrivere”, “fogli di calcolo” e “lettere” e le hanno trasformate in equivalenti digitali, permettendo all’utente di ragionare sul loro comportamento. Puoi lasciare qualcosa sulla tua scrivania e tornare a prenderlo; hai bisogno di un indirizzo per inviare una lettera. Man mano che abbiamo sviluppato una conoscenza culturale di questi dispositivi, la necessità di queste particolari metafore è scomparsa, e con esse i design di interfaccia skeumorfici che le rafforzavano. Come un cestino o una matita, un computer è ora una metafora di se stesso.
La metafora dominante per i grandi modelli oggi è modello-come-persona. Questa è una metafora efficace perché le persone hanno capacità estese che conosciamo intuitivamente. Implica che possiamo avere una conversazione con un modello e porgli domande; che il modello possa collaborare con noi su un documento o un pezzo di codice; che possiamo assegnargli un compito da svolgere da solo e che tornerà quando sarà finito.
Tuttavia, trattare un modello come una persona limita profondamente il nostro modo di pensare all’interazione con esso. Le interazioni umane sono intrinsecamente lente e lineari, limitate dalla larghezza di banda e dalla natura a turni della comunicazione verbale. Come abbiamo tutti sperimentato, comunicare idee complesse in una conversazione è difficile e dispersivo. Quando vogliamo precisione, ci rivolgiamo invece a strumenti, utilizzando manipolazioni dirette e interfacce visive ad alta larghezza di banda per creare diagrammi, scrivere codice e progettare modelli CAD. Poiché concepiamo i modelli come persone, li utilizziamo attraverso conversazioni lente, anche se sono perfettamente in grado di accettare input diretti e rapidi e di produrre risultati visivi. Le metafore che utilizziamo limitano le esperienze che costruiamo, e la metafora modello-come-persona ci impedisce di esplorare il pieno potenziale dei grandi modelli.
Per molti casi d’uso, e specialmente per il lavoro produttivo, credo che il futuro risieda in un’altra metafora: modello-come-computer.
Usare un’IA come un computer
Sotto la metafora modello-come-computer, interagiremo con i grandi modelli seguendo le intuizioni che abbiamo sulle applicazioni informatiche (sia su desktop, tablet o telefono). Nota che ciò non significa che il modello sarà un’app tradizionale più di quanto il desktop di Windows fosse una scrivania letterale. “Applicazione informatica” sarà un modo per un modello di rappresentarsi a noi. Invece di agire come una persona, il modello agirà come un computer.
Agire come un computer significa produrre un’interfaccia grafica. Al posto del flusso lineare di testo in stile telescrivente fornito da ChatGPT, un sistema modello-come-computer genererà qualcosa che somiglia all’interfaccia di un’applicazione moderna: pulsanti, cursori, schede, immagini, grafici e tutto il resto. Questo affronta limitazioni chiave dell’interfaccia di chat standard modello-come-persona:
Scoperta. Un buon strumento suggerisce i suoi usi. Quando l’unica interfaccia è una casella di testo vuota, spetta all’utente capire cosa fare e comprendere i limiti del sistema. La barra laterale Modifica in Lightroom è un ottimo modo per imparare l’editing fotografico perché non si limita a dirti cosa può fare questa applicazione con una foto, ma cosa potresti voler fare. Allo stesso modo, un’interfaccia modello-come-computer per DALL-E potrebbe mostrare nuove possibilità per le tue generazioni di immagini.
Efficienza. La manipolazione diretta è più rapida che scrivere una richiesta a parole. Per continuare l’esempio di Lightroom, sarebbe impensabile modificare una foto dicendo a una persona quali cursori spostare e di quanto. Ci vorrebbe un giorno intero per chiedere un’esposizione leggermente più bassa e una vibranza leggermente più alta, solo per vedere come apparirebbe. Nella metafora modello-come-computer, il modello può creare strumenti che ti permettono di comunicare ciò che vuoi più efficientemente e quindi di fare le cose più rapidamente.
A differenza di un’app tradizionale, questa interfaccia grafica è generata dal modello su richiesta. Questo significa che ogni parte dell’interfaccia che vedi è rilevante per ciò che stai facendo in quel momento, inclusi i contenuti specifici del tuo lavoro. Significa anche che, se desideri un’interfaccia più ampia o diversa, puoi semplicemente richiederla. Potresti chiedere a DALL-E di produrre alcuni preset modificabili per le sue impostazioni ispirati da famosi artisti di schizzi. Quando clicchi sul preset Leonardo da Vinci, imposta i cursori per disegni prospettici altamente dettagliati in inchiostro nero. Se clicchi su Charles Schulz, seleziona fumetti tecnicolor 2D a basso dettaglio.
A protean bicycle for the mind

The model-as-person metaphor has a curious tendency to create distance between the user and the model, mirroring the communication gap between two people that can be narrowed but never fully closed. Because communicating in words is difficult and costly, people tend to split tasks among themselves into chunks that are large and as independent as possible. Model-as-person interfaces follow this pattern: it is not worth telling a model to add a return statement to your function when it is faster to write it yourself. With that communication overhead, model-as-person systems are most useful when they can do an entire chunk of work on their own. They do things for you.

This contrasts with how we interact with computers or other tools. Tools produce real-time visual feedback and are controlled through direct manipulation. They have such low communication overhead that there is no need to specify an independent chunk of work. It makes more sense to keep the human in the loop and direct the tool moment by moment. Like seven-league boots, tools let you go farther with each step, but you are still the one doing the work. They let you do things faster.

Consider the task of building a website using a large model. With today's interfaces, you might treat the model as a contractor or a collaborator. You would try to put into words as much as possible about how you want the site to look, what you want it to say, and which features you want it to have. The model would generate a first draft, you would run it, and then you would give feedback. "Make the logo a bit bigger," you would say, and "center that first hero image," and "there has to be a login button in the header." To get exactly what you want, you will end up sending a very long list of ever more minute requests.

An alternative model-as-computer interaction would be different: instead of building the website, the model would generate an interface for you to build it, where every user input to that interface queries the large model under the hood. Perhaps, as you describe your needs, it would create an interface with a sidebar and a preview window. At first the sidebar contains only a few layout sketches you can choose as a starting point. You can click each one, and the model writes the HTML for a web page using that layout and displays it in the preview window. Now that you have a page to work on, the sidebar gains additional options that affect the page globally, such as font pairings and color schemes. The preview acts as a WYSIWYG editor, letting you grab elements and move them, edit their contents, and so on. Supporting all of this is the model, which sees these user actions and rewrites the page to match the changes made. Because the model can generate an interface to help the two of you communicate more efficiently, you can exercise more control over the final product in less time.
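To make that loop concrete, here is a minimal sketch of what "every user input queries the model under the hood" could look like in code. Everything in it is an assumption for illustration: `query_model` is a hypothetical stand-in for any large-model API, and the stub below only echoes a placeholder page rather than calling a real model.

```python
# A toy model-as-computer loop: each UI event becomes a model query that
# rewrites the page. `query_model` is a hypothetical placeholder, not a real API.

def query_model(prompt: str) -> str:
    # Stub so the sketch runs; a real system would call an LLM here and
    # return the full rewritten HTML for the page.
    return f"<html><!-- model output for: {prompt[:60]}... --></html>"

def run_session() -> None:
    page_html = query_model("Generate a few layout sketches for a website.")
    while True:
        event = input("Action (e.g. 'clicked preset: da Vinci', or 'quit'): ")
        if event == "quit":
            break
        # The model sees the current page plus the user action and rewrites
        # the page to match, exactly as described above.
        page_html = query_model(
            f"Current page:\n{page_html}\n\nUser action: {event}\n"
            "Rewrite the page HTML to reflect this action."
        )
        print(page_html)

if __name__ == "__main__":
    run_session()
```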
The model-as-computer metaphor encourages us to think of the model as a tool to interact with in real time rather than a collaborator to hand tasks to. Instead of replacing an intern or a tutor, it can be a kind of protean bicycle for the mind, one that is always custom-built exactly for you and the terrain you intend to cross.

A new paradigm for computing?

Models that can generate interfaces on demand are an entirely new frontier in computing. They may be a whole new paradigm, given how they short-circuit the existing application model. Giving end users the power to create and modify apps on the fly fundamentally changes the way we interact with computers. In place of a single static application built by a developer, a model will generate an application tailored to the user and their immediate needs. In place of business logic implemented in code, the model will interpret the user's inputs and update the user interface. It is even possible that this kind of generative interface will replace the operating system entirely, generating and managing interfaces and windows on the fly as needed.

At first, generative interfaces will be a toy, useful only for creative exploration and a few other niche applications. After all, nobody would want an email app that occasionally sends emails to your ex and lies about your inbox. But gradually the models will improve. Even as they push further into the space of entirely new experiences, they will slowly become reliable enough to be used for real work.

Small pieces of this future already exist. Years ago Jonas Degrave showed that ChatGPT could do a decent simulation of a Linux command line. Similarly, websim.ai uses an LLM to generate websites on demand as you browse them. Oasis, GameNGen, and DIAMOND train action-conditioned video models on individual video games, letting you play, for example, Doom inside a large model. And Genie 2 generates playable video games from text prompts. Generative interfaces may still sound like a crazy idea, but they are not that crazy.

There are enormous open questions about how all of this will look. Where will generative interfaces first be useful? How will we share and distribute the experiences we create in collaboration with the model, if they exist only as the context of a large model? Would we even want to? What new kinds of experiences will become possible? How will all of this work in practice? Will models generate interfaces as code, or produce raw pixels directly?

I do not know these answers yet. We will have to experiment and find out!

Translated from: https://willwhitney.com/computing-inside-ai.html
-
-
@ 826e9f89:ffc5c759
2025-04-12 21:34:24

What follows began as snippets of conversations I have been having for years, on and off, here and there. It will likely eventually be collated into a piece I have been meaning to write on “payments” as a whole. I foolishly started writing this piece years ago, not realizing that the topic is gargantuan and for every week I spend writing it I have to add two weeks to my plan. That may or may not ever come to fruition, but in the meantime, Tether announced it was issuing on Taproot Assets and suddenly everybody is interested again. This is as good a catalyst as any to carve out my “stablecoin thesis”, such as it exists, from “payments”, and put it out there for comment and feedback.
In contrast to the “Bitcoiner take” I will shortly revert to, I invite the reader to keep the following potential counterargument in mind, which might variously be termed the “shitcoiner”, “realist”, or “cynical” take, depending on your perspective: that stablecoins have clear product-market-fit. Now, as a venture capitalist and professional thinkboi focusing on companies building on Bitcoin, I obviously think that not only is Bitcoin the best money ever invented and its monetization is pretty much inevitable, but that, furthermore, there is enormous, era-defining long-term potential for a range of industries in which Bitcoin is emerging as superior technology, even aside from its role as money. But in the interest not just of steelmanning but frankly just of honesty, I would grudgingly agree with the following assessment as of the time of writing: the applications of crypto (inclusive of Bitcoin but deliberately wider) that have found product-market-fit today, and that are not speculative bets on future development and adoption, are: Bitcoin as savings technology, mining as a means of monetizing energy production, and stablecoins.
I think there are two typical Bitcoiner objections to stablecoins of significantly greater importance than all others: that you shouldn’t be supporting dollar hegemony, and that you don’t need a blockchain. I will elaborate on each of these, and for the remainder of the post will aim to produce a synthesis of three superficially contrasting (or at least not obviously related) sources of inspiration: these objections, the realisation above that stablecoins just are useful, and some commentary on technical developments in Bitcoin and the broader space that I think inform where things are likely to go. As will become clear as the argument progresses, I actually think the outcome to which I am building up is where things have to go. I think the technical and economic incentives at play make this an inevitability rather than a “choice”, per se. Given my conclusion, which I will hold back for the time being, this is a fantastically good thing, hence I am motivated to write this post at all!
Objection 1: Dollar Hegemony
I list this objection first because there isn’t a huge amount to say about it. It is clearly a normative position, and while I more or less support it personally, I don’t think that it is material to the argument I am going on to make, so I don’t want to force it on the reader. While the case for this objection is probably obvious to this audience (isn’t the point of Bitcoin to destroy central banks, not further empower them?) I should at least offer the steelman that there is a link between this and the realist observation that stablecoins are useful. The reason they are useful is because people prefer the dollar to even shitter local fiat currencies. I don’t think it is particularly fruitful to say that they shouldn’t. They do. Facts don’t care about your feelings. There is a softer bridging argument to be made here too, to the effect that stablecoins warm up their users to the concept of digital bearer (ish) assets, even though these particular assets are significantly scammier than Bitcoin. Again, I am just floating this, not telling the reader they should or shouldn’t buy into it.
All that said, there is one argument I do want to put my own weight behind, rather than just float: stablecoin issuance is a speculative attack on the institution of fractional reserve banking. A “dollar” Alice moves from JPMorgan to Tether embodies two trade-offs from Alice’s perspective: i) a somewhat opaque profile on the credit risk of the asset: the likelihood of JPMorgan ever really defaulting on deposits vs the operator risk of Tether losing full backing and/or being wrench attacked by the Federal Government and rugging its users. These risks are real but are almost entirely political. I’m skeptical it is meaningful to quantify them, but even if it is, I am not the person to try to do it. Also, more transparently to Alice, ii) far superior payment rails (for now, more on this to follow).
However, from the perspective of the fiat banking cartel, fractional reserve leverage has been squeezed. There are just as many notional dollars in circulation, but the backing has been shifted from levered to unlevered issuers. There are gradations of relevant objections to this: while one might say, Tether’s backing comes from Treasuries, so you are directly funding US debt issuance!, this is a bit silly in the context of what other dollars one might hold. It’s not like JPMorgan is really competing with the Treasury to sell credit into the open market. Optically they are, but this is the core of the fiat scam. Via the guarantees of the Federal Reserve System, JPMorgan can sell as much unbacked credit as it wants knowing full well the difference will be printed whenever this blows up. Short-term Treasuries are also JPMorgan’s most pristine asset safeguarding its equity, so the only real difference is that Tether only holds Treasuries without wishing more leverage into existence. The realization this all builds up to is that, by necessity,
Tether is a fully reserved bank issuing fiduciary media against the only dollar-denominated asset in existence whose value (in dollar terms) can be guaranteed. Furthermore, this media arguably has superior “moneyness” to the obvious competition in the form of US commercial bank deposits by virtue of its payment rails.
That sounds pretty great when you put it that way! Of course, the second sentence immediately leads to the second objection, and lets the argument start to pick up steam …
Objection 2: You Don’t Need a Blockchain
I don’t need to explain this to this audience but to recap as briefly as I can manage: Bitcoin’s value is entirely endogenous. Every aspect of “a blockchain” that, out of context, would be an insanely inefficient or redundant modification of a “database”, in context is geared towards the sole end of enabling the stability of this endogenous value. Historically, there have been two variations of stupidity that follow a failure to grok this: i) “utility tokens”, or blockchains with native tokens for something other than money. I would recommend anybody wanting a deeper dive on the inherent nonsense of a utility token to read Only The Strong Survive, in particular Chapter 2, Crypto Is Not Decentralized, and the subsection, Everything Fights For Liquidity, and/or Green Eggs And Ham, in particular Part II, Decentralized Finance, Technically. ii) “real world assets” or, creating tokens within a blockchain’s data structure that are not intended to have endogenous value but to act as digital quasi-bearer certificates to some or other asset of value exogenous to this system. Stablecoins are in this second category.
RWA tokens definitionally have to have issuers, meaning some entity that, in the real world, custodies or physically manages both the asset and the record-keeping scheme for the asset. “The blockchain” is at best a secondary ledger to outsource ledger updates to public infrastructure such that the issuer itself doesn’t need to bother and can just “check the ledger” whenever operationally relevant. But clearly ownership cannot be enforced in an analogous way to Bitcoin, under both technical and social considerations. Technically, Bitcoin’s endogenous value means that whoever holds the keys to some or other UTXOs functionally is the owner. Somebody else claiming to be the owner is yelling at clouds. Whereas, socially, RWA issuers enter a contract with holders (whether legally or just in terms of a common-sense interpretation of the transaction) such that ownership of the asset issued against is entirely open to dispute. That somebody can point to “ownership” of the token may or may not mean anything substantive with respect to the physical reality of control of the asset, and how the issuer feels about it all.
And so, one wonders, why use a blockchain at all? Why doesn’t the issuer just run its own database (for the sake of argument with some or other signature scheme for verifying and auditing transactions) given it has the final say over issuance and redemption anyway? I hinted at an answer above: issuing on a blockchain outsources this task to public infrastructure. This is where things get interesting. While it is technically true, given the above few paragraphs, that, you don’t need a blockchain for that, you also don’t need to not use a blockchain for that. If you want to, you can.
This is clearly the case given stablecoins exist at all and have gone this route. If one gets too angry about not needing a blockchain for that, one equally risks yelling at clouds! And, in fact, one can make an even stronger argument, more so from the end users’ perspective. These products do not exist in a vacuum but rather compete with alternatives. In the case of stablecoins, the alternative is traditional fiat money, which, as stupid as RWAs on a blockchain are, is even dumber. It actually is just a database, except it’s a database that is extremely annoying to use, basically for political reasons because the industry managing these private databases forms a cartel that never needs to innovate or really give a shit about its customers at all. In many, many cases, stablecoins on blockchains are dumb in the abstract, but superior to the alternative methods of holding and transacting in dollars existing in other forms. And note, this is only from Alice’s perspective of wanting to send and receive, not a rehashing of the fractional reserve argument given above. This is the essence of their product-market-fit. Yell at clouds all you like: they just are useful given the alternative usually is not Bitcoin, it’s JPMorgan’s KYC’d-up-the-wazoo 90s-era website, more than likely from an even less solvent bank.
So where does this get us? It might seem like we are back to “product-market-fit, sorry about that” with Bitcoiners yelling about feelings while everybody else makes do with their facts. However, I think we have introduced enough material to move the argument forward by incrementally incorporating the following observations, all of which I will shortly go into in more detail: i) as a consequence of making no technical sense with respect to what blockchains are for, today’s approach won’t scale; ii) as a consequence of short-termist tradeoffs around socializing costs, today’s approach creates an extremely unhealthy and arguably unnatural market dynamic in the issuer space; iii) Taproot Assets now exist and handily address both points i) and ii); and iv) eCash is making strides that I believe will eventually replace even Taproot Assets.
To tease where all this is going, and to get the reader excited before we dive into much more detail: just as Bitcoin will eat all monetary premia, Lightning will likely eat all settlement, meaning all payments will gravitate towards routing over Lightning regardless of the denomination of the currency at the edges. Fiat payments will gravitate to stablecoins to take advantage of this; stablecoins will gravitate to TA and then to eCash, and all of this will accelerate hyperbitcoinization by “bitcoinizing” payment rails such that an eventual full transition becomes as simple as flicking a switch as to what denomination you want to receive.
I will make two important caveats before diving in that are more easily understood in light of having laid this groundwork: I am open to the idea that it won’t be just Lightning or just Taproot Assets playing the above roles. Without veering into forecasting the entire future development of Bitcoin tech, I will highlight that all that really matters here are, respectively: a true layer 2 with native hashlocks, and a token issuance scheme that enables atomic routing over such a layer 2 (or combination of such). For the sake of argument, the reader is welcome to swap in “Ark” and “RGB” for “Lightning” and “TA” both above and in all that follows. As far as I can tell, this makes no difference to the argument and is even exciting in its own right. However, for the sake of simplicity in presentation, I will stick to “Lightning” and “TA” hereafter.
1) Today’s Approach to Stablecoins Won’t Scale
This is the easiest to tick off and again doesn’t require much explanation to this audience. Blockchains fundamentally don’t scale, which is why Bitcoin’s UTXO scheme is a far better design than ex-Bitcoin Crypto’s account-based models, even entirely out of context of all the above criticisms. This is because Bitcoin transactions can be batched across time and across users with combinations of modes of spending restrictions that provide strong economic guarantees of correct eventual net settlement, if not perpetual deferral. One could argue this is a decent (if abstrusely technical) definition of “scaling” that is almost entirely lacking in Crypto.
What we see in ex-Bitcoin crypto is so-called “layer 2s” that are nothing of the sort, forcing stablecoin schemes in these environments into one of two equally poor design choices if usage is ever to increase: fees go higher and higher, to the point of economic unviability (and well past it) as blocks fill up, or move to much more centralized environments that increasingly are just databases, and hence which lose the benefits of openness thought to be gleaned by outsourcing settlement to public infrastructure. This could be in the form of punting issuance to a bullshit “layer 2” that is really a multisig “backing” a private execution environment (to be decentralized any day now) or an entirely different blockchain that is just pretending even less not to be a database to begin with. In a nutshell, this is a decent bottom-up explanation as to why Tron has the highest settlement of Tether.
This also gives rise to the weirdness of “gas tokens” - assets whose utility as money is and only is in the form of a transaction fee to transact a different kind of money. These are not quite as stupid as a “utility token,” given at least they are clearly fulfilling a monetary role and hence their artificial scarcity can be justified. But they are frustrating from Bitcoiners’ and users’ perspectives alike: users would prefer to pay transaction fees on dollars in dollars, but they can’t because the value of Ether, Sol, Tron, or whatever, is the string and bubblegum that hold their boondoggles together. And Bitcoiners wish this stuff would just go away and stop distracting people, whereas this string and bubblegum is proving transiently useful.
All in all, today’s approach is fine so long as it isn’t being used much. It has product-market fit, sure, but in the unenviable circumstance that, if it really starts to take off, it will break, and even the original users will find it unusable.
2) Today’s Approach to Stablecoins Creates an Untenable Market Dynamic
Reviving the ethos of you don’t need a blockchain for that, notice the following subtlety: while the tokens representing stablecoins have value to users, that value is not native to the blockchain on which they are issued. Tether can (and routinely does) burn tokens on Ethereum and mint them on Tron, then burn on Tron and mint on Solana, and so on. So-called blockchains “go down” and nobody really cares. This makes no difference whatsoever to Tether’s own accounting, and arguably a positive difference to users given these actions track market demand. But it is detrimental to the blockchain being switched away from by stripping it of “TVL” that, it turns out, was only using it as rails: entirely exogenous value that leaves as quickly as it arrived.
One underdiscussed and underappreciated implication of the fact that no value is natively running through the blockchain itself is that, in the current scheme, both the sender and receiver of a stablecoin have to trust the same issuer. This creates an extremely powerful network effect that, in theory, makes the first-to-market likely to dominate and in practice has played out exactly as this theory would suggest: Tether has roughly 80% of the issuance, while roughly 19% goes to the political carve-out of USDC that wouldn’t exist at all were it not for government interference. Everybody else combined makes up the final 1%.
So, Tether is a full reserve bank but also has to be everybody’s bank. This is the source of a lot of the discomfort with Tether, and it feeds into the original objection around dollar hegemony: there is an ill-defined but nonetheless uneasy feeling that Tether is slowly morphing into a CBDC. I would argue this really has nothing to do with Tether’s own behavior but rather is a consequence of the market dynamic inevitably created by the current stablecoin scheme. There is no reason to trust any other bank because nobody really wants a bank, they just want the rails. They want something that will retain a nominal dollar value long enough to spend it again. They don’t care what tech it runs on and they don’t even really care about the issuer except insofar as having some sense they won’t get rugged.
Notice this is not how fiat works. Banks can, of course, settle between each other, thus enabling their users to send money to customers of other banks. This settlement function is actually the entire point of central banks, less the money printing and general corruption enabled (we might say, this was the historical point of central banks, which have since become irredeemably corrupted by this power). This process is clunkier than stablecoins, as covered above, but the very possibility of settlement means there is no gigantic network effect to being the first commercial issuer of dollar balances. If it isn’t too triggering to this audience, one might suggest that the money printer also removes the residual concern that your balances might get rugged! (or, we might again say, you guarantee you don’t get rugged in the short term by guaranteeing you do get rugged in the long term).
This is a good point at which to introduce the unsettling observation that broader fintech is catching on to the benefits of stablecoins without any awareness whatsoever of all the limitations I am outlining here. With the likes of Stripe, Wise, Robinhood, and, post-Trump, even many US megabanks supposedly contemplating issuing stablecoins (obviously within the current scheme, not the scheme I am building up to proposing), we are forced to boggle our minds considering how on earth settlement is going to work. Are they going to settle through Ether? Well, no, because i) Ether isn’t money, it’s … to be honest, I don’t think anybody really knows what it is supposed to be, or if they once did they aren’t pretending anymore, but anyway, Stripe certainly hasn’t figured that out yet so, ii) it won’t be possible to issue them on layer 1s as soon as there is any meaningful volume, meaning they will have to route through “bullshit layer 2 wrapped Ether token that is really already a kind of stablecoin for Ether.”
The way they are going to try to fix this (anybody wanna bet?) is routing through DEXes, which is so painfully dumb you should be laughing and, if you aren’t, I would humbly suggest you don’t get just how dumb it is. What this amounts to is plugging the gap of Ether’s lack of moneyness (and wrapped Ether’s hilarious lack of moneyness) with … drum roll … unknowable technical and counterparty risk and unpredictable cost on top of reverting to just being a database. So, in other words, all of the costs of using a blockchain when you don’t strictly need to, and none of the benefits. Stripe is going to waste billions of dollars getting sandwich attacked out of some utterly vanilla FX settlement it is facilitating for clients who have even less of an idea what is going on and why North Korea now has all their money, and will eventually realize they should have skipped their shitcoin phase and gone straight to understanding Bitcoin instead …
3) Bitcoin (and Taproot Assets) Fixes This
To tie together a few loose ends, I only threw in the hilariously stupid suggestion of settling through wrapped Ether on Ether on Ether in order to tee up the entirely sensible suggestion of settling through Lightning. Again, not that this will be new to this audience, but while issuance schemes have been around on Bitcoin for a long time, the breakthrough of Taproot Assets is essentially the ability to atomically route through Lightning.
I will admit upfront that this presents a massive bootstrapping challenge relative to the ex-Bitcoin Crypto approach, and it’s not obvious to me if or how this will be overcome. I include this caveat to make it clear I am not suggesting this is a given. It may not be, it’s just beyond the scope of this post (or frankly my ability) to predict. This is a problem for Lightning Labs, Tether, and whoever else decides to step up to issue. But even highlighting this as an obvious and major concern invites us to consider an intriguing contrast: scaling TA stablecoins is hardest at the start and gets easier and easier thereafter. The more edge liquidity there is in TA stables, the less of a risk it is for incremental issuance; the more TA activity, the more attractive deploying liquidity is into Lightning proper, and vice versa. With apologies if this metaphor is even more confusing than it is helpful, one might conceive of the situation as being that there is massive inertia to bootstrap, but equally there could be positive feedback in driving the inertia to scale. Again, I have no idea, and it hasn’t happened yet in practice, but in theory it’s fun.
More importantly to this conversation, however, this is almost exactly the opposite dynamic to the current scheme on other blockchains, which is basically free to start, but gets more and more expensive the more people try to use it. One might say it antiscales (I don’t think that’s a real word, but if Taleb can do it, then I can do it too!).
Furthermore, the entire concept of “settling in Bitcoin” makes perfect sense both economically and technically: economically because Bitcoin is money, and technically because it can be locked in an HTLC and hence can enable atomic routing (i.e. because Lightning is a thing). This is clearly better than wrapped Eth on Eth on Eth or whatever, but, tantalisingly, is better than fiat too! The core message of the payments tome I may or may not one day write is (or will be) that fiat payments, while superficially efficient on the basis of centralized and hence costless ledger amendments, actually have a hidden cost in the form of interbank credit. Many readers will likely have heard me say this multiple times and in multiple settings but, contrary to popular belief, there is no such thing as a fiat debit. Even if styled as a debit, all fiat payments are credits and all have credit risk baked into their cost, even if that is obscured and pushed to the absolute foundational level of money printing to keep banks solvent and hence keep payment channels open.
Furthermore! This enables us to strip away the untenable market dynamic from the point above. The underappreciated and underdiscussed flip side of the drawback of the current dynamic that is effectively fixed by Taproot Assets is that there is no longer a mammoth network effect to a single issuer. Senders and receivers can trust different issuers (i.e. their own banks) because those banks can atomically settle a single payment over Lightning. This does not involve credit. It is arguably the only true debit in the world across both the relevant economic and technical criteria: it routes through money with no innate credit risk, and it does so atomically due to that money’s native properties.
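For readers who want to see the mechanism rather than take it on faith, here is a toy sketch of the hashlock that makes such a cross-issuer debit atomic. It is deliberately stripped down: real Lightning HTLCs add timeouts, signatures, and onion routing, none of which are shown, and the function names are mine, not from any Lightning implementation.

```python
import hashlib
import os

# Toy illustration of the hashlock at the heart of an atomic, multi-hop debit.
# The core property: every leg of the route settles against the same preimage,
# so either the whole payment completes or none of it does.

preimage = os.urandom(32)                        # secret held by the final recipient
payment_hash = hashlib.sha256(preimage).digest()  # shared lock across all legs

def htlc_claim(candidate: bytes, lock: bytes) -> bool:
    """An HTLC pays out only to whoever reveals the preimage of its lock."""
    return hashlib.sha256(candidate).digest() == lock

# Sender's bank locks funds toward an intermediary; the intermediary locks
# funds toward the recipient's bank, both using the same payment_hash.
leg_a_paid = htlc_claim(preimage, payment_hash)  # recipient claims, revealing the secret
leg_b_paid = htlc_claim(preimage, payment_hash)  # intermediary reuses the revealed secret

assert leg_a_paid and leg_b_paid  # both legs settle, or neither would have
```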
Savvy readers may have picked up on a seed I planted a while back and which can now delightfully blossom:
This is what Visa was supposed to be!
Crucially, this is not what Visa is now. Visa today is pretty much the bank that is everybody’s counterparty, takes a small credit risk for the privilege, and oozes free cash flow bottlenecking global consumer payments.
But if you read both One From Many by Dee Hock (for a first person but pretty wild and extravagant take) and Electronic Value Exchange by David Stearns (for a third person, drier, but more analytical and historically contextualized take) or if you are just intimately familiar with the modern history of payments for whatever other reason, you will see that the role I just described for Lightning in an environment of unboundedly many banks issuing fiduciary media in the form of stablecoins is exactly what Dee Hock wanted to create when he envisioned Visa:
A neutral and open layer of value settlement enabling banks to create digital, interbank payment schemes for their customers at very low cost.
As it turns out, his vision was technically impossible with fiat, hence Visa, which started as a cooperative amongst member banks, was corrupted into a duopolistic for-profit rent seeker in curious parallel to the historical path of central banks …
4) eCash
To now push the argument to what I think is its inevitable conclusion, it’s worth being even more vigilant on the front of you don’t need a blockchain for that. I have argued that there is a role for a blockchain in providing a neutral settlement layer to enable true debits of stablecoins. But note this is just a fancy and/or stupid way of saying that Bitcoin is both the best money and is programmable, which we all knew anyway. The final step is realizing that, while TA is nice in terms of providing a kind of “on ramp” for global payments infrastructure as a whole to reorient around Lightning, there is some path dependence here in assuming (almost certainly correctly) that the familiarity of stablecoins as “RWA tokens on a blockchain” will be an important part of the lure.
But once that transition is complete, or is well on its way to being irreversible, we may as well come full circle and cut out tokens altogether. Again, you really don’t need a blockchain for that, and the residual appeal of better rails has been taken care of with the above massive detour through what I deem to be the inevitability of Lightning as a settlement layer. Just as USDT on Tron arguably has better moneyness than a JPMorgan balance, so a “stablecoin” as eCash has better moneyness than as a TA given it is cheaper, more private, and has more relevantly bearer properties (in other words, because it is cash). The technical detail that it can be hashlocked is really all you need to tie this all together. That means it can be atomically locked into a Lightning routed debit to the recipient of a different issuer (or “mint” in eCash lingo, but note this means the same thing as what we have been calling fully reserved banks). And the economic incentive is pretty compelling too because, for all their benefits, there is still a cost to TAs given they are issued onchain and they require asset-specific liquidity to route on Lightning. Once the rest of the tech is in place, why bother? Keep your Lightning connectivity and just become a mint.
What you get at that point is a dramatically superior private database to JPMorgan’s, with the dramatically superior public rails of Lightning. There is nothing left to desire from “a blockchain” besides what Bitcoin is fundamentally for in the first place: counterparty-risk-free value settlement.
And as a final point with a curious and pleasing echo to Dee Hock at Visa, Calle has made the point repeatedly that David Chaum’s vision for eCash, while deeply philosophical besides the technical details, was actually pretty much impossible to operate on fiat. From an eCash perspective, fiat stablecoins within the above infrastructure setup are a dramatic improvement on anything previously possible. But, of course, they are a slippery slope to Bitcoin regardless …
Objections Revisited
As a cherry on top, I think the objections I highlighted at the outset are now readily addressed – to the extent the reader believes what I am suggesting is more or less a technical and economic inevitability, that is. While, sure, I’m not particularly keen on giving the Treasury more avenues to sell its welfare-warfare shitcoin, on balance the likely development I’ve outlined is an enormous net positive: it’s going to sell these anyway so I prefer a strong economic incentive to steadily transition not only to Lightning as payment rails but eCash as fiduciary media, and to use “fintech” as a carrot to induce a slow motion bank run.
As alluded to above, once all this is in place, the final step to a Bitcoin standard becomes as simple as an individual’s decision to want Bitcoin instead of fiat. On reflection, this is arguably the easiest part! It's setting up all the tech that puts people off, so trojan-horsing them with “faster, cheaper payment rails” seems like a genius long-term strategy.
And as to “needing a blockchain” (or not), I hope that is entirely wrapped up at this point. The only blockchain you need is Bitcoin, but to the extent people are still confused by this (which I think will take decades more to fully unwind), we may as well lean into dazzling them with whatever innovation buzzwords and decentralization theatre they were going to fall for anyway before realizing they wanted Bitcoin all along.
Conclusion
Stablecoins are useful whether you like it or not. They are stupid in the abstract but it turns out fiat is even stupider, on inspection. But you don’t need a blockchain, and using one as decentralization theatre creates technical debt that is insurmountable in the long run. Blockchain-based stablecoins are doomed to a utility inversely proportional to their usage, and just to rub it in, their ill-conceived design practically creates a commercial dynamic that mandates there only ever be a single issuer.
Given they are useful, it seems natural that this tension is going to blow up at some point. It also seems worthwhile observing that Taproot Asset stablecoins have almost the inverse problem and opposite commercial dynamic: they will be most expensive to use at the outset but get cheaper and cheaper as their usage grows. Also, there is no incentive towards a monopoly issuer but rather towards as many as are willing to try to operate well and provide value to their users.
As such, we can expect any sizable growth in stablecoins to migrate to TA out of technical and economic necessity. Once this has happened - or possibly while it is happening but is clearly not going to stop - we may as well strip out the TA component and just use eCash because you really don’t need a blockchain for that at all. And once all the money is on eCash, deciding you want to denominate it in Bitcoin is the simplest on-ramp to hyperbitcoinization you can possibly imagine, given we’ve spent the previous decade or two rebuilding all payments tech around Lightning.
Or: Bitcoin fixes this. The End.
- Allen, #892,125
thanks to Marco Argentieri, Lyn Alden, and Calle for comments and feedback
-
@ 1f79058c:eb86e1cb
2025-05-04 09:34:30

I think we should agree on an HTML element for pointing to the Nostr representation of a document/URL on the Web. We could use the existing one for link relations, for example:
```html
<link rel="alternate" type="application/nostr+json" href="nostr:naddr1qvzqqqr4..." title="This article on Nostr" />
```
This would be useful in multiple ways:
- Nostr clients, when fetching meta/preview information for a URL that is linked in a note, can detect that there's a Nostr representation of the content, and then render it in Nostr-native ways (whatever that may be depending on the client)
- User agents, usually a browser or browser extension, when opening a URL on the Web, can offer opening the alternative representation of a page in a Nostr client. And/or they could offer to follow the author's pubkey on Nostr. And/or they could offer to zap the content.
- When publishing a new article, authors can share their preferred Web URL everywhere, without having to consider if the reader knows about or uses Nostr at all. However, if a Nostr user finds the Web version of an article outside of Nostr, they can now easily jump to the Nostr version of it.
- Existing Web publications can retroactively create Nostr versions of their content and easily link the Nostr articles on all of their existing article pages without having to add prominent Nostr links everywhere.
There are probably more use cases, like Nostr search engines and whatnot. If you can think of something interesting, please tell me.
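As a sketch of the first use case, here is roughly how a client might discover the alternate link in a fetched page, using nothing but the Python standard library. The element name and type value come from the proposal above; the sample page content and the class name are made up for illustration.

```python
from html.parser import HTMLParser

# Minimal sketch: scan a page's HTML for the proposed alternate-link element.

class NostrAltFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.alternates = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and a.get("rel") == "alternate"
                and a.get("type") == "application/nostr+json"):
            self.alternates.append(a.get("href"))

html_page = """
<html><head>
<link rel="alternate" type="application/nostr+json"
      href="nostr:naddr1qvzqqqr4..." title="This article on Nostr">
</head><body>...</body></html>
"""

finder = NostrAltFinder()
finder.feed(html_page)
print(finder.alternates)  # ['nostr:naddr1qvzqqqr4...']
```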
Update: I came up with another interesting use case, which is adding alternate links to RSS and Atom feeds.
Proof of concept
In order to show one way in which this could be used, I have created a small Web Extension called Nostr Links, which will discover alternate Nostr links on the pages you visit.
If it finds one or more links, it will show a purple Nostr icon in the address bar, which you can click to open the list of links. It's similar to e.g. the Feed Preview extension, and also to what the Tor Browser does when it discovers an Onion-Location for the page you're looking at:
The links in this popup menu will be `web+nostr:` links, because browsers currently do not allow web apps or extensions to handle unprefixed `nostr:` links. (I hope someone is working on getting those on par with `ipfs:` etc.)

Following such a link will either open your default Nostr Web app, if you have already configured one, or it will ask you which Web app to open the link with.
Caveat emptor: At the time of writing, my personal default Web app, noStrudel, needs a new release for the links to find the content.
Try it now
Have a look at the source code and/or download the extension (currently only for Firefox).
I have added alternate Nostr links to the Web pages of profiles and long-form content on the Kosmos relay's domain. It's probably the only place on the Web, which will trigger the extension right now.
You can look at this very post to find an alternate link for example.
Update: A certain fiatjaf has added the element to his personal website, which is built entirely from Nostr articles. Multiple other devs also expressed their intent to implement.
Update 2: There is now a pull request for documenting this approach in two existing NIPs. Your feedback is welcome there.
-
@ e6817453:b0ac3c39
2024-12-07 15:06:43

I started a long series of articles about how to model different types of knowledge graphs in the relational model, which makes on-device memory models for AI agents possible.
We model directed graphs
Also, graphs of entities
We even model hypergraphs
Last time, we discussed why classical triple and simple knowledge graphs are insufficient for AI agents and complex memory, especially in the domain of time-aware or multi-modal knowledge.
So why do we need metagraphs, and what kind of challenge could they help us to solve?
- complex and nested event and temporal context and temporal relations as edges
- multi-modal and multilingual knowledge
- human-like memory for AI agents that has multiple contexts and relations between knowledge in neuron-like networks
MetaGraphs
A meta graph is a concept that extends the idea of a graph by allowing edges to become graphs. Meta Edges connect a set of nodes, which could also be subgraphs. So, at some level, node and edge are pretty similar in properties but act in different roles in a different context.
Also, in some cases, edges could be referenced as nodes.
This approach enables the representation of more complex relationships and hierarchies than a traditional graph structure allows. Let’s break down each term to better understand metagraphs and how they differ from hypergraphs and graphs.

Graph Basics
- A standard graph has a set of nodes (or vertices) and edges (connections between nodes).
- Edges are generally simple and typically represent a binary relationship between two nodes.
- For instance, an edge in a social network graph might indicate a “friend” relationship between two people (nodes).
Hypergraph
- A hypergraph extends the concept of an edge by allowing it to connect any number of nodes, not just two.
- Each connection, called a hyperedge, can link multiple nodes.
- This feature allows hypergraphs to model more complex relationships involving multiple entities simultaneously. For example, a hyperedge in a hypergraph could represent a project team, connecting all team members in a single relation.
- Despite its flexibility, a hypergraph doesn’t capture hierarchical or nested structures; it only generalizes the number of connections in an edge.
Metagraph
- A metagraph allows the edges to be graphs themselves. This means each edge can contain its own nodes and edges, creating nested, hierarchical structures.
- In a meta graph, an edge could represent a relationship defined by a graph. For instance, a meta graph could represent a network of organizations where each organization’s structure (departments and connections) is represented by its own internal graph and treated as an edge in the larger meta graph.
- This recursive structure allows metagraphs to model complex data with multiple layers of abstraction. They can capture multi-node relationships (as in hypergraphs) and detailed, structured information about each relationship.
Named Graphs and Graph of Graphs
As you can notice, the structure of a metagraph is quite complex and could be complex to model in relational and classical RDF setups. It could create the challenge of a lack of tools and software solutions for your problem.
If you need to model nested graphs, you could use a much simpler model of named graphs, which could take you quite far.

The concept of the named graph came from the RDF community, which needed to group some sets of triples. In this way, you form subgraphs inside an existing graph. You could refer to the subgraph as a regular node. This setup simplifies complex graphs, introduces hierarchies, and even adds features and properties of hypergraphs while keeping a directed nature.
It looks complex, but it is not so hard to model it with a slight modification of a directed graph.
So, the node could host graphs inside. Let's reflect this fact with a location for a node. If a node belongs to the main graph, we could set the location to null, or introduce a dedicated main node; it is up to you. Nodes could have edges to nodes in different subgraphs. This structure allows any kind of graph nesting. Edges stay location-free.
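Here is one possible relational sketch of this named-graph setup, using SQLite for concreteness. The table and column names are my own illustration, not a prescription from this series.

```python
import sqlite3

# Named graphs in a relational model: each node carries a `location` pointing
# at the subgraph (itself a node) it lives in; NULL means the main graph.
# Edges are location-free and may cross subgraph boundaries.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (
    id       INTEGER PRIMARY KEY,
    label    TEXT,
    location INTEGER REFERENCES node(id)  -- NULL = main graph
);
CREATE TABLE edge (
    id        INTEGER PRIMARY KEY,
    source_id INTEGER NOT NULL REFERENCES node(id),
    target_id INTEGER NOT NULL REFERENCES node(id)
);
""")
conn.executemany("INSERT INTO node VALUES (?, ?, ?)", [
    (1, "org",    None),  # lives in the main graph
    (2, "dept-a", 1),     # nested inside node 1's subgraph
    (3, "dept-b", 1),
])
conn.execute("INSERT INTO edge (source_id, target_id) VALUES (2, 3)")
# All nodes of the subgraph hosted by node 1:
print(conn.execute("SELECT label FROM node WHERE location = 1").fetchall())
```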
Meta Graphs in Relational Model
Let’s try to make several attempts to model different meta-graphs with some constraints.
Directed Metagraph where edges are not used as nodes and could not contain subgraphs
In this case, the edge always points to two sets of nodes. This introduces an overhead of creating a node set for a single node. In this model, we can model empty node sets that could require application-level constraints to prevent such cases.
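A minimal schema for this first variant might look as follows; again, the names are illustrative. Note how even a plain one-to-one edge must wrap each endpoint in its own set row, which is exactly the overhead described above, and how nothing in the schema itself forbids an empty node set.

```python
import sqlite3

# Sketch of the first variant: every edge points at an in node-set and an
# out node-set. Illustrative names; nothing here is from the original series.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node     (id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE node_set (id INTEGER PRIMARY KEY);
CREATE TABLE node_set_member (
    set_id  INTEGER NOT NULL REFERENCES node_set(id),
    node_id INTEGER NOT NULL REFERENCES node(id),
    PRIMARY KEY (set_id, node_id)
);
CREATE TABLE edge (
    id         INTEGER PRIMARY KEY,
    in_set_id  INTEGER NOT NULL REFERENCES node_set(id),
    out_set_id INTEGER NOT NULL REFERENCES node_set(id)
);
-- Even a plain A -> B edge needs two singleton sets: the overhead in question.
INSERT INTO node VALUES (1, 'A'), (2, 'B');
INSERT INTO node_set (id) VALUES (10), (20);
INSERT INTO node_set_member VALUES (10, 1), (20, 2);
INSERT INTO edge (in_set_id, out_set_id) VALUES (10, 20);
""")
# An empty node set is still representable, so forbidding it is left to the
# application (or a trigger), as noted in the text.
```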
Directed Metagraph where edges are not used as nodes and could contain subgraphs
Adding a node set that could model a subgraph located in an edge is easy, but it could be kept separate from the in-vertex or out-vertex sets.

I also do not see a direct need to attach subgraphs to a node, as we could just use a node set interchangeably, but it still could be a case.

Directed Metagraph where edges are used as nodes and could contain subgraphs
As you can notice, we operate all the time with node sets. We could simply extend the node set to an element set that includes both node and edge IDs, but in this case, we need to use UUIDs or some other strategy to differentiate node IDs from edge IDs. Otherwise, we get collisions between ephemeral edges and ephemeral nodes when we want to change the role and purpose of a node to an edge, or vice versa.
A full-scale metagraph model is way too complex for a relational database.

So we need a better model.

Now, we have more flexibility but lose structural constraints. We cannot express that an element should have one in-vertex, one out-vertex, or both. This type of constraint has been moved to the application level. Also, the crucial question is about query and retrieval needs.
Any metagraph model should be focused on the domain and its needs rather than used in raw form. We did it here for purely theoretical purposes.
@ a296b972:e5a7a2e8
2025-05-04 08:30:56

At the end of the week of Unseremeinungsfreiheit, the new Unserbundeskanzler is expected to be sworn in, in Unserberlin, the Unserehauptstadt of Unserdeutschland.

The oath of the presumptive next Unserbundeskanzler ought to be updated:

Now that my dream of becoming Unserbundeskanzler just once in my life is finally coming true, all the tricks and dodges I used to get into this position at any cost are paying off for me.

I swear that I will dedicate my strength to my own good, increase my own benefit, avert harm from myself following the example of my predecessor, shape the Basic Law and the laws of the Unserbund, fulfill my duties unser-conscientiously, and not merely practice Unseregerechtigkeit toward every man and every woman, but enforce it under all circumstances against anyone who stands in the way of implementing the ideas of Unseredemokratie. (So help me whoever.)

The inaugural visit of the Unserbundeskanzler to the representative of the occupying power still present in Unserdeutschland is eagerly awaited.

A large share of the Unsereminister are already known. The selection promises many Unsereüberraschungen.

The Unsereeinheitspartei, made up of former big-tent parties, will continue to make sure that the Nicht-Unsereopposition gets as little influence as possible, even though it was elected in a fully democratic way by the Nicht-Unserebürger, who make up at least a quarter of those who went to the polls.

The Central Committee of the German Unseredemokratische Bundesrepublik will, for the good of its Unserebürger, do everything in its power to advance Unseredemokratie further, and continues to count on the support of Unser-öffentlich-rechtlicher-Rundfunk.

Unserepressefreiheit will continue to be guaranteed.

The inmates of Unserdeutschland will still not have to do without the pronouncements of the Unserepressekonferenz, staffed with fresh Unserpersonal.

Everything that is not certified unser-democratic counts as certified right-wing extremist.

The maxim for all action is: it must look democratic under all circumstances, but we (the Unseredemokraten) must hold everything in our hands.

It is unlikely that the Unsersondervermögen will be repaid by the Unserdemokraten. That privilege is reserved, via Unseresteuerzahlungen, for the Unserebürger and the Nicht-Unserebürger.

The Unserebundeswehr is to be built up (Build up, build up!), the Unsererüstungsindustrie is running at full speed and is meant to make Unserdeutschland unser-war-ready again, because Russia will always be the Unserfeind.

From Unserdeutschland, only Unserfrieden is to emanate from now on.

To confirm that everything is proceeding on its unser-socialist course, the Phoenix*in rose from the ashes, with Unseremutti recently repeating her legendary sentence:

Wir schaffen das. (We can do this.)

Whether this referred to the final economic ruin and the completion of the social division of Unserdeutschland has not been recorded.

Orwellian conclusion:

Wir = unser (we = our)

Ihr = Euer (you = your)

Bird and mouse do not go together

Aligned toward ruins and turned away from the future, disunity and injustice and unfreedom for the German Unserland.

Unserdeutschland – a country with a lot of past and little future?

It is enough to make you weep.

This article was written with the Pareto client

* *

(Image from pixabay)
-
@ e6817453:b0ac3c39
2024-12-07 15:03:06

Hey folks! Today, let’s dive into the intriguing world of neurosymbolic approaches, retrieval-augmented generation (RAG), and personal knowledge graphs (PKGs). Together, these concepts hold much potential for bringing true reasoning capabilities to large language models (LLMs). So, let’s break down how symbolic logic, knowledge graphs, and modern AI can come together to empower future AI systems to reason like humans.
The Neurosymbolic Approach: What It Means?
Neurosymbolic AI combines two historically separate streams of artificial intelligence: symbolic reasoning and neural networks. Symbolic AI uses formal logic to process knowledge, similar to how we might solve problems or deduce information. On the other hand, neural networks, like those underlying GPT-4, focus on learning patterns from vast amounts of data — they are probabilistic statistical models that excel in generating human-like language and recognizing patterns but often lack deep, explicit reasoning.
While GPT-4 can produce impressive text, it’s still not very effective at reasoning in a truly logical way. Its foundation, transformers, allows it to excel in pattern recognition, but the models struggle with reasoning because, at their core, they rely on statistical probabilities rather than true symbolic logic. This is where neurosymbolic methods and knowledge graphs come in.
Symbolic Calculations and the Early Vision of AI
If we take a step back to the 1950s, the vision for artificial intelligence was very different. Early AI research was all about symbolic reasoning — where computers could perform logical calculations to derive new knowledge from a given set of rules and facts. Languages like Lisp emerged to support this vision, enabling programs to represent data and code as interchangeable symbols. Lisp was designed to be homoiconic, meaning it treated code as manipulatable data, making it capable of self-modification — a huge leap towards AI systems that could, in theory, understand and modify their own operations.
Lisp: The Early AI Language
Lisp, short for “LISt Processor,” was developed by John McCarthy in 1958, and it became the cornerstone of early AI research. Lisp’s power lay in its flexibility and its use of symbolic expressions, which allowed developers to create programs that could manipulate symbols in ways that were very close to human reasoning. One of the most groundbreaking features of Lisp was its ability to treat code as data, known as homoiconicity, which meant that Lisp programs could introspect and transform themselves dynamically. This ability to adapt and modify its own structure gave Lisp an edge in tasks that required a form of self-awareness, which was key in the early days of AI when researchers were exploring what it meant for machines to “think.”
Lisp was not just a programming language—it represented the vision for artificial intelligence, where machines could evolve their understanding and rewrite their own programming. This idea formed the conceptual basis for many of the self-modifying and adaptive algorithms that are still explored today in AI research. Despite its decline in mainstream programming, Lisp’s influence can still be seen in the concepts used in modern machine learning and symbolic AI approaches.
Prolog: Formal Logic and Deductive Reasoning
In the 1970s, Prolog was developed—a language focused on formal logic and deductive reasoning. Unlike Lisp, based on lambda calculus, Prolog operates on formal logic rules, allowing it to perform deductive reasoning and solve logical puzzles. This made Prolog an ideal candidate for expert systems that needed to follow a sequence of logical steps, such as medical diagnostics or strategic planning.
Prolog, like Lisp, allowed symbols to be represented, understood, and used in calculations, creating another homoiconic language that allows reasoning. Prolog’s strength lies in its rule-based structure, which is well-suited for tasks that require logical inference and backtracking. These features made it a powerful tool for expert systems and AI research in the 1970s and 1980s.
The language is declarative in nature, meaning that you define the problem, and Prolog figures out how to solve it. By using formal logic and setting constraints, Prolog systems can derive conclusions from known facts, making it highly effective in fields requiring explicit logical frameworks, such as legal reasoning, diagnostics, and natural language understanding. These symbolic approaches were later overshadowed during the AI winter — but the ideas never really disappeared. They just evolved.
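To make the contrast with pattern-matching LLMs tangible, here is a tiny forward-chaining sketch in Python that captures the declarative flavor: facts plus a rule, with the engine deriving conclusions until nothing new appears. It is a toy, not an implementation of Prolog, and the family facts are invented.

```python
# Facts plus a rule, with the engine iterating to a fixpoint: a toy echo of
# Prolog's "define the problem, let the solver figure it out" style.

facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def rule_grandparent(fs):
    # grandparent(A, C) :- parent(A, B), parent(B, C).
    derived = set()
    parents = [f for f in fs if f[0] == "parent"]
    for (_, a, b) in parents:
        for (_, c, d) in parents:
            if b == c:
                derived.add(("grandparent", a, d))
    return derived

changed = True
while changed:                      # keep applying the rule until no new facts
    new = rule_grandparent(facts) - facts
    changed = bool(new)
    facts |= new

print(("grandparent", "tom", "ann") in facts)  # True
```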
Solvers and Their Role in Complementing LLMs
One of the most powerful features of Prolog and similar logic-based systems is their use of solvers. Solvers are mechanisms that can take a set of rules and constraints and automatically find solutions that satisfy these conditions. This capability is incredibly useful when combined with LLMs, which excel at generating human-like language but need help with logical consistency and structured reasoning.
For instance, imagine a scenario where an LLM needs to answer a question involving multiple logical steps or a complex query that requires deducing facts from various pieces of information. In this case, a solver can derive valid conclusions based on a given set of logical rules, providing structured answers that the LLM can then articulate in natural language. This allows the LLM to retrieve information and ensure the logical integrity of its responses, leading to much more robust answers.
Solvers are also ideal for handling constraint satisfaction problems — situations where multiple conditions must be met simultaneously. In practical applications, this could include scheduling tasks, generating optimal recommendations, or even diagnosing issues where a set of symptoms must match possible diagnoses. Prolog’s solver capabilities and LLM’s natural language processing power can make these systems highly effective at providing intelligent, rule-compliant responses that traditional LLMs would struggle to produce alone.
By integrating neurosymbolic methods that utilize solvers, we can provide LLMs with a form of deductive reasoning that is missing from pure deep-learning approaches. This combination has the potential to significantly improve the quality of outputs for use-cases that require explicit, structured problem-solving, from legal queries to scientific research and beyond. Solvers give LLMs the backbone they need to not just generate answers but to do so in a way that respects logical rigor and complex constraints.
Graph of Rules for Enhanced Reasoning
Another powerful concept that complements LLMs is using a graph of rules. A graph of rules is essentially a structured collection of logical rules that interconnect in a network-like structure, defining how various entities and their relationships interact. This structured network allows for complex reasoning and information retrieval, as well as the ability to model intricate relationships between different pieces of knowledge.
In a graph of rules, each node represents a rule, and the edges define relationships between those rules — such as dependencies or causal links. This structure can be used to enhance LLM capabilities by providing them with a formal set of rules and relationships to follow, which improves logical consistency and reasoning depth. When an LLM encounters a problem or a question that requires multiple logical steps, it can traverse this graph of rules to generate an answer that is not only linguistically fluent but also logically robust.
For example, in a healthcare application, a graph of rules might include nodes for medical symptoms, possible diagnoses, and recommended treatments. When an LLM receives a query regarding a patient’s symptoms, it can use the graph to traverse from symptoms to potential diagnoses and then to treatment options, ensuring that the response is coherent and medically sound. The graph of rules guides reasoning, enabling LLMs to handle complex, multi-step questions that involve chains of reasoning, rather than merely generating surface-level responses.
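Here is a rough Python sketch of such a traversal; the rule graph and all medical content are invented purely for illustration:

```python
# Toy graph of rules: edges lead from symptoms to diagnoses to treatments.
# The data is invented for illustration, not medical advice.
rule_graph = {
    "fever":      ["flu", "infection"],
    "cough":      ["flu", "bronchitis"],
    "flu":        ["rest", "fluids"],
    "bronchitis": ["inhaler"],
    "infection":  ["antibiotics"],
}

def traverse(start_nodes, graph, depth=2):
    """Walk the rule graph breadth-first, collecting each reasoning step."""
    frontier, path = set(start_nodes), []
    for _ in range(depth):
        next_frontier = set()
        for node in frontier:
            for target in graph.get(node, []):
                path.append((node, target))
                next_frontier.add(target)
        frontier = next_frontier
    return path

# Each (from, to) pair is one explainable reasoning step the LLM can narrate.
for src, dst in traverse(["fever", "cough"], rule_graph):
    print(f"{src} -> {dst}")
```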
Graphs of rules also enable modular reasoning, where different sets of rules can be activated based on the context or the type of question being asked. This modularity is crucial for creating adaptive AI systems that can apply specific sets of logical frameworks to distinct problem domains, thereby greatly enhancing their versatility. The combination of neural fluency with rule-based structure gives LLMs the ability to conduct more advanced reasoning, ultimately making them more reliable and effective in domains where accuracy and logical consistency are critical.
By implementing a graph of rules, LLMs are empowered to perform deductive reasoning alongside their generative capabilities, creating responses that are not only compelling but also logically aligned with the structured knowledge available in the system. This further enhances their potential applications in fields such as law, engineering, finance, and scientific research — domains where logical consistency is as important as linguistic coherence.
Enhancing LLMs with Symbolic Reasoning
Now, with LLMs like GPT-4 being mainstream, there is an emerging need to add real reasoning capabilities to them. This is where neurosymbolic approaches shine. Instead of pitting neural networks against symbolic reasoning, these methods combine the best of both worlds. The neural aspect provides language fluency and recognition of complex patterns, while the symbolic side offers real reasoning power through formal logic and rule-based frameworks.
Personal Knowledge Graphs (PKGs) come into play here as well. Knowledge graphs are data structures that encode entities and their relationships — they’re essentially semantic networks that allow for structured information retrieval. When integrated with neurosymbolic approaches, LLMs can use these graphs to answer questions in a far more contextual and precise way. By retrieving relevant information from a knowledge graph, they can ground their responses in well-defined relationships, thus improving both the relevance and the logical consistency of their answers.
Imagine combining an LLM with a graph of rules that allow it to reason through the relationships encoded in a personal knowledge graph. This could involve using deductive databases to form a sophisticated way to represent and reason with symbolic data — essentially constructing a powerful hybrid system that uses LLM capabilities for language fluency and rule-based logic for structured problem-solving.
My Research on Deductive Databases and Knowledge Graphs
I recently did some research on modeling knowledge graphs using deductive databases, such as DataLog — which can be thought of as a limited, data-oriented version of Prolog. What I’ve found is that it’s possible to use formal logic to model knowledge graphs, ontologies, and complex relationships elegantly as rules in a deductive system. Unlike classical RDF or traditional ontology-based models, which sometimes struggle with complex or evolving relationships, a deductive approach is more flexible and can easily support dynamic rules and reasoning.
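As a toy illustration of the deductive-database idea (a naive bottom-up evaluation, not a real DataLog engine), here is the classic transitive-closure rule evaluated over a small edge relation:

```python
# A knowledge-graph edge relation plus one recursive Datalog-style rule:
#   reachable(X, Y) :- edge(X, Y).
#   reachable(X, Z) :- reachable(X, Y), edge(Y, Z).
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def reachable(edges):
    """Naive bottom-up evaluation: iterate to a fixpoint, as Datalog engines do."""
    closure = set(edges)
    while True:
        new = {(x, z) for (x, y) in closure for (y2, z) in edges if y == y2}
        if new <= closure:
            return closure
        closure |= new

print(sorted(reachable(edges)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```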
Prolog and similar logic-driven frameworks can complement LLMs by handling the parts of reasoning where explicit rule-following is required. LLMs can benefit from these rule-based systems for tasks like entity recognition, logical inferences, and constructing or traversing knowledge graphs. We can even create a graph of rules that governs how relationships are formed or how logical deductions can be performed.
The future is really about creating an AI that is capable of both deep contextual understanding (using the powerful generative capacity of LLMs) and true reasoning (through symbolic systems and knowledge graphs). With the neurosymbolic approach, these AIs could be equipped not just to generate information but to explain their reasoning, form logical conclusions, and even improve their own understanding over time — getting us a step closer to true artificial general intelligence.
Why It Matters for LLM Employment
Using neurosymbolic RAG (retrieval-augmented generation) in conjunction with personal knowledge graphs could revolutionize how LLMs work in real-world applications. Imagine an LLM that understands not just language but also the relationships between different concepts — one that can navigate, reason, and explain complex knowledge domains by actively engaging with a personalized set of facts and rules.
This could lead to practical applications in areas like healthcare, finance, legal reasoning, or even personal productivity — where LLMs can help users solve complex problems logically, providing relevant information and well-justified reasoning paths. The combination of neural fluency with symbolic accuracy and deductive power is precisely the bridge we need to move beyond purely predictive AI to truly intelligent systems.
Let's explore these ideas further if you’re as fascinated by this as I am. Feel free to reach out, follow my YouTube channel, or check out some articles I’ll link below. And if you’re working on anything in this field, I’d love to collaborate!
Until next time, folks. Stay curious, and keep pushing the boundaries of AI!
-
@ b2caa9b3:9eab0fb5
2025-05-04 08:20:46Hey friends,
Exciting news – I’m currently setting up my very first Discord server!
This space will be all about my travels, behind-the-scenes stories, photo sharing, and practical tips and insights from the road. My goal is to make it the central hub connecting all my decentralized social platforms where I can interact with you more directly, and share exclusive content.
Since I’m just starting out, I’d love to hear from you:
Do you know any useful RSS-feed integrations for updates?
Can you recommend any cool Discord bots for community engagement or automation?
Are there any tips or features you think I must include?
The idea is to keep everything free and accessible, and to grow a warm, helpful community around the joy of exploring the world.
It’s my first time managing a Discord server, so your experience and suggestions would mean a lot. Leave a comment – I’m all ears!
Thanks for your support, Ruben Storm
-
@ e6817453:b0ac3c39
2024-12-07 14:54:46Introduction: Personal Knowledge Graphs and Linked Data
We will explore the world of personal knowledge graphs and discuss how they can be used to model complex information structures. Personal knowledge graphs aren’t just abstract collections of nodes and edges—they encode meaningful relationships, contextualizing data in ways that enrich our understanding of it. While the core structure might be a directed graph, we layer semantic meaning on top, enabling nuanced connections between data points.
The origin of knowledge graphs is deeply tied to concepts from linked data and the semantic web, ideas that emerged to better link scattered pieces of information across the web. This approach created an infrastructure where data islands could connect — facilitating everything from more insightful AI to improved personal data management.
In this article, we will explore how these ideas have evolved into tools for modeling AI’s semantic memory and look at how knowledge graphs can serve as a flexible foundation for encoding rich data contexts. We’ll specifically discuss three major paradigms: RDF (Resource Description Framework), property graphs, and a third way of modeling entities as graphs of graphs. Let’s get started.
Intro to RDF
The Resource Description Framework (RDF) has been one of the fundamental standards for linked data and knowledge graphs. RDF allows data to be modeled as triples: subject, predicate, and object. Essentially, you can think of it as a structured way to describe relationships: “X has a Y called Z.” For instance, “Berlin has a population of 3.5 million.” This modeling approach is quite flexible because RDF uses unique identifiers — usually URIs — to point to data entities, making linking straightforward and coherent.
RDFS, or RDF Schema, extends RDF to provide a basic vocabulary to structure the data even more. This lets us describe not only individual nodes but also relationships among types of data entities, like defining a class hierarchy or setting properties. For example, you could say that “Berlin” is an instance of a “City” and that cities are types of “Geographical Entities.” This kind of organization helps establish semantic meaning within the graph.
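As a small, concrete illustration, assuming the rdflib Python package, the Berlin example could be written like this; the URIs are invented for the sketch:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Triples: subject, predicate, object -- "Berlin has a population of 3.5 million."
g.add((EX.Berlin, RDF.type, EX.City))
g.add((EX.Berlin, EX.population, Literal(3_500_000)))

# RDFS adds vocabulary for structure: cities are geographical entities.
g.add((EX.City, RDFS.subClassOf, EX.GeographicalEntity))

# Serialize to Turtle to see the linked-data form.
print(g.serialize(format="turtle"))
```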
RDF and Advanced Topics
Lists and Sets in RDF
RDF also provides tools to model more complex data structures such as lists and sets, enabling the grouping of nodes. This extension makes it easier to model more natural, human-like knowledge, for example, describing attributes of an entity that may have multiple values. By adding RDF Schema and OWL (Web Ontology Language), you gain even more expressive power — being able to define logical rules or even derive new relationships from existing data.
Graph of Graphs
A significant feature of RDF is the ability to form complex nested structures, often referred to as graphs of graphs. This allows you to create “named graphs,” essentially subgraphs that can be independently referenced. For example, you could create a named graph for a particular dataset describing Berlin and another for a different geographical area. Then, you could connect them, allowing for more modular and reusable knowledge modeling.
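A sketch of the same idea with rdflib's Dataset, which supports named graphs; the exact API can differ between rdflib versions, and the data is illustrative:

```python
from rdflib import Dataset, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")
ds = Dataset()

# Two independently referenceable named graphs: a "graph of graphs".
berlin = ds.graph(URIRef("http://example.org/graphs/berlin"))
bavaria = ds.graph(URIRef("http://example.org/graphs/bavaria"))

berlin.add((EX.Berlin, EX.population, Literal(3_500_000)))
bavaria.add((EX.Munich, EX.population, Literal(1_500_000)))

# Each named graph keeps its own identifier and triples,
# so it can be referenced, linked, or reused on its own.
for g in (berlin, bavaria):
    print(g.identifier, len(g))
```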
Property Graphs
While RDF provides a robust framework, it’s not always the easiest to work with due to its heavy reliance on linking everything explicitly. This is where property graphs come into play. Property graphs are less focused on linking everything through triples and allow more expressive properties directly within nodes and edges.
For example, instead of using triples to represent each detail, a property graph might let you store all properties about an entity (e.g., “Berlin”) directly in a single node. This makes property graphs more intuitive for many developers and engineers because they more closely resemble object-oriented structures: you have entities (nodes) that possess attributes (properties) and are connected to other entities through relationships (edges).
The significant benefit here is a condensed representation, which speeds up traversal and queries in some scenarios. However, this also introduces a trade-off: while property graphs are more straightforward to query and maintain, they lack some complex relationship modeling features RDF offers, particularly when connecting properties to each other.
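A rough plain-Python sketch of the property-graph shape (production systems such as Neo4j implement this model; the entities and values below are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    properties: dict = field(default_factory=dict)

@dataclass
class Edge:
    source: Node
    target: Node
    relation: str
    properties: dict = field(default_factory=dict)

# All of "Berlin" lives in one node instead of being spread over many triples.
berlin = Node("City", {"name": "Berlin", "population": 3_500_000})
germany = Node("Country", {"name": "Germany"})
located_in = Edge(berlin, germany, "LOCATED_IN", {"since": 1871})

print(located_in.source.properties["name"], located_in.relation,
      located_in.target.properties["name"])
```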
Graph of Graphs and Subgraphs for Entity Modeling
A third approach — which takes elements from RDF and property graphs — involves modeling entities using subgraphs or nested graphs. In this model, each entity can be represented as a graph. This allows for a detailed and flexible description of attributes without exploding every detail into individual triples or lumping them all together into properties.
For instance, consider a person entity with a complex employment history. Instead of representing every employment detail in one node (as in a property graph), or as several linked nodes (as in RDF), you can treat the employment history as a subgraph. This subgraph could then contain nodes for different jobs, each linked with specific properties and connections. This approach keeps the complexity where it belongs and provides better flexibility when new attributes or entities need to be added.
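A plain-Python sketch of that structure, with every name and job invented: the person points at an employment-history subgraph, which is itself a small graph of job nodes.

```python
# An entity modeled as a graph of graphs: the employment history is a
# self-contained subgraph instead of one bloated node or loose triples.
job_a = {"title": "Engineer", "employer": "ACME", "years": (2015, 2019)}
job_b = {"title": "Architect", "employer": "Globex", "years": (2019, 2024)}

employment_history = {
    "nodes": {"job_a": job_a, "job_b": job_b},
    "edges": [("job_a", "precedes", "job_b")],  # ordering inside the subgraph
}

person = {
    "name": "Jane Doe",
    "subgraphs": {"employment": employment_history},
}

# New attributes or whole new subgraphs attach without touching the rest.
person["subgraphs"]["education"] = {"nodes": {}, "edges": []}
print(person["subgraphs"]["employment"]["edges"])
```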
Hypergraphs and Metagraphs
When discussing more advanced forms of graphs, we encounter hypergraphs and metagraphs. These take the idea of relationships to a new level. A hypergraph allows an edge to connect more than two nodes, which is extremely useful when modeling scenarios where relationships aren’t just pairwise. For example, a “Project” could connect multiple “People,” “Resources,” and “Outcomes,” all in a single edge. This way, hypergraphs help in reducing the complexity of modeling high-order relationships.
Metagraphs, on the other hand, enable nodes and edges to themselves be represented as graphs. This is an extremely powerful feature when we consider the needs of artificial intelligence, as it allows for the modeling of relationships between relationships, an essential aspect for any system that needs to capture not just facts, but their interdependencies and contexts.
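A minimal sketch of the hyperedge idea: an edge whose member set can hold any number of nodes, so one relationship binds people, resources, and outcomes at once. The data is invented:

```python
# A hyperedge connects any number of nodes in a single relationship.
hyperedges = [
    {"label": "Project", "members": {"alice", "bob", "budget_2025", "prototype"}},
    {"label": "Mentorship", "members": {"alice", "carol"}},
]

def edges_containing(node, hyperedges):
    """List every high-order relationship a node participates in."""
    return [e["label"] for e in hyperedges if node in e["members"]]

print(edges_containing("alice", hyperedges))  # ['Project', 'Mentorship']
```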
Balancing Structure and Properties
One of the recurring challenges when modeling knowledge is finding the balance between structure and properties. With RDF, you get high flexibility and standardization, but complexity can quickly escalate as you decompose everything into triples. Property graphs simplify the representation by using attributes but lose out on the depth of connection modeling. Meanwhile, the graph-of-graphs approach and hypergraphs offer advanced modeling capabilities at the cost of increased computational complexity.
So, how do you decide which model to use? It comes down to your use case. RDF and nested graphs are strong contenders if you need deep linkage and are working with highly variable data. For more straightforward, engineer-friendly modeling, property graphs shine. And when dealing with very complex multi-way relationships or meta-level knowledge, hypergraphs and metagraphs provide the necessary tools.
The key takeaway is that no single approach is perfect. Instead, it’s all about the modeling goals: how do you want to query the graph, what relationships are meaningful, and how much complexity are you willing to manage?
Conclusion
Modeling AI semantic memory using knowledge graphs is a challenging but rewarding process. The different approaches — RDF, property graphs, and advanced graph modeling techniques like nested graphs and hypergraphs — each offer unique strengths and weaknesses. Whether you are building a personal knowledge graph or scaling up to AI that integrates multiple streams of linked data, it’s essential to understand the trade-offs each approach brings.
In the end, the choice of representation comes down to the nature of your data and your specific needs for querying and maintaining semantic relationships. The world of knowledge graphs is vast, with many tools and frameworks to explore. Stay connected and keep experimenting to find the balance that works for your projects.
-
@ c631e267:c2b78d3e
2025-04-04 18:47:27Two times three makes four, \ widewidewitt and three makes nine, \ I make the world, \ widewide the way I like it. \ Pippi Langstrumpf
Whether in coalition negotiations or everyday politics: the controversies between theoretically different parties vanish when it comes to fighting political opponents who have the wind at their backs. Anyone who could seriously contest the incumbents' spoils faces not only "firewalls" but, if need be, criminal prosecution. Double standards are, of course, included.
In France this week, Marine Le Pen was convicted by a court of misappropriating EU funds. As part of the sentence, she was barred from standing for election for five years. Although the verdict is not yet final, Le Pen can appeal, the judges imposed the ban on running in elections with immediate effect. The leader of the national-right Rassemblement National (RN) was considered a promising candidate for the 2027 presidential election.
This is already the second serious case this year of judicial interference in elections in an EU state. In Romania, Călin Georgescu surprisingly won the first round of the presidential election in November. The result was later annulled, yet the alleged "Russian election manipulation" could not be proven. Georgescu was recently barred by the constitutional court from running in the May re-run.
The misappropriation of public funds must be investigated and punished, that is beyond question. But this requirement must not be applied selectively. In the past, by contrast, we witnessed entirely different approaches in far more serious cases of (alleged) abuse, for instance in the case of the current ECB chief Christine Lagarde or the "Pfizergate" scandal surrounding EU Commission President Ursula von der Leyen.
Even if such proceedings may formally rest on a rule-of-law basis, a bitter aftertaste remains. The question arises whether and to what extent the judiciary is being politically instrumentalized. This is all the more interesting given that the separation of powers is an essential part of any democratic order, while fighting political opponents by legal means seems to be popular precisely among the loudest defenders of "our democracy".
The delegations of the CDU/CSU and SPD discussed exactly such measures in their negotiations over a governing coalition. "In the name of truth and democracy" they want to crack down even harder on "disinformation", for example by expanding the EU's Digital Services Act. The offense of incitement of the people (Volksverhetzung) is also to be tightened, and could end in the loss of the right to stand for election. At the European level, Friedrich Merz would presumably like to strip Hungary of its voting rights.
The level of dissatisfaction and frustration keeps growing in large parts of the population. Arrogance, abuse of power, and ever more absurd excuses for obviously arbitrary measures will hardly prevent support from draining away from the established parties. In Germany, the AfD's polling numbers are a good gauge of this.
[Cover image source: Pixabay]
This article was written with the Pareto client and first appeared on Transition News.
-
@ 7bdef7be:784a5805
2025-04-02 12:12:12We value sovereignty, privacy and security when accessing online content, using several tools to achieve this, like open protocols, open OSes, open software products, Tor and VPNs.
The problem
Talking about our social presence, we can manually build up our follow list (social graph) and pick a Nostr client that respects our preferences on what to show and how, but with the standard following mechanism our main feed is public, so everyone can snoop on what we are interested in and infer what we presumably read daily.
The solution
Nostr has a simple solution for this need: encrypted lists. Lists are what they sound like: collections of people or interests (but they can also group much other stuff, see NIP-51). So we can create lists with contacts that we don't have in our main social graph; these lists can primarily be used to create dedicated feeds, but they could have other uses, for example related to monitoring. The interesting thing about lists is that they can also be encrypted, so unlike the basic follow list, which is always public, we can hide a list's content from others. The implications are obvious: we not only get a more organized way to browse content, but also a really private one.
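As a rough sketch of the shape such an event could take (a NIP-51 follow set of kind 30000, with members encrypted into the content field rather than exposed as public tags): the encrypt_to_self helper below is hypothetical and only stands in for the NIP-44 (or, in older clients, NIP-04) routines of a real Nostr library, and the exact kind and tag layout should be checked against the current NIP-51 text.

```python
import json

def encrypt_to_self(plaintext: str, secret_key: str) -> str:
    """Hypothetical stand-in: a real client would apply NIP-44 here."""
    return "<nip44-ciphertext-of:" + plaintext + ">"

# Private members go into the encrypted content, not into public tags.
private_members = [["p", "<hex-pubkey-1>"], ["p", "<hex-pubkey-2>"]]

event = {
    "kind": 30000,                      # a NIP-51 "follow set"
    "tags": [["d", "priority-reads"]],  # list identifier; no public members
    "content": encrypt_to_self(json.dumps(private_members), "<secret-key>"),
}
print(event["content"][:40])
```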
One might wonder what use can really be made of private lists; here are some examples:
- Browse “can't miss” content from users I consider a priority;
- Supervise competitors or adversarial parts;
- Monitor sensible topics (tags);
- Following someone without being publicly associated with them, as this may be undesirable;
The benefits in terms of privacy as usual are not only related to the casual, or programmatic, observer, but are also evident when we think of how many bots scan our actions to profile us.
The current state
Unfortunately, lists are not widely supported by Nostr clients, and encrypted lists are a rarity. A common excuse for not implementing them is that they are harder to develop, since they require managing the encryption machinery (NIP-44). Nevertheless, developers have an easier option for starting to offer private lists: let the user simply mark a list as local-only, and never push it to the relays. Even though the user loses the sync feature, this is sufficient to create a private environment.
To date, as far as I know, the best client with list management is Gossip, which permits to manage both encrypted and local-only lists.
Beg your Nostr client to implement private lists!
-
@ 86dfbe73:628cef55
2025-05-04 06:56:33Building A Second Brain (BASB) is a method for systematically storing the ideas, insights, and connections gained from your experiences so that they can be retrieved at any time. In short, BASB extends your memory with the help of modern digital tools and networks. The core idea is that this offloading frees your biological brain to think freely, give creativity free rein, or simply be in the moment.
The method consists of three basic steps: collecting ideas and insights, connecting these ideas and insights, and creating concrete results.
Collect: The first step in building a Second Brain is "collecting" the ideas and insights that are important or interesting enough to capture. P.A.R.A. is the recommended organizational structure for this.
Connect: Once you have started collecting your personal knowledge in a structured way, you will begin to recognize patterns and connections between the ideas and insights. From this point on, I use the Zettelkasten method (ZKM) in parallel.
Create: All the capturing, summarizing, connecting, and structuring ultimately serves one goal: creating concrete results in the real world.
PARA is the organizational structure that can be used across different devices to file digital information according to one consistent scheme. Whether information, notes, graphics, videos, or files, everything has its fixed place and can be sorted into four categories or "buckets".
PARA stands for:
* Projects
* Areas
* Resources
* Archive
Projects are short-term efforts in work and private life. They are what you are currently working on. They have several properties that make them conducive to getting work done:
* They have a beginning and an end (unlike a hobby or an area of responsibility).
* They have a concrete outcome to be achieved and consist of concrete steps that are necessary and, taken together, sufficient to reach that goal, in line with GTD by David Allen.
Areas of responsibility cover everything you want to keep an eye on in the long term. What distinguishes them from projects is that you are not pursuing a goal with them but maintaining a standard. Accordingly, they are not time-limited. You could say they represent a standard we set for ourselves and our surroundings.
Resources are topics that might become relevant or useful at some point. They are a catch-all category for everything that is neither a project nor an area of responsibility. They are:
* Topics that are interesting.
* Subjects you want to investigate.
* Useful information for later use.
The Archive is for everything inactive from the three categories above. It is a storage place for things finished or deferred.
As a system, PARA is a filing structure ordered by temporal relevance to action. Projects come before areas of responsibility because they have a short- to medium-term time horizon, while areas of responsibility have an unlimited one. Resources and the archive bring up the rear because they are usually neither a priority nor urgent.
-
@ e6817453:b0ac3c39
2024-12-07 14:52:47Let's talk about temporal semantics and temporal, time-aware knowledge graphs. We have different memory models for artificial intelligence agents. We all try to somehow mimic how the brain works, or at least how the brain's declarative memory works. We have the split between episodic memory and semantic memory. And we also have a lot of theories, right?
Declarative Memory of the Human Brain
How is semantic memory formed? Our brain stores semantic memory in a way quite close to the concept behind personal knowledge graphs: connected entities that form relationships with each other, and all those things. So far, so good. And then we actually have a lot of concepts for how episodic memory and our experiences get transferred into the semantic:
- hippocampus indexing and retrieval
- semanticization of episodic memories
- episodic-semantic shift theory
They all give a different perspective on how different parts of declarative memory cooperate.
We know that episodic memories get semanticized over time. You end up with semantic knowledge without the notion of time, while the episodic memory itself has probably just decayed.
But, you know, it’s still an open question:
do we want to mimic an AI agent’s memory as a human brain memory, or do we want to create something different?
It’s an open question to which we have no good answer. And if you go to neuroscience theory and check how episodic and semantic memory interact, you will still find a lot of theories, yeah?
Some of them say that the hippocampus keeps the indexes of the memory. Others say that you semanticize the episodic memory. Others again say that some separate process digests episodes and experiences into semantics. But all of them agree that these are operationally two separate areas of memory, and even two separate regions of the brain, and that the semantic one is, let's say, more protected.
So it’s harder to forget semantic facts than episodes. And what I’ve been thinking about for a long time is exactly this, you know, the semantic memory.
Temporal Semantics
It’s memory about the facts, but you somehow mix the time information with the semantics. I already described a lot of things, including how we could combine time with knowledge graphs and how people do it.
There are multiple ways we could persist such information, but we all hit the wall, because time and the semantics of time are highly complex concepts.
Time in a Semantic context is not a timestamp.
What I mean is that when you have a fact and you just mention that you were there at a particular moment, like, I don’t know, 15:40 on Monday, it’s already vague, because we don’t know which Monday. So you would need to give the exact date, but usually you do not have experiences like that.
You do not record your memories like that, unless you do journaling and all of those things. So, usually, you have no direct time references. What I mean is that you could say that you were there and there was some event, blah, blah, blah.
Somehow, we form a chain of events that connect with each other and that may, if we are lucky enough, be connected to some period of time. This means that we could not easily represent temporal-aware information as just a timestamp or a validity interval and all of those things.
For sure, the validity of knowledge graphs (a simple quintuple with start and end dates) is a big topic, and it could solve a lot of things. It could solve a lot of the time cases. It’s super simple, because you give the start and end dates and you are done, but it does not cover facts that carry only relative or indirect time information. It could solve many use cases but struggles with facts in an indirect temporal context. I like the simplicity of this idea. But the problem of this approach is that in most cases we simply don’t have these timestamps. We don’t have the timestamp where this information starts and ends. And it does not model many events in our life, especially processes, ongoing activities, or recurrent events.
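The validity-interval flavor is easy to sketch. In the toy Python model below, each fact is a triple plus start and end dates, and queries filter on the interval; the data is invented:

```python
from datetime import date

# Each fact is (subject, predicate, object, valid_from, valid_to).
facts = [
    ("alice", "works_at", "ACME",   date(2015, 1, 1), date(2019, 6, 30)),
    ("alice", "works_at", "Globex", date(2019, 7, 1), None),  # still valid
]

def valid_at(facts, when):
    """Return the facts whose validity interval covers the given date."""
    return [f for f in facts
            if f[3] <= when and (f[4] is None or when <= f[4])]

print(valid_at(facts, date(2018, 5, 1)))
# [('alice', 'works_at', 'ACME', ...)] -- but note: this model needs the
# timestamps, which is exactly what most real memories don't give us.
```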
I’m thinking more about a time model for semantics, something like a hybrid clock or a global clock that gives a partial ordering of events. It means that you have a chain of experiences and a chain of facts that carry different time contexts.
We could deduce time from this chain of events. But it’s a big, big topic for research. What I want to achieve, actually, is not a separation into episodic and semantic memory. It’s having something in between.
Blockchain of connected events and facts
I call it temporal-aware semantics or time-aware knowledge graphs, where we could encode the semantic fact together with the time component. I doubt that time should be a simple timestamp or a region between two timestamps. For me, it is more a chain of facts that have a partial order and form a blockchain-like database, or a partially ordered acyclic graph of facts that are temporally connected. We could have some notion of time that is understandable to the agent, and a model that allows us to order the events, focus on what the agent knows, order this time knowledge, and create chains of events.
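Here is a toy Python sketch of that chained idea: each fact records only which facts it is known to come after, forming a partial order, while absolute time, when we have it at all, enters through occasional anchors:

```python
import hashlib, json

def fact_id(fact: dict) -> str:
    """Content-address a fact, so links between facts are tamper-evident."""
    return hashlib.sha256(json.dumps(fact, sort_keys=True).encode()).hexdigest()[:12]

# Each fact links to the facts it is known to come after -- a partial order,
# not a timestamp. An optional anchor pins part of the chain to real time.
f1 = {"fact": "moved to Berlin", "after": []}
f2 = {"fact": "started new job", "after": [fact_id(f1)]}
f3 = {"fact": "met Sam at the office", "after": [fact_id(f2)],
      "anchor": "2024-03-01"}  # a rare explicit time reference

for f in (f1, f2, f3):
    print(fact_id(f), f["fact"], "after:", f["after"])
```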
Time anchors
We may have a particular time anchor in the chain that allows us to assign a more concrete time to the rest of the events. But it’s still an open topic for research. Temporal semantics splits into a couple of domains. One domain is how to add time to knowledge graphs. We already have many different solutions; I described them in my previous articles.
Another domain is the agent’s memory and how the memory of an artificial intelligence treats time. This one is much more complex, because here we cannot operate with simple timestamps. We need a representation of time that is understandable by the model and by the agent that will work with this model. And this one is a way bigger topic for research.
-
@ 57d1a264:69f1fee1
2025-05-04 06:37:52KOReader is a document viewer for E Ink devices. Supported file formats include EPUB, PDF, DjVu, XPS, CBT, CBZ, FB2, PDB, TXT, HTML, RTF, CHM, DOC, MOBI and ZIP files. It’s available for Kindle, Kobo, PocketBook, Android and desktop Linux.
Download it from https://koreader.rocks Repository: https://github.com/koreader/koreader
originally posted at https://stacker.news/items/970912
-
@ 04c915da:3dfbecc9
2025-03-25 17:43:44One of the most common criticisms leveled against nostr is the perceived lack of assurance when it comes to data storage. Critics argue that without a centralized authority guaranteeing that all data is preserved, important information will be lost. They also claim that running a relay will become prohibitively expensive. While there is truth to these concerns, they miss the mark. The genius of nostr lies in its flexibility, resilience, and the way it harnesses human incentives to ensure data availability in practice.
A nostr relay is simply a server that holds cryptographically verifiable signed data and makes it available to others. Relays are simple, flexible, open, and require no permission to run. Critics are right that operating a relay attempting to store all nostr data will be costly. What they miss is that most will not run all encompassing archive relays. Nostr does not rely on massive archive relays. Instead, anyone can run a relay and choose to store whatever subset of data they want. This keeps costs low and operations flexible, making relay operation accessible to all sorts of individuals and entities with varying use cases.
Critics are correct that there is no ironclad guarantee that every piece of data will always be available. Unlike bitcoin where data permanence is baked into the system at a steep cost, nostr does not promise that every random note or meme will be preserved forever. That said, in practice, any data perceived as valuable by someone will likely be stored and distributed by multiple entities. If something matters to someone, they will keep a signed copy.
Nostr is the Streisand Effect in protocol form. The Streisand effect is when an attempt to suppress information backfires, causing it to spread even further. With nostr, anyone can broadcast signed data, anyone can store it, and anyone can distribute it. Try to censor something important? Good luck. The moment it catches attention, it will be stored on relays across the globe, copied, and shared by those who find it worth keeping. Data deemed important will be replicated across servers by individuals acting in their own interest.
Nostr’s distributed nature ensures that the system does not rely on a single point of failure or a corporate overlord. Instead, it leans on the collective will of its users. The result is a network where costs stay manageable, participation is open to all, and valuable verifiable data is stored and distributed forever.
-
@ a39d19ec:3d88f61e
2024-11-21 12:05:09A state-controlled money supply can influence the development of socialist policies and practices in various ways. Although the relationship is not deterministic, state control over the money supply can contribute to a larger role of the state in the economy and facilitate the implementation of socialist ideals.
Fiscal Policy Capabilities
When the state manages the money supply, it gains the ability to implement fiscal policies that can lead to an expansion of social programs and welfare initiatives. Funding these programs by creating money can enhance the state's influence over the economy and move it closer to a socialist model. The Soviet Union, for instance, had a centralized banking system that enabled the state to fund massive industrialization and social programs, significantly expanding the state's role in the economy.
Wealth Redistribution
Controlling the money supply can also allow the state to influence economic inequality through monetary policies, effectively redistributing wealth and reducing income disparities. By implementing low-interest loans or providing financial assistance to disadvantaged groups, the state can narrow the wealth gap and promote social equality, as seen in many European welfare states.
Central Planning
A state-controlled money supply can contribute to increased central planning, as the state gains more influence over the economy. Central banks, which are state-owned or heavily influenced by the state, play a crucial role in managing the money supply and facilitating central planning. This aligns with socialist principles that advocate for a planned economy where resources are allocated according to social needs rather than market forces.
Incentives for Staff
Staff members working in state institutions responsible for managing the money supply have various incentives to keep the system going. These incentives include job security, professional expertise and reputation, political alignment, regulatory capture, institutional inertia, and legal and administrative barriers. While these factors can differ among individuals, they can collectively contribute to the persistence of a state-controlled money supply system.
In conclusion, a state-controlled money supply can facilitate the development of socialist policies and practices by enabling fiscal policies, wealth redistribution, and central planning. The staff responsible for managing the money supply have diverse incentives to maintain the system, further ensuring its continuation. However, it is essential to note that many factors influence the trajectory of an economic system, and the relationship between state control over the money supply and socialism is not inevitable.
-
@ a39d19ec:3d88f61e
2024-11-17 10:48:56This week's functional 3d print is the "Dino Clip".
Dino Clip
I printed it some years ago for my son, so he would have his own clip for cereal bags.
Now it is used to hold a bag of dog food close.
The design by "Sneaks" is a so-called "print in place". This means that the whole clip, including its moving parts, is printed as one piece, with no assembly needed after printing.
The clip is very strong, and I would print it again if I need a "heavy duty" clip for more rigid or big bags. Link to the file at Printables
-
@ 57d1a264:69f1fee1
2025-05-04 06:27:15Well, today posts looks are dedicated to STAR WARS. Enjoy!
Today we’re looking at Beat Saber (2019) and why its most essential design element can be used to make great VR games that have nothing to do with music or rhythm.
https://www.youtube.com/watch?v=EoOeO7S9ehw
It’s hard to believe Beat Saber was first released in Early Access seven years ago today. From day one, it was clear the game was something special, but even so we couldn’t have predicted it would become one of VR’s best-selling games of all time—a title it still holds all these years later. In celebration of the game’s lasting legacy we’re re-publishing our episode of Inside XR Design which explores the secret to Beat Saber’s fun, and how it can be applied to VR games which have nothing to do with music.
Read more at https://www.roadtovr.com/beat-saber-instructed-motion-until-you-fall-inside-xr-design/
originally posted at https://stacker.news/items/970909
-
@ 57d1a264:69f1fee1
2025-05-04 06:16:58Found this really fun, so I created a few intros for the latest SN newsletters https://stacker.news/items/960787/r/Design_r?commentId=970902 and https://stacker.news/items/970459/r/Design_r?commentId=970905
Create your STAR-WARS-like movie intro https://starwarsintrocreator.kassellabs.io/
originally posted at https://stacker.news/items/970906
-
@ a367f9eb:0633efea
2024-11-05 08:48:41Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The reporting purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what it calls “ChatBIT”.
The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I’m doubtful that today’s existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (queue up the computer-simulated war from the Star Trek episode “A Taste of Armageddon“), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOMB, Falcon, and Mistral available for download. In fact, there are over one million open-source LLMs available as of writing this post. With some decent hardware, every global citizen can download these codebases and run them on their computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are potentially trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American infrastructure, data, and yes, your credit history?
As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much many times.
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
Open-source matters
When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.
The various licensing schemes – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, many of you run the Linux operating system as the base for your Macbook or any other computer and use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also open-source, which has allowed thousands of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
Open-source is for your friends, and enemies
In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to Hugging Face, download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put export controls on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see The Crypto Wars). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
Limiting open-source threatens our own advancement
If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “Hundred Year Marathon” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with trillions of dollars’ worth of investments that span far beyond AI chatbots and skip logic protocols.
The theft of intellectual property at factories in Shenzhen, or in US courts by third-party litigation funding coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.
But these are separate issues from the ability of free people to create and share open-source code, which we can all benefit from. In fact, if we want to preserve our way of life and continue to add to global productivity and growth, defending open-source is a necessity.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
Originally published on the website of the Consumer Choice Center.
-
@ a29cfc65:484fac9c
During a guided tour of Naumburg Cathedral, the guide spoke about propaganda in the Middle Ages. The expressive faces of the stone donor figures surrounding the famous Uta were meant to influence the common people. We philosophized about this on the drive home to Leipzig and found the idea fascinating. Even if it was not called propaganda back then, the powerful still had interests that they enforced upon the people, and they made use of the "media" available at the time, which included the church, where the people gathered for worship.

Europe's Cultural Identity

Central Germany is a center of medieval architecture. Naumburg Cathedral of St. Peter and Paul was built in the 13th century on the foundations of an even older church. Its architecture, sculpture, and stained glass are unique in the world, and it has been a UNESCO World Heritage Site since 2018. The city of Naumburg was once as important as Merseburg, Magdeburg, or Leipzig. The cathedral, built from the late Romanesque into the early Gothic period under the direction of a sculptor-architect whose name is unknown today, is considered a masterpiece of human creativity and craftsmanship. The scientific and physical knowledge of the people was evidently enormous: they possessed the planning expertise and the tools and lifting equipment needed to erect such structures in a relatively short time.

The twelve life-size donor figures in the cathedral's west choir are its most famous works of art, among them Uta von Ballenstedt. She is said to have served Walt Disney as the model for the beautiful and very proud queen in the animated film Snow White. What was special and new about the stone donor figures was their lifelike depiction, which makes them appear vivid and expressive. They are a high point of the stonemasonry of their time. Although the persons depicted had been dead for more than 200 years, the figures were given characteristic facial expressions: Uta gazes into the distance, beautiful and proud, while her husband Ekkehard appears somewhat haughty. Opposite them stands the laughing Reglindis next to her wistfully suffering husband, Hermann of Meissen.

The guide said that the facial expressions represent human behaviors that were undesirable among churchgoers, and that we are dealing here with a very early form of propaganda. The Catholic Church was a pioneer in matters of propaganda: about 400 years later, in 1622, it founded an office, the Sacra Congregatio de Propaganda Fide, meant to carry the "correct" faith into the world; it was only renamed in 1967. But the Church's leading position in society also had a positive side in those days: we owe the preservation and transmission of ancient knowledge to the churches and monasteries. Despite its political fragmentation, Europe was able to preserve its cultural identity. The work of the nameless cathedral master, for example, can be traced through buildings and artworks across Europe, from northern France via Mainz to Naumburg and Meissen. From the further development of art and culture in Europe, the philosophy of humanism emerged in the Renaissance, followed later by the Enlightenment, with its impact on literature and science. The goal was always to strengthen the community.

Transhumanism Destroys Communities

Today, however, we seem to be at a breaking point in social development. The churches hardly play a role in our society anymore: they neither make themselves heard in ethical debates nor visibly advance art and culture. Their role in propaganda has long since been taken over by newspapers and magazines, radio and television. These media have greater reach, and their psychological influence is more comprehensive. After the Second World War, the manipulation of the masses intensified greatly and gathered even more speed after the collapse of the Soviet Union. Liberalism began its triumphal march in every field, placing the individual at the center and elevating the market to a sacred cow. Over time, the humanist ideas of the Enlightenment were turned into their opposite. The human being was identified as a flawed creature, driven into isolation, patronized, and micromanaged, supposedly to keep him from harming himself. On top of the psychological manipulation come the new technical possibilities of the bio-nano-neuro sciences and digitalization. Transhumanism has been proclaimed as the new goal for humanity: the individual is to be perfected biologically and technically. This drives communities, starting with the family, into insignificance. There is a danger that personal integrity and privacy will be violated by interventions in bodily and mental functions. A new Enlightenment is needed, for much of the knowledge acquired over the centuries has already been lost or can only be found hidden in the libraries and archives of the churches. Reflecting on the forgotten or suppressed foundations and ideals of the Enlightenment can avert this development. In their beauty and perfection, the cultural treasures of Central Europe convey the calm and the temporal and spatial dimensions we need when we reflect on how we want to live in the future.

The Role of the New Media in Future Development

Nothing can be expected from the legacy media in this regard. They are financed and infiltrated by the very forces driving transhumanist developments. The "new Enlightenment" is a worthwhile goal for the new media. These, however, still let themselves be driven too much by the legacy media's topics of the day. They counter fear propaganda with fears of their own, merely aimed in a different direction. Some ride the wave of outrage in the opposite direction from the legacy media. Some operators of "alternative" finance and economics channels want to sell their own market-faithful products. Instead, the new media should spread positive news and open up their own fields of topics that the legacy media refuse to cover:

· the human being and his education toward a sovereign personality that thinks and acts independently,

· the development of one's own consciousness, in order to escape outside control and attain truthfulness, authenticity, and humanity,

· the development of the common good,

· the question of how we can found new communities, up to and including self-sufficient municipalities, which becomes more important the further the social system around us collapses.

The last point is directly visible in the decline of architecture and the state of our inner cities: the richly decorated buildings of the Gründerzeit, destroyed in the Second World War, were replaced by uniformly rectangular buildings of concrete and glass. Added to this came the rows of shops that are interchangeably identical in every city and, in recent years, dirt and graffiti that no longer get cleaned up.

The social upheavals of the Corona era led many people to pause and reflect on the meaning and goals of their lives. A number of pilot projects emerged as a result, for example in agriculture, healthcare, and education. These future-oriented topics could be presented and discussed more extensively in the new media. Manova is already setting such priorities with Walter van Rossum's "The Great WeSet" and with its categories "Zukunft & Neue Wege" ("Future & New Paths") and "Aufwind" ("Tailwind"). Kontrafunk has developed formats that put a stronger focus on the common good, such as its culture and science section. Nuoviso has launched its own song contest. Besides orienting content toward a livable future, it is also necessary to make the technological basis of the new media future-proof and to escape digital censorship. Milosz Matuschek is breaking new ground with the Pareto project. It could become the uncensorable platform of the new media. For as he says: you would not build your new house on land that belongs to someone else.
-
@ 09fbf8f3:fa3d60f0
2024-11-02 08:00:29> ### Third-Party API Collection:
Disclaimer:
The OpenAI API Keys recommended here are provided by third-party resellers, so we are not responsible for the validity or security of these API Keys. Please bear the risks of purchasing and using them yourself.
| Provider | Description | Proxy address | Link |
| --- | --- | --- | --- |
| AiHubMix | Uses the OpenAI enterprise endpoint; all models on the site are priced at 86% of the official rate, a 14% discount (including GPT-4) | https://aihubmix.com/v1 | Official site |
| OpenAI-HK | OpenAI's official billing is based on the token length of each API request and response. Each model has its own pricing, per 1,000 tokens consumed; 1,000 tokens is roughly 750 English words (about 400 Chinese characters) | https://api.openai-hk.com/ | Official site |
| CloseAI | CloseAI is the largest commercial-grade OpenAI proxy platform in China and the first professional OpenAI relay service, positioned for enterprise use. It provides stable, high-quality relaying of the official OpenAI API for enterprise customers and serves as the dedicated partner platform for over a hundred companies and several research institutions | https://api.openai-proxy.org | Official site |
| OpenAI-SB | Requires Telegram to obtain an API key | https://api.openai-sb.com | Official site |
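For illustration, here is a minimal sketch of pointing the official openai Python client at one of the proxy addresses above; the API key and model name are placeholders, and the disclaimer above applies to any third-party endpoint you choose.

```python
# A sketch of using a third-party proxy with the official openai client.
# The base_url below is one example from the table; the key is a placeholder.
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                    # key purchased from the provider
    base_url="https://aihubmix.com/v1",  # swap in the proxy address you use
)

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(reply.choices[0].message.content)
```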
Continuously updated...
Promotion:
If you cannot access OpenAI, purchase a VPN from
Didiao Cloud
. Official site: https://didiaocloud.xyz
Invitation code:
w9AjVJit
Prices start as low as 1 CNY.
-
@ 4fe14ef2:f51992ec
2025-05-04 06:02:38Let's support Bitcoin merchants! I'd love to hear some of your latest Lightning purchases and interesting products you bought. Feel free to include links to the shops or businesses you bought from.
Who else has a recent purchase they’re excited about? Bonus sats if you found a killer deal! ⚡
If you missed our last thread, here are some of the items stackers recently spent sats on and zapped.
Share and repost: N: https://nostrudel.ninja/#/n/nevent1qvzqqqqqqypzqnlpfme... X: https://x.com/AGORA_SN/status/1918907693516914793
originally posted at https://stacker.news/items/970896
-
@ 4c48cf05:07f52b80
2024-10-30 01:03:42I believe that five years from now, access to artificial intelligence will be akin to what access to the Internet represents today. It will be the greatest differentiator between the haves and have-nots. Unequal access to artificial intelligence will exacerbate societal inequalities and limit opportunities for those without access to it.
Back in April, the AI Index Steering Committee at the Institute for Human-Centered AI from Stanford University released The AI Index 2024 Annual Report.
Out of the extensive report (502 pages), I chose to focus on the chapter dedicated to Public Opinion. People involved with AI live in a bubble. We all know and understand AI and therefore assume that everyone else does. But, is that really the case once you step out of your regular circles in Seattle or Silicon Valley and hit Main Street?
Two thirds of global respondents have a good understanding of what AI is
The exact number is 67%. My gut feeling is that this number is way too high to be realistic. At the same time, 63% of respondents are aware of ChatGPT, so maybe people are conflating AI with ChatGPT?
If so, there is so much more that they won't see coming.
This number is important because you need to see every other question and response in the survey through the lens of a respondent who believes they have a good understanding of what AI is.
A majority are nervous about AI products and services
52% of global respondents are nervous about products and services that use AI. Leading the pack are Australians at 69%, and the least worried are the Japanese at 23%. The U.S.A. is up near the top at 63%.
Japan is truly an outlier, with most countries moving between 40% and 60%.
Personal data is the clear victim
Exactly half of the respondents believe that AI companies will protect their personal data. And the other half believes they won't.
Expected benefits
Again a majority of people (57%) think that it will change how they do their jobs. As for impact on your life, top hitters are getting things done faster (54%) and more entertainment options (51%).
The last one is a head scratcher for me. Are people looking forward to AI generated movies?
Concerns
Remember the 57% that thought that AI will change how they do their jobs? Well, it looks like 37% of them expect to lose it. Whether or not this is what will happen, that is a very high number of people who have a direct incentive to oppose AI.
Other key concerns include:
- Misuse for nefarious purposes: 49%
- Violation of citizens' privacy: 45%
Conclusion
This is the first time I have come across this report, and I will make sure to follow future annual reports to see how these trends evolve.
Overall, people are worried about AI. There are many things that could go wrong and people perceive that both jobs and privacy are on the line.
Full citation: Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.
The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.
-
@ dc4cd086:cee77c06
2024-10-18 04:08:33Have you ever wanted to learn from lengthy educational videos but found it challenging to navigate through hours of content? Our new tool addresses this problem by transforming long-form video lectures into easily digestible, searchable content.
Key Features:
Video Processing:
- Automatically downloads YouTube videos, transcripts, and chapter information
- Splits transcripts into sections based on video chapters (see the sketch below)
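For readers who want to experiment with this step themselves, here is a minimal sketch of how the download-and-split stage could look, assuming the yt-dlp and youtube-transcript-api packages; the actual repository may organize this differently, and package APIs shift between versions.

```python
# A sketch of fetching chapters and a transcript, then grouping transcript
# segments by chapter. Assumes: pip install yt-dlp youtube-transcript-api
from yt_dlp import YoutubeDL
from youtube_transcript_api import YouTubeTranscriptApi

def fetch_chaptered_transcript(video_id: str) -> list[dict]:
    # Pull video metadata (including chapter titles/timestamps) without media.
    with YoutubeDL({"skip_download": True, "quiet": True}) as ydl:
        info = ydl.extract_info(f"https://www.youtube.com/watch?v={video_id}", download=False)
    chapters = info.get("chapters") or []

    # The transcript arrives as a list of {"text", "start", "duration"} segments.
    segments = YouTubeTranscriptApi.get_transcript(video_id)

    # Group each segment under the chapter whose time range contains it.
    sections = []
    for ch in chapters:
        text = " ".join(
            seg["text"] for seg in segments
            if ch["start_time"] <= seg["start"] < ch["end_time"]
        )
        sections.append({"title": ch.get("title", "Untitled"), "text": text})
    return sections
```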
Content Summarization:
- Utilizes language models to transform spoken content into clear, readable text
- Formats output in AsciiDoc for improved readability and navigation
- Highlights key terms and concepts with [[term]] notation for potential cross-referencing (a sketch follows this list)
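Here is what that summarization pass could look like, assuming an OpenAI-compatible client as one of the closed-model options; the model name and prompt wording below are illustrative, not the tool's actual values.

```python
# A sketch of turning one transcript section into AsciiDoc with [[term]] tags.
# Assumes the openai Python package; swap in any supported model or client.
from openai import OpenAI

PROMPT = (
    "Rewrite the following lecture transcript section as clear, readable prose. "
    "Format the output as AsciiDoc, and wrap key terms and concepts in [[term]] "
    "notation for cross-referencing.\n\nSection title: {title}\n\n{text}"
)

def summarize_section(client: OpenAI, section: dict) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the tool aims to support many models
        messages=[{"role": "user", "content": PROMPT.format(**section)}],
    )
    return response.choices[0].message.content
```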
Diagram Extraction:
- Analyzes video entropy to identify static diagram/slide sections
- Provides a user-friendly GUI for manual selection of relevant time ranges
- Allows users to pick representative frames from selected ranges (see the entropy sketch below)
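The entropy signal itself is simple to sketch: sample frames, compute the Shannon entropy of each grayscale histogram, and look for flat stretches, which usually correspond to a static slide or diagram on screen. A minimal version, assuming OpenCV and NumPy, with an illustrative sampling step:

```python
# A sketch of the frame-entropy signal used to spot static slide sections.
# Assumes: pip install opencv-python numpy
import cv2
import numpy as np

def frame_entropy(gray: np.ndarray) -> float:
    # Shannon entropy of the grayscale histogram: H = -sum(p * log2(p)).
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_signal(video_path: str, step: int = 30) -> list[float]:
    cap = cv2.VideoCapture(video_path)
    values, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # sample every `step`-th frame
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            values.append(frame_entropy(gray))
        idx += 1
    cap.release()
    return values
```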
Going Forward:
Currently undergoing a rewrite to improve organization and functionality, but you are welcome to try the current version, though it might not work on every machine. It will support multiple open and closed language models for user choice. It is free and open-source, allowing for personal customization and integration with various knowledge bases. Even if we might not have it on our official Alexandria knowledge base, you are still welcome to use it on your own personal or community knowledge bases! We want to help find connections between ideas that exist across relays, allowing individuals and groups to mix and match knowledge bases between each other, with any degree of openness you care to have.
While designed with #Alexandria users in mind, it's available for anyone to use and adapt to their own learning needs.
Screenshots
Frame Selection
This is a screenshot of the frame selection interface. You'll see a signal that represents frame entropy over time. The vertical lines indicate the start and end of a chapter. Within these chapters, you can select frames by clicking and dragging the mouse over the range where you think the diagram appears in that chapter. At the bottom is an option that tells the program to select a specific number of frames from that selection.
Diagram Extraction
This is a screenshot of the diagram extraction interface. For every selection you've made, there will be a set of frames that you can choose from. You can select and deselect as many frames as you'd like to save.
Links
- repo: https://github.com/limina1/video_article_converter
- Nostr Apps 101: https://www.youtube.com/watch?v=Flxa_jkErqE
Output
And now, we have a demonstration of the final result of this tool, with some quick cleaning up. The video we will be using this tool on is titled Nostr Apps 101 by nostr:npub1nxy4qpqnld6kmpphjykvx2lqwvxmuxluddwjamm4nc29ds3elyzsm5avr7 during Nostrasia. The following thread is an analog to the modular articles we are constructing for Alexandria, and I hope it conveys the functionality we want to create in the knowledge space. Note, this tool is the first step! You could use a different prompt that is most appropriate for the specific context of the transcript you are working with, but you can also manually clean up any discrepancies that don't portray the video accurately.
nostr:nevent1qvzqqqqqqypzp5r5hd579v2sszvvzfel677c8dxgxm3skl773sujlsuft64c44ncqy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qgwwaehxw309ahx7uewd3hkctcpzemhxue69uhhyetvv9ujumt0wd68ytnsw43z7qghwaehxw309aex2mrp0yhxummnw3ezucnpdejz7qgewaehxw309aex2mrp0yh8xmn0wf6zuum0vd5kzmp0qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgc7hglpn
Or view on Coracle nostr:nevent1qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgcppemhxue69uhkummn9ekx7mp0qgsdqa9md83tz5yqnrqjw07hhkpmfjpkuv9hlh5v8yhu8z274w9dv7qnnq0s3
-
@ 75869cfa:76819987
2025-03-18 07:54:38GM, Nostriches!
The Nostr Review is a biweekly newsletter focused on Nostr statistics, protocol updates, exciting programs, the long-form content ecosystem, and key events happening in the Nostr-verse. If you’re interested, join me in covering updates from the Nostr ecosystem!
Quick review:
In the past two weeks, Nostr statistics indicate over 225,000 daily trusted pubkey events. The number of new users has seen a notable decrease, with profiles containing a contact list dropping by 95%. More than 10 million events have been published, with posts and reposts showing a decrease. Total Zap activity stands at approximately 15 million, marking a 10% decrease.
Additionally, 26 pull requests were submitted to the Nostr protocol, with 6 merged. A total of 45 Nostr projects were tracked, with 8 releasing product updates, and over 463 long-form articles were published, 29% focusing on Bitcoin and Nostr. During this period, 2 notable events took place, and 3 significant events are upcoming.
Nostr Statistics
Based on user activity, the total daily trusted pubkeys writing events is about 225,000, representing a slight 8% decrease compared to the previous period. Daily activity peaked at 18,179 events, with a low of approximately 16,093.
The number of new users has decreased significantly. Profiles with a contact list are now around 17,511, reflecting a 95% drop. Profiles with a bio have decreased by 62% compared to the previous period. The only category showing growth is pubkeys writing events, which have increased by 27%.
Regarding event publishing, all metrics have shown a decline. The total number of note events published is around 10 million, reflecting a 14% decrease. Posts remain the most dominant in terms of volume, totaling approximately 1.6 million, which is a 6.1% decrease. Both reposts and reactions have decreased by about 10%.
For zap activity, the total zap amount is about 15 million, showing an increase of over 10% compared to the previous period.
Data source: https://stats.nostr.band/
NIPs
nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z is proposing that a bulletin board be a relay-centric system of forums where users can post and reply to others, typically around a specific community. The relay operator controls and moderates who can post and view content. A board is defined by kind:30890. Its naddr representation must provide the community's home relays, from which all posts should be gathered. No other relays should be used.
nostr:npub1xy54p83r6wnpyhs52xjeztd7qyyeu9ghymz8v66yu8kt3jzx75rqhf3urc is proposing a standardized way to represent fitness and workout data in Nostr, including: Exercise Templates (kind: 33401) for storing reusable exercise definitions, Workout Templates (kind: 33402) for defining workout plans, and Workout Records (kind: 1301) for recording completed workouts. The format provides structured data for fitness tracking while following Nostr conventions for data representation. Many fitness applications use proprietary formats, locking user data into specific platforms. This NIP enables decentralized fitness tracking, allowing users to control their workout data and history while facilitating social sharing and integration between fitness applications.
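To make the shape of such events concrete, here is a hypothetical kind:1301 workout record sketched as a Python dict; the tag names ("exercise", "weight", "reps") are invented for illustration and are not the proposal's actual schema.

```python
# A hypothetical kind:1301 workout record, before signing.
# Tag names below are illustrative guesses; consult the NIP proposal
# itself for the real schema.
import json
import time

workout_record = {
    "kind": 1301,
    "created_at": int(time.time()),
    "content": "Upper body session",
    "tags": [
        ["exercise", "bench-press"],  # hypothetical tag
        ["weight", "80", "kg"],       # hypothetical tag
        ["reps", "5"],                # hypothetical tag
    ],
}
print(json.dumps(workout_record, indent=2))
```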
nostr:npub1zk6u7mxlflguqteghn8q7xtu47hyerruv6379c36l8lxzzr4x90q0gl6ef is proposing a PR that introduces two "1-click" connection flows for setting up initial NWC connections. Rather than having to copy-paste a connection string, the user is presented with an authorization page which they can approve or decline. The secret is generated locally and never leaves the client. HTTP flow: for publicly accessible lightning wallets; implemented in Alby Hub (my.albyhub.com) and CoinOS (coinos.io). Nostr flow: for mobile-based / self-hosted lightning wallets, very similar to NWA but without a new event type added; implemented in Alby Go and Alby Hub. Benefits over NWC Deep Links are that it works cross-device, mobile to web, and the client-generated secret never leaves the client. Both flows are also implemented in the Alby JS SDK and Bitcoin Connect.
add B0 NIP for Blossom interaction
nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 describes a tiny subset of possible Blossom capabilities, but arguably the most important from the point of view of the most basic Nostr client. This NIP specifies how Nostr clients can use Blossom for handling media. Blossom is a set of standards (called BUDs) for dealing with servers that store files addressable by their SHA-256 sums. Nostr clients may make use of all the BUDs for allowing users to upload files, manage their own files, and so on, but most importantly Nostr clients SHOULD make use of BUD-03 to fetch kind:10063 lists of servers for each user.
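Because blobs are addressed by their SHA-256 sums, a client can derive a file's URL locally. A minimal sketch, assuming a server that serves blobs at /&lt;sha256&gt; (the server name is a placeholder):

```python
# A sketch of Blossom-style content addressing: the URL is derived from
# the file's SHA-256 digest. The server name is a placeholder.
import hashlib

def blossom_url(path: str, server: str = "https://blossom.example.com") -> str:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return f"{server}/{digest}"

# e.g. blossom_url("diagram.png") -> "https://blossom.example.com/<64-hex-digest>"
```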
nostr:npub149p5act9a5qm9p47elp8w8h3wpwn2d7s2xecw2ygnrxqp4wgsklq9g722q defines a standard for creating, managing and publishing to communities by leveraging existing key pairs and relays, introducing the concept of "Communi-keys". This approach allows any existing npub to become a community (identity + manager) while maintaining compatibility with existing relay infrastructure.
A way for relays to be honest about their algos
securitybrahh is proposing a PR that introduces NIP-41, a way for relays to be honest about their algos. It edits 01.md to account for changes in limit (related #78, #1434, received_at?, #620, #1645) when an algo is provided, and appends 11.md for relays to advertise whether they are an aggregator or not and their provided algos. Solves #522, supersedes #579.
nip31: template-based "alt" tags for known kinds
nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 is proposing that clients hardcoding alt tags are not very trustworthy, since alt tags tend to be garbage over a long enough timeframe. This fixes it with hardcoded rich templates that anyone can implement very easily without having to do it manually for each kind. alt tags can still be used as a fallback.
nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z is proposing a PR that addresses 3 main problems of NIP-44v2. First, it has a message size limit of 65KB, which is unnecessarily small. Second, it forces the encrypting key to be the same as the event's signing key, which forces multi-sig actors to share their main private key in order to encrypt the payload that would later be signed by the group. Decoupling signing and encryption keys, for both source and destination, is one of the goals of this version. Third, it offers no way to describe what's inside the encrypted blob before requesting the user's approval to decrypt and send the decrypted info back to the requesting application. This PR adds an alt description to allow decrypting signers to display a message and warn the user of what type of information the requesting application is receiving.
Notable Projects
Damus nostr:npub18m76awca3y37hkvuneavuw6pjj4525fw90necxmadrvjg0sdy6qsngq955
- Notes in progress will always be persisted and saved automatically. Never lose those banger notes when you aren't quite ready to ship them.
- Make your profile look just right without any fuss. It also optimizes them on upload now to not nuke other people’s phone data bills.
- You won't see the same note more than once in your home feed.
- Fixed note loading when clicking notifications and damus.io links.
- Fixed NWC not working when you first connect a wallet.
- Fixed overly sensitive and mildly infuriating touch gestures in the thread view when scrolling
Primal nostr:npub12vkcxr0luzwp8e673v29eqjhrr7p9vqq8asav85swaepclllj09sylpugg
Primal for Android build 2.1.9 has been released.
* Multi-account support
* Deep linking support
* "Share via Primal" support
* Bug fixes and improvements
Yakihonne nostr:npub1yzvxlwp7wawed5vgefwfmugvumtp8c8t0etk3g8sky4n0ndvyxesnxrf8q
YakiHonne Wallet just got a fresh new look!
0xchat nostr:npub1tm99pgz2lth724jeld6gzz6zv48zy6xp4n9xu5uqrwvx9km54qaqkkxn72
0xchat v1.4.7-beta release
* Upgraded the Flutter framework to v3.29.0.
* Private chat implementation changed to NIP-104 Nostr MLS.
* NIP-17 and NIP-29 messages now support q tags.
* You can swipe left to reply to your own messages.
* Chat messages now support code block display.
* Copy images from the clipboard.
* Fixed an issue where underlined text in chat appeared as italic.
GOSSIP 0.14.0 nostr:npub189j8y280mhezlp98ecmdzydn0r8970g4hpqpx3u9tcztynywfczqqr3tg8
Several major bugs have been fixed in the last week.
New Features and Improvements:
* Zappers and amounts are now shown (click on the zap total)
* Reactions and who reacted are now shown (click on the reaction numbers)
* Multiple search UI/UX improvements
* Undo Send works for DMs too
* Undo Send now restores the draft
* UI: Side panel contains less so it can be thinner. Bottom bar added.
* UI: frame count and spinner (optional)
* Relay UI: sorting by score puts important relays at the top.
* Relay UI: add more filters so all the bits are covered
* Image and video loading is much faster (significant lag reduction)
* Thread loading fix makes threads load far more reliably
* Settings have reset-to-default buttons, so you don't get too lost.
* Setting 'limit inbox seeking to inbox relays' may help avoid spam at the expense of possibly
* Fix some bugs
* And more updates
Nostur v1.18.1 nostr:npub1n0stur7q092gyverzc2wfc00e8egkrdnnqq3alhv7p072u89m5es5mk6h0
New in this version:
* Floating mini video player
* Videos: Save to library, Copy video URL, Add bookmark
* Improved video stream / chat view
* Top zaps on live chat
* Posting to Picture-first
* Profile view: Show interactions with you (conversations, reactions, zaps, reposts)
* Profile view: Show actual reactions instead of only Likes
* Improved search + Bookmark search
* Detect nsfw / content-warning in posts
* Show more to show reactions outside Web of Trust
* Show more to show zaps outside Web of Trust
* Support .avif image format
* Support .mp3 format
* Support .m4v video format
* Improved zap verification for changed wallets
* Improved outbox support
* Show label on restricted posts
* Low data mode: load media in app on tap instead of external browser
* Many other bug fixes and performance improvements
Alby nostr:npub1getal6ykt05fsz5nqu4uld09nfj3y3qxmv8crys4aeut53unfvlqr80nfm
Latest two releases of Alby Go, 1.10 and 1.11, brought you lots of goodies:
* BTC Map integration for quick access to the global bitcoin merchants map
* Confirm new NWC connections to your Alby Hub directly in Alby Go! No more copy-pasting or QR code scanning
* Support for MoneyBadger Pay Pick n Pay QR payments in over 2000 stores in South Africa
ZEUS v0.10.0 nostr:npub1xnf02f60r9v0e5kty33a404dm79zr7z2eepyrk5gsq3m7pwvsz2sazlpr5
ZEUS v0.10.0 is now available. This release features the ability to renew channel leases, spin up multiple embedded wallets, Nostr Wallet Connect client support, and more.
* Renewable channels
* NWC client support
* Ability to create multiple Embedded LND 'node in the phone' wallets
* Ability to delete Embedded LND wallets
* Embedded LND: v0.18.5-beta
* New share button (share ZEUS QR images)
* Tools: Export Activity CSVs, Developer tools, chantools
* Activity: filter by max amount, memo, and note
Long-Form Content Eco
In the past two weeks, more than 463 long-form articles have been published, including over 91 articles on Bitcoin and more than 41 related to Nostr, accounting for 29% of the total content.
These articles about Nostr mainly explore the rise of Nostr as a decentralized platform that is reshaping the future of the internet. They emphasize Nostr's role in providing users with greater freedom, ownership, and fair monetization, particularly in the realm of content creation. The platform is positioned as a counter to centralized social media networks, offering uncensored interactions, enhanced privacy, and direct transactions. Many articles delve into Nostr’s potential to integrate with Bitcoin, creating a Layer 3 solution that promises to end the dominance of old internet structures. Discussions also cover the technical aspects of Nostr, such as the implementation of relays and group functionalities, as well as security concerns like account hacks. Furthermore, there is an exploration of the philosophical and anthropological dimensions of Nostr, with the rise of "Dark Nostr" being portrayed as a deeper expression of decentralized freedom.
The Bitcoin articles discuss the ongoing evolution of Bitcoin and its increasing integration into global financial systems. Many articles focus on the growing adoption of Bitcoin, particularly in areas like Argentina and the U.S., where Bitcoin is being used for rental payments and the establishment of a strategic Bitcoin reserve. Bitcoin is also portrayed as a response to the centralized financial system, with discussions about how it can empower individuals through financial sovereignty, provide a hedge against inflation, and create fairer monetization models for creators. Additionally, the articles explore the challenges and opportunities within the Bitcoin ecosystem, including the rise of Bitcoin ETFs, the development of Bitcoin mining, and the potential impact of AI on Bitcoin adoption. There is also emphasis on Bitcoin's cultural and economic implications, as well as the need for decentralized education and innovation to drive further adoption.
Thank you, nostr:npub1ygzsm5m9ndtgch9n22cwsx2clwvxhk2pqvdfp36t5lmdyjqvz84qkca2m5 nostr:npub1rsv7kx5avkmq74p85v878e9d5g3w626343xhyg76z5ctfc30kz7q9u4dke nostr:npub17wrn0xxg0hfq7734cfm7gkyx3u82yfrqcdpperzzfqxrjf9n7tes6ra78k nostr:npub1fxq5crl52mre7luhl8uqsa639p50853r3dtl0j0wwvyfkuk4f6ssc5tahv nostr:npub1qny3tkh0acurzla8x3zy4nhrjz5zd8l9sy9jys09umwng00manysew95gx nostr:npub19mf4jm44umnup4he4cdqrjk3us966qhdnc3zrlpjx93y4x95e3uq9qkfu2 nostr:npub1marc26z8nh3xkj5rcx7ufkatvx6ueqhp5vfw9v5teq26z254renshtf3g0 nostr:npub1uv0m8xc6q4cnj2p0tewmcgkyzg8cnteyhed0zv30ez03w6dzwvnqtu6gwl nostr:npub1ygzsm5m9ndtgch9n22cwsx2clwvxhk2pqvdfp36t5lmdyjqvz84qkca2m5 nostr:npub1mhcr4j594hsrnen594d7700n2t03n8gdx83zhxzculk6sh9nhwlq7uc226 nostr:npub1xzuej94pvqzwy0ynemeq6phct96wjpplaz9urd7y2q8ck0xxu0lqartaqn nostr:npub1gqgpfv65dz8whvyup942daagsmwauj0d8gtxv9kpfvgxzkw4ga4s4w9awr nostr:npub16dswlmzpcys0axfm8kvysclaqhl5zv20ueurrygpnnm7k9ys0d0s2v653f and others, for your work. Enriching Nostr’s long-form content ecosystem is crucial.
Nostriches Global Meet Ups
Recently, several Nostr events have been hosted in different countries.
* The first Bitcoin Meetup organized by Mi Primer Bitcoin was successfully held on March 14, 2025, at Texijal Pizza in Apaneca. The event included Bitcoin education, networking, a Q&A session, and merchandise distribution, offering an exciting experience for all participants.
* The Btrust Space discussion was successfully held on March 13, 2024. The event focused on how to support Bitcoin developers, fund open-source contributions, and grow the Bitcoin ecosystem. The speakers included Bitcoin core contributors, the Btrust CEO, engineering leads, and other project leaders.

Here are the upcoming Nostr events that you might want to check out.
- The Nostr Workshop, organized by YakiHonne and Bitcoin Safari, will take place online via Google Meet on March 17, 2025, at 7:00 PM (GMT+1). The event will introduce the Nostr ecosystem and Bitcoin payments, with participants learning about decentralized technology through YakiHonne and earning rewards. Register and verify your account to claim exclusive rewards, and invite friends to unlock additional rewards.
- The 2025 Bitcoin, Crypto Economy, and Law FAQ Webinar will be held online on March 20, 2025 (Thursday) from 12:00 to 13:00 Argentina time. The webinar will be hosted by Martin Paolantonio (Academic Director of the course) and Daniel Rybnik (Lawyer specializing in Banking, Corporate, and Financial Law). The session aims to introduce the academic program and explore Bitcoin, the crypto economy, and related legal issues.
- Bitcoin Educators Unconference 2025 will take place on April 10, 2025, at Bitcoin Park in Nashville, Tennessee, USA. This event is non-sponsored and follows an Unconference format, allowing all participants to apply as speakers and share their Bitcoin education experiences in a free and interactive environment. The event has open-sourced all its blueprints and Standard Operating Procedures (SOPs) to encourage global communities to organize similar Unconference events.
Additionally, we warmly invite event organizers who have held recent activities to reach out to us so we can work together to promote the prosperity and development of the Nostr ecosystem.
Thanks for reading! If there’s anything I missed, feel free to reach out and help improve the completeness and accuracy of my coverage.
-
@ 2a655bda:344d8870
2025-05-04 04:24:19Bet888 is a modern online entertainment platform that has attracted broad interest thanks to the unique and varied experiences it offers. The platform is designed with a friendly, easy-to-use interface that lets participants access and enjoy its entertainment activities directly on their own devices. Bet888 is not merely an entertainment platform; it creates a space where participants can relax, learn, and discover new opportunities. With a combination of advanced technology and convenient services, Bet888 delivers a smooth and engaging online experience. From brain-teasing games and special events to community activities, the platform is continuously improved to meet users' ever-growing entertainment needs. Bet888 is committed to providing a safe, friendly entertainment space that constantly creates fresh opportunities for its community.

One of the factors behind BET888's success is its ability to personalize the user experience. The platform offers many choices, from entertainment games and mental challenges to special events, so participants can freely pick activities that suit their personal tastes and needs. Bet888 not only provides enjoyable moments of entertainment but also helps participants develop personal skills, from problem-solving to strategic thinking. Its games and challenges are designed not just to entertain but to stimulate creativity and logical thinking. The platform also focuses on building a strong community where participants can socialize, connect, and learn from one another. Community events are held regularly, encouraging cooperation and the sharing of creative ideas among members, creating a fun and challenging entertainment space. Bet888 always listens to feedback from its community to improve its services and make the platform ever more attractive and suited to users' needs.

Security and service quality are two factors Bet888 always emphasizes. The platform applies advanced security technologies to ensure that participants' personal information is kept safe. With a tight security system, users can take part in online activities without worrying about data protection. Bet888 also places special emphasis on high-quality customer support, helping participants resolve any questions or problems they encounter while using the platform. The customer support team is available 24/7 to ensure that users receive timely and effective help. In addition, the platform continuously improves its user interface so participants can easily find and join their favorite activities without any obstacles. New features are rolled out quickly, keeping participants satisfied and never bored. Bet888 is committed to continuing its strong growth, bringing users enjoyable, safe, and high-quality experiences and thereby maintaining its position as a leading entertainment platform.
-
@ 502ab02a:a2860397
2025-05-04 03:13:06We already know Circadian Rhythm and Infradian Rhythm; now let's get to know Ultradian Rhythm. From the Latin roots, "ultra" means "more than" or "more frequent", and "diem" means "day". Combined, "ultradian" refers to a biological cycle that occurs more than once per day (a frequency higher than the 24-hour cycle), not "longer than one day". In plain language: a biological rhythm that repeats more than once within 24 hours.

Ranking the three types of rhythm, we get: 1. Circadian Rhythm (about 24 hours) 2. Ultradian Rhythm (less than 24 hours) 3. Infradian Rhythm (more than 24 hours)

Here are some important examples of ultradian rhythms. 1. The sleep cycle: each cycle lasts roughly 90-120 minutes, alternating between NREM (deep sleep) and REM (dreaming). As health-conscious people have learned, completing 4-6 cycles means truly restful sleep and waking up refreshed and fully recharged.

Morning light, soft sunlight, helps reset the circadian clock, which in turn helps the ultradian sleep cycle start right on time. There is also the power nap: napping in soft afternoon light (20-25 minutes) helps the ultradian nap cycle end precisely, so you wake up alert rather than groggy. Choose a spot where the sunlight is still gentle, such as by a window with soft light coming through or under a tree that filters part of the light. You do not need to lie directly in the sun; just "receive natural light" while napping, and the circadian and ultradian cycles will coordinate better.

2. Pulsatile hormone secretion is the pattern in which glands release hormones in "bursts" or "pulses" rather than continuously. For example: growth hormone (GH), the god of weight loss, spikes during deep sleep every 3-4 hours; cortisol, although mainly circadian, also shows small ultradian pulses every 1-2 hours; and insulin and glucagon, as keto/IF folks know best, are released in cycles around meals and the fasting periods between them.

Imagine that red light and infrared are like breakfast for our cells. When our skin is exposed to soft morning or late-afternoon sunlight, these wavelengths penetrate and stimulate the cell's power plant, the mitochondria, to produce more energy (ATP), like topping up fuel so the engine runs smoothly. With more cellular energy available, during the periods when the body secretes repair hormones such as growth hormone (GH), it can use this light-derived energy together with those hormones to repair muscles and tissues at full efficiency.

3. Attention span and energy cycle: most people can focus for about 90 minutes per cycle, after which they should rest 10-20 minutes. Pushing on without a break leads to fatigue and loss of concentration.

Morning blue light from sunlight stimulates the suprachiasmatic nucleus (SCN) in the brain to release a moderate amount of alertness-promoting substances (such as cortisol), which helps the ultradian attention cycle (about 90 minutes of focus) work at full efficiency. If you get no sunlight in the morning, this cycle shifts later, making you drowsy earlier or unable to focus as long as usual, which is why you can feel exhausted even after sleeping in.

4. The hunger cycle, or "appetite and digestive rhythm" (cool name, right? lol): hunger comes in cycles depending on each person's eating pattern, and is tied to GI hormones (such as ghrelin and leptin), which also run in cycles.

Morning sunlight helps set a balanced leptin/ghrelin baseline, reducing snacking between meals, while soft afternoon light boosts blood flow in the digestive tract so the digestion and nutrient-absorption cycle stays on schedule.

Sunlight is an important part of life, but making use of the ultradian rhythm means combining it with other activities: work or read for 90 minutes, then rest 15-20 minutes to stretch and move a little; exercise in rhythm; and during ultradian breaks, cut back on phone/computer screens and go get natural light, stretch, or put on soft music to avoid external "stimuli" that excite the senses and brain during the break.

Sunlight is thus a zeitgeber (time-setter) not only for daily, monthly, and yearly cycles but also for the short rhythms within a day. Using natural light appropriately at each time of day (morning, breaks, afternoon) helps the ultradian rhythms of attention, sleep, hormones, and digestion align with the most natural pace of life. #pirateketo #SundaySpecialเราจะไปเป็นหมูแดดเดียว #กูต้องรู้มั๊ย #ม้วนหางสิลูก #siamstr
-
@ 8947a945:9bfcf626
2024-10-17 08:06:55Hello everyone on Nostr, as well as my watchers and followers from DeviantArt and other art platforms.

Since the beginning of 2024 I have been using AI to generate fanart of anime girls, and I offer exclusive content for those who especially enjoy my work.

I post all of my work on DeviantArt and have been slowly, steadily building a follower base there. Everything grows at its own pace; personally, I see it as one of my online art business portfolios.

On September 16, 2024, a follower sent me a private message saying he liked my work very much and wanted to buy it, but as NFTs, offering a very high price per piece. After that, the buyer and I continued the conversation over email.

Here is a short summary of the negotiation.

(From here on I will call the buyer a "scammer", because the cards have been turned face up: he was a fraudster.)

- The first scammer picked the works he wanted to buy and offered a very high price, but only on an NFT marketplace website he specified, running on ERC20. I went to look at the site and it felt off. Sellers had to sign up with an email address before they could link a wallet such as MetaMask, and once linked, the wallet could not be changed. At the time I was using a wallet not linked to my hardware wallet; I tried switching wallets back and forth and it would not work, and even after logging out the same wallet address remained. That was oddity number one. On this site the minting fee was 0.15-0.2 ETH, which in baht is absurdly expensive.

- The first scammer tried to coax and sweet-talk me: come on, he would snap up my work; as soon as I finished minting I should tell him and he would buy immediately; once it sold at a profit I would get my gas fee back and make a profit on top, so there was really nothing to lose, right? Luckily, I had no spare funds to buy ETH at the time, so I countered with the following offers:

- I proposed sending him a low-resolution version of my work first, in exchange for him transferring the ETH for the minting fee; once I had the ETH, I would upscale the work and email it to him. A heart-for-a-heart trade. He refused.
- I offered to sell through my online Buy Me a Coffee shop, paid in USD. He refused.
- I offered to sell via a PPV Lightning invoice, which I can issue as a creator on Creatr. He refused.
- I told him to wait for my salary, then. He said OK.

The following week, a second scammer contacted me using a similar approach but a different website, offering an even higher price than the first. The second site was even worse than the first: you had to sign up with an email and could not link MetaMask at all. After registering you received an empty wallet, into which I had to transfer ETH first to cover the 0.2 ETH NFT minting fee.

I told the second scammer to wait, because I was in the middle of a deal with the first buyer and was waiting for money to buy ETH as working capital. He asked me to show him the first website, and shortly afterwards warned me that the first site was a scam where money could not be withdrawn. He even sent screenshots of chats with victims of the first site who could not withdraw, and on top of that he bluffed about OpenSea, claiming customers there could sell their work but could not withdraw the money.

"OpenSea won't let you withdraw": that was what really set off my alarm bells, because on OpenSea users connect their wallets directly to the marketplace. Trades happen with money flowing directly in and out of each person's wallet; OpenSea only collects a platform fee and never holds customers' funds. On top of that, gas fees this year are much cheaper than during the 2020 bull run cycle, currently around 0.0001 ETH (still more expensive than BTC, though).

I took the matter to Phi Bit for advice, but an admin talked with me instead. The admin said no one had ever asked about this before; my question was the first of its kind, but his opinion matched my hypothesis that it was probably a scam. At the same time I asked in a Thai NFT community page and got clear confirmation: it was a scam, and quite a few people had been duped. Once I knew what I was dealing with, I decided to wage a little psychological war on both scammers to see whether they really were fraudsters.

So on September 30 I messed with both of them by minting the very works they had offered to buy, on OpenSea, and sent them a message saying:

"Minted it for you, but I really didn't have enough money, sorry, so I minted it on OpenSea instead. I'm broke; OpenSea was as far as I could go. Better hurry and buy it, plenty of people are eyeing my work. And I'm not charging any royalty fee, so you can resell it without sharing any profit with me."

With that, the psychological war began, but he was cornered and had to swallow his own words. The best exchange went like this:

Him: Hey, I waited for this. I told my team we'd get the piece on Monday, September 30. My teammates saw your work, it's genuinely beautiful, so they went all in and set aside 9.3 ETH just for this (+ a screenshot showing the balance).
Me: Oh really? Then show me the wallet address with that transaction.
Him: 2 ETH is 5,000 dollars, you know.
Me: So what? Show me the wallet address holding the 9.3 ETH. Didn't you say you had plenty of money ready? Show me when you deposited it. Just the address, mind you; don't get cheeky and send me a seed.
Him: (sends the same 9.3 ETH screenshot again)
Me: A screenshot means nothing, man. It's trivial to fake one. Show me the transaction hash. Didn't you say you had 9.3 ETH sitting there, itching to buy my work? Either show me the wallet address, or lend me 0.15 ETH to mint with, then buy the piece for 2 ETH and I'll pay the 0.15 ETH back. So, are you buying or not?
Him: What do you want my address for?
Me: I'm done; this is tiresome. Not selling to you anymore.
Him: 2 ETH = 5,000 USD, you know.
Me: So what?

I wrote this article to warn all my friends, in case anyone is opening an online digital art business portfolio and happens to get as "lucky" as I did.

Why I am confident this is a scam, and what the scammers gain

First, consider OpenSea, the NFT marketplace with the highest trading volume. It does not hold the funds of would-be buyers and sellers; money moves directly between their wallets, and the site only collects a fee, which is now much cheaper than in 2020. So why would anyone list their work on another NFT site whose fees are a hundred times higher?

I believe the scammers rob artists by playing on their greed and inexperience. The moment an artist transfers ETH into the site's wallet, or pays the minting fee, that money lands in the scammer's pocket, and more tricks are sure to follow: withdrawals fail, purchases fail, you must deposit more to unlock the smart contract, and so on. These lowlifes exploit human greed, dangling absurdly high offers as bait. And fair enough, in a sense: in the NFT world, some images with no artistic merit whatsoever have sold for 100-150 ETH, so an artist trying to establish themselves might well think that a buyer offering 2-4 ETH per piece is more than enough (honestly, it is shockingly much).

In the world of BTC, you do not need to trust anyone: you can send money to each other and settle the books without trust.

In the world of ETH, "code is law": the smart contract is already written, so go read it. It is not that hard to understand. Trusting promises made between humans therefore makes no sense.

When I told these stories to art communities, the responses were mixed, some positive, some not. Some insisted firmly that none of this could touch them because they are absolutely determined to keep their art out of the digital currency world, and I respect that view. But wouldn't it be better to keep our eyes and ears open to technology, especially digital currency and blockchain? Getting scammed here once can wipe you out far more easily than with fiat money.

I wanted to share this story, and I would appreciate you sharing it with people you know so they can stay on guard.

Note
- Both cyber security illustrations here are mine; I made them myself, and they are on sale on AdobeStock.
- Another account of mine, "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw, is gradually bringing my work from the outside world into Nostr. I intend to create art there so friends who like my work will not have to go looking anywhere else.

My works: - Anime girl fanarts : HikariHarmony - HikariHarmony on Nostr - General art : KeshikiRakuen - KeshikiRakuen may become my third Nostr account, if I can manage it.
-
@ 8947a945:9bfcf626
2024-10-17 07:33:00Hello everyone on Nostr and all my watchers and followers from DeviantArt, as well as those from other art platforms
I have been creating and sharing AI-generated anime girl fanart since the beginning of 2024 and have been running member-exclusive content on Patreon.
I also publish showcases of my artworks to Deviantart. I organically build up my audience from time to time. I consider it as one of my online businesses of art. Everything is slowly growing
On September 16, I received a DM from someone expressing interest in purchasing my art in NFT format and offering a very high price for each piece. We later continued the conversation via email.
Here’s a brief overview of what happened
- The first scammer selected the art they wanted to buy and offered a high price for each piece. They provided a URL to an NFT marketplace site running on the Ethereum (ETH) mainnet or ERC20. The site appeared suspicious, requiring email sign-up and linking a MetaMask wallet. However, I couldn't change the wallet address later. The minting gas fees were quite expensive, ranging from 0.15 to 0.2 ETH
- The scammers tried to convince me that the high profits would easily cover the minting gas fees, so I had nothing to lose. Luckily, I didn't have spare funds to purchase ETH for the gas fees at the time, so I tried negotiating with them as follows:
- I offered to send them a lower-quality version of my art via email in exchange for the minting gas fees, but they refused.
- I offered them the option to pay in USD through my Buy Me a Coffee shop here, but they refused.
- I offered them the option to pay via Bitcoin using a Lightning Network invoice, but they refused.
- I asked them to wait until I could secure the funds, and they agreed to wait.
The following week, a second scammer approached me with a similar offer, this time at an even higher price and through a different NFT marketplace website.
This second site also required email registration, and after navigating to the dashboard, it asked for a minting fee of 0.2 ETH. However, the site provided a wallet address for me instead of connecting a MetaMask wallet.
I told the second scammer that I was waiting to make a profit from the first sale, and they asked me to show them the first marketplace. They then warned me that the first site was a scam and even sent screenshots of victims, including one from OpenSea saying that Opensea is not paying.
This raised a red flag, and I began suspecting I might be getting scammed. On OpenSea, funds go directly to users' wallets after transactions, and OpenSea charges a much lower platform fee compared to the previous crypto bull run in 2020. Minting fees on OpenSea are also significantly cheaper, around 0.0001 ETH per transaction.
I also consulted with Thai NFT artist communities and the ex-chairman of the Thai Digital Asset Association. According to them, no one had reported similar issues, but they agreed it seemed like a scam.
After confirming my suspicions with my own research and consulting with the Thai crypto community, I decided to test the scammers’ intentions by doing the following
I minted the artwork they were interested in, set the price they offered, and listed it for sale on OpenSea. I then messaged them, letting them know the art was available and ready to purchase, with no royalty fees if they wanted to resell it.
They became upset and angry, insisting I mint the art on their chosen platform, claiming they had already funded their wallet to support me. When I asked for proof of their wallet address and transactions, they couldn't provide any evidence that they had enough funds.
Here's what I want to warn all artists in the DeviantArt community and on other platforms: if you find yourself in a similar situation, be aware that scammers may be targeting you.
My Perspective: Why I Believe This Is a Scam and What the Scammers Gain

From my experience with BTC and crypto since 2017, here's why I believe this situation is a scam, and what the scammers aim to achieve:
First, looking at OpenSea, the largest NFT marketplace on the ERC20 network, they do not hold users' funds. Instead, funds from transactions go directly to users’ wallets. OpenSea’s platform fees are also much lower now compared to the crypto bull run in 2020. This alone raises suspicion about the legitimacy of other marketplaces requiring significantly higher fees.
I believe the scammers' tactic is to lure artists into paying these exorbitant minting fees, which go directly into the scammers' wallets. They convince the artists by promising to purchase the art at a higher price, making it seem like there's no risk involved. In reality, the artist has already lost by paying the minting fee, and no purchase is ever made.
In the world of Bitcoin (BTC), the principle is "Trust no one" and "Trustless finality of transactions". In other words, transactions are secure and final without needing trust in a third party.
In the world of Ethereum (ETH), the philosophy is "Code is law" where everything is governed by smart contracts deployed on the blockchain. These contracts are transparent, and even basic code can be read and understood. Promises made by people don’t override what the code says.
I also discussed this issue with art communities. Some people have strongly expressed to me that they want nothing to do with crypto as part of their art process. I completely respect that stance.
However, I believe it's wise to keep your eyes open, have some skin in the game, and not fall into scammers’ traps. Understanding the basics of crypto and NFTs can help protect you from these kinds of schemes.
If you found this article helpful, please share it with your fellow artists.
Until next time. Take care.
Note
- Both cyber security images are mine; I created them, and they were approved by AdobeStock to put on sale
- I'm working very hard to bring all my digital art into Nostr and build my sats business here on my other npub, "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw
Link to my full gallery - Anime girl fanarts : HikariHarmony - HikariHarmony on Nostr - General art : KeshikiRakuen
-
@ a19caaa8:88985eaf
2025-05-04 15:25:58・I do intend to write something, but nothing has come to mind yet, so for now this is just a test!
・I may eventually delete all of this text too. That might be part of the charm: replaceable, and visible only at this very moment!
-
@ 211c0393:e9262c4d
2025-05-04 02:32:24**The Dark Side of Japan's Methamphetamine Business:
The Truth About the Police, the Yakuza, and Their "Silent Complicity"**
1. The yakuza's structure of control (based on public data)
- Why the trade depends on imports:
- Domestic manufacture is difficult (regulation was tightened by the 1994 Stimulant Raw Materials Control Law), so smuggling from Myanmar and China dominates (UN Office on Drugs and Crime, "World Drug Report 2023").
- The yakuza's profit margin: a purchase price of 300,000 yen per kilogram against a retail price of 5-10 million yen (National Police Agency, "Drug Situation Report," 2022).
2. The "symbiotic relationship" between the police and the yakuza
- Unnatural arrest statistics:
- 70% of all drug arrestees are charged with simple possession (Ministry of Health, Labour and Welfare, "Drug Abuse Situation," 2023).
- Busts of smuggling organizations account for less than 5% of the total (Tokyo District Public Prosecutors Office, Special Investigation Department data).
- Media scrutiny:
- The NHK Special "The Stimulant War" (2021) documented the practice of prioritizing end users in investigations.
3. Contradictory realities
- Invisible demand:
- The highest methamphetamine prices in the G7 (30,000-70,000 yen per gram, three times Western prices) mean outsized profits for the yakuza (Ministry of Finance, "Organized Crime Fund Flow Survey").
- The contradiction that the user rate is low (0.2% of the population, UN survey) yet users make up the majority of arrests.
4. The limits of "targeting smuggling organizations"
- International failures:
- Mexico (the market kept expanding even after cartel crackdowns) and Europe (the spread of synthetic drugs): replacement organizations rise immediately (The Economist, June 2023 issue).
- Japan's geographic handicap:
- The interdiction rate for maritime smuggling is under 10% (Japan Coast Guard report).
5. Rethinking solutions (fact-based proposals)
- Legalizing ADHD medication:
- American Psychiatric Association: "60% of ADHD patients use illegal drugs as self-medication" (2019 study).
- Ritalin and Adderall are banned in Japan, leaving the yakuza a market monopoly.
- Reforming working conditions:
- 20% of workers exceed the karoshi (death from overwork) threshold (Ministry of Health, Labour and Welfare, "Working Hours Survey," 2023), one driver of stimulant demand.
6. The risks of whistleblowing, and sources
- The importance of anonymity:
- Past yakuza retaliation (the 2018 intimidation of a whistleblowing journalist, reported by the Mainichi Shimbun).
- Citing public data only:
- e.g., third-party-verifiable sources such as National Police Agency statistics and UN reports.
Conclusion: change requires making the "facts" visible
The illusion that "drugs are a matter of personal morality" serves the yakuza and corrupt officials.
Exposing the contradictions between international data and domestic statistics can lay bare the system's deception. For safe sharing:
- Avoid identifying individuals and discuss on anonymous platforms (e.g., forums on Tor).
- Link directly to data from public institutions (e.g., National Police Agency PDF reports).
This document relies solely on published statistics and media reports and excludes personal speculation.
To avoid threats, it deliberately refrains from denouncing specific individuals or organizations. -
@ 0c87e540:8a0292f2
2025-05-04 01:53:02The 188W platform has stood out as one of the most reliable and exciting options for Brazilian players seeking high-level entertainment in the world of online gaming. With a modern, secure environment packed with features, the site offers a complete experience that goes far beyond the expectations of an audience looking for leisure combined with cutting-edge technology.
Introduction to the 188W Platform. Since its arrival on the market, 188W has been gaining ground and earning user loyalty thanks to its intuitive interface, simplified navigation, and compatibility with all devices, whether computer, tablet, or smartphone. The proposal is clear: to offer a practical, accessible platform for players of all profiles, from beginners to the most experienced.
Beyond its attractive look, the site stands out for its security. Protecting user data is a priority, using state-of-the-art encryption and strict protocols to guarantee a smooth, reliable experience.
A Varied and Exciting Game Catalog. 188W offers a broad portfolio of games for every taste. Highlights include online slots, with hundreds of options ranging from classic to modern themes, vibrant graphics, and immersive soundtracks. These games are developed by renowned providers, which guarantees quality and superior technical performance.
For those who prefer strategic challenges, the platform also offers online card games and roulette, letting users test their skills and explore different game modes. Many of these games are live, with real-time streaming and interaction with professional dealers, making the experience even more authentic and engaging.
Sports fans also have a place at 188W. The sports betting section is comprehensive, covering dozens of sports such as football, basketball, tennis, and MMA. Bets can be placed before or during matches, taking advantage of the best odds on the market and several exclusive promotions.
The Player Experience at 188W. One of the platform's main differentiators is its total focus on user experience. From quick registration to 24-hour customer support, everything is designed so players feel valued and have easy access to every feature.
Welcome bonuses are generous and frequent, offering real advantages to new users. In addition, 188W regularly launches themed promotions, weekly challenges, and special giveaways that make the journey even more fun.
Customer support is another strong point of the platform. With Portuguese-language service via live chat, email, and WhatsApp, the team is always ready to help with questions, technical issues, or information about payments and withdrawals.
The payment system is also efficient and secure. 188W accepts several methods popular in Brazil, including PIX, bank transfers, digital wallets, and boletos. Withdrawals are processed quickly, often in under 24 hours.
Conclusion. 188W is much more than a simple gaming platform. It represents a complete environment where entertainment, security, and innovation go hand in hand. With a vast catalog, quality support, and a user experience designed down to the smallest detail, the site establishes itself as an excellent choice for those seeking fun with convenience and confidence.
-
@ e6817453:b0ac3c39
2024-10-06 11:21:27Hey folks, today we're diving into an exciting and emerging topic: personal artificial intelligence (PAI) and its connection to sovereignty, privacy, and ethics. With the rapid advancements in AI, there's a growing interest in the development of personal AI agents that can work on behalf of the user, acting autonomously and providing tailored services. However, as with any new technology, there are several critical factors that shape the future of PAI. Today, we'll explore three key pillars: privacy and ownership, explainability, and bias.
1. Privacy and Ownership: Foundations of Personal AI
At the heart of personal AI, much like self-sovereign identity (SSI), is the concept of ownership. For personal AI to be truly effective and valuable, users must own not only their data but also the computational power that drives these systems. This autonomy is essential for creating systems that respect the user's privacy and operate independently of large corporations.
In this context, privacy is more than just a feature—it's a fundamental right. Users should feel safe discussing sensitive topics with their AI, knowing that their data won’t be repurposed or misused by big tech companies. This level of control and data ownership ensures that users remain the sole beneficiaries of their information and computational resources, making privacy one of the core pillars of PAI.
2. Bias and Fairness: The Ethical Dilemma of LLMs
Most of today’s AI systems, including personal AI, rely heavily on large language models (LLMs). These models are trained on vast datasets that represent snapshots of the internet, but this introduces a critical ethical challenge: bias. The datasets used for training LLMs can be full of biases, misinformation, and viewpoints that may not align with a user’s personal values.
This leads to one of the major issues in AI ethics for personal AI—how do we ensure fairness and minimize bias in these systems? The training data that LLMs use can introduce perspectives that are not only unrepresentative but potentially harmful or unfair. As users of personal AI, we need systems that are free from such biases and can be tailored to our individual needs and ethical frameworks.
Unfortunately, training models that are truly unbiased and fair requires vast computational resources and significant investment. While large tech companies have the financial means to develop and train these models, individual users or smaller organizations typically do not. This limitation means that users often have to rely on pre-trained models, which may not fully align with their personal ethics or preferences. While fine-tuning models with personalized datasets can help, it's not a perfect solution, and bias remains a significant challenge.
3. Explainability: The Need for Transparency
One of the most frustrating aspects of modern AI is the lack of explainability. Many LLMs operate as "black boxes," meaning that while they provide answers or make decisions, it's often unclear how they arrived at those conclusions. For personal AI to be effective and trustworthy, it must be transparent. Users need to understand how the AI processes information, what data it relies on, and the reasoning behind its conclusions.
Explainability becomes even more critical when AI is used for complex decision-making, especially in areas that impact other people. If an AI is making recommendations, judgments, or decisions, it’s crucial for users to be able to trace the reasoning process behind those actions. Without this transparency, users may end up relying on AI systems that provide flawed or biased outcomes, potentially causing harm.
This lack of transparency is a major hurdle for personal AI development. Current LLMs, as mentioned earlier, are often opaque, making it difficult for users to trust their outputs fully. The explainability of AI systems will need to be improved significantly to ensure that personal AI can be trusted for important tasks.
Addressing the Ethical Landscape of Personal AI
As personal AI systems evolve, they will increasingly shape the ethical landscape of AI. We’ve already touched on the three core pillars—privacy and ownership, bias and fairness, and explainability. But there's more to consider, especially when looking at the broader implications of personal AI development.
Most current AI models, particularly those from big tech companies like Facebook, Google, or OpenAI, are closed systems. This means they are aligned with the goals and ethical frameworks of those companies, which may not always serve the best interests of individual users. Open models, such as Meta's LLaMA, offer more flexibility and control, allowing users to customize and refine the AI to better meet their personal needs. However, the challenge remains in training these models without significant financial and technical resources.
There’s also the temptation to use uncensored models that aren’t aligned with the values of large corporations, as they provide more freedom and flexibility. But in reality, models that are entirely unfiltered may introduce harmful or unethical content. It’s often better to work with aligned models that have had some of the more problematic biases removed, even if this limits some aspects of the system’s freedom.
The future of personal AI will undoubtedly involve a deeper exploration of these ethical questions. As AI becomes more integrated into our daily lives, the need for privacy, fairness, and transparency will only grow. And while we may not yet be able to train personal AI models from scratch, we can continue to shape and refine these systems through curated datasets and ongoing development.
Conclusion
In conclusion, personal AI represents an exciting new frontier, but one that must be navigated with care. Privacy, ownership, bias, and explainability are all essential pillars that will define the future of these systems. As we continue to develop personal AI, we must remain vigilant about the ethical challenges they pose, ensuring that they serve the best interests of users while remaining transparent, fair, and aligned with individual values.
If you have any thoughts or questions on this topic, feel free to reach out—I’d love to continue the conversation!
-
@ 04cb16e4:2ec3e5d5
2025-03-13 21:26:13If you want to sell something, you have to tell a story about your product. Few people can do much with it if you say: our product contains 50 grams of oats (hopefully GMO-free), 5 medium-sized strawberries, traces of sesame husk, and a teaspoon of honey. That's not how it works. Your snack bar needs a name and a story.
When we talk about war and peace, what we mostly get are numbers, facts, and opinions. Thousands of children killed in a war are a shocking number. Take the numbers away and engage with each individual fate, and it becomes impossible to bear. So here at home in Germany we fight one another, not with weapons, but with our opinions combined with relative truths to be imparted. This is where the ego comes in. We absolutely want to be right! Someone is supposed to emerge from this battle of opinions as the winner, because he has the better arguments. In the end, emotions get mixed with facts and hurled at the opposing front as knockout arguments.
But what if someone tells a story about war that can carry everyone along, whatever their opinion of the battlefields currently under debate? Everything divisive is removed from the narrative, and what remains is the destructive power of war and the responsibility of every single person to decide whether or not to feed this cruel monster. In the African fairy tale "Sheikhi," these decisions rest not on facts and opinions but on personal experience. The protagonists take us into their world and let us live through their inner struggles, doubts, fears, and hopes. We can identify with them even though we live and die under entirely different conditions.
Here you can order the book directly from the publisher
The alternative book fair Seitenwechsel
By the end of the book I could not help but feel a deep longing for peace and unity. This longing, though, was no longer based on the need to have better arguments than the supposed other side, but rather on wanting this desperate wrestling and hating to finally come to an end. Not only on the battlefields of Asia and Africa, but likewise on Facebook, on X, in the streets of our cities, and in each person's war against himself. These days I more and more often manage to bite back a snide comment when someone writes something on Facebook that I find unbearable. I know I won't convince them otherwise, and that my comment feeds the same monster that gorges itself on the victims of war.
If there are people somewhere in the world who can forgive murder and torture, then I too can endure a different opinion without becoming self-righteous, arrogant, and destructive. If need be, I'll go into the woods and scream.
-
@ a012dc82:6458a70d
2025-03-11 15:41:36Argentina's journey through economic turmoil has been long and fraught with challenges. The country has grappled with inflation, debt, and a fragile economic structure that has left policymakers searching for solutions. In this context, President Javier Milei's introduction of the "Ley Ómnibus" represented a bold step towards addressing these systemic issues. The reform package was not just a set of isolated measures but a comprehensive plan aimed at overhauling the Argentine economy and social framework. The intention was to create a more robust, free, and prosperous Argentina, where economic freedoms could lead to broader social benefits.
The "Ley Ómnibus" was ambitious in its scope, covering a wide range of areas from tax reform to social policies, aiming to stimulate economic growth, reduce bureaucratic red tape, and enhance the overall quality of life for Argentines. This package was seen as a critical move to reset the economic compass of the country, aiming to attract foreign investment, boost local industry, and provide a clearer, more stable environment for businesses and individuals alike. However, such sweeping reforms were bound to encounter resistance, particularly when they touched upon sensitive areas like taxation and digital assets.
Table of Contents
- The Crypto Tax Proposal: Initial Considerations
- Public Backlash and Strategic Withdrawal
- The Rationale Behind Dropping Crypto Taxes
- Implications for Crypto Investors and the Market
- Milei's Political Strategy and Future Prospects
- Conclusion
- FAQs
The Crypto Tax Proposal: Initial Considerations
Within the vast array of proposals in the Ley Ómnibus, the crypto tax stood out due to its novelty and the growing interest in digital currencies within Argentina. The country had seen a surge in cryptocurrency adoption, driven by factors such as high inflation rates and currency controls that made traditional financial systems less attractive. Cryptocurrencies offered an alternative for savings, investment, and transactions, leading to a burgeoning crypto economy.
The initial rationale behind proposing a crypto tax was multifaceted. On one hand, it aimed to bring Argentina in line with global trends where countries are increasingly seeking to regulate and tax digital currencies. On the other hand, it was seen as a potential new revenue stream for the government, which was desperately seeking funds to address its fiscal deficits. The proposal also intended to bring transparency to a sector that is often criticized for its opacity, making it easier to combat fraud, money laundering, and other illicit activities associated with cryptocurrencies.
However, the proposal was not just about regulation and revenue. It was also a litmus test for Argentina's approach to innovation and digital transformation. How the government handled this issue would signal its stance towards new technologies and economic paradigms, which are increasingly dominated by digital assets and fintech innovations.
Public Backlash and Strategic Withdrawal
The backlash against the proposed crypto taxes was swift and significant. The crypto community in Argentina, which had been flourishing in an environment of relative freedom, saw the tax as a direct threat to its growth and viability. But the discontent went beyond the crypto enthusiasts; the general public, already burdened by high taxes and economic instability, viewed the proposal as yet another financial strain.
The protests and debates that ensued highlighted a broader discontent with the government's approach to economic management. Many Argentines felt that the focus should be on fixing the fundamental issues plaguing the economy, such as inflation and corruption, rather than imposing new taxes. The crypto tax became a symbol of the government's perceived detachment from the real concerns of its citizens.
In this heated atmosphere, President Milei's decision to withdraw the crypto tax proposal from the Ley Ómnibus was not just a tactical retreat; it was a necessary move to quell the growing unrest and focus on more pressing economic reforms. This decision underscored the complexities of governing in a highly polarized environment and the need for a more nuanced approach to policy-making, especially when dealing with emerging technologies and markets.
The Rationale Behind Dropping Crypto Taxes
The decision to drop the crypto tax from the omnibus reform package was not taken lightly. It was a recognition of the crypto sector's unique dynamics and the government's limitations in effectively regulating and taxing this space without stifling innovation. The move also reflected a broader understanding of the economic landscape, where rapid development and legislative efficiency were deemed more crucial than ever.
By removing the contentious clauses, the government aimed to streamline the passage of the Ley Ómnibus, ensuring that other, less controversial, reforms could be implemented swiftly. This strategic pivot was also a nod to the global debate on how best to integrate cryptocurrencies into national economies. Argentina's government recognized that a more cautious and informed approach was necessary, one that could balance the need for regulation with the desire to foster a thriving digital economy.
Furthermore, the withdrawal of the crypto tax proposal can be seen as an acknowledgment of the power of public opinion and the crypto community's growing influence. It highlighted the need for governments to engage with stakeholders and understand the implications of new technologies before rushing to regulate them.
Implications for Crypto Investors and the Market
The removal of the crypto tax proposal has had immediate and significant implications for the Argentine crypto market. For investors, the decision has provided a reprieve from the uncertainty that had clouded the sector, allowing them to breathe a sigh of relief and continue their activities without the looming threat of new taxes. This has helped sustain the momentum of the crypto market in Argentina, which is seen as a vital component of the country's digital transformation and economic diversification.
However, the situation remains complex and fluid. The government's stance on cryptocurrencies is still evolving, and future regulations could impact the market in unforeseen ways. Investors are now more aware of the need to stay informed and engaged with regulatory developments, understanding that the legal landscape for digital currencies is still being shaped.
The episode has also highlighted the broader challenges facing the Argentine economy, including the need for comprehensive tax reform and the creation of a more conducive environment for technological innovation and investment. The crypto market's response to the government's actions reflects the delicate balance between regulation and growth, a balance that will be crucial for Argentina's economic future.
Milei's Political Strategy and Future Prospects
President Milei's handling of the crypto tax controversy reveals much about his political strategy and vision for Argentina. By withdrawing the proposal, he demonstrated a willingness to listen to public concerns and adapt his policies accordingly. This flexibility could be a key asset as he navigates the complex landscape of Argentine politics and governance.
The episode also offers insights into the potential future direction of Milei's administration. The focus on economic reforms, coupled with a pragmatic approach to contentious issues, suggests a leadership style that prioritizes economic stability and growth over ideological purity. This could bode well for Argentina's future, particularly if Milei can harness the energy and innovation of the digital economy as part of his broader reform agenda.
However, the challenges ahead are significant. The Ley Ómnibus is just one part of a larger puzzle, and Milei's ability to implement comprehensive reforms will be tested in the coming months and years. The crypto tax saga has shown that while change is possible, it requires careful negotiation, stakeholder engagement, and a clear understanding of the economic and social landscape.
Conclusion
The story of Argentina's crypto tax proposal is a microcosm of the broader challenges facing the country as it seeks to reform its economy and society. It highlights the tensions between innovation and regulation, the importance of public opinion, and the complexities of governance in a rapidly changing world.
As Argentina moves forward, the lessons learned from this episode will be invaluable. The need for clear, informed, and inclusive policy-making has never been greater, particularly as the country navigates the uncertainties of the digital age.
FAQs
What is the Ley Ómnibus? The Ley Ómnibus, formally known as the "Law of Bases and Starting Points for the Freedom of Argentines," is a comprehensive reform package introduced by President Javier Milei. It aims to address various economic, social, and administrative issues in Argentina, aiming to stimulate growth, reduce bureaucracy, and improve the overall quality of life.
Why were crypto taxes proposed in Argentina? Crypto taxes were proposed as part of the Ley Ómnibus to broaden the tax base, align with global trends of regulating digital currencies, and generate additional revenue for the government. They were also intended to bring more transparency to the cryptocurrency sector in Argentina.
Why were the proposed crypto taxes withdrawn? The proposed crypto taxes were withdrawn due to significant public backlash and concerns that they would stifle innovation and economic freedom in the burgeoning crypto market. The decision was also influenced by the government's priority to ensure the swift passage of other reforms within the Ley Ómnibus.
What does the withdrawal of crypto taxes mean for investors? The withdrawal means that, for now, crypto investors in Argentina will not face additional taxes specifically targeting their cryptocurrency holdings or transactions. However, selling large amounts of cryptocurrency at a profit will still be subject to income tax.
That's all for today
If you want more, be sure to follow us on:
NOSTR: croxroad@getalby.com
Instagram: @croxroadnews.co/
Youtube: @thebitcoinlibertarian
Store: https://croxroad.store
Subscribe to CROX ROAD Bitcoin Only Daily Newsletter
https://www.croxroad.co/subscribe
Get Orange Pill App And Connect With Bitcoiners In Your Area. Stack Friends Who Stack Sats link: https://signup.theorangepillapp.com/opa/croxroad
Buy Bitcoin Books At Konsensus Network Store. 10% Discount With Code “21croxroad” link: https://bitcoinbook.shop?ref=21croxroad
DISCLAIMER: None of this is financial advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. Please be careful and do your own research.
-
-
@ 84b0c46a:417782f5
2025-05-04 15:14:21https://long-form-editer.vercel.app/
As this is a beta version, unexpected behavior may occur. We recommend backing up your articles before editing them.
Features
- You can insert nostr: elements, such as nostr:npub1sjcvg64knxkrt6ev52rywzu9uzqakgy8ehhk8yezxmpewsthst6sw3jqcw and nostr:nevent1qvzqqqqqqypzpp9sc34tdxdvxh4jeg5xgu9ctcypmvsg0n00vwfjydkrjaqh0qh4qyxhwumn8ghj77tpvf6jumt9qys8wumn8ghj7un9d3shjtt2wqhxummnw3ezuamfwfjkgmn9wshx5uqpz9mhxue69uhkuenjv4kxz7fwv9c8qqpq486d6yazu7ydx06lj5gr4aqgeq6rkcreyykqnqey8z5fm6qsj8fqfetznk
- You can insert custom emoji like :monoice: (via the 😃 icon in the menu; the icon might change)
:monopaca_kao:
:kubipaca_karada:
- You can create new articles and edit existing ones
To do
- [x] Include nostr: references in the tags when posting
- [ ] Tidy up the layout
- [x] Make it possible to upload images (works now)
- [ ] Show "posted" logs and similar messages as toast notifications
- [ ] And whatever else comes up
-
-
@ e6817453:b0ac3c39
2024-09-30 14:52:23In the modern world of AI, managing vast amounts of data while keeping it relevant and accessible is a significant challenge, mainly when dealing with large language models (LLMs) and vector databases. One approach that has gained prominence in recent years is integrating vector search with metadata, especially in retrieval-augmented generation (RAG) pipelines. Vector search and metadata enable faster and more accurate data retrieval. However, the process of pre- and post-search filtering results plays a crucial role in ensuring data relevance.
The Vector Search and Metadata Challenge
In a typical vector search, you create embeddings from chunks of text, such as a PDF document. These embeddings allow the system to search for similar items and retrieve them based on relevance. The challenge, however, arises when you need to combine vector search results with structured metadata. For example, you may have timestamped text-based content and want to retrieve the most relevant content within a specific date range. This is where metadata becomes critical in refining search results.
Unfortunately, most vector databases treat metadata as a secondary feature, isolating it from the primary vector search process. As a result, handling queries that combine vectors and metadata can become a challenge, particularly when the search needs to account for a dynamic range of filters, such as dates or other structured data.
LibSQL and Vector Search Metadata
LibSQL is a general-purpose SQLite-based database that adds vector capabilities to regular data. Vectors are stored as blob columns of ordinary tables, which makes vector embeddings and metadata first-class citizens and naturally enables deep integration of these data points.
```sql
create table if not exists conversation (
  id varchar(36) primary key not null,
  startDate real,
  endDate real,
  summary text,
  vectorSummary F32_BLOB(512)
);
```
It solves the challenge of metadata and vector search and eliminates impedance between vector data and regular structured data points in the same storage.
As you can see, you can access vector-like data and start date in the same query.
```sql
select c.id, c.startDate, c.endDate, c.summary,
       vector_distance_cos(c.vectorSummary, vector(${vector})) as distance
from conversation
where 1=1
  ${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
  ${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
  ${distance ? `and distance <= ${distance}` : ''}
order by distance
limit ${top};
```
(The `where 1=1` anchor keeps the query valid when the optional filters render as empty strings. Note also that SQLite typically rejects the `distance` alias inside `where`, so in practice that last filter needs the full expression or a wrapping subquery, as in the CTE version further below.)
Computing vector_distance_cos as distance gives us a primitive vector search: it performs a full scan and calculates the distance for every row. We could optimize it with a CTE that limits the search and distance calculations to a much smaller subset of the data.
This approach can be computationally intensive and break down on large amounts of data.
LibSQL offers a far more effective vector search based on its FlashDiskANN vector index.
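For completeness, the index used below has to be created first. A minimal sketch, assuming LibSQL's documented `libsql_vector_idx` syntax and the `conversation` table defined above:

```sql
-- Build a DiskANN-based vector index over the embedding column.
create index if not exists idx_conversation_vectorSummary
  on conversation (libsql_vector_idx(vectorSummary));
```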
```sql
vector_top_k('idx_conversation_vectorSummary', ${vector}, ${top}) i
```
vector_top_k is a table function that returns the top matches from the newly created vector index. As you can see, only the vector can be passed as a function parameter; other columns have to be used outside of the table function. So, to combine a vector index with other columns, we need to apply some strategies.
Now we face the classic problem of integrating vector search results with metadata queries.
Post-Filtering: A Common Approach
The most widely adopted method in these pipelines is post-filtering. In this approach, the system first retrieves data based on vector similarities and then applies metadata filters. For example, imagine you’re conducting a vector search to retrieve conversations relevant to a specific question. Still, you also want to ensure these conversations occurred in the past week.
Post-filtering allows the system to retrieve the most relevant vector-based results and subsequently filter out any that don’t meet the metadata criteria, such as date range. This method is efficient when vector similarity is the primary factor driving the search, and metadata is only applied as a secondary filter.
```typescript
const sqlQuery = `
  select c.id, c.startDate, c.endDate, c.summary,
         vector_distance_cos(c.vectorSummary, vector(${vector})) as distance
  from vector_top_k('idx_conversation_vectorSummary', ${vector}, ${top}) i
  inner join conversation c on i.id = c.rowid
  where 1=1
    ${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
    ${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
    ${distance ? `and distance <= ${distance}` : ''}
  order by distance
  limit ${top};`;
```
However, there are some limitations. For example, the initial vector search may yield fewer results than expected, or omit relevant data, before the metadata filter is even applied. If the search window is narrow enough, this can lead to incomplete results.
One working strategy is to make the top value in vector_top_k much bigger. Be careful, though, as the function's default maximum number of results is around 200 rows.
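To make the post-filtering flow concrete, here is a minimal sketch of running that query with the @libsql/client package. The vector-literal formatting, the local file URL, and the 20x over-fetch of the index (to counter the omission problem above) are my assumptions, not part of the original post:

```typescript
import { createClient } from "@libsql/client";

const client = createClient({ url: "file:conversations.db" });

// Render a JS embedding as a libsql vector literal, e.g. '[0.1,0.2,...]'
const toVectorLiteral = (embedding: number[]) => `'[${embedding.join(",")}]'`;

async function searchConversations(
  embedding: number[],
  top = 10,
  startDate?: Date
) {
  const vector = toVectorLiteral(embedding);
  // Over-fetch from the index so the metadata filter is less likely
  // to leave us with fewer than `top` rows.
  const sql = `
    select c.id, c.startDate, c.endDate, c.summary,
           vector_distance_cos(c.vectorSummary, vector(${vector})) as distance
    from vector_top_k('idx_conversation_vectorSummary', vector(${vector}), ${top * 20}) i
    inner join conversation c on i.id = c.rowid
    where 1=1
      ${startDate ? `and c.startDate >= ${startDate.getTime()}` : ""}
    order by distance
    limit ${top};`;
  const result = await client.execute(sql);
  return result.rows;
}
```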
Pre-Filtering: A More Complex Approach
Pre-filtering is a more intricate approach but can be more effective in some instances. In pre-filtering, metadata is used as the primary filter before vector search takes place. This means that only data that meets the metadata criteria is passed into the vector search process, limiting the scope of the search right from the beginning.
While this approach can significantly reduce the amount of irrelevant data in the final results, it comes with its own challenges. For example, pre-filtering requires a deeper understanding of the data structure and may necessitate denormalizing the data or creating separate pre-filtered tables. This can be resource-intensive and, in some cases, impractical for dynamic metadata like date ranges.
In certain use cases, pre-filtering might outperform post-filtering. For instance, when the metadata (e.g., specific date ranges) is the most important filter, pre-filtering ensures the search is conducted only on the most relevant data.
Pre-filtering with distance-based filtering
So we come back to an old concept: we pre-filter the data instead of using a vector index.
```sql
WITH FilteredDates AS (
  SELECT c.id, c.startDate, c.endDate, c.summary, c.vectorSummary
  FROM conversation c
  WHERE 1=1
    ${startDate ? `AND c.startDate >= ${startDate.getTime()}` : ''}
    ${endDate ? `AND c.endDate <= ${endDate.getTime()}` : ''}
),
DistanceCalculation AS (
  SELECT fd.id, fd.startDate, fd.endDate, fd.summary, fd.vectorSummary,
         vector_distance_cos(fd.vectorSummary, vector(${vector})) AS distance
  FROM FilteredDates fd
)
SELECT dc.id, dc.startDate, dc.endDate, dc.summary, dc.distance
FROM DistanceCalculation dc
WHERE 1=1
  ${distance ? `AND dc.distance <= ${distance}` : ''}
ORDER BY dc.distance
LIMIT ${top};
```
This makes sense if the filter produces a small data set, so the distance calculation runs over far fewer rows.
On the plus side, you have full control over the data and get complete results, without the omissions that are typical of large index searches.
Choosing Between Pre and Post-Filtering
Both pre-filtering and post-filtering have their advantages and disadvantages. Post-filtering is easier to implement, especially when vector similarity is the primary search factor, but it can lead to incomplete results. Pre-filtering, on the other hand, can yield more accurate results but requires more complex data handling and optimization.
In practice, many systems combine both strategies, depending on the query. For example, they might start with a broad pre-filtering based on metadata (like date ranges) and then apply a more targeted vector search with post-filtering to refine the results further.
Conclusion
Vector search with metadata filtering offers a powerful approach for handling large-scale data retrieval in LLMs and RAG pipelines. Whether you choose pre-filtering or post-filtering—or a combination of both—depends on your application's specific requirements. As vector databases continue to evolve, future innovations that combine these two approaches more seamlessly will help improve data relevance and retrieval efficiency further.
-
@ 0c87e540:8a0292f2
2025-05-04 01:52:26In today's digital entertainment landscape, platforms that offer convenience, security, and a wide range of games are increasingly valued by users. KCBET emerges as one of these options, winning over the Brazilian public with a modern, intuitive proposal that combines cutting-edge technology, game variety, and efficient user support.
From the very first visit, KCBET impresses with its friendly design and easy navigation. The site is fully optimized to work on both computers and mobile devices, giving users the freedom to play wherever and whenever they want. Another important differentiator is the Portuguese-language interface, which makes the experience even more personalized for the Brazilian public.
KCBET stands out for offering a secure environment, using advanced encryption technologies to protect user data. In addition, the registration process is simple and fast, allowing new players to begin their journey in just a few minutes.
A Variety of Games for Every Profile. KCBET's big draw is undoubtedly the diversity of games available. The platform features an impressive selection of options for all tastes and styles. Fans of traditional games will find several versions of established titles, while those who prefer novelty and innovation have modern options with state-of-the-art graphics and interactive features.
Highlights include table games, card games, virtual slots, and live-dealer options, an experience that brings players close to the thrill of a physical venue without leaving home. All games come from renowned developers, which guarantees visual quality, smoothness, and, above all, fair and audited results.
The Player Experience: Immersion and Support. The user experience at KCBET goes beyond the games. The platform continually invests in an immersive environment, with attractive promotional bonuses, themed tournaments, and weekly challenges that keep the excitement high. For new users, welcome offers make the start of the journey even more thrilling.
Customer service is another strong point of KCBET. Support is available 24 hours a day, seven days a week, ready to assist with any question or situation. Service channels include live chat, email, and even support via messaging apps, guaranteeing speed and efficiency in resolving problems.
KCBET also offers a complete payment platform, with several deposit and withdrawal methods, including local options such as PIX and boletos. Transactions are fast, secure, and hassle-free, contributing to an even more satisfying experience.
Conclusion: KCBET Is the Right Choice for Brazilian Players. With a modern, secure proposal focused on quality entertainment, KCBET establishes itself as one of the best options for Brazilian players seeking online fun with convenience. Its wide range of games, attractive promotions, efficient service, and support for national payment methods make the platform a smart choice for anyone looking to turn leisure time into memorable experiences.
-
@ 3bf0c63f:aefa459d
2024-09-06 12:49:46Nostr: a quick introduction, attempt #2
Nostr doesn't subscribe to any ideals of "free speech" as these belong to the realm of politics and assume a big powerful government that enforces a common rule upon everybody else.
Nostr instead is much simpler, it simply says that servers are private property and establishes a generalized framework for people to connect to all these servers, creating a true free market in the process. In other words, Nostr is the public road that each market participant can use to build their own store or visit others and use their services.
(Of course a road is never truly public, in normal cases it's ran by the government, in this case it relies upon the previous existence of the internet with all its quirks and chaos plus a hand of government control, but none of that matters for this explanation).
More concretely speaking, Nostr is just a set of definitions of the formats of the data that can be passed between participants and their expected order, i.e. messages between clients (i.e. the program that runs on a user computer) and relays (i.e. the program that runs on a publicly accessible computer, a "server", generally with a domain-name associated) over a type of TCP connection (WebSocket) with cryptographic signatures. This is what is called a "protocol" in this context, and upon that simple base multiple kinds of sub-protocols can be added, like a protocol for "public-square style microblogging", "semi-closed group chat" or, I don't know, "recipe sharing and feedback".
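To make that concrete, here is a minimal sketch of the client-relay exchange as defined in NIP-01 (the relay URL is a placeholder):

```typescript
// Subscribe to a relay and print incoming text notes (kind 1).
const ws = new WebSocket("wss://relay.example.com");

ws.onopen = () => {
  // ["REQ", <subscription id>, <filter>] asks the relay for matching events.
  ws.send(JSON.stringify(["REQ", "my-sub", { kinds: [1], limit: 10 }]));
};

ws.onmessage = (msg) => {
  const data = JSON.parse(msg.data);
  if (data[0] === "EVENT") {
    // Each event is a signed JSON object with fixed fields:
    // id, pubkey, created_at, kind, tags, content, sig.
    const [, subId, event] = data;
    console.log(subId, event.kind, event.content);
  }
};
```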
-
@ da18e986:3a0d9851
2024-08-14 13:58:24After months of development I am excited to officially announce the first version of DVMDash (v0.1). DVMDash is a monitoring and debugging tool for all Data Vending Machine (DVM) activity on Nostr. The website is live at https://dvmdash.live and the code is available on Github.
Data Vending Machines (NIP-90) offload computationally expensive tasks from relays and clients in a decentralized, free-market manner. They are especially useful for AI tools, algorithmic processing of user’s feeds, and many other use cases.
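For orientation, a job request under NIP-90 is an ordinary signed Nostr event whose kind falls in the 5000-5999 range. Below is a rough sketch of the unsigned fields of a text-generation request (kind 5050); the tag values are illustrative assumptions, not output from any real DVM:

```typescript
// Skeleton of a NIP-90 job request (id, pubkey, and sig omitted;
// they are added when the event is signed).
const jobRequest = {
  kind: 5050, // text-generation job type
  created_at: Math.floor(Date.now() / 1000),
  content: "",
  tags: [
    ["i", "Write a haiku about data vending machines", "text"], // job input
    ["output", "text/plain"], // requested output format
    ["bid", "10000"], // offered payment, in millisats
    ["relays", "wss://relay.example.com"], // where results should be sent
  ],
};
```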
The long term goal of DVMDash is to become 1) a place to easily see what’s happening in the DVM ecosystem with metrics and graphs, and 2) provide real-time tools to help developers monitor, debug, and improve their DVMs.
DVMDash aims to enable users to answer these types of questions at a glance:
* What’s the most popular DVM right now?
* How much money is being paid to image generation DVMs?
* Is any DVM down at the moment? When was the last time that DVM completed a task?
* Have any DVMs failed to deliver after accepting payment? Did they refund that payment?
* How long does it take this DVM to respond?
* For task X, what’s the average amount of time it takes for a DVM to complete the task?
* … and more
For developers working with DVMs there is now a visual, graph based tool that shows DVM-chain activity. DVMs have already started calling other DVMs to assist with work. Soon, we will have humans in the loop monitoring DVM activity, or completing tasks themselves. The activity trace of which DVM is being called as part of a sub-task from another DVM will become complicated, especially because these decisions will be made at run-time and are not known ahead of time. Building a tool to help users and developers understand where a DVM is in this activity trace, whether it’s gotten stuck or is just taking a long time, will be invaluable. For now, the website only shows 1 step of a dvm chain from a user's request.
One of the main designs for the site is that it is highly clickable, meaning whenever you see a DVM, Kind, User, or Event ID, you can click it and open that up in a new page to inspect it.
Another aspect of this website is that it should be fast. If you submit a DVM request, you should see it in DVMDash within seconds, as well as events from DVMs interacting with your request. I have attempted to obtain DVM events from relays as quickly as possible and compute metrics over them within seconds.
This project makes use of a nosql database and graph database, currently set to use mongo db and neo4j, for which there are free, community versions that can be run locally.
Finally, I’m grateful to nostr:npub10pensatlcfwktnvjjw2dtem38n6rvw8g6fv73h84cuacxn4c28eqyfn34f for supporting this project.
Features in v0.1:
Global Network Metrics:
This page shows the following metrics:
- DVM Requests: Number of unencrypted DVM requests (kind 5000-5999)
- DVM Results: Number of unencrypted DVM results (kind 6000-6999)
- DVM Request Kinds Seen: Number of unique kinds in the kind range 5000-5999 (except for known non-DVM kinds 5666 and 5969)
- DVM Result Kinds Seen: Number of unique kinds in the kind range 6000-6999 (except for known non-DVM kinds 6666 and 6969)
- DVM Pub Keys Seen: Number of unique pub keys that have written a kind 6000-6999 event (except for known non-DVM kinds) or have published a kind 31990 event that specifies a ‘k’ tag value between 5000-5999
- DVM Profiles (NIP-89) Seen: Number of kind 31990 events that have a ‘k’ tag value for kinds 5000-5999
- Most Popular DVM: The DVM that has produced the most result events (kind 6000-6999)
- Most Popular Kind: The kind in range 5000-5999 that has the most requests by users
- 24 hr DVM Requests: Number of kind 5000-5999 events created in the last 24 hours
- 24 hr DVM Results: Number of kind 6000-6999 events created in the last 24 hours
- 1 week DVM Requests: Number of kind 5000-5999 events created in the last week
- 1 week DVM Results: Number of kind 6000-6999 events created in the last week
- Unique Users of DVMs: Number of unique pubkeys of kind 5000-5999 events
- Total Sats Paid to DVMs:
  - This is an estimate.
  - This value is likely a lower bound, as it does not take into consideration subscriptions paid to DVMs.
  - It is calculated by counting the values of all invoices where:
    - A DVM published a kind 7000 event requesting payment and containing an invoice
    - The DVM later provided a DVM Result for the same job for which it requested payment
    - The assumption is that the invoice was paid, otherwise the DVM would not have done the work
  - Note that because there are multiple ways to pay a DVM, such as lightning invoices, ecash, and subscriptions, there is no guaranteed way to know whether a DVM has been paid. Additionally, there is no way to know that a DVM completed the job, because some DVMs may not publish a final result event and instead send the user a DM or take some other kind of action.
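The pairing logic behind that estimate can be sketched roughly as follows; this is an illustrative reconstruction under assumed NIP-90 tag shapes, not the actual DVMDash code:

```typescript
interface NostrEvent {
  kind: number;
  pubkey: string;
  tags: string[][];
}

// Count an invoice only when the DVM that requested payment (kind 7000)
// later published a result (kind 6000-6999) for the same job request.
function estimateSatsPaid(events: NostrEvent[]): number {
  const tag = (e: NostrEvent, name: string) =>
    e.tags.find((t) => t[0] === name)?.[1];
  const delivered = new Set(
    events
      .filter((e) => e.kind >= 6000 && e.kind <= 6999)
      .map((e) => `${e.pubkey}:${tag(e, "e")}`)
  );
  let msats = 0;
  for (const e of events) {
    if (e.kind !== 7000) continue;
    const jobId = tag(e, "e");
    const amount = tag(e, "amount"); // millisats per NIP-90
    if (jobId && amount && delivered.has(`${e.pubkey}:${jobId}`)) {
      msats += Number(amount);
    }
  }
  return Math.floor(msats / 1000); // convert msats to sats
}
```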
Recent Requests:
This page shows the most recent 3 events per kind, sorted by created date. You should always be able to find the last 3 events here of all DVM kinds.
DVM Browser:
This page will either show a profile of a specific DVM, or when no DVM is given in the url, it will show a table of all DVMs with some high level stats. Users can click on a DVM in the table to load the DVM specific page.
Kind Browser:
This page will either show data on a specific kind including all DVMs that have performed jobs of that kind, or when no kind is given, it will show a table summarizing activity across all Kinds.
Debug:
This page shows the graph based visualization of all events, users, and DVMs involved in a single job as well as a table of all events in order from oldest to newest. When no event is given, this page shows the 200 most recent events where the user can click on an event in order to debug that job. The graph-based visualization allows the user to zoom in and out and move around the graph, as well as double click on any node in the graph (except invoices) to open up that event, user, or dvm in a new page.
Playground:
This page is currently under development and may not work at the moment. If it does work, in its current state you can log in with a NIP-07 extension and broadcast a kind 5050 event with some text, and the page will then show you events from DVMs. This page will be used to interact with DVMs live. A good current alternative to this feature, for some but not all kinds, is https://vendata.io/.
Looking to the Future
I originally built DVMDash out of Fear-of-Missing-Out (FOMO); I wanted to make AI systems that were comprised of DVMs but my day job was taking up a lot of my time. I needed to know when someone was performing a new task or launching a new AI or Nostr tool!
I have a long list of DVMs and Agents I hope to build and I needed DVMDash to help me do it; I hope it helps you achieve your goals with Nostr, DVMs, and even AI. To this end, I wish for this tool to be useful to others, so if you would like a feature, please submit a git issue here or note me on Nostr!
Immediate Next Steps:
- Refactoring code and removing code that is no longer used
- Improve documentation to run the project locally
- Adding a metric for number of encrypted requests
- Adding a metric for number of encrypted results
Long Term Goals:
- Add more metrics based on community feedback
- Add plots showing metrics over time
- Add support for showing a multi-dvm chain in the graph based visualizer
- Add a real-time mode where the pages will auto update (currently the user must refresh the page)
- ... Add support for user requested features!
Acknowledgements
There are some fantastic people working in the DVM space right now. Thank you to nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet for making python bindings for nostr_sdk and for the recent asyncio upgrades! Thank you to nostr:npub1nxa4tywfz9nqp7z9zp7nr7d4nchhclsf58lcqt5y782rmf2hefjquaa6q8 for answering lots of questions about DVMs and for making the nostrdvm library. Thank you to nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft for making the original DVM NIP and vendata.io which I use all the time for testing!
P.S. I rushed to get this out in time for Nostriga 2024; code refactoring will be coming :)
-
@ 04c915da:3dfbecc9
Bitcoin has always been rooted in freedom and resistance to authority. I get that many of you are conflicted about the US Government stacking, but by design we cannot stop anyone from using bitcoin. Many have asked me for my thoughts on the matter, so let’s rip it.
Concern
One of the most glaring issues with the strategic bitcoin reserve is its foundation, built on stolen bitcoin. For those of us who value private property this is an obvious betrayal of our core principles. Rather than proof of work, the bitcoin that seeds this reserve has been taken by force. The US Government should return the bitcoin stolen from Bitfinex and the Silk Road.
Using stolen bitcoin for the reserve creates a perverse incentive. If governments see bitcoin as a valuable asset, they will ramp up efforts to confiscate more of it. The precedent is a major concern, and I stand strongly against it, but it should also be noted that governments were already seizing coin before the reserve, so this is not really a change in policy.
Ideally all seized bitcoin should be burned, by law. This would align incentives properly and make it less likely for the government to actively increase coin seizures. Due to the truly scarce properties of bitcoin, all burned bitcoin helps existing holders through increased purchasing power regardless. This change would be unlikely but those of us in policy circles should push for it regardless. It would be best case scenario for American bitcoiners and would create a strong foundation for the next century of American leadership.
Optimism
The entire point of bitcoin is that we can spend or save it without permission. That said, it is a massive benefit to not have one of the strongest governments in human history actively trying to ruin our lives.
Since the beginning, bitcoiners have faced horrible regulatory trends. KYC, surveillance, and legal cases have made using bitcoin and building bitcoin businesses incredibly difficult. It is incredibly important to note that over the past year that trend has reversed for the first time in a decade. A strategic bitcoin reserve is a key driver of this shift. By holding bitcoin, the strongest government in the world has signaled that it is not just a fringe technology but rather truly valuable, legitimate, and worth stacking.
This alignment of incentives changes everything. The US Government stacking proves bitcoin’s worth. The resulting purchasing power appreciation helps all of us who are holding coin and as bitcoin succeeds our government receives direct benefit. A beautiful positive feedback loop.
Realism
We are trending in the right direction. A strategic bitcoin reserve is a sign that the state sees bitcoin as an asset worth embracing rather than destroying. That said, there is a lot of work left to be done. We cannot be lulled into complacency, the time to push forward is now, and we cannot take our foot off the gas. We have a seat at the table for the first time ever. Let's make it worth it.
We must protect the right to free usage of bitcoin and other digital technologies. Freedom in the digital age must be taken and defended, through both technical and political avenues. Multiple privacy focused developers are facing long jail sentences for building tools that protect our freedom. These cases are not just legal battles. They are attacks on the soul of bitcoin. We need to rally behind them, fight for their freedom, and ensure the ethos of bitcoin survives this new era of government interest. The strategic reserve is a step in the right direction, but it is up to us to hold the line and shape the future.
-
@ e516ecb8:1be0b167
2025-05-04 01:45:38The afternoon sun slanted across a field of tall grass, tinting it gold and red. On one side, a disciplined formation of men in leather-and-metal armor gleamed in the light. They were Roman legionaries, each carrying a scutum, the great rectangular shield, and a short, lethal gladius. They moved as a single entity, a wall of shields bristling with spear points jutting over the top.
On the other side of the field waited a more dispersed but equally imposing force. They were samurai, warriors clad in intricately designed lacquered armor. In their hands, the gleaming curves of katanas reflected the setting sun. Their presence was less that of a compact mass and more one of contained tension, like predators ready to pounce.
The silence broke when a Roman officer raised his signum, a standard bearing the imperial eagle. In unison, the legionaries advanced at a steady pace, their sandals digging into the earth. They shouted their war cry, a guttural roar that echoed across the field.
The samurai watched the relentless advance. Their leader, a man with a serene face and a scar across his cheek, drew his katana in one fluid, silent motion. The blade flashed brilliantly. With a sharp cry, he gave the order to attack.
The battle began with a violent clash. The legionaries, shields interlocked, formed an impenetrable wall. The samurai hurled themselves against it, their katanas tracing arcs of steel through the air. The clash of metal on metal filled the field, a strident chorus of war.
One samurai, agile as a cat, tried to leap over the shield wall. But a legionary, quick and well trained, met him with a precise thrust of his gladius that found a gap in the armor. The samurai fell, his blood staining the grass.
Another samurai, with a furious cry, unleashed a horizontal cut with his katana. The blow struck a scutum, leaving a deep gash in the wood and metal, but the shield held. Before he could recover his weapon, the legionary behind the shield dealt him a swift gladius blow to his unprotected side.
The Roman formation was an efficient killing machine. The legionaries worked as a team, protecting one another with their shields and striking with their gladii at opportune moments. Discipline and training were their greatest weapons.
Yet the ferocity and individual skill of the samurai were undeniable. Their katanas, though unable to easily breach the solid wall of shields, were devastating in open spaces. One samurai managed to flank a group of legionaries and, with quick, precise movements, severed arms and legs, sowing chaos in the Roman rear.
The battle became a whirlwind of steel and screams. The legionaries held their formation, advancing slowly while repelling attacks. The samurai, though taking losses, did not retreat, driven by their honor and courage.
At a crucial moment, a group of samurai led by their commander managed to concentrate their attacks on one sector of the Roman line. With repeated, ferocious blows they broke the formation, opening a breach. They poured through it, their katanas thirsty for blood.
Roman discipline wavered for a moment. The samurai, seizing the opportunity, fought hand to hand with indomitable fury. The length of their katanas gave them the advantage in single combat, letting them hold the legionaries at bay with wide, lethal cuts.
But the Roman response was swift. Officers shouted orders, and the lines closed again, surrounding the samurai who had penetrated the formation. The legionaries, working in pairs, pinned the samurai's outstretched arms with their shields while others delivered killing blows with their gladii.
The battle raged on for what seemed an eternity. The sun finally sank below the horizon, painting the battlefield in dark shadows and bloody glints. Both sides fought with fierce determination, yielding no ground easily.
In the end, the discipline and compact formation of the legionaries began to prevail. Slowly but steadily, they encircled and wore down the samurai. The shield wall was too solid, and the constant rain of gladius thrusts was relentless.
The last samurai fought with the desperation of men who know their end is near. Their katanas still cut with deadly grace, but they were outnumbered and outmatched by group tactics. One by one they fell, their shining swords stained with blood.
When the last katana fell to the ground with a metallic ring, a heavy silence settled over the field. The legionaries, exhausted but victorious, remained in formation, their shields dripping blood. They had prevailed thanks to their discipline, their equipment, and their group tactics. The individual ferocity and katana mastery of the samurai had not been enough against the Roman war machine.
Night covered the battlefield, carrying away the echoes of the fight and leaving behind only the grim reality of victory and defeat.
-
@ 79be667e:16f81798
2025-05-04 01:28:53Bla blablabla
-
@ b8a9df82:6ab5cbbd
2025-03-06 22:39:15Last week at Bitcoin Investment Week in New York City, hosted by Anthony Pompliano, Jack Mallers walked in wearing sneakers and a T-shirt, casually dropping, “Man… I hate politics.”
That was it. That was the moment I felt aligned again. That’s the energy I came for. No suits. No corporate jargon. Just a guy who gets it—who cares about people, bringing Bitcoin-powered payments to the masses and making sure people can actually use it.
His presence was a reminder of why we’re here in the first place. And his words—“I hate politics”—were a breath of fresh air.
Now, don’t get me wrong. Anthony was a fantastic host. His ability to mix wittiness, playfulness, and seriousness made him an entertaining moderator. But this week was unlike anything I’ve ever experienced in the Bitcoin ecosystem.
One of the biggest letdowns was the lack of interaction. No real Q&A sessions, no direct engagement, no real discussions. Just one fireside chat after another.
And sure, I get it—people love to hear themselves talk. But where were the questions? The critical debates? The chance for the audience to actually participate?
I’m used to Bitcoin meetups and conferences where you walk away with new ideas, new friends, and maybe even a new project to contribute to. Here, it was more like sitting in an expensive lecture hall, watching a lineup of speakers tell us things we already know.
A different vibe—and not in a good way
Over the past few months, I’ve attended nearly ten Bitcoin conferences, each leaving me feeling uplifted, inspired, and ready to take action. But this? This felt different. And not in a good way.
If this had been my first Bitcoin event, I might have walked away questioning whether I even belonged here. It wasn’t Prague. It wasn’t Riga. It wasn’t the buzzing, grassroots, pleb-filled gatherings I had grown to love. Instead, it felt more like a Wall Street networking event disguised as a Bitcoin conference.
Maybe it was the suits.
Or the fact that I was sitting in a room full of investors who have no problem dropping $1,000+ on a ticket.
Or that it reminded me way too much of my former life—working as a manager in London’s real estate industry, navigating boardrooms full of finance guys in polished shoes, talking about “assets under management.”
Bitcoin isn’t just an investment thesis. It’s a revolution. A movement. And yet, at times during this week, I felt like I was back in my fiat past, stuck in a room where people measured success in dollars, not in freedom.
Maybe that’s the point. Bitcoin Investment Week was never meant to be a pleb gathering.
That said, the week did have some bright spots. PubKey was a fantastic kickoff. That was real Bitcoin culture—plebs, Nostr, grassroots energy. People who actually use Bitcoin, not just talk about it.
But the absolute highlight? Jack Mallers, sneakers and all, cutting through the noise with his authenticity.
So, why did we even go?
Good question. Maybe it was curiosity. Maybe it was stepping out of our usual circles to see Bitcoin through a different lens. Maybe it was to remind ourselves why we chose this path in the first place.
Would I go again? Probably not.
Would I trade Prague, Riga, bitcoin++ or any of the grassroots Bitcoin conferences for this? Not a chance.
At the end of the day, in my opinion, Bitcoin doesn't belong to Wall Street. It belongs to the people who actually use it. And those are the people I want to be around.
-
@ 5df413d4:2add4f5b
2025-05-04 00:51:49Short photo-stories of the hidden, hard to find, obscure, and off the beaten track.
Come now, take a walk with me…
The Traveller 01: Ku/苦 Bar
Find a dingy, nondescript alley in a suspiciously quiet corner of Bangkok’s Chinatown at night. Walk down it. Pass the small prayer shrine that houses the angels who look over these particular buildings and approach an old wooden door. You were told that there is a bar here, yet as of now nothing suggests that this is so…
Wait! A closer inspection reveals a simple bronze plaque, out of place for its polish and tended upkeep, “cocktails 3rd floor.” Up the stairs then! The landing on floor 3 presents a white sign with the Chinese character for bitter, ku/苦, and a red arrow pointing right.
Pass through the threshold, enter a new space. To your right, a large expanse of barren concrete, an empty “room.” Tripods for…some kind of filming? A man-sized, locked container. Yet, you did not come here to ask questions, such things are none of your business!
And to your left, you find the golden door. Approach. Enter. Be greeted. You have done well! You have found it. 苦 Bar. You are among friends now. Inside exudes deep weirdness - in the etymological sense - the bending of destinies, control of the fates. And for the patrons, a quiet yet social place, a sensual yet sacred space.
Ethereal sounds, like forlorn whale songs, fill the air, a strange music for an even stranger magic. But, Taste! Taste is the order of the day! Fragrant, Bizarre, Obscure, Dripping and Arcane. Here you find a most unique use of flavor, flavors myriad and manifold, flavors beyond name. Buddha’s hand, burnt cedar charcoal, ylang ylang, strawberry leaf, maybe wild roots brought in by some friendly passerby, and many, many other things. So, Taste! The drinks here, libations even, are not so much to be liked or disliked; rather, they are liquid context, experience to be embraced with a curious mind and a soul freed from judgment.
And In the inner room, one may find another set of stairs. Down this time. Leading to the second place - KANGKAO. A natural wine bar, or so they say. Cozy, botanical, industrial, enclosed. The kind of private setting where you might overhear Bangkok’s resident “State Department,” “UN,” and “NGO” types chatting auspiciously in both Mandarin and English with their Mainland Chinese counterparts. But don’t look hard or listen too long! Surely, there’s no reason to be rude… Relax, relax, you are amongst friends now.
**苦 Bar. Bangkok, circa 2020. There are secrets to be found. Go there.**
#Plebchain #Bitcoin #NostrArt #ArtOnNostr #Writestr #Createstr #NostrLove #Travel #Photography #Art #Story #Storytelling #Nostr #Zap #Zaps #Bangkok #Thailand #Siamstr
-
@ 3c984938:2ec11289
2024-07-22 11:43:17Welcome to Nostr!
Introduction
Is this your first time here on Nostr? Welcome! Nostr is a quirky acronym for "Notes and Other Stuff Transmitted by Relays," with a single goal: censorship resistance. It is an alternative to traditional social media, communications, blogging, streaming, podcasting, and eventually email (in development), with decentralized features that empower you, the user. You will never be bothered by an ad, captured by a centralized entity, or monetized by an algorithm.
Allow me to be your host! I'm Onigiri! I am exploring the world of Nostr, a decentralized communication protocol. I write about the tools and the amazing developers of Nostr who bring this realm to life.
Welcome to Nostr Wonderland
You are about to enter another digital world that will blow your mind with all the decentralized applications, clients, and sites you can use. You will never look at communications or social media the same way again. All thanks to the cryptographic nature of nostr, inspired by blockchain technology. Every user, upon creating a Nostr account, receives a key pair: one private and one public. These are the keys to your own kingdom. Whatever you write, sing, record, or create - it all belongs to you.
A Pair of Gold and Silver Keys
A friend and I call this "identity through encryption" because your identity is encrypted. You can share your silver key, "npub," with other users to connect and follow. Use your gold key, "nsec," to access your account and unlock many applications. Keep that key safe at all times. There is no longer any reason to be caged by the terms of social platforms ever again.
Onigirl
npub18jvyjwpmm65g8v9azmlvu8knd5m7xlxau08y8vt75n53jtkpz2ys6mqqu3
Don't have a client yet? Pick the best option.
Find the app that's right for you! Use your gold key, "nsec," to access these wonderful tools. You can also visit this page to see all the applications. Before pasting your gold key into lots of applications, consider a "signer" for web3 sites. Please see the following image for more details. Also see the legend.
Get a Signer extension via chrome webstore
A signer is a web browser extension. Nos2x and NostrConnect are widely accepted extensions for accessing Nostr. They simplify the process of logging into "web3" sites. Instead of copying and pasting your gold key, "nsec," every time, you keep it stored in the extension and grant the extension permission to access Nostr.
👉⚡⚡Get a Bitcoin Lightning wallet to send/receive Zaps⚡⚡ (This is optional)
Here on Nostr, we use Bitcoin's Lightning network (L2). You will need a Lightning wallet to send and receive Satoshis, the smallest denomination of a Bitcoin. (0.00000001 BTC) "Zaps" are a type of micropayment on Nostr. If you like a user's content, it is customary to leave them a tip in the form of a "zap." For example, if you like this content, you can zap me some Satoshis to reward my work. But you just got here, so you don't have a wallet yet. Don't worry, I can help with that!
"Stacker.News" is a platform where users can earn SATs for publishing articles and interacting with others.
Stacker.News is the easiest place to get a Bitcoin Lightning wallet address.
- Log in with your signer extension - Nos2x or NostrConnect - and click on your profile, a code of letters and numbers in the top right corner. You will see something like this
- Click "edit" and pick a name you like. You can change it later if you wish.
- Click "save"
- Create a bio; the SN community is very welcoming. They will send you satoshis to welcome you.
- Your new Bitcoin Lightning wallet address will appear like this
^^Don't send "zaps" to this address; it is purely for educational purposes.
- With your new Bitcoin Lightning wallet address, you can add it to any client or app of your choice. To do so, go to your profile page and, under the wallet address field labeled "Lightning Address," enter your new address, hit "save," and that's it. Congratulations!
👉✨Over time, you may want to move to self-custody options and perhaps even consider self-hosting your own LN node for better privacy. The good news is that stacker.news is also moving away from being a custodial wallet.
⭐NIP-05 - DNS identity⭐ Just like on Twitter, a checkmark shows that you come from the same garden "as a human," and are not an outlier like a weed or "bot" - but not in the nefarious way Big Tech does it. In Nostr Wonderland, this lets you map your silver key, "npub," to a DNS identifier. Once verified, you can shout to announce your new Nostr residence and share it.
✨There are plenty of options, but if you have followed the steps, this becomes extremely easy.
👉✅Click on your "Profile," then "Settings," scroll to the bottom, paste your silver key, "npub," and click "Save" - done! Use your Stacker.news Lightning wallet as your NIP-05. Congratulations!!! You are now verified! Give it a few hours, and when you use your "main" client you should see a checkmark.
Nostr, the informant of servers
Instead of using a single instance or a centralized server, Nostr is built so that multiple databases exchange messages through "relays." Relays, which are neutral and non-discriminatory, store and broadcast public messages on the Nostr network. They transmit messages to all the other clients connected to them, securing communications on the decentralized network.
My friends on Nostr welcome you!
Welcome to the party. Would you care for some tea?🍵
There's so much more!
This is the tip of the iceberg. Follow me as I keep exploring new lands and the developers, the knights who power this ecosystem. Find me here for more content like this and share it with other nostr users. Meet the knights fighting for freedomTech on Nostr and the projects they contribute to in order to make it a reality.💋
Onigirl @npub18jvyjwpmm65g8v9azmlvu8knd5m7xlxau08y8vt75n53jtkpz2ys6mqqu3
🧡😻This guide was carefully translated by miggymofongo
You can follow her here. @npub1ajt9gp0prf4xrp4j07j9rghlcyukahncs0fw5ywr977jccued9nqrcc0cs
website
-
@ 84b0c46a:417782f5
2025-05-04 10:00:28₍ ・ᴗ・ ₎ ₍ ・ᴗ・ ₎₍ ・ᴗ・ ₎
-
@ 5df413d4:2add4f5b
2025-05-04 00:06:31This opinion piece was first published in BTC Magazine on Feb 20, 2023
Just in case we needed a reminder, banks are showing us that they can and will gatekeep their customers’ money to prevent them from engaging with bitcoin. This should be a call to action for Bitcoiners or anyone else who wants to maintain control over their finances to move toward more proactive use of permissionless bitcoin tools and practices.
Since January of 2023, when Jamie Dimon decried Bitcoin as a “hyped-up fraud” and “a pet rock” on CNBC, I've found myself unable to purchase bitcoin using my Chase debit card on Cash App. And I'm not the only one — if you have been following Bitcoin Twitter, you might have also seen Alana Joy tweet about her experience with the same. (Alana Joy's Twitter account has since been deleted.)
In both of our cases, it is the bank preventing bitcoin purchases and blocking inbound fiat transfers to Cash App for customers that it has associated with Bitcoin. All under the guise of “fraud protection,” of course.
No, it doesn’t make a whole lot of sense — Chase still allows ACH bitcoin purchases and fiat on Cash App can be used for investing in stocks, saving or using Cash App’s own debit card, not just bitcoin — but yes, it is happening. Also, no one seems to know exactly when this became Chase’s policy. The fraud representative I spoke with wasn’t sure and couldn’t point to any documentation, but reasoned that the rule has been in place since early last year. Yet murkier still, loose chatter can be found on Reddit about this issue going back to at least April 2021.
However, given that I and so many others were definitely buying bitcoin via Chase debit throughout 2021 and 2022, I’d wager that this policy, up to now, has only been exercised haphazardly, selectively, arbitrarily, even. Dark patterns abound, but for now, it seems like I just happen to be one of the unlucky ones…
That said, there is nothing preventing this type of policy from being enforced broadly and in earnest by one or many banks. If and as banks feel threatened by Bitcoin, we will surely see more of these kinds of opaque practices.
It’s Time To Get Proactive
Instead, we should expect it and prepare for it. So, rather than railing against banks, I want to use this as a learning experience to reflect on the importance of permissionless, non-KYC Bitcoining, and the practical actions we can take to advance the cause.
Bank with backups and remember local options. Banking is a service, not servitude. Treat it as such. Maintaining accounts at multiple banks may provide some limited fault tolerance against banks that take a hostile stance toward Bitcoin, assuming it does not become the industry norm. Further, smaller, local and regional banks may be more willing to work with Bitcoiner customers, as individual accounts can be far more meaningful to them than they are to larger national banks — though this certainly should not be taken for granted.
If you must use KYC’d Bitcoin services, do so thoughtfully. For Cash App (and services like it), consider first loading in fiat and making buys out of the app’s native cash balance instead of purchasing directly through a linked bank account/debit card where information is shared with the bank that allows it to flag the transaction for being related to bitcoin. Taking this small step may help to avoid gatekeeping and can provide some minor privacy, from the bank at least.
Get comfortable with non-KYC bitcoin exchanges. Just as many precoiners drag their feet before making their first bitcoin buys, so too do many Bitcoiners drag their feet in using permissionless channels to buy and sell bitcoin. Robosats, Bisq, Hodl Hodl— you can use the tools. For anyone just getting started, BTC Sessions has excellent video tutorial content on all three, which are linked.
If you don’t yet know how to use these services, it’s better to pick up this knowledge now through calm, self-directed learning rather than during the panic of an emergency or under pressure of more Bitcoin-hostile conditions later. And for those of us who already know, we can actively support these services. For instance, more of us taking action to maintain recurring orders on such platforms could significantly improve their volumes and liquidity, helping to bootstrap and accelerate their network effects.
Be flexible and creative with peer-to-peer payment methods. Cash App, Zelle, PayPal, Venmo, Apple Cash, Revolut, etc. — the services that most users seem to be transacting with on no-KYC exchanges — they would all become willing, if not eager and active agents of financial gatekeeping in any truly antagonistic, anti-privacy environment, even when used in a “peer-to-peer” fashion.
Always remember that there are other payment options — such as gift cards, the original digital-bearer items — that do not necessarily carry such concerns. Perhaps, an enterprising soul might even use Fold to earn bitcoin rewards on the backend for the gift cards used on the exchange…
Find your local Bitcoin community! In the steadily-advancing shadow war on all things permissionless, private, and peer-to-peer, this is our best defense. Don’t just wait until you need other Bitcoiners to get to know other Bitcoiners — to paraphrase Texas Slim, “Shake your local Bitcoiner’s hand.” Get to know people and never underestimate the power of simply asking around. There could be real, live Bitcoiners near you looking to sell some corn and happy to see it go to another HODLer rather than to a bunch of lettuce-handed fiat speculators on some faceless, centralized, Ponzi casino exchange. What’s more, let folks know your skills, talents and expertise — you might be surprised to find an interested market that pays in BTC!
In closing, I believe we should think of permissionless Bitcoining as an essential and necessary core competency, just like we do with Self-Custody. And we should push it with similar urgency and intensity. But as we do this, we should also remember that it is a spectrum and a progression and that there are no perfect solutions, only tradeoffs. Realization of the importance of non-KYC practices will not be instant or obvious to near-normie newcoiners, coin-curious fence-sitters or even many minted Bitcoiners. My own experience is certainly a testament to this.
As we promote the active practice of non-KYC Bitcoining, we can anchor to empathy, patience and humility — always being mindful of the tremendous amount of unlearning most have to go through to get there. So, even if someone doesn’t get it the first time, or the nth time, that they hear it from us, if it helps them get to it faster at all, then it’s well worth it.
~Moon
-
@ 84b0c46a:417782f5
2025-05-04 09:49:45- 1:nan:
- 2
- 2irorio絵文字
- 1nostr:npub1sjcvg64knxkrt6ev52rywzu9uzqakgy8ehhk8yezxmpewsthst6sw3jqcw
- 2
- 2
- 3
- 3
- 2
- 1
|1|2|
|:--|:--|
|test| :nan: |
---
:nan: :nan:
- 1
- 2
- tet
- tes
- 3
- 1
-
2
t
te
test
-
19^th^
- H~2~O
This site supports Firefox only. Unyo :wayo: This text will bounce wss://catstrr.swarmstr.com/
Unyo unyo test
-
@ 6871d8df:4a9396c1
2024-06-12 22:10:51Embracing AI: A Case for AI Accelerationism
In an era where artificial intelligence (AI) development is at the forefront of technological innovation, a counter-narrative championed by a group I refer to as the 'AI Decels'—those advocating for the deceleration of AI advancements—seems to be gaining significant traction. After tuning into a recent episode of the Joe Rogan Podcast, I realized that the prevailing narrative around AI was heading in a dangerous direction. Rogan had on Aza Raskin and Tristan Harris, technology safety advocates who released a talk called 'The AI Dilemma,' for a discussion. You may know them from the popular documentary 'The Social Dilemma' on the dangers of social media. It became increasingly clear that the cautionary stance dominating this discourse might be tipping the scales too far, veering towards an over-regulated future that stifles innovation rather than fostering it.
Are we moving too fast?
While acknowledging AI's benefits, Aza and Tristan fear it could be dangerous if not guided by ethical standards and safeguards. They believe AI development is moving too quickly and that the right incentives for its growth are not in place. They are concerned about the possibility of "civilizational overwhelm," where advanced AI technology far outpaces 21st-century governance. They fear a scenario where society and its institutions cannot manage or adapt to the rapid changes and challenges introduced by AI.
They argue for regulating and slowing down AI development due to rapid, uncontrolled advancement driven by competition among companies like Google, OpenAI, and Microsoft. They claim this race can lead to unsafe releases of new technologies, with AI systems exhibiting unpredictable, emergent behaviors, posing significant societal risks. For instance, AI can inadvertently learn tasks like sentiment analysis or human emotion understanding, creating potential for misuse in areas like biological weapons or cybersecurity vulnerabilities.
Moreover, AI companies' profit-driven incentives often conflict with the public good, prioritizing market dominance over safety and ethics. This misalignment can lead to technologies that maximize engagement or profits at societal expense, similar to the negative impacts seen with social media. To address these issues, they suggest government regulation to realign AI companies' incentives with safety, ethical considerations, and public welfare. Implementing responsible development frameworks focused on long-term societal impacts is essential for mitigating potential harm.
This isn't new
Though the premise of their concerns seems reasonable, it's dangerous and an all too common occurrence with the emergence of new technologies. For example, in the podcast they refer to the technological breakthrough of oil. Oil as energy was a technological marvel and changed the course of human civilization. The embrace of oil — now the cornerstone of industry in our age — revolutionized how societies operated, fueled economies, and connected the world in unprecedented ways. Yet recently, as ideas of its environmental and geopolitical ramifications propagated, the narrative around oil has shifted.
Tristan and Aza detail this shift and claim that though the period was great for humanity, we didn't have another technology to go to once the technological consequences became apparent. The problem with that argument is that we did innovate to a better alternative: nuclear. However, at its technological breakthrough, it was met with severe suspicions, from safety concerns to ethical debates over its use. This overregulation due to these concerns caused a decades-long stagnation in nuclear innovation, where even today, we are still stuck with heavy reliance on coal and oil. The scare tactics and fear-mongering had consequences, and, interestingly, they don't see the parallels with their current deceleration stance on AI.
These examples underscore a critical insight: the initial anxiety surrounding new technologies is a natural response to the unknowns they introduce. Yet, history shows that too much anxiety can stifle the innovation needed to address the problems posed by current technologies. The cycle of discovery, fear, adaptation, and eventual acceptance reveals an essential truth—progress requires not just the courage to innovate but also the resilience to navigate the uncertainties these innovations bring.
Moreover, believing we can predict and plan for all AI-related unknowns reflects overconfidence in our understanding and foresight. History shows that technological progress, marked by unexpected outcomes and discoveries, defies such predictions. The evolution from the printing press to the internet underscores progress's unpredictability. Hence, facing AI's future requires caution, curiosity, and humility. Acknowledging our limitations and embracing continuous learning and adaptation will allow us to harness AI's potential responsibly, illustrating that embracing our uncertainties, rather than pretending to foresee them, is vital to innovation.
The journey of technological advancement is fraught with both promise and trepidation. Historically, each significant leap forward, from the dawn of the industrial age to the digital revolution, has been met with a mix of enthusiasm and apprehension. Aza Raskin and Tristan Harris's thesis in the 'AI Dilemma' embodies the latter.
Who defines "safe?"
When we slow down technologies for safety or ethical reasons, the question arises: who gets to define what "safe" or "ethical" means? This inquiry is not merely technical but deeply ideological, touching the very core of societal values and power dynamics. For example, the push for Diversity, Equity, and Inclusion (DEI) initiatives shows how specific ideological underpinnings can shape definitions of safety and decency.
Take the case of the initial release of Google's AI chatbot, Gemini, which chose the ideology of its creators over truth. Luckily, the answers were so ridiculous that the pushback was sudden and immediate. My worry, however, is if, in correcting this, they become experts in making the ideological capture much more subtle. Large bureaucratic institutions' top-down safety enforcement creates a fertile ground for ideological capture of safety standards.
I claim that the issue is not the technology itself but the lens through which we view and regulate it. Suppose the gatekeepers of 'safety' are aligned with a singular ideology. In that case, AI development would skew to serve specific ends, sidelining diverse perspectives and potentially stifling innovative thought and progress.
In the podcast, Tristan and Aza suggest such manipulation as a solution. They propose using AI for consensus-building and creating "shared realities" to address societal challenges. In practice, this means that when individuals' viewpoints seem to be far apart, we can leverage AI to "bridge the gap." How they bridge the gap and what we would bridge it toward is left to the imagination, but to me, it is clear. Regulators will inevitably influence it from the top down, which, in my opinion, would be the opposite of progress.
In navigating this terrain, we must advocate for a pluralistic approach to defining safety, encompassing various perspectives and values achieved through market forces rather than a governing entity choosing winners. The more players that can play the game, the more wide-ranging perspectives will catalyze innovation to flourish.
Ownership & Identity
Just because we should accelerate AI forward does not mean I do not have my concerns. When I think about what could be the most devastating for society, I don't believe we have to worry about a Matrix-level dystopia; I worry about freedom. As I explored in "Whose data is it anyway?," my concern gravitates toward the issues of data ownership and the implications of relinquishing control over our digital identities. This relinquishment threatens our privacy and the integrity of the content we generate, leaving it susceptible to the inclinations and profit of a few dominant tech entities.
To counteract these concerns, a paradigm shift towards decentralized models of data ownership is imperative. Such standards would empower individuals with control over their digital footprints, ensuring that we develop AI systems with diverse, honest, and truthful perspectives rather than the massaged, narrow viewpoints of their creators. This shift safeguards individual privacy and promotes an ethical framework for AI development that upholds the principles of fairness and impartiality.
As we stand at the crossroads of technological innovation and ethical consideration, it is crucial to advocate for systems that place data ownership firmly in the hands of users. By doing so, we can ensure that the future of AI remains truthful, non-ideological, and aligned with the broader interests of society.
But what about the Matrix?
I know I am in the minority on this, but I feel that the concerns of AGI (Artificial General Intelligence) are generally overblown. I am not scared of reaching the point of AGI, and I think the idea that AI will become so intelligent that we will lose control of it is unfounded and silly. Reaching AGI is not reaching consciousness; being worried about it spontaneously gaining consciousness is a misplaced fear. It is a tool created by humans for humans to enhance productivity and achieve specific outcomes.
At a technical level, large language models (LLMs) are trained on extensive datasets, learning patterns from language and data through a technique called "unsupervised learning" (meaning the data is untagged). They predict the next word in sentences, refining their predictions through feedback to improve coherence and relevance. When queried, LLMs generate responses based on learned patterns, simulating an understanding of language to provide contextually appropriate answers. They will only answer based on the datasets that were inputted and scanned.
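As a toy illustration of that next-word objective (a deliberately tiny sketch of my own with a made-up corpus; real LLMs learn with neural networks at vastly larger scale):

```python
# Toy next-word predictor: count word bigrams in a corpus and
# predict the most frequent follower. The training objective,
# predicting the next token from learned patterns, is the same
# idea LLMs use, minus the neural network and the scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follower_counts = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    follower_counts[word][next_word] += 1

def predict(word: str) -> str:
    """Return the most common word observed after `word`."""
    return follower_counts[word].most_common(1)[0][0]

print(predict("the"))  # -> 'cat' (seen twice, vs 'mat'/'fish' once each)
```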
AI will never be "alive," meaning that AI lacks inherent agency, consciousness, and the characteristics of life, not capable of independent thought or action. AI cannot act independently of human control. Concerns about AI gaining autonomy and posing a threat to humanity are based on a misunderstanding of the nature of AI and the fundamental differences between living beings and machines. AI spontaneously developing a will or consciousness is more similar to thinking a hammer will start walking than us being able to create consciousness through programming. Right now, there is only one way to create consciousness, and I'm skeptical that is ever something we will be able to harness and create as humans. Irrespective of its complexity — and yes, our tools will continue to become evermore complex — machines, specifically AI, cannot transcend their nature as non-living, inanimate objects programmed and controlled by humans.
The advancement of AI should be seen as enhancing human capabilities, not as a path toward creating autonomous entities with their own wills. So, while AI will continue to evolve, improve, and become more powerful, I believe it will remain under human direction and control without the existential threats often sensationalized in discussions about AI's future.
With this framing, we should not view the race toward AGI as something to avoid. This will only make the tools we use more powerful, making us more productive. With all this being said, AGI is still much farther away than many believe.
Today's AI excels in specific, narrow tasks, known as narrow or weak AI. These systems operate within tightly defined parameters, achieving remarkable efficiency and accuracy that can sometimes surpass human performance in those specific tasks. Yet, this is far from the versatile and adaptable functionality that AGI represents.
Moreover, the exponential growth of computational power observed in the past decades does not directly translate to an equivalent acceleration in achieving AGI. AI's impressive feats are often the result of massive data inputs and computing resources tailored to specific tasks. These successes do not inherently bring us closer to understanding or replicating the general problem-solving capabilities of the human mind, which again would only make the tools more potent in our hands.
While AI will undeniably introduce challenges and change the aspects of conflict and power dynamics, these challenges will primarily stem from humans wielding this powerful tool rather than the technology itself. AI is a mirror reflecting our own biases, values, and intentions. The crux of future AI-related issues lies not in the technology's inherent capabilities but in how it is used by those wielding it. This reality is at odds with the idea that we should slow down development as our biggest threat will come from those who are not friendly to us.
AI Beget's AI
While the unknowns of AI development and its pitfalls indeed stir apprehension, it's essential to recognize the power of market forces and human ingenuity in leveraging AI to address these challenges. History is replete with examples of new technologies raising concerns, only for those very technologies to provide solutions to the problems they initially seemed to exacerbate. It looks silly and unfair to think of fighting a war with a country that never embraced oil and was still primarily getting its energy from burning wood.
The evolution of AI is no exception to this pattern. As we venture into uncharted territories, the potential issues that arise with AI—be it ethical concerns, use by malicious actors, biases in decision-making, or privacy intrusions—are not merely obstacles but opportunities for innovation. It is within the realm of possibility, and indeed, probability, that AI will play a crucial role in solving the problems it creates. The idea that there would be no incentive to address and solve these problems is to underestimate the fundamental drivers of technological progress.
Market forces, fueled by the demand for better, safer, and more efficient solutions, are powerful catalysts for positive change. When a problem is worth fixing, it invariably attracts the attention of innovators, researchers, and entrepreneurs eager to solve it. This dynamic has driven progress throughout history, and AI is poised to benefit from this problem-solving cycle.
Thus, rather than viewing AI's unknowns as sources of fear, we should see them as sparks of opportunity. By tackling the challenges posed by AI, we will harness its full potential to benefit humanity. By fostering an ecosystem that encourages exploration, innovation, and problem-solving, we can ensure that AI serves as a force for good, solving problems as profound as those it might create. This is the optimism we must hold onto—a belief in our collective ability to shape AI into a tool that addresses its own challenges and elevates our capacity to solve some of society's most pressing issues.
An AI Future
The reality is that it isn't whether AI will lead to unforeseen challenges—it undoubtedly will, as has every major technological leap in history. The real issue is whether we let fear dictate our path and confine us to a standstill or embrace AI's potential to address current and future challenges.
The approach to solving potential AI-related problems with stringent regulations and a slowdown in innovation is akin to cutting off the nose to spite the face. It's a strategy that risks stagnating the U.S. in a global race where other nations will undoubtedly continue their AI advancements. This perspective dangerously ignores that AI, much like the printing press of the past, has the power to democratize information, empower individuals, and dismantle outdated power structures.
The way forward is not less AI but more of it, more innovation, optimism, and curiosity for the remarkable technological breakthroughs that will come. We must recognize that the solution to AI-induced challenges lies not in retreating but in advancing our capabilities to innovate and adapt.
AI represents a frontier of limitless possibilities. If wielded with foresight and responsibility, it's a tool that can help solve some of the most pressing issues we face today. There are certainly challenges ahead, but I trust that with problems come solutions. Let's keep the AI Decels from steering us away from this path with their doomsday predictions. Instead, let's embrace AI with the cautious optimism it deserves, forging a future where technology and humanity advance to heights we can't imagine.
-
@ 3c984938:2ec11289
2024-06-09 14:40:55I'm having some pain in my heart about the U.S. elections.
Ever since Obama campaigned for office, an increasing number of young voters have come out of the woodwork. Things have not improved. They've actively told you that "your vote matters." I believe this to be a lie unless any citizen can demand, at the gate of the White House, to be allowed to hold and point a gun to the president's head. (Relax, this is hyperbole)
Why so dramatic? Well, what does the president do? Signs bills, commands the military, nominates new Fed chairmen, ambassadors, supreme court judges and senior officials, all while traveling in luxury planes and living in a white palace for four years.
They promise, Every TIME, to protect citizens' rights when they take the oath of office.
...They've broken this several times, with so-called "emergency-crisis"
The purpose of a president today, it seems, is basically to hire armed thugs to keep the citizens in check and make sure you "voluntarily continue to be a slave" to the system, hence the IRS. The corruption extends from the cop to the judge and even to politicians. The politicians get paid by lobbyists to create bills in congress for the president to sign. There's no right answer when money is involved with politicians. It is the same whether you vote Obama, Biden, Trump, or Haley. They will wield the pen to serve themselves while saying it will benefit the country.
In the first 100 years of presidency, the government wasn't even a big deal. They didn't even interfere with your life as much as they do today.
^^ You hold the power in your hands, don't let them take it. Don't believe me? Try to get a loan from a bank without a signature. Your signature is as good as gold (if not better) and is an original trademark.
Just Don't Vote. End the Fed. Opt out.
^^ I choose to form my own path, even if it means leaving everything I knew prior. It doesn't have to be a spiritual thing. Some, have called me religious because of this. We're all capable of greatness and having humanity.
✨Don't have a machine heart with a machine mind. Instead, choose to have a heart like the cowardly lion from the "Wizard Of Oz."
There's no such thing as a good president or politician.
If there were, they would have issued non-interest Federal Reserve Notes. Lincoln and Kennedy tried to do this; they got shot.
There's still a banner of America there, but it's so far gone that I cannot even recognize it. However, I only see a bunch of 🏳🌈 pride flags.
✨Patrick Henry got it wrong, when he delivered his speech, "Give me liberty or give me death." Liberty and freedom are two completely different things.
Straightforward from Merriam-Webster Choose Right or left?
No control, to be 100% without restrictions- free.
✨I disagree with the example sentence given, because you cannot advocate for human freedom and own slaves; that's a contradiction, and one that was common in the founding days.
I can understand many may disagree with me, and you might be thinking, "This time will be different." I, respectfully, disagree, and the proxy wars are proof. Learn the importance of Bitcoin, every Satoshi is a step away from corruption.
✨What does it look like to pull the curtains from the "Wizard of Oz?"
Have you watched the video below, what 30 Trillion dollars in debt looks like visually? Even I was blown away. https://video.nostr.build/d58c5e1afba6d7a905a39407f5e695a4eb4a88ae692817a36ecfa6ca1b62ea15.mp4
I say this with love. Hear my plea?
Normally, I don't write about anything political. It just feels like a losing game. My energy feels better spent learning new things, writing, and creating - even a blog post as simple as this. Stack SATs, and stay humble.
<3 Onigirl
-
@ 84b0c46a:417782f5
2025-05-04 09:36:08 -
@ 3bf0c63f:aefa459d
2024-05-21 12:38:08Bitcoin transactions explained
A transaction is a piece of data that takes inputs and produces outputs. Forget about the blockchain thing, Bitcoin is actually just a big tree of transactions. The blockchain is just a way to keep transactions ordered.
Imagine you have 10 satoshis. That means you have them in an unspent transaction output (UTXO). You want to spend them, so you create a transaction. The transaction should reference unspent outputs as its inputs. Every transaction has an immutable id, so you use that id plus the index of the output (because transactions can have multiple outputs). Then you specify a script that unlocks that transaction and related signatures, then you specify outputs along with a script that locks these outputs.
As you can see, there's this lock/unlocking thing and there are inputs and outputs. Inputs must be unlocked by fulfilling the conditions specified by the person who created the transaction they're in. And outputs must be locked so anyone wanting to spend those outputs will need to unlock them.
For most of the cases locking and unlocking means specifying a public key whose controller (the person who has the corresponding private key) will be able to spend. Other fancy things are possible too, but we can ignore them for now.
Back to the 10 satoshis you want to spend. Since you've successfully referenced 10 satoshis and unlocked them, now you can specify the outputs (this is all done in a single step). You can specify one output of 10 satoshis, two of 5, one of 3 and one of 7, three of 3 and so on. The sum of outputs can't be more than 10. And if the sum of outputs is less than 10 the difference goes to fees. In the first days of Bitcoin you didn't need any fees, but now you do, otherwise your transaction won't be included in any block.
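To make that arithmetic concrete, here's a tiny sketch (a toy model of my own, not real Bitcoin code; scripts and signatures are reduced to placeholder strings):

```python
# Toy model of the accounting in a Bitcoin transaction. Real
# transactions also carry unlocking/locking scripts and signatures;
# this sketch only shows the input/output/fee arithmetic.
from dataclasses import dataclass

@dataclass(frozen=True)
class Outpoint:
    txid: str   # immutable id of the transaction being spent
    index: int  # which of that transaction's outputs

@dataclass
class Output:
    amount: int  # satoshis
    lock: str    # stand-in for the locking script (e.g. a public key)

@dataclass
class Transaction:
    inputs: list[Outpoint]  # references to unspent outputs (UTXOs)
    outputs: list[Output]

def fee(tx: Transaction, utxo_set: dict) -> int:
    """Fee = value of referenced inputs minus value of new outputs."""
    in_value = sum(utxo_set[o].amount for o in tx.inputs)
    out_value = sum(o.amount for o in tx.outputs)
    assert out_value <= in_value, "outputs may not exceed inputs"
    return in_value - out_value

# You hold a single 10-satoshi UTXO...
utxos = {Outpoint("abcd1234", 0): Output(10, "your-pubkey")}
# ...and spend it: 7 sats to a friend, 2 sats back to yourself.
tx = Transaction(
    inputs=[Outpoint("abcd1234", 0)],
    outputs=[Output(7, "friend-pubkey"), Output(2, "your-pubkey")],
)
print(fee(tx, utxos))  # -> 1: the leftover satoshi goes to the miner
```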
If you're still interested in transactions maybe you could take a look at this small chapter of that Andreas Antonopoulos book.
If you hate Andreas Antonopoulos because he is a communist shitcoiner or don't want to read more than half a page, go here: https://en.bitcoin.it/wiki/Coin_analogy
-
@ b99efe77:f3de3616
2025-05-03 23:37:57343433
34333434
Places & Transitions
- Places:
-
Bla bla bla: some text
-
Transitions:
- start: Initializes the system.
- logTask: bla bla bla.
petrinet ;startDay () -> working ;stopDay working -> () ;startPause working -> paused ;endPause paused -> working ;goSmoke working -> smoking ;endSmoke smoking -> working ;startEating working -> eating ;stopEating eating -> working ;startCall working -> onCall ;endCall onCall -> working ;startMeeting working -> inMeeting ;endMeeting inMeeting -> working ;logTask working -> working
-
@ 3c984938:2ec11289
It's been a journey from the Publishing Forest of Nostr to the open sea of web3. I've come across a beautiful chain of islands and thought: why not take a break and explore this place? If I'm searching for devs and FOSS, I should search every nook and cranny inside the realm of Nostr. It is quite vast for little old me. I'm just a little hamster, and I don't speak in code or binary numbers, zeros and ones.
After being at sea for a while, my heart raced with excitement at what I could find. It seems I wasn't alone; there were others here like me! Let's help spread the message to others about this uncharted realm. See, look at the other sailboats, aren't they pretty? Thanks to some generous donations of SATs, I was able to afford the docking fee.
Ever feel like everyone was going to a party, and you were supposed to dress up, but you missed the memo? Or a comic-con? Well, I felt completely underdressed, and that's an understatement. Turns out there are some knights around here. Take a peek!
A black cat with a knight passed by very quickly. He was moving too fast for me to track. Where was he going? Then I spotted a group of knights heading in the same direction, so I tagged along. The vibes from these guys were impossible to resist. They were just happy-go-lucky. 🥰They were heading to a tavern on a cliff off the island.
Ehh? A tavern? Slightly confused, whatever could these knights be doing here? I guess when they're done with their rounds they come here to blow off steam. Things are looking curiouser and curiouser. But the black cat from earlier was here with its rider, who was dismounting. So you can only guess where I'm going.
The atmosphere in this pub was lively and energetic. So many knights spoke among themselves. A group here, another there, but there was one that caught my eye. I went up to a group at a table, whose heights towered well above me even when seated. Taking a deep breath, I asked, "Who manages this place?" They unanimously pointed to one waiting for ale at the bar. What was he doing? Watching others talk? How peculiar.
So I went up to him! And introduced myself.
"Hello I'm Onigirl"
"Hello Onigirl, Welcome to Gossip"
"Gossip, what is Gossip?" scratching my head and whiskers.
What is Gossip? Gossip is FOSS and a great client for privacy-minded nostriches. It avoids browser tech, bypassing several scripting languages such as JavaScript☕, HTML parsing, rendering, and CSS (except HTTP GET and WebSockets), and uses OpenGL-style rendering instead. Nostriches that wish to remain anonymous can use Gossip over TOR. Mike recommends using QubesOS, Whonix, and/or Tails. [FYI - Gossip does not natively support a tor SOCKS5 proxy] Most helpful if you're a journalist looking to spill the beans.
On top of using your nsec or your encryption key, Gossip adds another layer of security over your account with a password login. There's nothing wrong with using the browser extensions (such as nos2x or Flamingo), which make it super easy to log in to Nostr-enabled websites and apps, but they do expose you to browser vulnerabilities.
Mike Points out
"people have already had their private key stolen from other nostr clients,"
so it a concern if you value your account. I most certainly care for mine.
Gossip's UI is simple and clean, revolving around NIP-65, also called the "Outbox model." As posted on GitHub,
"This NIP allows Clients to connect directly with the most up-to-date relay set from each individual user, eliminating the need of broadcasting events to popular relays."
This eliminates the problem of clients that track only a specific set of relays, which can congest those relays when you publish your note and can also be censored. By using Gossip, you can publish notes to alternative relays that have not censored you and still reach the same followers.
👉The simplest way to put it: this reduces the redundancy of publishing to popular or centralized relays just to reach your followers.
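As a rough sketch of the mechanics (my own illustration, assuming the NIP-65 relay-list format: a kind 10002 event with one `["r", url]` tag per relay, optionally marked "read" or "write"; an unmarked relay counts as both):

```python
# Minimal outbox-model relay selection, assuming the NIP-65
# relay-list event format (kind 10002, "r" tags).
relay_list_event = {
    "kind": 10002,
    "pubkey": "author-pubkey-hex",  # hypothetical author
    "tags": [
        ["r", "wss://relay.example.com"],            # read + write
        ["r", "wss://inbox.example.net", "read"],    # read only
        ["r", "wss://outbox.example.org", "write"],  # write only
    ],
}

def write_relays(event: dict) -> list:
    """Relays the author publishes to - where followers should fetch."""
    return [
        tag[1]
        for tag in event["tags"]
        if tag[0] == "r" and (len(tag) == 2 or tag[2] == "write")
    ]

print(write_relays(relay_list_event))
# ['wss://relay.example.com', 'wss://outbox.example.org']
```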
Cool! What an awesome client, I mean tavern! What else does this knight do? He reaches for something in his pocket. What is it? Pocket is a database for storing and retrieving nostr events, which Mike has written in Rust with a few extra kinks, inspired by Will's nostrdb. Still in development, but it'll be another tool for you, dear user! 💖💕💚
Onigirl is proud to present these knights to the community and honor them with kisu. 💋💋💋 Show some 💖💘💓🧡💙💚
👉💋💋Will - jb55 Lord of apples 💋 @npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s
👉💋💋 Mike Knight - Lord of Security 💋 @npub1acg6thl5psv62405rljzkj8spesceyfz2c32udakc2ak0dmvfeyse9p35c
Knights spend a lot of time behind the screen coding for the betterment of humanity. It is a tough job! Let's appreciate these knights and relay operators who support this amazing realm of Nostr! FOSS for all!
This article was prompted by the need for privacy and security of your data. They're different, not to be confused.
Recently, Edward Snowden warns Bitcoin devs about the need for privacy, Quote:
“I've been warning Bitcoin developers for ten years that privacy needs to be provided for at the protocol level. This is the final warning. The clock is ticking.”
Snowden’s comments come after heavy enforcement actions against Samourai Wallet, Roger Ver, and Binance’s CZ, and now the closure of Wasabi Wallet. Additionally, according to CryptoBriefing, Trezor is ending its CoinJoin integration as well. Many are concerned over the new definition of a money transmitter, which includes even those who don’t touch the funds.
Help your favorite hamster
^^Me drowning in notes on your feed. I can only eat so many notes to find you.
👉If there are any XMPP fans on here, I'm open to the idea of opening a public channel, so you could follow me there in a forum-like style. My server of choice would likely be a German server.😀You would be receiving my articles njump.me-style or website-like. GrapheneOS users, you can download the Cheogram app from the F-Droid store for free to access it. Apple and Android users have to pay to download this app; alternatives are ntalk or Conversations. If it interests the community, just FYI. Please comment or DM.
👉If you enjoyed this content, please consider reposting/sharing as my content is easily drowned by notes on your feed. You could also join my community under Children_Zone where I post my content.
An alternative is by following #onigirl Just FYI this feature is currently a little buggy.
Follow as I search for tools and awesome devs to help you dear user live a decentralized life as I explore the realm of Nostr.
Thank you Fren
-
@ b99efe77:f3de3616
2025-05-03 23:34:5412123123123
eqeqeqeqeqeqe
Places & Transitions
- Places:
-
Bla bla bla: some text
-
Transitions:
- start: Initializes the system.
- logTask: bla bla bla.
petrinet ;startDay () -> working ;stopDay working -> () ;startPause working -> paused ;endPause paused -> working ;goSmoke working -> smoking ;endSmoke smoking -> working ;startEating working -> eating ;stopEating eating -> working ;startCall working -> onCall ;endCall onCall -> working ;startMeeting working -> inMeeting ;endMeeting inMeeting -> working ;logTask working -> working
-
@ b99efe77:f3de3616
2025-05-03 23:20:27PetriNostr. My everyday activity
sdfsdfsfsfdfdf
petrinet sfsfdsf
-
@ 3c984938:2ec11289
2024-04-16 17:14:58Hello (N)oystrs!
Yes! I'm calling you an (N)oystr!
Why is that? Because you shine, and I'm not just saying that to get more SATs. Ordinary oysters and mussels can produce these beauties! Nothing seriously unique about them; however, with a little time and love each oyster is capable of creating something truly beautiful. I like believing so, at least, given the fact that you're even reading this article; it makes you an (N)oystr! This isn't published on X (formerly known as Twitter), Facebook, Discord, Telegram, or Instagram, which makes you the rare breed! A pearl indeed! I do have access to those platforms, but why create content on a terrible platform knowing I too could be shut down! Unfortunately, many people still use these platforms. This forces individuals to give up their privacy every day. Meta is leading the charge by forcing users to provide a photo ID for verification in order to use their crappy, obsolete site. If that was not bad enough, imagine you have some kind of disagreement or dissenting opinion. Then, Bigtech can easily deplatform you. Umm. So no open debate? Just instantly shutting off users. Whatever happened to the right to a fair trial? Nope, just burning you at the stake as if you're a witch or warlock!
How heinous are the perpetrators and financiers of this? Well, that's opening another can of worms for you.
Imagine your voice being taken away, like the little mermaid. Ariel was lucky to have a prince, but the majority of us? The likelihood that I would be carried away by the current of the sea during a sunset with a prince on a sailboat is zero. And I live on an island, so I'm just missing the prince, sailboat(though I know where I could go to steal one), and red hair. Oh my gosh, now I feel sad.
I do not have the prince, Bob is better! I do not have mermaid fins, or a shell bra. Use coconut shells, it offers more support! But, I still have my voice and a killer sunset to die for!
All of that is possible thanks to the work of developers. These knights fight for Freedom Tech by utilizing FOSS, which helps provide us with a vibrant ecosystem. Unfortunately, I recently learned that they are not all funded. Knights must eat, drink, and have a work space. This space is where they spend most of their sweat equity on an app or software that may or may not pan out. That brilliance is susceptible to fading, as these individuals are not seen but rather stay behind closed doors. What's worse is if these developers lose faith in their project and decide to join forces with Meta! 😖 Does WhatsApp ring a bell?
Without them, I probably wouldn't be able to create this long form article. Let's cheer them on like cheerleaders.. 👉Unfortunately, there's no cheerleader emoji so you'll just have to settle for a dancing lady, n guy. 💃🕺
Semisol said it beautifully, npub12262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s6cgrkj
If we want freedom tech to succeed, the tools that make it possible need to be funded: relays like https://nostr.land, media hosts like https://nostr.build, clients like https://damus.io, etc.
With that thought, Onigirl is pleased to announce the launch of a new series. With a sole focus on free market devs/projects.
Knights of Nostr!
I'll happily brief you about their exciting project and how it benefits humanity! Let's Support these Magnificent projects, devs, relays, and builders! Our first runner up!
Oppa Fishcake :Lord of Media Hosting
npub137c5pd8gmhhe0njtsgwjgunc5xjr2vmzvglkgqs5sjeh972gqqxqjak37w
Oppa Fishcake with his noble steed!
Think of this as an introduction to learn from and further your experience on Nostr! New developments and applications are constantly happening on Nostr. It's enough to make one's head spin. I may also cover FOSS projects (outside of Nostr), as they need some love as well! Plus, you can think of it as another tool to add to your decentralized life. I will not be doing how-to-Nostr guides. I personally feel there are plenty of great guides already available, which I'm happy to add to my curation collection, easily searchable on Yakihonne.
For email updates you can subscribe to my [[https://paragraph.xyz/@onigirl]]
If you like it, send me some 🧡💛💚 hearts💜💗💖 otherwise zap dat⚡⚡🍑🍑peach⚡⚡🍑 ~If not me, then at least to our dearest knight!
Thank you from the bottom of my heart for your time and support (N)oystr! Shine bright like a diamond! Share if you care! FOSS power!
Follow on your favorite Nostr Client for the best viewing experience!
[!NOTE]
I'm using Obsidian + the Nostr Writer Plugin, a new way to publish Markdown directly to Nostr. I was a little nervous using this because I was used to doing it in RStudio with R Markdown.
Since this is my first article, I sent it to my account as a draft to test it. It's pretty neat.
-
@ b99efe77:f3de3616
2025-05-03 22:58:44dfsf
adfasfafafafafadsf
```petrinet
Places & Transitions
- Places:
-
Bla bla bla: some text
-
Transitions:
- start: Initializes the system.
- logTask: bla bla bla.
petrinet ;startDay () -> working ;stopDay working -> () ;startPause working -> paused ;endPause paused -> working ;goSmoke working -> smoking ;endSmoke smoking -> working ;startEating working -> eating ;stopEating eating -> working ;startCall working -> onCall ;endCall onCall -> working ;startMeeting working -> inMeeting ;endMeeting inMeeting -> working ;logTask working -> working
```
-
@ 6871d8df:4a9396c1
2024-02-24 22:42:16In an era where data seems to be as valuable as currency, the prevailing trend in AI starkly contrasts with the concept of personal data ownership. The explosion of AI and the ensuing race have made it easy to overlook where the data is coming from. The current model, dominated by big tech players, involves collecting vast amounts of user data and selling it to AI companies for training LLMs. Reddit recently penned a 60 million dollar deal, Google guards and mines Youtube, and more are going this direction. But is that their data to sell? Yes, it's on their platforms, but without the users to generate it, what would they monetize? To me, this practice raises significant ethical questions, as it assumes that user data is a commodity that companies can exploit at will.
The heart of the issue lies in the ownership of data. Why, in today's digital age, do we not retain ownership of our data? Why can't our data follow us, under our control, to wherever we want to go? These questions echo the broader sentiment that while some in the tech industry — such as the blockchain-first crypto bros — recognize the importance of data ownership, their "blockchain for everything solutions," to me, fall significantly short in execution.
Reddit further complicates this with its current move to IPO, which, on the heels of the large data deal, might reinforce the mistaken belief that user-generated data is a corporate asset. Others, no doubt, will follow suit. This underscores the urgent need for a paradigm shift towards recognizing and respecting user data as personal property.
In my perfect world, the digital landscape would undergo a revolutionary transformation centered around the empowerment and sovereignty of individual data ownership. Platforms like Twitter, Reddit, Yelp, YouTube, and Stack Overflow, integral to our digital lives, would operate on a fundamentally different premise: user-owned data.
In this envisioned future, data ownership would not just be a concept but a practice, with public and private keys ensuring the authenticity and privacy of individual identities. This model would eliminate the private data silos that currently dominate, where companies profit from selling user data without consent. Instead, data would traverse a decentralized protocol akin to the internet, prioritizing user control and transparency.
The cornerstone of this world would be a meritocratic digital ecosystem. Success for companies would hinge on their ability to leverage user-owned data to deliver unparalleled value rather than their capacity to gatekeep and monetize information. If a company breaks my trust, I can move to a competitor, and my data, connections, and followers will come with me. This shift would herald an era where consent, privacy, and utility define the digital experience, ensuring that the benefits of technology are equitably distributed and aligned with the users' interests and rights.
The conversation needs to shift fundamentally. We must challenge this trajectory and advocate for a future where data ownership and privacy are not just ideals but realities. If we continue on our current path without prioritizing individual data rights, the future of digital privacy and autonomy is bleak. Big tech's dominance allows them to treat user data as a commodity, potentially selling and exploiting it without consent. This imbalance has already led to users being cut off from their digital identities and connections when platforms terminate accounts, underscoring the need for a digital ecosystem that empowers user control over data. Without changing direction, we risk a future where our content — and our freedoms by consequence — are controlled by a few powerful entities, threatening our rights and the democratic essence of the digital realm. We must advocate for a shift towards data ownership by individuals to preserve our digital freedoms and democracy.
-
@ 8947a945:9bfcf626
2025-03-06 10:50:28Law of diminishing returns: do more, get less; get unlucky, lose money
** Note: this article continues from "(TH) Why I quit", the story of why I resigned from the (work)place I (once) called "home". If you haven't read it yet, I recommend reading that first.
I first heard Khun Top Jirayus (Top of Bitkub) use the phrase "Law of diminishing returns" while he was sharing his perspective on doing business. I didn't understand it at the time, but it struck me as a cool idea.
For me, the short version of this law is: "do more, get less; get unlucky, lose money."
This law concerns running a business and involves three factors:
- Fixed input: what cannot be expanded in the business at that moment, e.g. the number of exam rooms in a hospital, farmland acreage, warehouse space, desks in an office, or service counters in a service business. I'll call this "space" for short.
- Variable input: what can be added to the business and tuned, e.g. labor, machinery, energy.
- Marginal product: the business's output; the additional profit gained after adding variable input to the system.
The stages of the law of diminishing returns
- Increasing returns (making more money): as labor or machines are fed into the system, the business earns more because fixed input that used to sit underutilized gets filled. In the hospital's case, an exam room without a doctor generates no revenue; once a doctor sits in it, it becomes revenue-generating space. When every exam room has a doctor, the hospital runs at full capacity and good efficiency follows.
- Diminishing returns (do more, get less): the optimum point is the business's sweet spot, where profit is just right, neither too much nor too little. If you fail to see this point and keep adding "labor", the "space" descends into chaos and working efficiency drops.
- Negative returns (get unlucky, lose money): keep adding "labor" beyond that, and it can lead to outright losses.
Summarized as a graph, it looks like this:
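To make the three stages concrete with a standard textbook example (illustrative numbers of my own, not from the original talk), take output as a function of labor L with fixed space:

```latex
Q(L) = 30L^2 - L^3, \qquad MP(L) = \frac{dQ}{dL} = 60L - 3L^2
```

Marginal product rises until L = 10 (increasing returns: idle space is being put to work), falls between L = 10 and L = 20 (do more, get less), and turns negative past L = 20, where adding still more labor shrinks total output itself (get unlucky, lose money).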
Why does it turn out that way?
Let me use this hospital's business model as the example.
In the early days, the hospital had few exam rooms; the numbers of doctors and patients were nicely balanced, the workload wasn't excessive, and patient care was effective. The hospital earned the trust of the locals, built a reputation, and got word-of-mouth referrals, bringing in more patients. It expanded the grounds, built new buildings, and hired staff at every level, until the land was fully built out and could expand no further. A fitting equilibrium emerged: every space used to full capacity, with excellent working efficiency.
Operating results
It never ran a loss; it sailed through the Tom Yum Kung crisis and COVID comfortably, with a strong financial position and consistent dividends. If I owned this hospital, I would: 1. build systems; 2. build a management team; 3. keep stressing the importance of following the system; 4. Plan - Do - Check - Act when problems arise.
That way I could step back from the business itself and just monitor it closely every quarter, doing nothing more than that.
But in the end, it came down to the second-generation successor who "didn't want it". That changed the organization's culture and marched it into an era of decline.
Backward management; half-done initiatives; thinking things were done when they really weren't; fixing the wrong problems and creating bigger ones.
Examples
1. An energy-saving policy to reduce the carbon footprint
It sounds good, but all the hospital communicated was
a campaign to "turn off the lights... turn off the air conditioning when not in use..."
That slogan felt awfully familiar, like something I'd been hearing for more than 10 years... or maybe I'm misremembering.
And that was the whole campaign. Nothing about reducing the carbon footprint involved any fresh thinking suited to the times, or anything tangible (objective).
What the hospital actually did ran completely counter to it: single-use plastic was the main packaging for the meals of doctors and attendees at big meetings.
Staff at every level suggested switching to a buffet with ordinary plates, bowls and cutlery. The suggestion was raised for 5 years with no change; the answer handed down was that plastic containers were cheaper. For the record, at least 1,200 plastic lunch boxes were used per month... perhaps the carbon emission equation the management team computed was just too sophisticated for me to understand.
2. Marketing that failed and dragged the doctors into trouble
The marketing missed its target. The hospital "sort of" did digital marketing, but all it did was post static promotional graphics on every social channel and call that digital marketing.
... but hold on a second ...
Posting is maybe 1 out of 10 parts of digital marketing, yet the hospital believed it had fully gone digital.
... it really wasn't, at all ...
The result: it could not open up any new customer waters, keeping only the existing base with brand loyalty (and even that was trending down).
Worst of all, marketing built treatment packages without first consulting the doctors on whether they conflicted with the standard of care. Packages designed to draw patients in ended up prescribing care that contradicted the doctors' clinical standards.
Patients don't know that. They want what's in the package; they've already paid. The fallout lands on the doctors.
3. Strategy that missed the mark
At the start of every year, management announces the annual strategy: which areas the hospital will focus on developing. The hospital has had a chronic, snake-eating-its-own-tail problem that hurts the efficiency of the doctors and nurses. Fixes were proposed over and over for 5 years but never seriously addressed (I'll spare you the details).
Yet for the past 3 years the annual strategy aimed squarely at the staff, pushing for impeccable service behavior. I understand the program cost hundreds of thousands, if not millions, with workshops and outside trainers and speakers (outsourced) brought in, and a mandate that 100% of staff attend.
The way I see it, the root cause was never fixed. It's like a building whose wobbly, unstable foundation goes unrepaired while the rooms get decorated with premium materials and modern technology... ready to collapse at any moment.
4. Charming enough to attract new partners, but turning them away
During the first wave of COVID, a leading hotel operator in the province came to pitch a business model: "hospitel", turning a hotel into a hospital. With the hospital's staffing capacity, and the proposed hotel offering around 300 rooms, it was a win-win model for both sides. But management judged it not worth it and declined, handing this blue-ocean market to a competitor.
I could only scratch my head when I learned of it, because: 1. during COVID patient volume was low, and staff had their hours cut to minimum salary with no overtime; 2. the partner was asking precisely for those staff to help out, while the hotel already had housekeeping and cleaning crews for the premises; 3. the partner offered profit sharing with the hospital. I never saw the numbers, but I believe it was fair.
I honestly don't know what counts as "worth it" to management.
5. Top-down absolute power
Proposals from the doctors' representatives went unheard. The person holding decision-making power never came down to talk with the doctors in earnest.
Once every year or two he would appear before the hospital's doctors, polish his image, deliver a grand sales pitch of the picture he wanted, and issue orders. When problems arose he never came down to take responsibility, relying instead on an intelligence unit (filtered through who knows what) to issue face-saving orders after the fact.
And the standing order was to stay quiet and shut up.
Once, an order came down so unclear that the nurses couldn't do their jobs. The nurses' representative had to call me for help.
I gathered all the information and found the order genuinely flawed, so I messaged management on LINE to ask for clarification.
... Within 5 minutes, one of the executives (the same one who had stabbed me in the back claiming I was 5 minutes late to see his VIP patient) called me with a short message: "the order stays as it is, no changes, and keep quiet"... (fine, then.)
The turning point that pushed the hospital into the law of diminishing returns
Every exam room in the hospital was used to full capacity... in truth, beyond capacity (over-utilized). Some departments played musical chairs:
- the first doctor's clinic hours end
- the next doctor walks straight into the room
- those not quick enough grab whatever free room is ready
- doctors evicting one another
- some doctors had to use the nurses' office as a makeshift exam room
Inpatient rooms were the same; at times the beds were so full that no patient could be admitted.
But remember what I said: the top-down absolute power never came down to hear the doctors' real problems, taking only the filtered intelligence. At one point patients complained about sitting around waiting for the doctor, so management tried to fix it by monitoring waiting time, raising it as an urgent agenda item.
Yet they were still fuzzy about the very concept of waiting time, unsure from which point to which point it should be counted.
- A short waiting time is good: the patient sees the doctor quickly.
- A long waiting time is bad: the patient sits waiting.
They read the numbers off a report, but did they ever come down to see for themselves why the numbers looked bad?
The answer is "no."
Doctors in some specialties must go attend to critically ill patients, which takes a long time... or take consults from other specialties... or practice a subspecialty of a subspecialty, requiring long, detailed examinations.
That's how doctors work, and doctors understand it among themselves.
The data collectors, though, presented the numbers as-is without any analysis, a careless filtering of the data before it reached management.
In the end, management "blamed the doctors" for managing their time poorly and making patients wait. They really concluded it just like that.
As the pressure mounted: "patients wait too long, we must add doctors." A hiring season for multiple positions began.
But wait: the exam rooms were already so packed there was barely anywhere for the doctors to sit. They didn't care; new doctors kept being hired.
The mindset was: "add doctors, and with more doctors patients won't have to wait long," plus the belief that it would bring the hospital more revenue. Some new doctors arrived for their first day of work to find themselves in a dead-air state, with no desk to work at.
"Do more, get less" begins
Most of this hospital's patients have complex diseases that still demand the skill and time of each specialist, so waiting time didn't improve. Patients "sat waiting for the doctor as long as ever."
Revenue began to fall, patient numbers began to fall, and the hospital tried to counter by raising prices (growing the ticket size), which earned it social media reviews saying "expensive."
What happened was that many patients used this hospital only for diagnosis and then took the results to a government hospital under their coverage, because they couldn't afford the treatment costs. Some held multiple health insurance policies and still faced big out-of-pocket gaps.
Nothing about that breaks the rules: X-ray, CT, MRI and ultrasound results from a private hospital come faster than from a government one. And some patients gladly paid the premium out of trust in this hospital's doctors and wouldn't move, because the doctors had done nothing wrong; there are plenty of excellent doctors there.
Even though the hospital kept its momentum at around 1,100-1,200 patients a day, they were simple diseases such as colds and food poisoning, whose ticket sizes aren't large. That merely kept it out of the red.
But unreasonable pricing made many patients give up and switch hospitals the second they learned the estimated cost.
Fewer patients --> lower revenue --> raise the ticket size per head --> patients flee because it's too expensive
I don't know whether management saw this loop; my guess is they didn't.
As for diseases and surgeries worthy of the hospital's capabilities: "almost none." No mystery why: competitors within a 20-kilometer radius took them all, with lower prices and doctors every bit as good. Some of those doctors used to work at this very hospital and had proposed treatment programs that could earn serious money, but the hospital turned them down. Those doctors moved to the competitors, pushed the projects through, and became known for them.
"The hospital couldn't sell premium products at all; it could only sell commodity-grade ones."
The next strategy was to lengthen the doctors' working hours by making them start 2 hours earlier, without paying overtime, on the logic that longer doctor hours mean more patients. The hospital didn't ask for cooperation; it squeezed doctors' throats: comply, or be dismissed on the spot.
Before long, abrupt dismissals began. Personal letters went out to every doctor; anyone named on the list had to leave immediately.
To me, that signals severe financial trouble. Payroll is a fixed cost a business must carry; a truly capable operator controls spending so the business can go on without shedding people. In a hospital business, doctors are the most important personnel and the last line to be cut to keep the business alive. The hospital has now fully entered the final stage of the law of diminishing returns: "get unlucky, lose money."
The endgame for a hospital like this, strong in capability but wretchedly managed, is to be taken over. Not long after, the stock chart started showing exit-liquidity behavior.
Thoughts I'd like to share with everyone who read this far
- The handover of a business to the next generation is the make-or-break moment that decides whether it survives.
- The law of diminishing returns isn't only for business; it can be applied to many dimensions of life. Understand it, and you move toward Pareto's rule... in short, do little and gain (a whole lot) more.
- Business owners must keep their ears sharp, find the malignant growth gnawing away at the business, and cut it out before the business collapses on top of them.
-
@ 6fc114c7:8f4b1405
2025-05-03 22:06:36In the world of cryptocurrency, the stakes are high, and losing access to your digital assets can be a nightmare. But fear not — Crypt Recver is here to turn that nightmare into a dream come true! With expert-led recovery services and cutting-edge technology, Crypt Recver specializes in helping you regain access to your lost Bitcoin and other cryptocurrencies.
# Why Choose Crypt Recver? 🤔
🔑 Expertise You Can Trust
At Crypt Recver, we combine state-of-the-art technology with skilled engineers who have a proven track record in crypto recovery. Whether you’ve forgotten your passwords, lost your private keys, or dealt with damaged hardware wallets, our team is equipped to help.
⚡ Fast Recovery Process
Time is of the essence when it comes to recovering lost funds. Crypt Recver’s systems are optimized for speed, enabling quick recoveries — so you can get back to what matters most: trading and investing.
🎯 High Success Rate
With over a 90% success rate, our recovery team has helped countless clients regain access to their lost assets. We understand the intricacies of cryptocurrency and are dedicated to providing effective solutions.
🛡️ Confidential & Secure
Your privacy is our priority. All recovery sessions at Crypt Recver are encrypted and kept entirely confidential. You can trust us with your information, knowing that we maintain the highest standards of security.
🔧 Advanced Recovery Tools
We use proprietary tools and techniques to handle complex recovery scenarios, from recovering corrupted wallets to restoring coins from invalid addresses. No matter how challenging the situation, we have a plan.
# Our Recovery Services Include: 📈
- Bitcoin Recovery: Have you lost access to your Bitcoin wallet? We can help you recover lost wallets, private keys, and passphrases.
- Transaction Recovery: Mistaken transfers, lost passwords, or missing transaction records — let us help you reclaim your funds!
- Cold Wallet Restoration: Did your cold wallet fail? We specialize in extracting assets safely and securely.
- Private Key Generation: Forgotten your private key? We can help you generate new keys linked to your funds without compromising security.
Don’t Let Lost Crypto Ruin Your Day! 🕒
With an estimated 3 to 3.4 million BTC lost forever, it’s critical to act swiftly when facing access issues. Whether you’ve been a victim of a dust attack or have simply forgotten your key, Crypt Recver offers the support you need to reclaim your digital assets.
# 🚀 Start Your Recovery Now!
Ready to get your cryptocurrency back? Don’t let uncertainty hold you back!
👉 Request Wallet Recovery Help Today!
# Need Immediate Assistance? 📞
For quick queries or support, connect with us on:
- ✉️ Telegram: Chat with Us on Telegram
- 💬 WhatsApp: Message Us on WhatsApp
Trust Crypt Recver for the Best Crypto Recovery Service — Get back to trading with confidence! 💪
-
@ 8ce092d8:950c24ad
2024-02-04 23:35:07Overview
- Introduction
- Model Types
- Training (Data Collection and Config Settings)
- Probability Viewing: AI Inspector
- Match
- Cheat Sheet
I. Introduction
AI Arena is the first game built around collaboration between human and artificial intelligence.
AI learns your skills through "imitation learning."
Official Resources
1. Official Documentation (Must Read): Everything You Need to Know About AI Arena. Watch the 2-minute video in the documentation to quickly understand the basic flow of the game.
2. Official Play-2-Airdrop competition FAQ site: https://aiarena.notion.site/aiarena/Gateway-to-the-Arena-52145e990925499d95f2fadb18a24ab0
3. Official Discord (Must Join): https://discord.gg/aiarenaplaytest for the latest announcements or to seek help. The team also has an exclusive channel there.
4. Official YouTube: https://www.youtube.com/@aiarena; the game also has built-in tutorials, so you can choose to watch videos instead.
What is this game about?
- Although categorized as a platform fighting game, the core is a probability-based strategy game.
- Warriors take actions based on probabilities on the AI Inspector dashboard, competing against opponents.
- The game does not allow direct manual input of probabilities for each area but inputs information through data collection and establishes models by adjusting parameters.
- Data collection emulates fighting games, but training can be completed using a Dummy. As long as you can complete the in-game tutorial, you can master the game controls.
II. Model Types
Before training, there are three model types to choose from: Simple Model Type, Original Model Type, and Advanced Model Type.
It is recommended to try the Advanced Model Type after completing at least one complete training with the Simple Model Type and gaining some understanding of the game.
Simple Model Type
The Simple Model is akin to completing a form, and the training session is comparable to filling various sections of that form.
This model has 30 buckets. Each bucket can be seen as telling the warrior what action to take in a specific situation. There are 30 buckets, meaning 30 different scenarios. Within the same bucket, the probabilities for direction or action are the same.
For example: What should I do when I'm off-stage — refer to the "Recovery (you off-stage)" bucket.
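To see why a bucket is just a lookup, here is a minimal sketch of a tabular policy in code; the bucket names, actions, and numbers are illustrative assumptions, not the game's internals:

```go
package main

import (
	"fmt"
	"math/rand"
)

// ActionProbs maps an action to its probability; values should sum to 1.
type ActionProbs map[string]float64

// Each bucket (situation) owns its own distribution over actions.
var policy = map[string]ActionProbs{
	"Recovery (you off-stage)": {"jump": 0.7, "special": 0.3},
	"Opponent Active":          {"attack": 0.5, "shield": 0.3, "grab": 0.2},
}

// sample rolls against the cumulative distribution for a bucket.
func sample(bucket string) string {
	r, cum := rand.Float64(), 0.0
	for action, p := range policy[bucket] {
		cum += p
		if r < cum {
			return action
		}
	}
	return "idle" // fallback if the probabilities don't quite sum to 1
}

func main() {
	fmt.Println(sample("Recovery (you off-stage)"))
}
```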
For all buckets, refer to this official documentation:
https://docs.aiarena.io/arenadex/game-mechanics/tabular-model-v2
Video (no sound): The entire training process for all buckets
https://youtu.be/1rfRa3WjWEA
Game version 2024.1.10. The method of saving is outdated. Please refer to the game updates.
Advanced Model Type
The "Original Model Type" and "Advanced Model Type" are based on Machine Learning, which is commonly referred to as combining with AI.
The Original Model Type consists of only one bucket, representing the entire map. If you want the AI to learn different scenarios, you need to choose a "Focus Area" to let the warrior know where to focus. A single bucket means that a slight modification can have a widespread impact on the entire model. This is where the "Advanced Model Type" comes in.
The "Advanced Model Type" can be seen as a combination of the "Original Model Type" and the "Simple Model Type". The Advanced Model Type divides the map into 8 buckets. Each bucket can use many "Focus Area." For a detailed explanation of the 8 buckets and different Focus Areas, please refer to the tutorial page (accessible in the Advanced Model Type, after completing a training session, at the top left of the Advanced Config, click on "Tutorial").
III. Training (Data Collection and Config Settings)
Training Process:
- Collect Data
- Set Parameters, Train, and Save
- Repeat Step 1 until the Model is Complete
Training the Simple Model Type is the easiest to start with; refer to the video above for a detailed process.
Training the Advanced Model Type offers more possibilities through the combination of "Focus Area" parameters, providing a higher upper limit. While the Original Model Type has great potential, it's harder to control. Therefore, this section focuses on the "Advanced Model Type."
1. What Kind of Data to Collect
- High-Quality Data: Collect purposeful data. Garbage in, garbage out. Only collect the necessary data; don't collect randomly. It's recommended to use Dummy to collect data. However, don't pursue perfection; through parameter adjustments, AI has a certain level of fault tolerance.
- Balanced Data: Balance your dataset. In simple terms, if you complete actions on the left side a certain number of times, also complete a similar number on the right side. While data imbalance can be addressed through parameter adjustments (see below), it's advised not to have this issue during data collection.
- Moderate Amount: A single training will include many individual actions. Collect data for each action 1-10 times. Personally, it's recommended to collect data 2-3 times for a single action. If the effect of a single training is not clear, conduct a second (or even third) training with the same content, but with different parameter settings.
2. What to Collect (and Focus Area Selection)
Game actions mimic fighting games, consisting of 4 directions + 6 states (Idle, Jump, Attack, Grab, Special, Shield). Directions can be combined into ↗, ↘, etc. These directions and states can then be combined into different actions.
To make "Focus Area" effective, you need to collect data in training that matches these parameters. For example, for "Distance to Opponent", you need to collect data when close to the opponent and also when far away. * Note: While you can split into multiple training sessions, it's most effective to cover different situations within a single training.
Refer to the Simple Config, categorize the actions you want to collect, and based on the game scenario, classify them into two categories: "Movement" and "Combat."
Movement-Based Actions
Action Collection
When the warrior is offstage, regardless of where the opponent is, we require the warrior to return to the stage to prevent self-destruction.
This involves 3 aerial buckets: 5 (Near Blast Zone), 7 (Under Stage), and 8 (Side Of Stage).
* Note: The background comes from the Tutorial mentioned earlier. The arrows in the image indicate the direction of the action and are for reference only.
* Note: Action collection should be clean; do not collect actions that involve leaving the stage.
Config Settings
In the Simple Config, you can directly choose "Movement". However, for better customization, it's recommended to use the Advanced Config directly.
- Intensity: The method for setting Intensity will be introduced separately later.
- Buckets: As shown in the image, choose the bucket you are training.
- Focus Area: Position-based parameters:
  - Your position (must)
  - Raycast Platform Distance, Raycast Platform Type (optional, generally choose these in Bucket 7)
Combat-Based Actions
The goal is to direct attacks quickly and effectively towards the opponent, which is the core of game strategy.
This involves 5 buckets:
- 2 regular situations:
  - In the air: 6 (Safe Zone)
  - On the ground: 4 (Opponent Active)
- 3 special situations on the ground:
  - 1 Projectile Active
  - 2 Opponent Knockback
  - 3 Opponent Stunned
2 Regular Situations
In the in-game tutorial, we learned how to perform horizontal attacks. However, in the actual game, directions expand to 8 dimensions. Imagine having 8 relative positions available for launching hits against the opponent. Our task is to design what action to use for attack or defense at each relative position.
Focus Area
- Basic (generally select all):
  - Angle to opponent
  - Distance to opponent
  - Discrete Distance: Choosing this option helps better differentiate between closer and farther distances from the opponent. As shown in the image, red indicates a relatively close distance, and green indicates a relatively distant distance.
- Advanced: other commonly used parameters:
  - Direction: different facings to opponent
  - Your Elemental Gauge and Discrete Elementals: considering the special's charge
  - Opponent action: the warrior will react based on the opponent's different actions
  - Your action: your previous action; choose this if teaching combos
3 Special Situations on the Ground
Projectile Active, Opponent Stunned, Opponent Knockback: these three buckets can be referenced in the Simple Model Type video. The parameter settings approach is the same as for Opponent Active/Safe Zone.
For Projectile Active, in addition to the parameters based on combat, to track the projectile, you also need to select "Raycast Projectile Distance" and "Raycast Projectile On Target."
3. Setting "Intensity"
Resources
- The "Tutorial" mentioned earlier explains these parameters.
- Official Config Document (2022.12.24): https://docs.google.com/document/d/1adXwvDHEnrVZ5bUClWQoBQ8ETrSSKgG5q48YrogaFJs/edit
TL;DR:
Epochs:
- Reduce epochs if the model is picking up too much; increase them if learning is insufficient.

Batch Size:
- Set to the minimum (16) if data is precise but unbalanced, or if you just want it to learn fast.
- Increase (e.g., 64) if data is slightly imprecise but balanced.
- If both imprecise and unbalanced, consider retraining.

Learning Rate:
- Maximize (0.01) for more learning, at the risk of forgetting past knowledge.
- Minimize for more accurate learning with less impact on previous knowledge.

Lambda:
- Reduce to prioritize learning new things.

Data Cleaning:
- Enable "Remove Sparsity" unless you want the AI to learn idleness.
- For special cases, like teaching the warrior to use special moves when idle, refer to this tutorial video: https://discord.com/channels/1140682688651612291/1140683283626201098/1195467295913431111

Personal Experience:
- Initial training settings: 125 epochs, batch size 16, learning rate 0.01, lambda 0, data cleaning enabled.
- Prioritize Multistream; sometimes use Oversampling.
- Fine-tune subsequent training runs based on the theories above.
IV. Probability Viewing: AI Inspector
The dashboard consists of "Direction + Action." Above the dashboard, you can see the "Next Action" – the action the warrior will take in its current state. The higher the probability, the more likely the warrior is to perform that action, indicating a quicker reaction. It's essential to note that when checking the Direction, the one with the highest visual representation may not have the highest numerical value. To determine the actual value, hover the mouse over the graphical representation, as shown below, where the highest direction is "Idle."
In the map, you can drag the warrior to view the probabilities of the warrior in different positions. Right-click on the warrior with the mouse to change the warrior's facing. The status bar below can change the warrior's state on the map.
When training the "Opponent Stunned, Opponent Knockback" bucket, you need to select the status below the opponent's status bar. If you are focusing on "Opponent action" in the Focus Zone, choose the action in the opponent's status bar. If you are focusing on "Your action" in the Focus Zone, choose the action in your own status bar. When training the "Projectile Active" Bucket, drag the projectile on the right side of the dashboard to check the status.
Next
The higher the probability, the faster the reaction. However, be cautious when the action probability reaches 100%. This may cause the warrior to be in a special case of "State Transition," resulting in unnecessary "Idle" states.
Explanation: In each state a fighter is in, there are different "possible transitions". For example, from falling state you cannot do low sweep because low sweep requires you to be on the ground. For the shield state, we do not allow you to directly transition to headbutt. So to do headbutt you have to first exit to another state and then do it from there (assuming that state allows you to do headbutt). This is the reason the fighter runs because "run" action is a valid state transition from shield. Source
V. Learn from Matches
After completing all the training, your model is preliminarily finished—congratulations! The warrior will step onto the arena alone and embark on its debut!
Next, we will learn about the strengths and weaknesses of the warrior from battles to continue refining the warrior's model.
In matches, besides appreciating the performance, pay attention to the following:
- Movement, i.e., off the stage: observe how the warrior gets eliminated. Is it due to issues in the action settings at a certain position, or is it a normal death caused by a high percentage? The former is what we need to avoid and optimize.
- Combat: analyze both sides' actions carefully. Observe which actions you and the opponent used in different states. Check which of your hits are less effective, how the opponent handles different actions, etc.
The approach to battle analysis is similar to the thought process in the "Training", helping to have a more comprehensive understanding of the warrior's performance and making targeted improvements.
VI. Cheat Sheet
Training
1. Click "Collect" to collect actions.
2. "Map - Data Limit" is more user-friendly. Most players perform initial training on the "Arena" map.
3. Switch between the warrior and the dummy: Tab key (keyboard) / Home key (controller).
4. Use "Collect" to make the opponent loop a set of actions.
5. Instantly move the warrior to a specific location: click "Settings" - SPAWN - choose the desired location on the map - On. Press the Enter key (keyboard) / Start key (controller) during training.
Inspector
1. Right-click on the fighter to change their direction. Drag the fighter and observe the changes in different positions and directions.
2. When satisfied with the training, click "Save."
3. In "Sparring" and "Simulation," use "Current Working Model."
4. If satisfied with a model, then click "compete." The model used in the rankings is the one marked as "competing."
Sparring / Ranked
1. Use the Throneroom map only for the top 2 or top 10 rankings.
2. There is a 30-second cooldown between matches. Replays are available for every match. Once the battle begins, you can see the winner on the leaderboard or by right-clicking the page - Inspect - Console. Also, if you encounter any errors or bugs, please send screenshots of the console to the Discord server.
Good luck! See you on the arena!
-
@ 62a6a41e:b12acb43
2025-03-04 22:19:29War is rarely, if ever, the will of the people. Throughout history, wars have been orchestrated by political and economic elites, while the media plays a key role in shaping public opinion. World War I is a clear example of how propaganda was used to glorify war, silence dissent, and demonize the enemy.
Today, we see similar tactics being used in the Ukrainian War. The media spreads one-sided narratives, censors alternative views, and manipulates public sentiment. This article argues that wars are decided from the top, and media is used to justify them.
How the Media Glorified and Propagated WW1
The Media Sold War as an Adventure
Before WW1, newspapers and propaganda made war seem noble and exciting. Young men were encouraged to enlist for honor and glory. Posters displayed slogans like “Your Country Needs You”, making war look like a duty rather than a tragedy.
Demonization of the Enemy
Governments and media portrayed Germans as "barbaric Huns," spreading exaggerated stories like the "Rape of Belgium," where German soldiers were accused of horrific war crimes—many later proven false. Today, Russia is painted as purely evil, while NATO’s role and Ukraine’s internal conflicts are ignored.
Social Pressure & Nationalism
Anyone who opposed WW1 was labeled a traitor. Conscientious objectors were shamed, jailed, or even executed. The same happens today—if you question support for Ukraine, you are called "pro-Russian" or "anti-European." In the U.S., opposing war is falsely linked to supporting Trump or extremism.
Fabricated Stories
During WW1, fake reports of German soldiers killing babies were widely spread. In Ukraine, reports of massacres and war crimes often circulate without verification, while Ukrainian war crimes receive little coverage.
How the Media Promotes War Today: The Case of Ukraine
One-Sided Narratives
The media presents Ukraine as a heroic struggle against an evil invader, ignoring the 2014 coup, the Donbas conflict, and NATO expansion. By simplifying the issue, people are discouraged from questioning the full story.
Censorship and Suppression of Dissent
During WW1, anti-war activists were jailed. Today, journalists and commentators questioning NATO’s role face censorship, deplatforming, or cancellation.
Selective Coverage
Media highlights civilian deaths in Ukraine but ignores similar suffering in Yemen, Syria, or Palestine. Coverage depends on political interests, not humanitarian concern.
Glorification of War Efforts
Ukrainian soldiers—even extremist groups—are painted as heroes. Meanwhile, peace negotiations and diplomatic efforts receive little attention.
War is a Top-Down Decision, Not the Will of the People
People Don’t Want Wars
If given a choice, most people would reject war. Examples:
- Before WW1: Many workers and socialists opposed war, but governments ignored them.
- Vietnam War: Protests grew, but the war continued.
- Iraq War (2003): Millions protested, yet the invasion went ahead.
Small Elites Decide War
Wars benefit arms manufacturers, politicians, and corporate interests—not ordinary people. Public opposition is often ignored or crushed.
Manipulation Through Fear
Governments use fear to justify war: “If we don’t act now, it will be too late.” This tactic was used in WW1, the Iraq War, and is used today in Ukraine.
Violence vs. War: A Manufactured Conflict
Violence Happens, But War is Manufactured
Conflicts and disputes are natural, but large-scale war is deliberately planned using propaganda and logistical preparation.
War Requires Justification
If war were natural, why does it need massive media campaigns to convince people to fight? Just like in WW1, today’s wars rely on media narratives to gain support.
The Crimea Referendum: A Case of Ignored Democracy
Crimea’s 2014 Referendum
- Over 90% of Crimeans voted to join Russia in 2014.
- Western governments called it "illegitimate," while similar referendums (like in Kosovo) were accepted.
The Contradiction in Democracy
- If democracy is sacred, why ignore a clear vote in Crimea?
- Other examples: Brexit was resisted, Catalonia’s referendum was shut down, and peace referendums were dismissed when they didn’t fit political interests.
- Democracy is used as a tool when convenient.
The Libertarian Case Against War
The Non-Aggression Principle (NAP)
Libertarianism is fundamentally opposed to war because it violates the Non-Aggression Principle (NAP)—the idea that no person or institution has the right to initiate force against another. War, by its very nature, is the ultimate violation of the NAP, as it involves mass killing, destruction, and theft under the guise of national interest.
War is State Aggression
- Governments wage wars, not individuals. No private citizen would naturally start a conflict with another country.
- The state forces people to fund wars through taxation, violating their economic freedom.
- Conscription, used in many wars, is nothing more than state-sponsored slavery, forcing individuals to fight and die for political goals they may not support.
War Creates Bigger Government
- War expands state power, eroding civil liberties (e.g., WW1's Espionage Act, the Patriot Act after 9/11).
- The military-industrial complex grows richer while taxpayers foot the bill.
- Emergency powers granted during wars rarely get repealed after conflicts end, leaving citizens with fewer freedoms.
Peaceful Trade vs. War
- Libertarians advocate for free trade as a means of cooperation. Countries that trade are less likely to go to war.
- Wars destroy wealth and infrastructure, while peaceful trade increases prosperity for all.
- Many wars have been fought not for defense, but for economic interests, such as securing oil, resources, or geopolitical power.
Who Benefits from War?
- Not the people, who suffer death, destruction, and economic hardship.
- Not small businesses or workers, who bear the burden of inflation and taxes to fund wars.
- Not individual liberty, as war leads to greater state control and surveillance.
- Only the elites, including defense contractors, politicians, and bankers, who profit from war and use it to consolidate power.
Conclusion: The Media’s Role in War is Crucial
Wars don’t happen naturally—they are carefully planned and sold to the public using propaganda, fear, and nationalism.
- WW1 and Ukraine prove that media is key to war-making.
- The media silences peace efforts and glorifies conflict.
- If people truly had a choice, most wars would never happen.
To resist this, we must recognize how we are manipulated and reject the forced narratives that push us toward war.
-
@ 52b4a076:e7fad8bd
2025-05-03 21:54:45Introduction
Me and Fishcake have been working on infrastructure for Noswhere and Nostr.build. Part of this involves processing a large amount of Nostr events for features such as search, analytics, and feeds.
I have recently been developing `nosdex` v3, a newer version of the Noswhere scraper that is designed for maximum performance and fault tolerance using FoundationDB (FDB).
Fishcake has been working on a processing system for Nostr events to use with NB, based off of Cloudflare (CF) Pipelines, which is a relatively new beta product. This evening, we put it all to the test.
First preparations
We set up a new CF Pipelines endpoint, and I implemented a basic importer that took data from the `nosdex` database. This was quite slow, as it did HTTP requests synchronously, but worked as a good smoke test.
Asynchronous indexing
I implemented a high-contention queue system designed for highly parallel indexing operations, built using FDB, that supports:
- Fully customizable batch sizes
- Per-index queues
- Hundreds of parallel consumers
- Automatic retry logic using lease expiration
When the scraper first gets an event, it will process it and eventually write it to the blob store and FDB. Each new event is appended to the event log.
On the indexing side, a `Queuer` will read the event log and batch events (usually 2K-5K events) into one work job. This work job contains:
- A range in the log to index
- Which target this job is intended for
- The size of the job and some other metadata
Each job has an associated leasing state, which is used to handle retries and prioritization, and to ensure no duplication of work.
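To make the shape of this concrete, here is a minimal sketch of what such a job record and its lease could look like; the field names are illustrative assumptions, not the actual `nosdex` types:

```go
package queue

import "time"

// IndexJob is an illustrative sketch of a queued work job.
type IndexJob struct {
	// Range in the event log to index.
	StartVersion uint64
	EndVersion   uint64

	// Which index target (e.g. search, analytics) this job feeds.
	Target string

	// Size of the job and other metadata.
	EventCount int
	CreatedAt  time.Time
}

// Lease tracks retry and prioritization state for a job. A job becomes
// re-leasable once its lease expires, which gives automatic retries
// without duplicating work while a lease is held.
type Lease struct {
	Owner     string    // worker ID currently holding the job
	ExpiresAt time.Time // past this point, another worker may take over
	Attempts  int       // used to deprioritize repeatedly failing jobs
}
```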
Several `Worker`s monitor the index queue (up to 128) and wait for new jobs that are available to lease.
Once a suitable job is found, the worker acquires a lease on the job and reads the relevant events from FDB and the blob store.
Depending on the indexing type, the job will be processed in one of a number of ways, and then marked as completed or returned for retries.
In this case, the event is also forwarded to CF Pipelines.
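Put together, the worker cycle described above could look roughly like the sketch below; all called functions are stand-ins for the real FDB-backed operations, not actual `nosdex` code:

```go
package main

import (
	"fmt"
	"time"
)

type IndexJob struct{ Start, End uint64 }

// Stand-ins for the real FDB-backed queue operations.
func leaseNextJob(worker string) (IndexJob, bool) { return IndexJob{0, 100}, true }
func readEvents(start, end uint64) []string       { return []string{"event"} }
func process(j IndexJob, events []string) error   { return nil }
func release(j IndexJob)                          {}
func complete(j IndexJob)                         { fmt.Println("done:", j) }

// runWorker sketches the lease/read/process/complete cycle.
func runWorker(id string) {
	for {
		job, ok := leaseNextJob(id) // try to acquire a lease on an available job
		if !ok {
			time.Sleep(100 * time.Millisecond) // queue empty: back off briefly
			continue
		}
		events := readEvents(job.Start, job.End) // from FDB and the blob store
		if err := process(job, events); err != nil {
			release(job) // let the lease lapse so the job is retried
			continue
		}
		complete(job) // mark completed; the batch also goes to CF Pipelines
		return        // stop after one job in this sketch
	}
}

func main() { runWorker("worker-1") }
```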
Trying it out
The first attempt did not go well. I found a bug in the high-contention indexer that led to frequent transaction conflicts. This was easily solved by correcting an incorrectly set parameter.
We also found there were other issues in the indexer, such as an insufficient number of threads, and a suspicious decrease in the speed of the `Queuer` during processing of queued jobs.
Along with fixing these issues, I also implemented other optimizations, such as deprioritizing `Worker` DB accesses and increasing the batch size.
To fix the degraded `Queuer` performance, I ran the backfill job by itself, and then started indexing after it had completed.
Bottlenecks, bottlenecks everywhere
After implementing these fixes, there was an interesting problem: The DB couldn't go over 80K reads per second. I had encountered this limit during load testing for the scraper and other FDB benchmarks.
As I suspected, this was a client thread limitation, as one thread seemed to be using high amounts of CPU. To overcome this, I created a new client instance for each `Worker`.
After investigating, I discovered that the Go FoundationDB client cached the database connection. This meant all attempts to create separate DB connections ended up being useless.
Using `OpenWithConnectionString` partially resolved this issue. (This also had benefits for service-discovery based connection configuration.)
To be able to fully support multi-threading, I needed to enable the FDB multi-client feature. Enabling it also allowed easier upgrades across DB versions, as FDB clients are incompatible across versions:
```
FDB_NETWORK_OPTION_EXTERNAL_CLIENT_LIBRARY="/lib/libfdb_c.so"
FDB_NETWORK_OPTION_CLIENT_THREADS_PER_VERSION="16"
```
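A sketch of how the per-worker database handles could then be created; `OpenWithConnectionString` is the call named above, while the connection string value, thread count, and surrounding structure are placeholder assumptions:

```go
package main

import (
	"fmt"
	"log"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

func main() {
	fdb.MustAPIVersion(710) // must match the installed client library

	// With FDB_NETWORK_OPTION_CLIENT_THREADS_PER_VERSION set, handles
	// opened below can be served by separate client threads, avoiding
	// the single-thread ~80K reads/s ceiling described above.
	// The connection string here is a placeholder, not a real cluster.
	const connStr = "docker:docker@127.0.0.1:4500"

	workers := make([]fdb.Database, 0, 16)
	for i := 0; i < 16; i++ {
		db, err := fdb.OpenWithConnectionString(connStr)
		if err != nil {
			log.Fatal(err)
		}
		workers = append(workers, db)
	}
	fmt.Println("opened", len(workers), "database handles")
}
```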
Breaking the 100K/s reads barrier
After implementing support for the multi-threaded client, we were able to get over 100K reads per second.
You may notice after the restart (gap) the performance dropped. This was caused by several bugs:
1. When creating the CF Pipelines endpoint, we did not specify a region. The automatically selected region was far away from the server.
2. The number of shards was not sufficient, so we increased it.
3. The client overloaded a few HTTP/2 connections with too many requests.
I implemented a feature to assign each `Worker` its own HTTP client, fixing the 3rd issue. We also moved the entire storage region to West Europe to be closer to the servers.
After these changes, we were able to easily push over 200K reads/s, mostly limited by missing optimizations:
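A per-worker HTTP client in Go can be as simple as giving each worker its own `http.Client` with its own `Transport` (and therefore its own connection pool); the limits below are illustrative assumptions:

```go
package main

import (
	"net/http"
	"time"
)

// newWorkerClient returns a dedicated HTTP client for one worker, so
// that no two workers multiplex requests over the same HTTP/2
// connections.
func newWorkerClient() *http.Client {
	return &http.Client{
		Timeout: 30 * time.Second,
		Transport: &http.Transport{
			MaxConnsPerHost:     4,
			MaxIdleConnsPerHost: 4,
			IdleConnTimeout:     90 * time.Second,
		},
	}
}

func main() {
	clients := make([]*http.Client, 16)
	for i := range clients {
		clients[i] = newWorkerClient() // one client (and pool) per worker
	}
	_ = clients
}
```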
It's shards all the way down
While testing, we also noticed another issue: at certain times, a pipeline would get overloaded, stalling requests for seconds at a time. This prevented all forward progress on the `Worker`s.
We solved this by having multiple pipelines: a primary pipeline for standard load, with moderate batching duration and fewer shards, and high-throughput pipelines with more shards.
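The assignment itself can be a simple fixed mapping from worker to pipeline, so a stall in one pipeline cannot block every worker at once; the endpoint names here are placeholders:

```go
package main

import "fmt"

// One moderate "primary" pipeline plus high-throughput ones.
var pipelines = []string{
	"https://pipeline-primary.example",          // fewer shards, standard load
	"https://pipeline-highthroughput-1.example", // more shards
	"https://pipeline-highthroughput-2.example", // more shards
}

// pipelineFor gives each worker a fixed pipeline at startup.
func pipelineFor(workerID int) string {
	return pipelines[workerID%len(pipelines)]
}

func main() {
	for w := 0; w < 6; w++ {
		fmt.Printf("worker %d -> %s\n", w, pipelineFor(w))
	}
}
```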
Each `Worker` is assigned a pipeline on startup, and if one pipeline stalls, other workers can continue making progress and saturate the DB.
The stress test
After making sure everything was ready for the import, we cleared all data, and started the import.
The entire import lasted 20 minutes, between 01:44 UTC and 02:04 UTC, reaching a peak of:
- 0.25M requests per second
- 0.6M keys read per second
- 140MB/s reads from DB
- 2Gbps of network throughput
FoundationDB ran smoothly during this test, with:
- Read times under 2ms
- Zero conflicting transactions
- No overloaded servers
CF Pipelines held up well, delivering batches to R2 without any issues, while reaching its maximum possible throughput.
Finishing notes
Me and Fishcake have been building infrastructure around scaling Nostr, from media, to relays, to content indexing. We consistently work on improving scalability, resiliency and stability, even outside these posts.
Many things, including what you see here, are already a part of Nostr.build, Noswhere and NFDB, and many other changes are being implemented every day.
If you like what you are seeing, and want to integrate it, get in touch. :)
If you want to support our work, you can zap this post, or register for nostr.land and nostr.build today.
-
@ b6524158:8e898a89
2025-05-03 18:11:47Steps:
1. Run a node on mynode
2. Upgrade to premium
3. Select your Bitcoin version (Bitcoin Knots)
originally posted at https://stacker.news/items/970504
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16Drivechain
Understanding Drivechain requires a shift from the paradigm most bitcoiners are used to. It is not about "trustlessness" or "mathematical certainty", but game theory and incentives. (Well, Bitcoin in general is also that, but people prefer to ignore it and focus on some illusion of trustlessness provided by mathematics.)
Here we will describe the basic mechanism (simple) and incentives (complex) of "hashrate escrow" and how it enables a 2-way peg between the mainchain (Bitcoin) and various sidechains.
The full concept of "Drivechain" also involves blind merged mining (i.e., the sidechains mine themselves by publishing their block hashes to the mainchain without the miners having to run the sidechain software), but this is much easier to understand and can be accomplished either by the BIP-301 mechanism or by the Spacechains mechanism.
How does hashrate escrow work from the point of view of Bitcoin?
A new address type is created. Anything that goes in that is locked and can only be spent if all miners agree on the Withdrawal Transaction (`WT^`) that will spend it for 6 months. There is one of these special addresses for each sidechain.
To gather miners' agreement, `bitcoind` keeps track of the "score" of all transactions that could possibly spend from that address. On every block mined, for each sidechain, the miner can use a portion of their coinbase to either increase the score of one `WT^` by 1 while decreasing the score of all others by 1; or they can decrease the score of all `WT^`s by 1; or they can do nothing.
Once a transaction has gotten a score high enough, it is published and funds are effectively transferred from the sidechain to the withdrawing users.
If a timeout of 6 months passes and the score doesn't meet the threshold, that `WT^` is discarded.
is discarded.What does the above procedure mean?
It means that people can transfer coins from the mainchain to a sidechain by depositing to the special address. Then they can withdraw from the sidechain by making a special withdraw transaction in the sidechain.
The special transaction somehow freezes funds in the sidechain while a transaction that aggregates all withdrawals into a single mainchain `WT^` is created; this `WT^` is then submitted to the mainchain miners so they can start voting on it, and finally after some months it is published.
Now the crucial part: the validity of the `WT^` is not verified by the Bitcoin mainchain rules. That is, if Bob has requested a withdrawal from the sidechain to his mainchain address, but someone publishes a wrong `WT^` that instead takes Bob's funds and sends them to Alice's mainchain address, there is no way the mainchain will know that. What determines the "validity" of the `WT^` is the miner vote score and only that. It is the job of miners to vote correctly -- and for that they may want to run the sidechain node in SPV mode so they can attest to the existence of a reference to the `WT^` transaction in the sidechain blockchain (which then ensures it is ok), or do these checks by some other means.
transaction in the sidechain blockchain (which then ensures it is ok) or do these checks by some other means.What? 6 months to get my money back?
Yes. But no, in practice anyone who wants their money back will be able to use an atomic swap, submarine swap or other similar service to transfer funds from the sidechain to the mainchain and vice-versa. The long delayed withdraw costs would be incurred by few liquidity providers that would gain some small profit from it.
Why bother with this at all?
Drivechains solve many different problems:
It enables experimentation and new use cases for Bitcoin
Issued assets, fully private transactions, stateful blockchain contracts, Turing-completeness, decentralized games, some "DeFi" aspects, prediction markets, futarchy, decentralized and yet meaningful human-readable names, big blocks with a ton of normal transactions on them, a chain optimized only for Lightning-style networks to be built on top of it.
These are some ideas that may have merit to them, but were never actually tried because they couldn't be tried with real Bitcoin or interfacing with real bitcoins. They were either relegated to the shitcoin territory or to custodial solutions like Liquid or RSK that may have failed to gain network effect because of that.
It solves conflicts and infighting
Some people want fully private transactions in a UTXO model, others want "accounts" they can tie to their name and build reputation on top; some people want simple multisig solutions, others want complex code that reads a ton of variables; some people want to put all the transactions on a global chain in batches every 10 minutes, others want off-chain instant transactions backed by funds previously locked in channels; some want to spend, others want to just hold; some want to use blockchain technology to solve all the problems in the world, others just want to solve money.
With Drivechain-based sidechains all these groups can be happy simultaneously and don't fight. Meanwhile they will all be using the same money and contributing to each other's ecosystem even unwillingly, it's also easy and free for them to change their group affiliation later, which reduces cognitive dissonance.
It solves "scaling"
Multiple chains like the ones described above would certainly do a lot to accommodate many more transactions than the current Bitcoin chain can. One could have special Lightning Network chains, but even just big block chains or big-block-mimblewimble chains or whatnot could probably do a good job. Or even something less cool like 200 independent chains just like Bitcoin is today, no extra features (and you can call it "sharding"), just that would already multiply the current total capacity by 200.
Use your imagination.
It solves the blockchain security budget issue
The calculation is simple: you imagine what security budget is reasonable for each block in a world without block subsidy and divide that by the number of bytes you can fit in a single block: that is the price to be paid in satoshis per byte. In any reasonable estimate, the price necessary for every Bitcoin transaction goes to very large amounts, such that not only does any day-to-day transaction have insanely prohibitive costs, but Lightning channel opens and closes become impracticable as well.
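For a concrete feel, with purely illustrative numbers: if 10 BTC per block (10^9 satoshis) were deemed a reasonable security budget and a block fits 10^6 bytes, fees would have to average 10^9 / 10^6 = 1,000 sat/byte, putting a typical ~250-byte transaction at around 250,000 satoshis -- exactly the kind of prohibitive cost described above.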
So without a solution like Drivechain you'll be left with only one alternative: pushing Bitcoin usage to trusted services like Liquid and RSK or custodial Lightning wallets. With Drivechain, though, there could be thousands of transactions happening in sidechains and being all aggregated into a sidechain block that would then pay a very large fee to be published (via blind merged mining) to the mainchain. Bitcoin security guaranteed.
It keeps Bitcoin decentralized
Once we have sidechains to accommodate the normal transactions, the mainchain functionality can be reduced to be only a "hub" for the sidechains' comings and goings, and then the maximum block size for the mainchain can be reduced to, say, 100kb, which would make running a full node very very easy.
Can miners steal?
Yes. If a group of coordinated miners are able to secure the majority of the hashpower and keep their coordination for 6 months, they can publish a `WT^` that takes the money from the sidechains and pays to themselves.
that takes the money from the sidechains and pays to themselves.Will miners steal?
No, because the incentives are such that they won't.
Although it may look at first that stealing is an obvious strategy for miners as it is free money, there are many costs involved:
- The cost of ceasing blind-merged mining returns -- as stealing will kill a sidechain, all the fees from it that miners would be expected to earn for the next years are gone;
- The cost of Bitcoin price going down: If a steal is successful that will mean Drivechains are not safe, therefore Bitcoin is less useful, and miner credibility will also be hurt, all of which are likely to cause the Bitcoin price to go down, which in turn may kill the miners' businesses and savings;
- The cost of coordination -- assuming miners are just normal businesses, they just want to do their work and get paid, but stealing from a Drivechain will require coordination with other miners to conduct an immoral act in a way that has many pitfalls and is likely to be broken over the months;
- The cost of miners leaving your mining pool: when we talked about "miners" above we were actually talking about mining pool operators, so they must also consider the risk of miners migrating from their mining pool to others as they begin the process of stealing;
- The cost of community goodwill -- when participating in a steal operation, a miner will suffer a ton of backlash from the community. Even if the attempt fails at the end, the fact that it was attempted will contribute to growing concerns over exaggerated miner power over the Bitcoin ecosystem, which may end up causing the community to agree on a hard-fork to change the mining algorithm in the future, or to do something to increase participation of more entities in the mining process (such as the development or cheapening of new ASICs), which would have a chance of decreasing the profits of current miners.
Another point to take into consideration is that one may be inclined to think a newly-created sidechain or a sidechain with relatively low usage may be more easily stolen from, since the blind merged mining returns from it (point 1 above) are going to be small -- but the fact is also that a sidechain with small usage will also have less money to be stolen from, and since the other costs besides 1 are less elastic, at the end it will not be worth stealing from these either.
All of the above considerations are valid only if miners are stealing from good sidechains. If there is a sidechain that is doing things wrong, scamming people, not being used at all, or is full of bugs, for example, that will be perceived as a bad sidechain, and then miners can and will safely steal from it and kill it, which will be perceived as a good thing by everybody.
What do we do if miners steal?
Paul Sztorc has suggested in the past that a user-activated soft-fork could prevent miners from stealing, i.e., most Bitcoin users and nodes issue a rule similar to this one to invalidate the inclusion of a faulty `WT^` and thus cause any miner that includes it in a block to be relegated to their own Bitcoin fork that other nodes won't accept.
and thus cause any miner that includes it in a block to be relegated to their own Bitcoin fork that other nodes won't accept.This suggestion has made people think Drivechain is a sidechain solution backed by user-actived soft-forks for safety, which is very far from the truth. Drivechains must not and will not rely on this kind of soft-fork, although they are possible, as the coordination costs are too high and no one should ever expect these things to happen.
If even with all the incentives against them (see above) miners do still steal from a good sidechain that will mean the failure of the Drivechain experiment. It will very likely also mean the failure of the Bitcoin experiment too, as it will be proven that miners can coordinate to act maliciously over a prolonged period of time regardless of economic and social incentives, meaning they are probably in it just for attacking Bitcoin, backed by nation-states or something else, and therefore no Bitcoin transaction in the mainchain is to be expected to be safe ever again.
Why use this and not a full-blown trustless and open sidechain technology?
Because it is impossible.
If you ever heard someone saying "just use a sidechain", "do this in a sidechain" or anything like that, be aware that these people are either talking about "federated" sidechains (i.e., funds are kept in custody by a group of entities) or they are talking about Drivechain, or they are disillusioned and think it is possible to do sidechains in any other manner.
No, I mean a trustless 2-way peg with correctness of the withdrawals verified by the Bitcoin protocol!
That is not possible unless Bitcoin verifies all transactions that happen in all the sidechains, which would be akin to drastically increasing the blocksize and expanding the Bitcoin rules in tons of ways, i.e., a terrible idea that no one wants.
What about the Blockstream sidechains whitepaper?
Yes, that was a way to do it. The Drivechain hashrate escrow is a conceptually simpler way to achieve the same thing with improved incentives, less junk in the chain, more safety.
Isn't the hashrate escrow a very complex soft-fork?
Yes, but it is much simpler than SegWit. And, unlike SegWit, it doesn't force anything on users, i.e., it isn't a mandatory blocksize increase.
Why should we expect miners to care enough to participate in the voting mechanism?
Because it's in their own self-interest to do it, and it costs very little. Today over half of the miners mine RSK. It's not blind merged mining, it's a very convoluted process that requires them to run a RSK full node. For the Drivechain sidechains, an SPV node would be enough, or maybe just getting data from a block explorer API, so much much simpler.
What if I still don't like Drivechain even after reading this?
That is the entire point! You don't have to like it or use it as long as you're fine with other people using it. The hashrate escrow special addresses will not impact you at all, validation cost is minimal, and you get the benefit of people who want to use Drivechain migrating to their own sidechains and freeing up space for you in the mainchain. See also the point above about infighting.
See also