-
@ 32e18276:5c68e245
2023-07-11 21:23:37
You can use GitHub PRs to submit code, but it is not encouraged. Damus is a decentralized social media protocol, and we prefer to use decentralized techniques during the code submission process.
[Email patches][git-send-email] to patches@damus.io are preferred, but we accept PRs on GitHub as well. Patches sent via email may include a bolt11 lightning invoice for the price you think the patch is worth; we will pay it once the patch is accepted, as long as the price isn't unreasonable. If you prefer not to choose, you can send an any-amount invoice and I will pay what I think it's worth. Include the bolt11 in the commit body or email so that it can be paid once the patch is applied.
Recommended settings when submitting code via email:
```
$ git config sendemail.to "patches@damus.io"
$ git config format.subjectPrefix "PATCH damus"
$ git config format.signOff yes
```
You can subscribe to the [patches mailing list][patches-ml] to help review code.
Submitting patches
Most of this comes from the Linux kernel guidelines for submitting patches; we follow many of the same rules. These are very important! If you want your code to be accepted, please read this carefully.
Describe your problem. Whether your patch is a one-line bug fix or 5000 lines of a new feature, there must be an underlying problem that motivated you to do this work. Convince the reviewer that there is a problem worth fixing and that it makes sense for them to read past the first paragraph.
Once the problem is established, describe what you are actually doing about it in technical detail. It's important to describe the change in plain English for the reviewer to verify that the code is behaving as you intend it to.
The maintainer will thank you if you write your patch description in a form which can be easily pulled into Damus's source code tree.
Solve only one problem per patch. If your description starts to get long, that's a sign that you probably need to split up your patch. See the dedicated "Separate your changes" section below, because this is very important.

When you submit or resubmit a patch or patch series, include the complete patch description and justification for it (the -v2, -v3, ... -vN option on git-send-email). Don't just say that this is version N of the patch (series). Don't expect the reviewer to refer back to earlier patch versions or referenced URLs to find the patch description and put that into the patch. In other words, the patch (series) and its description should be self-contained. This benefits both the maintainers and the reviewers. Some reviewers probably didn't even receive earlier versions of the patch.
Describe your changes in imperative mood, e.g. "make xyzzy do frotz" instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy to do frotz", as if you are giving orders to the codebase to change its behaviour.
If your patch fixes a bug, use the 'Closes:' tag with a URL referencing the report in the mailing list archives or a public bug tracker. For example:
Closes: https://github.com/damus-io/damus/issues/1234
Some bug trackers have the ability to close issues automatically when a commit with such a tag is applied. Some bots monitoring mailing lists can also track such tags and take certain actions. Private bug trackers and invalid URLs are forbidden.
If your patch fixes a bug in a specific commit, e.g. you found an issue using git bisect, please use the 'Fixes:' tag with the first 12 characters of the SHA-1 ID and the one-line summary. Do not split the tag across multiple lines; tags are exempt from the "wrap at 75 columns" rule in order to simplify parsing scripts. For example:

```
Fixes: 54a4f0239f2e ("Fix crash in navigation")
```
The following git config settings can be used to add a pretty format for outputting the above style in the git log or git show commands:

```
[core]
	abbrev = 12
[pretty]
	fixes = Fixes: %h (\"%s\")
```
An example call:

```
$ git log -1 --pretty=fixes 54a4f0239f2e
Fixes: 54a4f0239f2e ("Fix crash in navigation")
```
Separate your changes
Separate each logical change into a separate patch.
For example, if your changes include both bug fixes and performance enhancements for a particular feature, separate those changes into two or more patches. If your changes include an API update, and a new feature which uses that new API, separate those into two patches.
On the other hand, if you make a single change to numerous files, group those changes into a single patch. Thus a single logical change is contained within a single patch.
The point to remember is that each patch should make an easily understood change that can be verified by reviewers. Each patch should be justifiable on its own merits.
If one patch depends on another patch in order for a change to be complete, that is OK. Simply note "this patch depends on patch X" in your patch description.
When dividing your change into a series of patches, take special care to ensure that Damus builds and runs properly after each patch in the series. Developers using git bisect to track down a problem can end up splitting your patch series at any point; they will not thank you if you introduce bugs in the middle.

If you cannot condense your patch set into a smaller set of patches, then only post say 15 or so at a time and wait for review and integration.
-
@ 3f770d65:7a745b24
2023-07-07 17:05:06
Zaps on Nostr are payments that can be sent to other users as a way of tipping, showing appreciation, or providing feedback to content creators. Zaps utilize the Bitcoin Lightning network for payments. These are transmitted over the Lightning network instantly with essentially zero transaction fees.
> Zaps represent the only fundamentally new innovation in social media. Everything else is a distraction. - Jack Dorsey
Zaps were publicly introduced in February 2023 with the release of NIP-57, which defined a new Nostr event type called "Lightning Zaps". A zap request event takes the data from a Lightning invoice, namely the payment amount, payee, and payer, and forms a new event that can be captured by relays and displayed by clients.
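For illustration, here is the rough shape of a zap request event as a sketch loosely based on NIP-57; it is heavily abridged, the id/pubkey/sig fields are omitted, and the tag values below are placeholders rather than real keys or relays.

```shell
# Rough, abridged shape of a NIP-57 zap request event (kind 9734).
# Placeholder values only; a real event is signed and carries more fields.
zap_request='{
  "kind": 9734,
  "content": "Zap!",
  "tags": [
    ["relays", "wss://relay.damus.io"],
    ["amount", "21000"],
    ["p", "<recipient hex pubkey>"]
  ]
}'
printf '%s\n' "$zap_request"
```

The recipient's lightning address server turns this into an invoice, and once the invoice is paid, a zap receipt event is published so clients can display and tally the zap.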
https://nostr.build/p/nb3285.png
Zaps are a great way to show appreciation for content that you consume on Nostr, or to simply tip someone for their time and effort in an exchange of value for value. To send a Zap, tap or click the Lightning bolt icon next to the profile or note that you want to tip. You will then be prompted to enter the amount of satoshis you want to send. Once you have sent the Zap, the recipient is notified and the payment is sent instantly over the Lightning network.
Zaps are better than traditional payment methods:
- They are fast and efficient: payments are transmitted over the Lightning network, so you can be sure your payment will be received instantly.
- They have essentially no transaction fees, which makes them a more cost-effective way to send payments.
- They are easy to use. Sending a zap is as simple as clicking a button; you do not need to create a Lightning invoice or worry about the technical details of the Lightning network.

If you are looking for a way to show appreciation for content on Nostr, or to simply send payments to other users, Zaps are a great option.
Zaps are also a fun and frictionless introduction to Bitcoin and the Lightning network. Since Zaps are integrated with many Lightning wallets, the wallet communicates back to Nostr that the payment was successful, and the Zap amount is then displayed and tallied on the user's note or profile.
The following Bitcoin Lightning Wallets are NIP-57 compatible, meaning that they communicate with Nostr as mentioned above:
Custodial Wallets:
* strike.army
* vida.page
* stacker.news
* Bitcoin Jungle
* ln.tips (LightningTipBot)
* Geyser
* Bitcoin Beach
* Current (Client+Wallet)
* Wallet of Satoshi
* Zebedee
* Alby
* AnonSats
* Strike
Self Custodial Solutions:
* BTCPay Server
* nostdress
Value for value is a relatively new way of thinking about value on the Internet. It is based on the idea that value should be exchanged directly between users, without the need for a third party. This makes it more efficient, more secure, more privacy-preserving, and directly provides value to content creators.
Nostr is one of the first protocols to implement value for value on a large scale. It is already being used by thousands of users to send payments and provide feedback for a variety of purposes. As Nostr and the Lightning network continue to grow, value for value is likely to become even more popular.
Nostr just passed an incredible milestone, surpassing one million Zaps sent in roughly three months' time. That statistic is fairly incredible, given that the concept of value for value is still not fully understood by the general public, and Nostr uses Bitcoin's Lightning network as its payment network, which isn't yet widely adopted.
https://nostr.build/i/b9def1af659f1c16b08ca5af31b4877232493338d5b3220943b3d0c96b83533b.jpg
One of the most important concepts about Zaps is that Zaps are better than "Likes". Social media and social networking platforms have taught us over the past two decades that Likes are a way to indicate the value of a piece of content. However, these Likes have no value. Likes are hollow. By replacing Likes with Zaps, we are transforming a meaningless action into an action that has true value, monetary value.
https://nostr.build/p/nb5963.gif
We're building the value for value economy on Nostr, one Zap at a time.
-
@ b9e76546:612023dc
2023-06-07 22:12:51
#Nostr isn't just a social network, that's simply the first use case to sprout from the Nostr tree.
Simple Blocks, Complex Change
Nostr isn't just a social network, in a similar way that Bitcoin isn't just a transaction network. Both of these things are true, but they each miss the more significant elements of what they accomplish.
In my mind, the source of Nostr's true potential is twofold: first, it fundamentally changes the centralized server model into an open environment of redundant relays; and second, it eliminates the association of clients with their IP address and metadata, replacing it with identification via public keys. Within this one-two punch lie the most important tools necessary to actually rearchitect all of the major services on the internet, not just social media. Social is simply the interface by which we will connect all of them.
The combination of this simple data & ID management protocol with decentralized money in #Bitcoin and #Lightning as a global payments network, enables nostr to build marketplaces, "websites," podcast feeds, publishing of articles/video/media of all kinds, auction networks, tipping and crowdfunding applications, note taking, data backups, global bookmarks, decentralized exchanges and betting networks, browser or app profiles that follow you wherever you go, and tons more - except these can be built without all of the negative consequences of being hosted and controlled by central servers.
It separates both the data and the client identity from the server hosting it, handing ownership back to the owner of the keys. We could think of it loosely as permissionless server federations (though this isn't entirely accurate, it's a useful picture imo). Anyone can host, anyone can join, and the data is agnostic to the computer it sits on at any given time. The walls are falling away.
Efficiency vs Robustness
There is also a major secondary problem solved by these building blocks. A byproduct of solving censorship is creating robustness, both in data integrity, but also in data continuity. While the fiat world is so foolishly focused on "efficiency" as the optimal goal of all interaction, it naively ignores the incredible fragility that comes with it. It is far more "efficient" for one big factory to produce all of the computer chips in the world. Why build redundant manufacturing to produce the same thing when one factory can do it just fine? Yet anyone who thinks for more than a few seconds about this can see just how vulnerable it would leave us, as well as how much corruption such "efficiency" would wind up enabling.
Nostr is not alone either. Holepunch is a purely P2P model (rather than one based on relays) that accomplishes the same separation in a different way, one where the clients and servers become one and the same: everyone is the host. It's essentially a BitTorrent-like protocol that removes the limitation of the data being static. The combination of their trade-offs and what these protocols can do together is practically limitless. While Nostr begins building its social network, combining it with what Synonym is building with their Web of Trust (the critical ingredient of which is public key identification), we can "weigh" information by the trust of our social graph.
Not too long ago, a friend and I used Nostr to verify who we were communicating with: we shared a Keet (built on Holepunch) room key over encrypted Nostr DM, and opened a P2P, encrypted chat room where we could troubleshoot a bitcoin wallet problem and safely and privately share very sensitive data. The casual ease with which we made this transaction, enabled by these tools, had us both pause in awe of just how powerful they could be for the privacy and security of all communication. And this is just the very beginning. The glue of #Lightning and #Bitcoin making possible the direct monetization of the infrastructure in all of the above has me more bullish on the re-architecting of the internet than ever in my life. It cannot be reasonably called an insignificant change in incentives to remove both the advertiser and the centralized payment processor from between the provider and the customer online. The base plumbing of the internet itself may very well be on the verge of the greatest shift it has ever gone through.
A Tale of Two Network Effects
I would argue the most significant historical shift in the internet architecture was the rise of social media. It was when we discovered the internet was about connecting people rather than computers. The social environment quickly became the dominant window by which the average person looked into the web. It's the place where we go to be connected to others, and get a perspective of the world and a filter for its avalanche of information as seen through our trust networks and social circles. But consider how incredibly neutered the experience really is when money isn't allowed to flow freely in this environment, and how much it actually would flow, if not for both centralized payment processors and the horrible KYC and regulatory hurdle it entails for large, centralized entities.
The first time around we failed to accomplish a global, open protocol for user identity, and because of this our social connections were owned by the server on which we made them. They owned our digital social graph, without them, it is erased. This is an incredible power. The pressures of the network effect to find people, rather than websites, took a decentralized, open internet protocol, and re-centralized it into silos controlled by barely a few major corporations. The inevitable abuse of this immense social power for political gain is so blatantly obvious in retrospect that it's almost comical.
But there is a kind of beautiful irony here - the flip side of the network effect's negative feedback that centralized us into social media silos, is the exact same effect that could present an even greater force in pushing us back toward decentralization. When our notes & highlights have the same social graph as our social media, our "instagram" has the same network as our "twitter," our podcasts reach the same audience, our video publishing has the same reach, our marketplace is built in, our reputation carries with us to every application, our app profiles are encrypted and can't be spied on, our data hosting can be paid directly with zaps, our event tickets can be permanently available, our history, our personal Ai, practically anything. And every bit of it is either encrypted or public by our sole discretion, and is paid for in a global, open market of hosts competing to provide these services for the fewest sats possible. (Case in point, I'm paying sats for premium relays, and I'm paying a simple monthly fee to nostr.build for hosting media)
All of this without having to keep up with 1,000 different fucking accounts and passwords for every single, arbitrarily different utility under the sun. Without having to set up another account to try another service offering a slightly different thing, or even just one feature you want to explore. Where the "confirm with your email" bullshit is finally relegated to the hack-job security duct tape that it really is. The frustrating and post-hoc security design that is so common on the internet could finally become a thing of the past; instead, just one or a few secure cryptographic keys give us access and control over our digital lives.
The same network effect that centralized the internet around social media, will be the force that could decentralize it again. When ALL of these social use cases and connections compound on each other's network effect, rather than compete with each other, what centralized silo in the world can win against that?
This is not to dismiss the number of times others have tried to build similar systems, or that it's even close to the first time it was attempted to put cryptographic keys at the heart of internet communications. Which brings me to the most important piece of this little puzzle... it actually works!
I obviously don't know exactly how this will play out, and I don't know what becomes dominant in any particular area, how relays will evolve, or what applications will lean toward the relay model, while others may lean P2P, and still others may remain client/server. But I do think the next decade will experience a shift in the internet significant enough that the words "relay" and "peer" may very well, with a little hope and lot of work, replace the word "server" in the lexicon of the internet.
The tools are here, the network is proving itself, the applications are coming, the builders are building, and nostr, holepunch, bitcoin and their like are each, slowly but surely, taking over a new part of my digital life every week. Case in point; I'm publishing this short article on blogstack.io, it will travel across all of nostr, I'm accepting zaps with my LNURL, it is available on numerous sites that aggregate Kind:30023 articles, my entire social graph will get it in their feed, & there will be a plethora of different clients/apps/websites/etc through which the users will see this note, each with their own features and designs...
Seriously, why the fuck would I bother starting a Substack and beg people for their emails?
This is only the beginning, and I'm fully here for it. I came for the notes and the plebs, but it's the "Other Stuff" that will change the world.
-
@ cc8d072e:a6a026cb
2023-06-04 13:15:43
Welcome to Nostr

Here are a few steps to make your Nostr journey smoother.

This guide is available in:
* English (original, by nostr:npub10awzknjg5r5lajnr53438ndcyjylgqsrnrtq5grs495v42qc6awsj45ys7)
* French (thanks to nostr:npub1nftkhktqglvcsj5n4wetkpzxpy4e5x78wwj9y9p70ar9u5u8wh6qsxmzqs)
* Russian

Hello, fellow Nostrich!

Nostr is a brand new paradigm, and there are a few steps that will make your onboarding smoother and your experience much richer.
👋 Welcome

Since you are reading this, it's safe to assume you have already joined Nostr by downloading an app. You are probably using a mobile client (e.g. Damus, Amethyst, Plebstr) or a Nostr web client (e.g. snort.social, Nostrgram, Iris). It's important for a newcomer to follow the steps suggested by the app of your choice: the welcome procedure covers all the basics, and you shouldn't need any extra tweaking unless you really want it. If you've stumbled upon this post but don't have a Nostr "account" yet, you can follow this simple step-by-step guide by nostr:npub1cly0v30agkcfq40mdsndzjrn0tt76ykaan0q6ny80wy034qedpjsqwamhz.

🤙 Have fun

Nostr was built to ensure people can connect, be heard, and have fun along the way. That's the whole point (obviously, there are plenty of serious use cases, such as being a tool for freedom fighters and whistleblowers, but that deserves a separate article), so if anything feels burdensome, please reach out to more experienced Nostriches, and we will be happy to help. Interacting with Nostr is not hard at all, but it has a few quirks compared to traditional platforms, so you are fully allowed (and encouraged) to ask questions. Here is an unofficial list of Nostr ambassadors who will be happy to help you onboard: nostr:naddr1qqg5ummnw3ezqstdvfshxumpv3hhyuczypl4c26wfzswnlk2vwjxky7dhqjgnaqzqwvdvz3qwz5k3j4grrt46qcyqqq82vgwv96yu
All the Nostriches on the list have been awarded the Nostr Ambassador badge, which makes them easy to find, verify, and follow.
## ⚡️ Enable Zaps

Zaps are one of the first differences people notice after joining Nostr. They allow Nostr users to instantly send value and support the creation of useful and fun content. This is made possible by Bitcoin and the Lightning Network, decentralized payment protocols that let you instantly send sats (the smallest unit of the Bitcoin network) as easily as liking someone's post on a traditional social media platform. We call this model Value-4-Value, and you can find more on this ultimate monetization model here: https://dergigi.com/value/

Check out this note by nostr:npub18ams6ewn5aj2n3wt2qawzglx9mr4nzksxhvrdc4gzrecw7n5tvjqctp424, which is a great introduction to zaps: nostr:note154j3vn6eqaz43va0v99fclhkdp8xf0c7l07ye9aapgl29a6dusfslg8g7g

Even if you don't consider yourself a content creator, you should enable Zaps: people will find some of your notes valuable and may want to send you some sats. The easiest way to start receiving value on Nostr takes only a few steps:

0. Download Wallet of Satoshi^1 for your mobile device (probably the best choice for Bitcoin and Lightning newcomers)^2
1. Tap "Receive"
2. Tap the Lightning address you see on the screen (the string that looks like an email address) to copy it to your clipboard.
3. Paste the copied address into the corresponding field in your Nostr client (the field may say "Bitcoin Lightning address", "LN address", or anything similar, depending on the app you use).
📫 Get a Nostr address

A Nostr address, often called a "NIP-05 identifier" by Nostr OGs, looks like an email address and:
🔍 helps make your account easy to discover and share
✔️ proves that you are a human
---
Here is an example of a Nostr address: Tony@nostr.21ideas.org

It's easy to memorize and can later be pasted into any Nostr app to find the corresponding user.

To get a Nostr address, you can use a free service such as Nostr Check (by nostr:npub138s5hey76qrnm2pmv7p8nnffhfddsm8sqzm285dyc0wy4f8a6qkqtzx624) or a paid one such as Nostr Plebs to learn more about this approach.
🙇‍♀️ Learn the basics

Under the hood, Nostr is very different from traditional social platforms, so a basic understanding of what it's about will benefit any newcomer. Don't get me wrong, I'm not suggesting you learn a programming language or the technical details of the protocol. I mean that seeing the bigger picture and understanding the differences between Nostr and Twitter / Medium / Reddit will help a lot. For example, there are no passwords or logins; instead, you have private and public keys. I won't go deep into this, because there are exhaustive resources that will help you wrap your head around Nostr. All the ones worth your attention are collected on this neatly organized landing page, prepared by nostr:npub12gu8c6uee3p243gez6cgk76362admlqe72aq3kp2fppjsjwmm7eqj9fle6 with 💜.

The information in those resources will also help you secure your Nostr keys (i.e., your account), so make sure to have a look.
🤝 Build connections

The ability to connect with brilliant[^3] people is what makes Nostr special. Here everyone can be heard and no one is excluded. There are a few simple ways to find interesting people on Nostr:
* Find the people you follow on Twitter: https://www.nostr.directory/ is a great tool for that.
* Follow people followed by people you trust: visit the profile of someone who shares your interests, check the list of people they follow, and connect with them.
* Visit the global feed: every Nostr client (a Nostr app, if you will) has a tab that lets you switch to the global feed, which aggregates all notes from all Nostr users. Simply follow the people you find interesting (be patient, though: you may come across a fair amount of spam).
🗺️ Explore

The 5 steps above are a great start that will hugely improve your experience, but there's so much more to discover and enjoy! Nostr is not a Twitter replacement; its possibilities are limited only by imagination.

Check out this list of fun and useful Nostr projects:
* https://nostrapps.com/ – a list of Nostr apps
* https://nostrplebs.com/ – get your NIP-05 and other Nostr perks (paid)
* https://nostrcheck.me/ – Nostr addresses, media uploads, relay
* https://nostr.build/ – upload and manage media (and more)
* https://nostr.band/ – Nostr network and user info
* https://zaplife.lol/ – zapping statistics
* https://nostrit.com/ – schedule posts
* https://nostrnests.com/ – Twitter Spaces 2.0
* https://nostryfied.online/ – back up your Nostr info
* https://www.wavman.app/ – a Nostr music player
📻 Relays

Once you get comfortable with Nostr, make sure to check out my quick guide on Nostr relays: https://lnshort.it/nostr-relays. This is not a topic to worry about at the start of your journey, but it is definitely important to dive into later on.

📱 Nostr on mobile

A smooth Nostr experience on mobile devices is feasible. This guide will help you seamlessly log in, post, zap, and more inside Nostr web applications on your smartphone: https://lnshort.it/nostr-mobile
Thanks for reading, and see you on the other side of the rabbit hole.
nostr:npub10awzknjg5r5lajnr53438ndcyjylgqsrnrtq5grs495v42qc6awsj45ys7

Found this post valuable? Zap ⚡ 21ideas@getalby.com
Follow: npub10awzknjg5r5lajnr53438ndcyjylgqsrnrtq5grs495v42qc6awsj45ys7

Check out my project: https://bitcal.21ideas.org/about/

[^3]: nostr:npub1fl7pr0azlpgk469u034lsgn46dvwguz9g339p03dpetp9cs5pq5qxzeknp is one such Nostrich; he designed the logo used on the cover of this guide.

Translator: Sherry. Data science | software engineering | nossence | nostr.hk | organized some nostr meetups | writes articles to bring Nostr to everyone.

Zap ⚡ spang@getalby.com
Follow: npub1ejxswthae3nkljavznmv66p9ahp4wmj4adux525htmsrff4qym9sz2t3tv
-
@ 32e18276:5c68e245
2023-06-01 04:17:00
Double-entry accounting is a tried and true method for tracking the flow of money using a principle from physics: the conservation of energy. If we account for all the inflows and outflows of money, then we know that we can build an accurate picture of all of the money we've made and spent.
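The conservation idea can be shown in miniature. This toy sketch (not from the article) posts equal and opposite amounts to two accounts for every event, so the total across all accounts always sums to zero:

```shell
# Double-entry in miniature: each transaction credits one account and debits
# another by the same amount, so the sum over all accounts stays at zero.
assets=0; income=0; expenses=0

# receive a 1000 msat invoice payment
assets=$((assets + 1000)); income=$((income - 1000))

# pay a 250 msat routing fee
assets=$((assets - 250)); expenses=$((expenses + 250))

total=$((assets + income + expenses))
echo "assets=$assets income=$income expenses=$expenses total=$total"
```

If the total ever drifts from zero, some flow was missed, which is exactly the property the reports below rely on.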
Bitcoin is particularly good at accounting in this sense, since transaction inflows and outflows are checked by code, with the latest state of the ledger stored in the UTXO set.
What about lightning? Every transaction is not stored on the blockchain, so we need some way to account for all the incoming and outgoing lightning transactions. Luckily for us, core-lightning (CLN) comes with a plugin that describes these transactions in detail!
For every transaction, CLN stores the amount credited and debited from your node: routed payments, invoices, etc. To access this, you just need to run the lightning-cli bkpr-listaccountevents command:

```
lightning-cli bkpr-listaccountevents | jq -cr '.events[] | [.type,.tag,.credit_msat,.debit_msat,.timestamp,.description] | @tsv' > events.txt
```
This will save a tab-separated file with some basic information about each credit and debit event on your node.
```
channel  invoice  232000000  0        1662187126  Havana
channel  invoice  2050000    0        1662242391  coinos voucher
channel  invoice  0          1002203  1662463949  lightningpicturebot
channel  invoice  300000     0        1663110636  [["text/plain","jb55's lightning address"],["text/identifier","jb55@sendsats.lol"]]
channel  invoice  0          102626   1663483583  Mile high lightning club
```
Now here comes the cool part: we can take this data and build a ledger-cli file. ledger is a very powerful command-line accounting tool built on a plaintext transaction format. Using the tab-separated file we got from CLN, we can build a ledger file with a chart of accounts that we can use for detailed reporting. To do this, I wrote a script for converting bkpr reports to ledger: http://git.jb55.com/cln-ledger
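To give a rough idea of what such a conversion does, here is a simplified sketch (not the actual cln-ledger script): it turns one TSV line into a two-posting entry, skipping date formatting and the full chart of accounts.

```shell
# Simplified sketch: turn one bkpr TSV line (type, tag, credit_msat, debit_msat,
# timestamp, description) into a two-posting ledger entry. The real script also
# formats dates and maps tags onto a proper chart of accounts.
sample=$(printf 'channel\tinvoice\t232000000\t0\t1662187126\tHavana')
entry=$(printf '%s\n' "$sample" | awk -F'\t' '{
  net = $3 - $4                                  # net credit in msat
  printf "%s %s\n", $5, ($6 != "" ? $6 : $2)     # epoch timestamp + description
  printf "    assets:cln    %d msat\n", net
  printf "    income:%s    %d msat\n", $2, -net  # balancing posting
}')
printf '%s\n' "$entry"
```

The two postings sum to zero, which is what lets ledger verify that every msat is accounted for.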
The ledger file looks like so:
```
2023-05-31 f10074c748917a2ecd8c5ffb5c3067114e2677fa6152d5b5fd89c0aec7fd81c5
    expenses:zap:1971                 1971000 msat
    assets:cln                       -1971000 msat

2023-05-31 damus donations
    income:lnurl:damus@sendsats.lol   -111000 msat
    assets:cln                         111000 msat

2023-05-31 Zap
    income:zap:event:f8dd1e7eafa18add4aa8ff78c63f17bdb2fab3ade44f8980f094bdf3fb72d512  -10000000 msat
    assets:cln                       10000000 msat
```
Each transaction has multiple postings which track the flow of money from one account to another. Once we have this file we can quickly build reports:
Balance report
Here's the command for "account balance report since 2023-05 in CAD"
```
$ ledger -b 2023-05-01 -S amount -X CAD -f cln.ledger bal
             CAD5290  assets:cln
             CAD2202  expenses
              CAD525    routed
             CAD1677    unknown
            CAD-7492  income
             CAD-587    unknown
             CAD-526    routed
            CAD-1515    lnurl
             CAD-614      jb55@sendsats.lol
               CAD-1      tipjar
             CAD-537      damus@sendsats.lol
             CAD-364      gpt3@sendsats.lol
            CAD-4012    merch
            CAD-2571      tshirt
            CAD-1441      hat
             CAD-852    zap
             CAD-847      event
              CAD-66        30e763a1206774753da01ba4ce95852a37841e1a1777076ba82e068f6730b75d
              CAD-60        f9cda1d7b6792e5320a52909dcd98d20e7f95003de7a813fa18aa8c43ea66710
              CAD-49        5ae0087aa6245365a6d357befa9a59b587c01cf30bd8580cd4f79dc67fc30aef
              CAD-43        a4d44469dd3db920257e0bca0b6ee063dfbf6622514a55e2d222f321744a2a0e
                 ...
--------------------
                   0
```
As we can see it shows a breakdown of all the sats we've earned (in this case converted to fiat). We can have a higher-level summary using the depth argument:
```
$ ledger -M -S amount -X sat -f cln.ledger bal
         sat14694904  assets:cln
          sat6116712  expenses
          sat1457926    routed
          sat4658786    unknown
        sat-20811616  income
         sat-1630529    unknown
         sat-1461610    routed
         sat-4207647    lnurl
        sat-11144666    merch
         sat-2367164    zap
--------------------
                   0
```
As we can see we made 14 million sats this month, not bad! The number at the bottom balances to zero which means we've properly accounted for all income and expenses.
Daily Damus Donation Earnings
To support damus, some users have turned on a feature that sends zaps to support damus development. This simply sends a payment to the damus@sendsats.lol lightning address. Since we record these we can build a daily report of damus donations:
```
$ ledger -D -V -f cln.ledger reg damus
23-May-15 - 23-May-15   ..damus@sendsats.lol   CAD-46   CAD-46
23-May-16 - 23-May-16   ..damus@sendsats.lol   CAD-73  CAD-120
23-May-17 - 23-May-17   ..damus@sendsats.lol   CAD-41  CAD-161
23-May-18 - 23-May-18   ..damus@sendsats.lol   CAD-37  CAD-197
23-May-19 - 23-May-19   ..damus@sendsats.lol   CAD-35  CAD-233
23-May-20 - 23-May-20   ..damus@sendsats.lol   CAD-28  CAD-261
23-May-21 - 23-May-21   ..damus@sendsats.lol   CAD-19  CAD-280
23-May-22 - 23-May-22   ..damus@sendsats.lol   CAD-29  CAD-309
23-May-23 - 23-May-23   ..damus@sendsats.lol   CAD-19  CAD-328
23-May-24 - 23-May-24   ..damus@sendsats.lol   CAD-25  CAD-353
23-May-25 - 23-May-25   ..damus@sendsats.lol   CAD-36  CAD-390
23-May-26 - 23-May-26   ..damus@sendsats.lol   CAD-37  CAD-426
23-May-27 - 23-May-27   ..damus@sendsats.lol   CAD-25  CAD-451
23-May-28 - 23-May-28   ..damus@sendsats.lol   CAD-25  CAD-476
23-May-29 - 23-May-29   ..damus@sendsats.lol   CAD-12  CAD-488
23-May-30 - 23-May-30   ..damus@sendsats.lol   CAD-29  CAD-517
23-May-31 - 23-May-31   ..damus@sendsats.lol   CAD-21  CAD-537
```
Not making bank or anything but this covered the relay server costs this month!
Hopefully y'all found this useful; feel free to fork the script and try it out!
-
@ b9e76546:612023dc
The real power of AI will be in its integration of other tools to use in specific situations, and in recognizing what those tools are. There will be extremely specific and curated AI models (or just basic software circuits) for certain topics, tasks, or concepts. This will also be a crucial way in which we keep AI safe and give it an understanding of its own actions; in other words, how we prevent it from going insane. I've recently experienced a tiny microcosm of what that might look like...
— ie. a general language model that knows to call on the conceptual math language model, that then makes sense of the question and knows to input it into the calculator app for explicit calculations when solving complex or tricky word problems. And then to apply this in the realm of safety and morals, a specific model that an AI calls on for understanding the consequences and principles of any actions it takes in the real world.
I believe there needs to be an AI "Constitution" (a particular term I heard used to describe it) where there is a specific set of ideas and actions it is enabled to perform, and a particular set of "moral weights" it must assess before taking action. Anyone who's read Asimov will recognize this as "The Three Laws" and that's basically what it would be. This is critical because an AI running an actual humanoid machine like Boston Dynamics could go ape shit literally because it is emulating trolling someone by doing the opposite of what they asked -- I just had a LLM troll me yesterday & go a bit haywire when i asked it to be concise, and every answer afterward was then the wordiest and longest bunch of nonsense imaginable... it was funny, but also slightly sobering to think how these things could go wrong when controlling something in the real world. Now imagine a robot that thinks Kick Ass is a funny movie and starts emulating its behavior thinking it's being funny because it has no model to assess the importance of the humans whose skulls it's smashing and thinks the more blood it can splatter everywhere makes it a more comical experience for those in the room. That's essentially the "real world robot" version of asking a LLM to be concise and instead getting an avalanche of BS. Ask a robot to be funny and maybe it crushes your skull.
Because of that, I think there will be certain "anchors" or particular "circuits" for these LLMs to be constrained by for certain things. Essentially action specific built governors that add meaning to the actions and things they are doing. A very simple version mentioned above would be a calculator. If you ask an LLM right now to do basic math, it screws up all the time. It has no idea how to generate a true answer. It just predicts what an answer might sound like. So even extremely simple and common sense requests turn up idiotic answers sometimes. But if it can recognize that you are asking a math problem, find the relevant mathematical elements, and then call on the hardcoded & built-in calculator circuit, then the LLM isn't doing the calculation, it's simply the interface between the calculation tool and the human interaction.
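As a toy sketch of the idea (hypothetical routing logic, not any real system): a front-end that recognizes arithmetic and hands it to an exact calculator, instead of letting the language model guess the answer.

```shell
# Toy sketch of a "calculator circuit": requests that look like arithmetic are
# routed to an exact evaluator; everything else would fall through to the LLM.
route_request() {
  case "$1" in
    *[0-9]*[+*/-]*[0-9]*)        # crude arithmetic detector
      echo $(( $1 )) ;;          # exact answer from the shell's arithmetic
    *)
      echo "LLM: $1" ;;          # placeholder for a language-model response
  esac
}

route_request "17 * 3"           # routed to the calculator
route_request "what is a zap?"   # falls through to the model
```

The LLM is only the interface here; the hardcoded circuit produces the answer, which is the property that makes it trustworthy.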
What I think will be critical as we integrate these into real world machines over time, and as their capabilities become more generalized and layered, will be to build in a sort of moral constitution that behaves like a concrete engine (a calculator), that has the model recognize when something might be a questionable behavior or cause an undesirable outcome, and then call on the "constitution" to make the decision to act or not. In that way, it may actually prevent itself from doing something stupid or terrible that even a human hadn't realized the full consequences of. — ie. it won't help a child get to the roof of his building so he can fly off the side with his cardboard wings.
It will be very interesting to watch these come about, because the failure modes of AI will be critically important to consider, and unfortunately, from an engineering and cultural standpoint, "failure modes" have been underrepresented and increasingly ignored. A simple example is a modern washing machine: when something entirely arbitrary or some silly little feature breaks, the whole thing is useless and you have to bring a technician out to fix it, when a sensible failure mode would be to simply route around whatever arbitrary feature failed and continue working normally. This, unfortunately, has become the norm for tons of "modern" devices and appliances. They are increasingly "advanced" and "stupid" at the same time. It's largely a product of the high time preference mindset, and we need MUCH more low time preference consideration as we unleash AI onto the world. It will matter exponentially more when we start making machines that can operate autonomously, maintain themselves, and learn through their own interactions and environment... and we aren't very far away.
Learn as fast as you can, understand the tools, and stay safe.
grownostr #AI_Unchained
(my very first post on BlogStack.io)
-
@ 52b4a076:e7fad8bd
2023-05-01 19:37:20

What is NIP-05 really?
If you look at the spec, it's a way to map Nostr public keys to DNS-based internet identifiers, such as name@example.com.
If you look at Nostr Plebs:
It's a human readable identifier for your public key. It makes finding your profile on Nostr easier. It makes identifying your account easier.
If you look at basically any client, you see a checkmark, which you assume means verification.
If you ask someone, they probably will call it verification.
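For the record, the spec itself involves no authority at all: a client fetches a well-known JSON file from the identifier's domain and compares the returned key with the one in the user's profile. A minimal sketch of that check (the name and key below are hypothetical example values; a real client would also cache results and handle errors):

```python
import json  # a real client would fetch the URL with any HTTP library

def nip05_url(identifier: str) -> str:
    """Build the well-known URL a client must fetch for an identifier."""
    name, domain = identifier.split("@")
    return f"https://{domain}/.well-known/nostr.json?name={name}"

def nip05_matches(doc: dict, identifier: str, claimed_pubkey: str) -> bool:
    """Check a fetched nostr.json document against the profile's key.
    All this proves is that the domain owner vouches for the key."""
    name, _ = identifier.split("@")
    return doc.get("names", {}).get(name) == claimed_pubkey

# Hypothetical response from https://example.com/.well-known/nostr.json?name=bob
doc = {"names": {"bob": "b0635d6a9851d3aed0cd6c495b282167acf761729078d975fc341b22650b07b9"}}
print(nip05_matches(doc, "bob@example.com", doc["names"]["bob"]))  # True
```

Nothing in that flow verifies identity; it only ties a key to a domain, which is exactly the "proof of association" framing argued for below in this article.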
How did we get here?
Initially, there was only one client, which was (kind of) the reference implementation: Branle.
When it added support for NIP-05 identifiers, it used to replace the display name with the NIP-05 identifier, and it had to distinguish a NIP-05 from someone setting their display name to a NIP-05. So they added a checkmark...
Then there was astral.ninja and Damus: The former was a fork of Branle, and therefore inherited the checkmark. Damus didn't implement NIP-05 until a while later, and they added a checkmark because Astral and other clients were doing it.
And then came new clients, all copying what the previous ones did... (Snort originally did not have a checkmark, but that changed later.)
The first NIP-05 provider
Long story short, people were wondering what NIP-05 is and wanted it, and that's how Nostr Plebs came to be.
They initially called their service verification. Somewhere between January and February, they removed all mentions of verification except one (because people were searching for it), and publicly said that NIP-05 is not verification. But that didn't work.
Then came the new NIP-05 providers. Some understood perfectly what a NIP-05 identifier is and applied the correct nomenclature; others misnamed it as verification, adding to users' confusion. This made the problem worse, on top of the popular clients showing checkmarks.
(from this point in the article we'll refer to it as a Nostr address)
And so, the scams begin
Spammers and scammers started to abuse Nostr addresses to scam people:
- Some providers have been used by fake crypto airdrop bots.
- A few Nostr address providers have terminated a multitude of impersonating and scam identifiers over the past weeks.
This goes to show that Nostr addresses don't verify anything; the services behind them simply provide human-readable handles.
Nostr addresses can be proof of association
Nostr addresses can be a proof of association. The easiest analogy to understand is email:
jack@cash.app -> You could assume this is the Jack that works at Cash App.
jack@nostr-address-provider.example.com -> This could be any Jack.
What now?
We urge that clients stop showing a checkmark for all Nostr addresses, as they are not useful for verification.
We also urge that clients hide checkmarks for all domain names, without exception, in the same way we do not show checkmarks for email addresses.
Lastly, NIP-05 is a Nostr address, and that is why we urge all clients to use the proper nomenclature.
Signed:
- Semisol, Nostr Plebs (semisol@nostrplebs.com)
- Quentin, nostrcheck.me (quentin@nostrcheck.me)
- Derek Ross, Nostr Plebs (derekross@nostrplebs.com)
- Bitcoin Nostrich, Bitcoin Nostr (BitcoinNostrich@BitcoinNostr.com)
- Remina, zaps.lol (remina@zaps.lol)
- Harry Hodler, nostr-check.com (harryhodler@nostr-check.com)
-
@ 82341f88:fbfbe6a2
2023-04-11 19:36:53

There’s a lot of conversation around the #TwitterFiles. Here’s my take, and thoughts on how to fix the issues identified.
I’ll start with the principles I’ve come to believe…based on everything I’ve learned and experienced through my past actions as a Twitter co-founder and lead:
- Social media must be resilient to corporate and government control.
- Only the original author may remove content they produce.
- Moderation is best implemented by algorithmic choice.
The Twitter when I led it and the Twitter of today do not meet any of these principles. This is my fault alone, as I completely gave up pushing for them when an activist entered our stock in 2020. I no longer had hope of achieving any of it as a public company with no defense mechanisms (lack of dual-class shares being a key one). I planned my exit at that moment knowing I was no longer right for the company.
The biggest mistake I made was continuing to invest in building tools for us to manage the public conversation, versus building tools for the people using Twitter to easily manage it for themselves. This burdened the company with too much power, and opened us to significant outside pressure (such as advertising budgets). I generally think companies have become far too powerful, and that became completely clear to me with our suspension of Trump’s account. As I’ve said before, we did the right thing for the public company business at the time, but the wrong thing for the internet and society. Much more about this here: https://twitter.com/jack/status/1349510769268850690
I continue to believe there was no ill intent or hidden agendas, and everyone acted according to the best information we had at the time. Of course mistakes were made. But if we had focused more on tools for the people using the service rather than tools for us, and moved much faster towards absolute transparency, we probably wouldn’t be in this situation of needing a fresh reset (which I am supportive of). Again, I own all of this and our actions, and all I can do is work to make it right.
Back to the principles. Of course governments want to shape and control the public conversation, and will use every method at their disposal to do so, including the media. And the power a corporation wields to do the same is only growing. It’s critical that the people have tools to resist this, and that those tools are ultimately owned by the people. Allowing a government or a few corporations to own the public conversation is a path towards centralized control.
I’m a strong believer that any content produced by someone for the internet should be permanent until the original author chooses to delete it. It should be always available and addressable. Content takedowns and suspensions should not be possible. Doing so complicates important context, learning, and enforcement of illegal activity. There are significant issues with this stance of course, but starting with this principle will allow for far better solutions than we have today. The internet is trending towards a world where storage is “free” and infinite, which places all the actual value on how to discover and see content.
Which brings me to the last principle: moderation. I don’t believe a centralized system can do content moderation globally. It can only be done through ranking and relevance algorithms, the more localized the better. But instead of a company or government building and controlling these solely, people should be able to build and choose from algorithms that best match their criteria, or not have to use any at all. A “follow” action should always deliver every bit of content from the corresponding account, and the algorithms should be able to comb through everything else through a relevance lens that an individual determines. There’s a default “G-rated” algorithm, and then there’s everything else one can imagine.
The only way I know of to truly live up to these 3 principles is a free and open protocol for social media, that is not owned by a single company or group of companies, and is resilient to corporate and government influence. The problem today is that we have companies who own both the protocol and discovery of content. Which ultimately puts one person in charge of what’s available and seen, or not. This is by definition a single point of failure, no matter how great the person, and over time will fracture the public conversation, and may lead to more control by governments and corporations around the world.
I believe many companies can build a phenomenal business off an open protocol. For proof, look at both the web and email. The biggest problem with these models however is that the discovery mechanisms are far too proprietary and fixed instead of open or extendable. Companies can build many profitable services that complement rather than lock down how we access this massive collection of conversation. There is no need to own or host it themselves.
Many of you won’t trust this solution just because it’s me stating it. I get it, but that’s exactly the point. Trusting any one individual with this comes with compromises, not to mention being way too heavy a burden for the individual. It has to be something akin to what bitcoin has shown to be possible. If you want proof of this, get out of the US and European bubble of the bitcoin price fluctuations and learn how real people are using it for censorship resistance in Africa and Central/South America.
I do still wish for Twitter, and every company, to become uncomfortably transparent in all their actions, and I wish I forced more of that years ago. I do believe absolute transparency builds trust. As for the files, I wish they were released Wikileaks-style, with many more eyes and interpretations to consider. And along with that, commitments of transparency for present and future actions. I’m hopeful all of this will happen. There’s nothing to hide…only a lot to learn from. The current attacks on my former colleagues could be dangerous and don’t solve anything. If you want to blame, direct it at me and my actions, or lack thereof.
As far as the free and open social media protocol goes, there are many competing projects: @bluesky is one with the AT Protocol, nostr another, Mastodon yet another, Matrix yet another…and there will be many more. One will have a chance at becoming a standard like HTTP or SMTP. This isn’t about a “decentralized Twitter.” This is a focused and urgent push for a foundational core technology standard to make social media a native part of the internet. I believe this is critical both to Twitter’s future, and the public conversation’s ability to truly serve the people, which helps hold governments and corporations accountable. And hopefully makes it all a lot more fun and informative again.
💸🛠️🌐 To accelerate open internet and protocol work, I’m going to open a new category of #startsmall grants: “open internet development.” It will start with a focus of giving cash and equity grants to engineering teams working on social media and private communication protocols, bitcoin, and a web-only mobile OS. I’ll make some grants next week, starting with $1mm/yr to Signal. Please let me know other great candidates for this money.
-
@ d61f3bc5:0da6ef4a
2023-03-15 01:10:05

The idea that a user can sign a note and publish it to any number of relays is incredibly simple and powerful. That signed note can then be relayed further and with every new copy it becomes harder to censor. This core simplicity has made Nostr very popular with developers and users.
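Concretely, per NIP-01, the event id is the SHA-256 hash of a canonical JSON serialization of the note, and the author Schnorr-signs that id with their key. A minimal sketch of the id computation (the signing step is omitted here; it requires a secp256k1 library):

```python
import hashlib
import json

def event_id(pubkey: str, created_at: int, kind: int, tags: list, content: str) -> str:
    """NIP-01 event id: sha256 over the canonical serialization
    [0, pubkey, created_at, kind, tags, content] with no extra whitespace."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Any relay or client can recompute the id and verify the signature
# against the author's pubkey, which is what makes every relayed copy
# of a note tamper-evident no matter which server happened to serve it.
demo = event_id("ab" * 32, 1700000000, 1, [], "hello nostr")
print(len(demo))  # 64 hex characters
```

This is why extra copies only help: each one carries its own proof of authorship.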
However, the brunt of the work is currently being done by just a handful of relays.
Given today’s network topology, it is not clear how Nostr could support say 100M users. In addition, the current breed of Nostr clients – while being impressive achievements of decentralized social media – suffer from sluggish UIs when compared to their legacy centralized counterparts.
Let’s consider an approach that might help with scaling, UX, and perhaps even decentralization.
Caching Services
Imagine a service with the following characteristics:

- Stores all public Nostr content. Connects to all known Nostr relays and collects content in real time: user metadata, notes, reactions, all events. In short, the entire public Nostr network.
- Can keep a full archive or pruned content. Content can be pruned when the allocated disk space runs out. Service operators can decide how much of Nostr they wish to keep.
- Provides fast response times. Clients connecting to the service can expect response times that match or beat the legacy centralized networks. Most content is served from RAM.
- Provides simple aggregations. Counters for likes, replies, reposts, zaps, and sats zapped are included with every post in the feed.
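The aggregation piece can be pictured as nothing more than counters updated as events stream in from the relays. This is an illustrative toy, not Primal's actual implementation; the kind numbers follow the Nostr NIPs (1 text note, 6 repost, 7 reaction, 9735 zap receipt):

```python
from collections import defaultdict

# Hypothetical in-memory counters, keyed by the id of the post an
# incoming event refers to via its "e" tag.
counters = defaultdict(lambda: {"replies": 0, "reposts": 0, "likes": 0, "zaps": 0})

def ingest(event: dict) -> None:
    """Update aggregates for one event collected from the relays."""
    target = next((t[1] for t in event.get("tags", []) if t[0] == "e"), None)
    if target is None:
        return  # not a reply/reaction/repost/zap; nothing to count
    kind = event["kind"]
    if kind == 1:
        counters[target]["replies"] += 1
    elif kind == 6:
        counters[target]["reposts"] += 1
    elif kind == 7:
        counters[target]["likes"] += 1
    elif kind == 9735:
        counters[target]["zaps"] += 1

ingest({"kind": 7, "tags": [["e", "abc123"]]})
ingest({"kind": 7, "tags": [["e", "abc123"]]})
print(counters["abc123"]["likes"])  # 2
```

Because the counters are precomputed at ingest time, the feed can be served with the aggregates already attached, rather than making clients fan out queries to many relays per post.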
Such a service is definitely useful for many different applications. But wouldn’t standing up a service like this introduce a centralizing factor to Nostr? Let’s take this thought experiment a step further.
Now imagine that anyone can stand up a caching service with minimal effort and a modest hosting budget. Imagine that caching services are built in a standard and open manner, so that they can interoperate, sync content, and help bootstrap new instances. Considering the incentives, we could end up with hundreds of Nostr caches all over the Internet. Each new copy makes Nostr more robust.
Client Behavior
For Nostr clients, there are pros and cons to using a caching service. The obvious benefits are the UI speed and the improved UX. The downside is that a certain amount of trust is placed in the caching service. Let’s take a closer look at how this would work and how trust could be reduced.
The client connects to a caching service and immediately receives and displays the full feed for the specified user. To reduce trust, the client connects to a subset of relays in the background and fetches the content for comparison. Since all content is signed, the caching service can only lie by omission. The client displays and clearly visually marks any content it found on the relays that was not sent by the service. If the user loses confidence in a cache instance, they could simply point their client to another one, or turn caching off altogether. The client should be fully functional when caching is turned off. The ability to work with a caching service is an extension of Nostr client functionality, not a replacement for standard client capabilities.
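Since every event is signed, the comparison step described above reduces to a set difference over event ids. A sketch of the client-side check (illustrative, not any existing client's code):

```python
def missing_from_cache(cache_events: list, relay_events: list) -> set:
    """Event ids found on the relays but not returned by the caching
    service. Because all content is signed, omission is the only way a
    cache can lie; these are the ids a client should visually mark."""
    cache_ids = {e["id"] for e in cache_events}
    return {e["id"] for e in relay_events if e["id"] not in cache_ids}

cached = [{"id": "a1"}, {"id": "b2"}]
from_relays = [{"id": "a1"}, {"id": "b2"}, {"id": "c3"}]
print(missing_from_cache(cached, from_relays))  # {'c3'}
```

A non-empty result doesn't prove malice (the cache may simply lag), but it gives the user a concrete signal before deciding to switch instances or turn caching off.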
When it comes to publishing, there are no changes to the standard Nostr client behavior: all content is published directly to the relays. Caching services should be viewed as a transient layer whose purpose is to improve the UX and reduce the load on the relays.
A Scaling Scenario
Let’s now imagine that Nostr is a raging success. The network has grown to 100M active users. There are hundreds of Nostr apps and services, thousands of active relays. Apps range from highly specialized “micro apps”, to the more elaborate Nostr “everything apps”, dedicated Nostr browsers, and other amazing things we can’t even imagine today.
In this scenario, it is likely that Nostr apps will use a range of caching strategies to serve their users. The most popular apps, with millions of active users, are likely to invest in their caching infrastructure. The up-and-coming apps that wish to compete with the best but don't yet have many users will likely stand up small-scale caching services and grow them as their userbase grows. Finally, there would be a number of apps that don't use a caching service at all.
The beauty of this outcome is that even users who have millions of followers could publish their content to a handful of low-powered relays. As long as those relays are publicly accessible, the caching services will pick up their content and dramatically reduce the load on them. In addition, taking down any individual cache instance does nothing to hurt Nostr. Users can simply point their clients to any other cache instance, or the relays themselves. Censorship is strictly harder in a world where caching services exist. Those who wish to enforce censorship would have to take down all the relevant relays plus all the cache services.
Conclusion
Caching solutions for Nostr are inevitable. They are very useful, and the incentives are there for them to be built. The only question is whether they will be done in an open and interoperable way or a closed and proprietary way.
If you’ve made it this far, I know what you’re thinking: “Can we see this in action?”
Yes! A preview of this concept is available at primal.net.
The app itself is not fully functional yet, but you can definitely see caching in action. It’s fast. :)
We are actively developing Primal, so make sure you check back often. We have many juicy features in the pipeline. Feel free to reach out with feedback and feature requests. If you are going to Nostrica, then I’ll see you there in a few days! 🤙
-
@ 3f770d65:7a745b24
2023-02-21 01:55:08

Habla is a new blogging platform that's based on Nostr. Or as various Nostr client developers like to call it:
"A way to get Derek Ross to stop sending walls of text".
You can edit posts, which is very nice.