Nivenly Foundation Documentation

Introduction and FAQ for the Nivenly Foundation.

Hello and welcome to our FAQ page. Here are the most commonly asked questions about the Nivenly Foundation and membership. Many of the FAQs below are also covered in our launch blog post.

FAQ

What is the Nivenly Foundation?

The Nivenly Foundation is a non-profit co-op that provides governance and legal support to open source projects.

Why is the Foundation a non-profit?

There are three main reasons for choosing a non-profit:

  1. Makes it easier for board members and other volunteers to work on the Foundation and its projects while also having their day jobs.
  2. Makes it easier to partner with other entities, both for-profit and non-profit.
  3. Makes it easier to prioritize community needs over business needs.

Why is Nivenly a co-op?

We, the founders, believe that it is critical for members to have the ability to partake in decisions that govern Nivenly as well as decisions that will steward the future of the open source projects we choose to take on. This means that those who use, build, and finance the software and services need to have clear channels of communication and well-understood, easy-to-use methods to make decisions.

What are the goals for the Nivenly Foundation?

The initial goals of the Nivenly Foundation are to provide our open source projects and services assistance with:

  • Managing funding
  • Community management
  • Decision-making
  • Filing and holding trademarks for IP protection

How do I sign up to be a member?

There are three paths to membership with the Nivenly Foundation.

General Members

If you are an individual, you can sign up for membership on our Open Collective page.

Note that for general membership:

  • The membership fee is $7/mo. (USD)
  • General members vote directly in general elections.
  • All members must abide by the Nivenly Covenant.

Project Members

Project membership is only open to maintainers of current Nivenly projects. Each project manages their own maintainer onboarding and offboarding. In order to be a maintainer for a specific project, you must reach out to that project.

Note that for project membership:

  • There is no cost associated with project membership.
    • Individuals can have a general membership and be a project maintainer at the same time. This will not duplicate their vote.
  • Projects delegate a maintainer to sit in the Nivenly Senate and represent their project.
  • All members must abide by the Nivenly Covenant.

Trade Members

Trade memberships are for businesses, companies, non-profits, and other entities. If you are interested in being a trade member, please reach out to us at info@nivenly.org.

Note that for trade membership:

  • Cost of membership depends on organization size. For more information please see our Governance page.
    • Individuals can have a general membership and be a member of / employed by a trade member organization. This will not duplicate their vote.
  • Trade member organizations delegate a representative to sit in the Nivenly Senate and represent their organization.
  • All members must abide by the Nivenly Covenant.

How do I submit a project to the Nivenly Foundation?

Projects are currently submitted by emailing us at info@nivenly.org. When you submit a project, please include:

  • A complete description of the project.
  • What you need from us, for example:
    • Assistance with governance
    • Assistance with trademarks
    • Assistance with licensing
    • Assistance with networking for additional maintainers

Project requirements:

  • Project must already have a working release (it does not need to be a 1.0 release).
  • Project must have a clear scope of the problem space it is intending to solve.
  • Project must document how it is solving the intended problems in its scope.
  • Projects that handle data must have clearly defined and documented:
    • What data is being gathered
    • How it is being stored
    • How it is being protected
    • How the project implements consent
    • How the project handles any ethical concerns with the project data

Where can I buy swag?

The Nivenly Foundation Swag Store! It currently hosts swag for Nivenly itself as well as its projects Hachyderm and Aurae. Note that the product search bar is for the Nivenly store only, not sitewide for SpreadShop.

Where can I donate to Nivenly?

If you would like to donate to the Nivenly Foundation without becoming a member, we recommend using Nivenly’s GitHub Sponsors page.

⚠️ Donations are not the same as memberships. The above path is for supporting the Nivenly Foundation without partaking in the governance structure. If you would like to be a member, please use one of the paths to membership outlined above.

How can I help the Nivenly Foundation?

We are looking for projects and trade members that will help us test and iterate on all of our processes in our first year. This is to help ensure that the processes function as we intend and do not introduce unintended consequences or difficulties in their implementation.

1 - Papers

Papers that the Nivenly Foundation has written or supported.

Collection of research, papers, and white papers produced by the Nivenly Foundation or authors supported by the Nivenly Foundation.

1.1 - Federation Safety Enhancement Project (FSEP)

With the surging popularity of federating tools, how do we make it easier to make safety the default?

Title: Federation Safety Enhancement Project
v1.0 Author: Roland X. Pulliam
Contributors: The Bad Space, IFTAS, Nivenly Foundation, Oliphant.social, as well as many individuals who provided feedback and expertise. Thank you!

Context and Summary

As collective dissatisfaction grows with centralized social media and the way it encourages people’s worst impulses in order to monetize them for the benefit of a few, decentralized options are getting more attention than ever before.

The idea of decentralized platforms has marched into the collective consciousness of internet users globally. Adopting ActivityPub as a standard protocol has enabled online communities to create spaces tailored to their unique needs.

However, with these new opportunities come new challenges. One of these new challenges is moderation. Though moderation was flawed on centralized platforms, there was at least a cursory attempt to curtail poor behavior by a team dedicated to these efforts.

The decentralized nature of platforms such as Misskey, Akkoma, Mastodon, etc., makes this a particularly challenging topic because there needs to be a consensus about what constitutes quality moderation, as every site is responsible for itself.

This document proposes a solution to this challenge that will attempt to normalize moderation standards to increase safety and lessen the impact of poor behavior in the federated network.

Challenge

The Fediverse (a portmanteau of “federation” and “universe”) is defined as an ensemble of federated (i.e., interconnected) servers that are used for web publishing (i.e., social networking, microblogging, blogging, or websites) and file hosting, and which, while independently hosted, can communicate with each other.

There have been several protocols that have been used to achieve federation between servers. This document will focus on ActivityPub, an officially W3C-recognized standard based on the Activity Streams 2.0 format.

The rise of the ActivityPub protocol has spawned several independent projects capable of speaking to each other, i.e., Misskey, Mastodon, Funkwhale, and PeerTube, to name a few.

This has enabled an exciting new chapter of social media networking where individuals can band together to create dynamic online experiences that are not subject to the changing whims of centralized services.

However, with these new opportunities come new challenges. One of the longest-running challenges in both centralized and decentralized spaces has been moderation: normalizing a standard of behavior that encourages healthy engagement.

While the history of moderation in centralized places such as Twitter and Instagram has been rocky, there were resources dedicated to wrestling with the complexities of this issue over time.

The decentralized space has wiped this slate clean. And while the openness of the fediverse is a massive step forward in independent networks becoming an everyday experience on the web, the downside is that many of these spaces have become unregulated with little to no moderation at all.

These unregulated spaces attract hateful and aggressive people looking to shake off the restrictions of centralized platforms and let their bigotry fly without fear of consequences as the larger fediverse looks to find its feet. And unfortunately, this lack of attention can result in monocultures that can have (content warning - violence) devastating consequences.

One convention that federated platforms have adopted to combat harassment and abuse has been blocklists, which enable sites to refuse to federate with any URL deemed a threat to said site. The now-defunct Block Together used a similar methodology to enhance safety and abuse mitigation on Twitter.

The obvious issue is that every site can have its own blocklists, which is excellent for individual setups. However, this makes collaboration between these lists challenging, which can hamper coordination around dealing with abusive sites that perpetually engage in harassment and bad-faith engagement.

To prioritize this collaboration and make it easier to defederate from sites with a history of poor behavior, The Bad Space was created: a searchable database populated with sites that multiple instances have blocked. While still in alpha, this site provides new opportunities to enhance safety and abuse mitigation and to improve onboarding and blocklist maintenance, providing an easy way to keep blocklists up to date from a central location through dynamically created export lists and a public API that can be used to search the database programmatically.

These features can be integrated into existing platforms to enhance the overall experience on ActivityPub-enabled applications by automatically limiting the opportunity for bad-faith actors to interact.
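For illustration, an integration of this kind could start with something as small as pulling a provider's dynamically created export list and folding it into the local blocklist. The Kotlin sketch below assumes a hypothetical CSV export URL and a one-entry-per-line layout; the real endpoints and formats are defined by the provider.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Minimal sketch: download a dynamically created deny-list export and collect
// the listed domains. The URL and CSV layout are illustrative assumptions.
fun fetchDenyListExport(exportUrl: String): Set<String> {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create(exportUrl)).GET().build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    require(response.statusCode() == 200) { "Export request failed: ${response.statusCode()}" }

    // Assumption: one "domain,status" entry per line, e.g. "bad.example,suspend".
    return response.body()
        .lineSequence()
        .map { it.substringBefore(',').trim().lowercase() }
        .filter { it.isNotEmpty() }
        .toSet()
}

fun main() {
    val blocked = fetchDenyListExport("https://denylist-provider.example/exports/default.csv")
    println("Imported ${blocked.size} domains into the local blocklist")
}
```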

Functional Requirements

Service Feature Integration

The first step in this process is to make decentralized systems aware of the features provided by services such as The Bad Space, specifically its public search API and dynamically created and updated blocklists.

Federation is the exchange of messages through a common language, in this case, ActivityPub. A note is created using a specific format, and platforms that understand that format can read and respond to the original message. When this connection happens, sites are officially federated with each other.

Image has two groups of people. On the left is a smiling Black woman and on the
right is a group of four smiling people - a Black man, an Indian woman, a woman wearing
a Hijab, and a Japanese man with his hair styled in a Samurai bun.
FIG. 01 - FEDERATION PROCESS

Currently, federation can be prevented by adding a URL to the platform’s blocklist, which will, in turn, refuse to accept activity from the site requesting communication, disallowing federation to happen.
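As a rough sketch of that gate, the check below rejects incoming activity whose originating domain (or a subdomain of it) appears on the instance blocklist. The function and type names are illustrative assumptions, not any specific platform's internals.

```kotlin
import java.net.URI

// Hypothetical federation gate: accept an incoming activity only if the
// sender's domain is not on the instance blocklist (exact match or subdomain).
fun shouldAcceptActivity(actorUrl: String, blockedDomains: Set<String>): Boolean {
    val domain = URI.create(actorUrl).host?.lowercase() ?: return false
    return blockedDomains.none { blocked -> domain == blocked || domain.endsWith(".$blocked") }
}
```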

While serviceable, this process has several challenges, such as:

  • Users are exposed to harmful content as a means of filtering.
  • Each problematic site must be manually added.
  • There is no mechanism for automation, so blocklists must be manually maintained and updated.
  • There is no tool to validate connection requests at the user level for harmful content.
  • Blocklist maintenance options are underdeveloped, with little ability to fine-tune which sites can communicate based on the administrator’s discretion.

Integration of The Bad Space’s features into backend functionality will improve not only the administration experience but the experience of individual users as well by:

  • Allowing blocklists to be automatically imported during onboarding, dramatically reducing the opportunity to be exposed to harmful content unnecessarily.
  • Allowing an updated blocklist to be requested on demand, or automating the process so it checks periodically on its own.
  • Giving users the ability to vet incoming connection requests to validate that they are not from problematic sites.
  • Expanding blocklist management by listing why a site is blocked, providing access to available examples, and showing when the listing was last updated.

While adding these features will enhance the usability of blocklists and upgrade the end-user experience, corresponding front-end changes are required in several areas to ask permission for these services to be used and to make how those services are being used clear and easy to understand.

Onboarding

The administration onboarding process can be enhanced by leveraging the additions to the back end detailed in the Service Feature Integration.

ID | Feature | Priority
1.1 | At setup time, administrators should be able to upload a deny list or select a default deny list provided by a deny list provider. For MVP, only one deny list provider needs to be supported in this list (The Bad Space). Ultimately, the goal is to allow users to choose a deny list from one of several vetted providers. | P1
1.2 | At setup time, administrators should be able to select whether or not they want the default deny list to auto-update. If the admin selects “yes,” the list should attempt to auto-update once every 24 hrs. | P1
1.3 | At setup time, administrators should be able to view, in full, the techniques and policies used by each provider to create each deny list, during the setup process, from the list of providers, by clicking on links associated directly with each deny list and provider. | P1

Deny List Management

Deny list management will be extended with new features to provide more information and nuanced options to curate a specific experience.

ID | Feature | Priority
2.1 | The administration panel should have a new option to manage “deny lists” and blocked sites. This list should contain all deny lists that the admin has subscribed to. | P1
2.2 | Auto updates: It should be possible to have the system check for deny list subscription updates at regular intervals, i.e., once every 24 hours. Local status overrides will take precedence over imported instance statuses. | P1
2.3 | Check for updates: It should be possible for an admin to manually update any of the deny lists that they are subscribed to outside of the automated update time. | P1
2.4 | Overrides: It should be possible for an admin to add a manual override for a deny list. The manual override should be applied over all deny lists. | P1
2.5 | Remove instance from deny list: It should be possible for an administrator to remove instances where deemed appropriate. These changes only apply to local deny lists and will not affect subscribed deny lists. | P1
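A minimal sketch of how requirements 2.2, 2.4, and 2.5 could fit together is shown below: statuses imported from subscribed deny lists are merged, and any local override set by the administrator is applied last and takes precedence. The names and types are illustrative assumptions, not an existing platform API.

```kotlin
// Severity ordering used when the same domain appears on several lists.
enum class FederationStatus { NONE, SILENCE, SUSPEND }

data class DenyListEntry(val domain: String, val status: FederationStatus)

fun effectiveStatuses(
    subscribedLists: List<List<DenyListEntry>>,
    localOverrides: Map<String, FederationStatus>,
): Map<String, FederationStatus> {
    val merged = mutableMapOf<String, FederationStatus>()

    // Take the most severe status seen across all subscribed lists (2.2).
    for (list in subscribedLists) {
        for (entry in list) {
            val current = merged[entry.domain] ?: FederationStatus.NONE
            if (entry.status.ordinal > current.ordinal) merged[entry.domain] = entry.status
        }
    }

    // Local overrides win (2.4); setting a domain to NONE lifts the block
    // locally without touching the subscribed lists (2.5).
    merged.putAll(localOverrides)
    return merged.filterValues { it != FederationStatus.NONE }
}
```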

Moderation Data

Moderation capabilities will be expanded to provide more context and information to make informed decisions.

ID | Feature | Priority
3.1 | It should be possible for admins to see how many of their users’ followers / following would be impacted by a deny list. For example: “When you subscribe to [The Bad Space], [Number] of your users will no longer be able to communicate with [number] followers and [number] following.” | P1
3.2 | Check for updates: It should be possible for an admin to manually update any of the deny lists that they are subscribed to outside of the automated update time. Administrators should be prompted with moderation information and be able to confirm the status before accepting an update. | P1
3.3 | Administrators should be able to see a log of updates (temporal diffs) for sites on the deny list. | P1
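A minimal sketch of the impact estimate in requirement 3.1, under assumed data shapes where remote accounts are identified as “user@domain”:

```kotlin
// Illustrative shapes only; a real implementation would query the instance database.
data class RemoteEdge(val localUser: String, val remoteAccount: String) {
    val remoteDomain: String get() = remoteAccount.substringAfterLast('@').lowercase()
}

data class DenyListImpact(val affectedUsers: Int, val followers: Int, val following: Int)

fun estimateImpact(
    followers: List<RemoteEdge>,  // remote accounts that follow local users
    following: List<RemoteEdge>,  // remote accounts that local users follow
    denyList: Set<String>,
): DenyListImpact {
    val blockedFollowers = followers.filter { it.remoteDomain in denyList }
    val blockedFollowing = following.filter { it.remoteDomain in denyList }
    val affectedUsers = (blockedFollowers + blockedFollowing).map { it.localUser }.toSet().size
    return DenyListImpact(affectedUsers, blockedFollowers.size, blockedFollowing.size)
}
```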

Non-Functional Requirements

ID | Feature | Priority
4.1 | The update time should apply jitter so that all instances don’t try to update simultaneously. | P2
4.2 | Users should be able to add up to 100 default deny lists. For v1, there will only be one default. But the UI should consider that future updates will add more options. | P3
4.3 | Inclusive language: The terminology should be “deny list.” | P3
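A minimal sketch of the jitter in requirement 4.1, assuming an illustrative ±30-minute window around the nominal 24-hour interval:

```kotlin
import java.time.Duration
import kotlin.random.Random

// Spread automatic refreshes around the nominal interval so instances do not
// all hit the deny list provider at the same moment.
fun nextUpdateDelay(
    base: Duration = Duration.ofHours(24),
    maxJitter: Duration = Duration.ofMinutes(30),
): Duration {
    val jitterSeconds = Random.nextLong(-maxJitter.seconds, maxJitter.seconds + 1)
    return base.plusSeconds(jitterSeconds)
}

fun main() {
    repeat(3) { println("Next refresh in ${nextUpdateDelay().toMinutes()} minutes") }
}
```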

Design

Onboarding UI

The onboarding process flow for platforms on the fediverse involves inputting various identifying pieces of information such as email, username, and password for the administration account, deciding on multiple features of the chosen platform, and confirming that these details are correct. The following sample UI mocks are non-implementation specific, so they should be applicable across several fediverse applications (Mastodon, Misskey, Calckey, etc.).

An additional UI component can be added to this flow to inform the administrator of the features detailed in the Onboarding section.

The first panel of the new component will describe what The Bad Space is, what it does, and how it will be used. It will also contain an element to confirm approval by the administrator to use The Bad Space’s feature set in the platform’s experience. If the administrator declines, the flow will continue with the default onboarding process.

Image of a mock admin panel. Panel has options to import a blocklist or
enable an integration with The Bad Space.
FIG. 02 - ONBOARDING BLOCKLIST PANEL 1

Upon consenting to the system using the additional features, a second panel will be revealed asking whether the administrator would like the dynamic blocklist from The Bad Space automatically imported to populate the system’s blocklist, with the option to check for updates.

Image of a mock admin panel. Panel is showing mock options and short
description of a The Bad Space integration.
FIG. 03 - ONBOARDING BLOCKLIST PANEL 2

Image of a mock admin panel. In this view, an import of a dynamic blocklist is in the process of uploading.
FIG. 04 - ONBOARDING PANEL PROGRESS

If they accept, a progress indicator will show how many sites have been imported and alert upon completion. If the administrator declines the import, the system will default to standard blocklist behavior, and nothing will be imported.

Upon completion of the authorization and import process flow, onboarding will continue.

Blocklist UI

The current standard blocklist UI consists of a simple list of sites that will be denied federation, with various options to populate this list depending on the features the specific platform has available.

This UI will be upgraded to take advantage of the additional functionality provided by feature integration described in the Service Feature Integration section.

Assuming permission was given to automatically import the dynamic blocklist, the affiliated sites will be listed as UI elements containing information describing the site’s current status (silence or suspend), when it was last updated, a visual indicator if screenshots of inappropriate content are available, and a link to its listing on The Bad Space.

Image of a mock admin panel. This view shows a mock blocklist management panel, with silenced and suspended sites indicated.
FIG. 05 - ENHANCED BLOCKLIST MANAGEMENT

Image of a mock admin panel. This view shows a deeper detail of a mock suspended site, with a description and history log for the site.
FIG. 06 - ENHANCED BLOCKLIST MANAGEMENT DETAIL

Each listing will apply the status received from the imported blocklist, but the administrator will have the ability to override a specific site via a visual toggle. This will not affect the imported blocklist and will only be applied to the local site’s blocklist implementation.

An additional component will be created to display the decisions made in the onboarding process concerning automated blocklist handling.

If the option to import the blocklist was declined, a simple element will indicate that this feature is currently turned off, with an opportunity to activate it.

Assuming the option to use the imported blocklist is approved and the automated option is checked, the component will display the current count of sites listed on the blocklist and the last time the list was updated, with an opportunity to deactivate automated updates.

If the automated option was not checked in the onboarding process, the component will display when the import happened and a chance to activate automated updates.

Following UI Integration

Following an account in the fediverse works the same as on any centralized platform: subscribing to the corresponding account so their updates appear in a curated timeline.

The difference on a decentralized platform is that an account can follow any account that uses the ActivityPub protocol, even when the two accounts are hosted on different sites, allowing cross-site communication.

The challenge with this kind of open communication, with no set standard of moderation, is that it is difficult to distinguish benign connection attempts from those that are not.

The inclusion of The Bad Space public search API provides the ability to validate incoming subscribe requests to check if the platform where the requests originate has a history of engaging in poor behavior.

If the user consents to the public API being allowed to search the URLs of incoming requests, the URL will be searched against The Bad Space database to see if it is present. If the URL is listed, a visual indicator will appear alongside the subscribe request, with a link to the offending site, so the user can confirm it and then choose whether to allow the account to connect. If the user does not consent to having their incoming requests validated against the database, the request will be handled normally.
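As a rough illustration of that per-request check, the sketch below looks up the originating domain of a follow request against a provider’s public search API before the request is surfaced to the user. The endpoint, query parameter, and “listed” heuristic are assumptions for illustration, not The Bad Space’s documented API.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

data class FollowRequestFlag(val domain: String, val listed: Boolean, val detailsUrl: String?)

// Vet an incoming follow request by checking its originating domain against a
// hypothetical public search endpoint; if listed, flag it for the user.
fun vetFollowRequest(actorUrl: String, searchEndpoint: String): FollowRequestFlag {
    val domain = URI.create(actorUrl).host?.lowercase()
        ?: return FollowRequestFlag(domain = actorUrl, listed = false, detailsUrl = null)
    val request = HttpRequest.newBuilder(URI.create("$searchEndpoint?domain=$domain")).GET().build()
    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())

    // Assumption: a listed domain comes back as a 200 response that mentions the domain.
    val listed = response.statusCode() == 200 && response.body().contains(domain)
    return FollowRequestFlag(
        domain = domain,
        listed = listed,
        detailsUrl = if (listed) "$searchEndpoint?domain=$domain" else null,
    )
}
```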

Image of a mock user interface with follow requests. The follow requests indicate which are from federating users and which is from a user on a moderated instance, with a public reason for the moderation.
FIG. 07 - FOLLOW REQUEST ENHANCEMENT

Conclusion

To be sustainable long-term, decentralized social media platforms need unique solutions to meet the needs of a diverse and global community.

Integrating modular tools such as The Bad Space allows sites both large and small to increase their safety significantly with tools that make moderation easier and more manageable from an administration perspective and for the everyday user.

Version History

v1.0 - 10 August 2023 - Initial Release
v1.1 - 13 September 2023 - Minor updates and an added Onboarding requirement to see information about how the blocklist is built to improve informed choice.
v1.2 - 13 September 2023 - Syncing live version with minor edits in Google Doc

More information

How to make changes

Minor feedback is anything that clarifies, but does not significantly alter, intent and existing content. Typos, grammar, a bullet point that doesn’t render, adding a forgotten word or reworking a run-on sentence would all fall under “minor”.

Major feedback is anything that fundamentally creates, updates, or removes / deletes (CRUD) a concept in the document. An example of major feedback could be adding a new feature or criteria for Deny List Management or Onboarding.

Minor feedback / changes will be merged regularly. Major feedback will only be merged if/when any discussions around them resolve and have a consensus.

Where to have longer form discussions for major changes and feedback

Right now, we’re using comments in the Google Doc to ensure that the conversation is not limited only to those comfortable and confident using GitHub and its tools. For those who are, you may also open a Discussion on our Nivenly Community repo.

We’re actively working with the paper author and evaluating tools for a sustainable, shared place for longer-form discussion. We’ll be giving NextCloud a live trial with the author Q&A. Depending on the outcome of the trial we may use that or a different tool as a more permanent home for Discussions.

What do you need to get started

Just you. It is important to Nivenly’s mission that people have the agency and empowerment to make their voices heard. While this requirements doc, and its resulting implementation, are collaborative, you do not need to ask permission or go through a gatekeeper to get started.

2 - Project Applications

Submitted project applications. Note that not all applications are accepted.

Collection of project applications to the Nivenly Foundation. Project applications are added to this section once they have passed the board Q&A phase and enter the member Q&A phase.

2.1 - Pachli - Nivenly Application

Pachli is an Android native application. It is a fork of the Tusky project.

Title: Pachli - Nivenly Application
Submitted by: Nik Clayton
Project site: pachli.app
Project GitHub: github.com/pachli/
Project Fedi: @pachli@mastodon.social

Abstract

The Pachli project exists to create best-in-class software for users of the Mastodon social network, and servers that implement the Mastodon API.

The first application is an Android-native Mastodon client, suitable for use by anyone with an Android device and a Mastodon account.

This is needed because the official Mastodon Android app is a second-class citizen to the iOS app, and both apps are missing features supported by the web client, as explained in GitHub - mastodon/mastodon-android: Official Android app for Mastodon.

Other Android apps exist (including Tusky, Megalodon, Moshidon, Husky, Yuito, Trunks, Ivory, Fedilab, Tooot) but are problematic for one or more of the following reasons.

  • Managed by a solo developer, with no continuity plan
  • Not open source
  • No project governance model
  • Development has stagnated

Project Description

As noted, the app is Android-native and is a fork of a project that has existed for ~7 years. As is to be expected, there is quite a lot of cruft in the codebase.

  • Most of it is Kotlin; some key functionality is still implemented in Java.
    • Kotlin compiles to JVM bytecode, but as an implementation language it is more concise than Java, supports more convenient higher-order programming concepts, has a sensible coroutine model, and distinguishes between null/non-null at the type level instead of requiring annotations (see the short example after this list).
  • The code is a mix of programming styles. In particular, the most “modern” Android architecture patterns (https://developer.android.com/topic/architecture) are only followed in some places in the app. This complicates adding new features, and can make onboarding new developers onto the product more difficult.
    • I’m actively working on improving this at the moment. It’s not a public roadmap item because the effects aren’t user visible
    • This includes automating as much of the review process as possible using / writing decent linting tools.
  • The UX is inconsistent. There’s no design system for the UI; different screens have slightly different layouts, margins, etc. Some areas of the app (drafts, scheduled posts, and announcements in particular) have a very bare-bones UI compared to the rest of the app.
    • Fixing this is challenging because there are currently no real UI tests, so changes currently require a lot of manual testing. I intend to incrementally build out a suite of UI tests (chiefly, screenshot tests) over the next four to five months so that large scale UX work can be undertaken confidently, without worrying about breaking a key corner case somewhere.
  • There are a number of outstanding issues that have been caught by lint scanning tools. A baseline has been put in place to ensure that no new issues make it into the code, and refactorings and other changes are slowly driving down the number of open lint issues in the baseline.
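As a small illustration of the nullability point above, Kotlin makes the nullable case part of the type, so the compiler forces it to be handled. The Account type here is illustrative, not Pachli’s actual model.

```kotlin
// Nullable (String?) versus non-null (String) is expressed in the type system,
// so a forgotten null check is a compile-time error rather than a runtime crash.
data class Account(val displayName: String?, val username: String)

fun displayLabel(account: Account): String =
    account.displayName?.takeIf { it.isNotBlank() } ?: "@${account.username}"

fun main() {
    println(displayLabel(Account(displayName = null, username = "pachli")))     // "@pachli"
    println(displayLabel(Account(displayName = "Pachli", username = "pachli"))) // "Pachli"
}
```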

Current effort is spent in a roughly 50/50 split between implementing new features and fixing these issues.

The project’s code is hosted on GitHub (https://github.com/pachli/pachli-android), and uses additional GitHub infrastructure:

  • Runners for CI
  • Pull request workflow for managing contributions
    • Managing CLA signatures
    • Automated tests and linting
  • Issues for tracking problems
  • Discussions for non-ephemeral discussions
  • Project Management tools
  • https://pachli.app is hosted on GitHub Pages (https://github.com/pachli/website)

The project deliberately does not use real-time chat (Discord, etc) for project communication; I think it’s too ephemeral, favours people who can be online more than others, and is a poor archive of previous discussions and decisions.

The code and related material is licensed under GPL 3.0.

Project Scope

For users:

Right now the project’s single application is a native Android client, usable by any Mastodon user.

Pachli-the-application started as a fork of Tusky, and new features / bug fixes are rapidly being implemented to improve the user experience compared to both Tusky and other client apps, including:

  • Seamless loading of content from the timeline (most other apps require the user to regularly tap a “Load more” button)
  • A range of accessible fonts can be chosen in-app
  • Support for additional Mastodon features, like “trending posts” and marking lists as exclusive.

There’s a tentative roadmap for user-visible future development at https://github.com/orgs/pachli/projects/1; key goals include:

  • Support for translation (off-device, using the Mastodon API, and on-device using translation libraries)
  • Work around Mastodon federation issues and allow the user to fetch content from servers other than their “home” server
  • Improve the UX for users with larger devices (tablets, foldables, etc)
  • Extend the application to support Mastodon-like services – servers that are similar to Mastodon and provide a close-enough API. Features that are now in stock-Mastodon often appear in these other services first (e.g., support for bookmarking posts)

There are related Fediverse services, like Lemmy or KBin, PeerTube, Pixelfed, etc, that would also benefit from a polished native Android app.

Pachli-the-app does not support those (and probably won’t in the future, as the user interaction model can be quite different), but if this project is successful I would welcome others who want to develop apps for those platforms under the Pachli brand. There is definitely scope for collaboration and sharing code.

For contributors (developers and non-developers):

  • Be an exemplar of good, idiomatic Android code, demonstrating appropriate best practices
  • Make it easy for new contributors to onboard
    • Clearly describe project norms
    • Provide an onramp for new contributors to make their first contribution
    • Encourage appropriate tooling to simplify and speed up the contributor experience
  • Encourage a culture of ownership, where contributors can report, propose fixes, and implement fixes to issues whatever their focus
  • Encourage a culture of quality work
    • Provide thoughtful, actionable feedback on PRs that helps developers grow their skills
  • Enable rapid feedback on developer contributions
    • Set clear expectations on how long a PR review cycle should take
    • Release on a regular schedule, so developers get real-world feedback on their work, and the satisfaction of seeing users benefiting from their contributions

As the Pachli codebase is rewritten to be more modular it might also make sense to spin off some of those modules into separate libraries so that other applications can benefit from them.

I also want the Pachli developer community to participate in the broader Mastodon-and-related-services developer community, e.g., through membership of the Mastodon developer-only Discord groups to provide feedback on current and future API direction, assist developers of other apps, and so on.

For members:

Pachli-the-association is intended to provide a first-class organisation to manage the development of the application under the 7 cooperative principles:

  1. Voluntary and open membership
  2. Democratic member control
  3. Member economic participation
  4. Autonomy and independence
  5. Education, Training, and Information
  6. Cooperation among Cooperatives
  7. Concern for Community

Intended Use

Users install from their app store of choice (currently served from Google Play and F-Droid; adding others is possible if there is demand), log in, and get started.

Ideally Pachli would be usable to onboard new users who don’t have an account, but the Mastodon API does not permit deleting an account, and upcoming changes to Google Play policies require that if an app allows the user to create an account in-app then they must also be able to delete the account from the same app.

Anticipated Misuse

As a client app that interacts with the user’s server through the Mastodon API there’s little scope for misuse of the software by the account owner.

As currently written, the app assumes that the user maintains control of their device. If a user would be comfortable staying logged in to a Mastodon server in their device’s browser and having the browser remember their username and password, then Pachli provides roughly equivalent security while the user is logged in to their account.

The user can log out of their account in Pachli, which removes the account metadata, cached timelines, and authentication tokens from the user’s device.

Countermeasures

A hypothetical “secure” mode of operation is possible. If toggled through the settings this might:

  • Require the user to reauthenticate (e.g., face lock, pin, passcode) whenever returning to the app, before any content is displayed
  • Use the relevant Android API to mark the app as sensitive, disabling screenshots (a minimal sketch follows this list)
  • Obfuscate / anonymise the names of accounts the app is signed in to until the user reauthenticates
    • Note that this would hide the account details, but it would not prevent disclosure that the app is signed in with multiple accounts.
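As a minimal sketch of the screenshot-blocking countermeasure, the snippet below marks the window as secure when a hypothetical “secure mode” preference is enabled; reauthentication and account obfuscation would be layered on separately.

```kotlin
import android.os.Bundle
import android.view.WindowManager
import androidx.appcompat.app.AppCompatActivity

// When the (hypothetical) "secure_mode" preference is enabled, FLAG_SECURE
// tells the system to block screenshots and hide the window's content in the
// recent-apps switcher.
class SecureModeActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val secureMode = getSharedPreferences("settings", MODE_PRIVATE)
            .getBoolean("secure_mode", false) // illustrative preference key
        if (secureMode) {
            window.setFlags(
                WindowManager.LayoutParams.FLAG_SECURE,
                WindowManager.LayoutParams.FLAG_SECURE,
            )
        }
    }
}
```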

None of this stops a determined adversary with access to the target’s device. For example, they could:

  • Root the device, and copy the Pachli databases
  • Build a trojaned version of the app from the open source code, and deploy it to the device

So these are offered up as examples rather than specific things to do in the future; either way they’d need to be written with a specific threat model in mind.

Accessibility

For maintainers and contributors, honestly, that’s not something I’ve given much thought to. Recommendations for best practices would be welcome.

For users the app integrates with Android’s screen reader (“Talkback”) and gesture navigation support. I also implemented features to allow the user to choose from a range of different accessible fonts, increase the font size through the app, and use colour schemes with greater contrast.

Maintaining the quality of that coverage is a challenge. It’s not something there are currently Pachli-specific UX tests for. The standard Android tooling provides some support – warning that a UX element is missing content for a screen reader, or that a UX affordance is too small.

Needs

Broadly:

  • A legal entity that can sign contracts for resources the project needs. Things like:
    • Domain names; I’ve already bought a range, including pachli.app, pachli.org, pachli.ch (I’m based in Switzerland), pach.li.
    • CI infrastructure; as noted earlier the project currently uses the free tier of GitHub workflows. That’s fine at the moment, but I expect that will become a problem once screenshot tests are introduced. And even before then, GitHub CI can take 10-12 minutes to run a series of tests that take less than two minutes locally.
    • Trademarks; Pachli is not currently trademarked.
    • Artwork; for use within the app and app store listings
  • Fiscal host; Pachli is not currently accepting any funding until there’s a governance model in place. Once there is, and there are members paying fees, that money will have to be handled transparently.
  • A governance model; https://pachli.app/about/ sets out the goal of an organisation run along the 7 cooperative principles. The ideas I’m considering are very similar to how Nivenly already works.
    • If Pachli moves to become part of the Nivenly Foundation I wouldn’t expect there to be different “Nivenly members” and “Pachli members”, there would just be “Nivenly members”, who would have a stake in Pachli project governance.

      (Broadly, I’m trying to sidestep having to form a separate Pachli Association)
    • Infrastructure to help implement that governance model; mechanisms and tools for signing up new members, processing payments, votes, capturing proposals from the membership, recording decisions, helping to ensure the membership remains engaged with the project.
    • Relevant training for the people that will be doing the work
  • A grants policy; it’s not there yet, but once there are fee-paying members I hope to be able to use some of that money to offer grants to people who have the skills to contribute to an open source project but need financial support to do so. This doesn’t have to be writing code; it could be project management, or a detailed UX review, or committing to a certain number of hours of user support per week.