How To Create a Global Education Database

Dan Genduso
5 min read · Jun 20, 2018

“Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.” Education is the catalyst for all change. However, before we can create a global education database that can be used to initiate rapid, agile change around the world, we must first teach people how to create that source of learning materials. We can do this by leveraging the infrastructure of a Decentralized Autonomous Nation, as described in my previous article.

Imagine a decentralized group of teachers and subject matter experts whose only linkages are the skills and attributes stored in their individual profiles. These people have never met and don’t work for the same company or organization. Now imagine that somebody, or even a group of people, wants access to a certain type of textbook within a global education database (for the sake of this article, let’s call that database Everipedia). The demand is present in the form of the need for a textbook, and the workforce is present to service that need and generate the supply, so long as it has an organizational structure to collaborate within. Let’s look at how this textbook can quickly (and cheaply) evolve from a need into a usable asset in a series of steps.

Step 1: Create the demand

A user on Everipedia creates a request for content with a willingness to pay in IQ Tokens (a form of cryptocurrency). Similar requests pool together, creating a bounty of the combined willingness to pay for whoever can provide the content. Once the bounty/budget becomes sufficient for a project, the project is triggered. A request may come about, for example, to create a textbook for “Introduction to Computer Systems.”
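As a rough illustration, here is a minimal Python sketch of how pooled requests might trigger a project once the combined bounty crosses a threshold. The class names, the threshold value, and the triggering rule are all assumptions for illustration, not Everipedia’s actual mechanics.

```python
from dataclasses import dataclass, field

# Illustrative threshold; the real triggering rule is not specified in this article.
BOUNTY_THRESHOLD_IQ = 10_000

@dataclass
class ContentRequest:
    topic: str
    pledge_iq: int  # requester's willingness to pay, in IQ Tokens

@dataclass
class Bounty:
    topic: str
    requests: list = field(default_factory=list)

    @property
    def total_iq(self) -> int:
        return sum(r.pledge_iq for r in self.requests)

    def add(self, request: ContentRequest) -> bool:
        """Pool a similar request and report whether the project triggers."""
        self.requests.append(request)
        return self.total_iq >= BOUNTY_THRESHOLD_IQ

bounty = Bounty(topic="Introduction to Computer Systems")
bounty.add(ContentRequest("Introduction to Computer Systems", 4_000))
triggered = bounty.add(ContentRequest("Introduction to Computer Systems", 7_500))
print(triggered)  # True: combined pledges cross the threshold, so the project starts
```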

Step 2: Leverage “how to” wikis

To unlock the potential of a Decentralized Autonomous Nation, the foundation of Everipedia should be built on “how to” articles that “teach users to fish.” In this case, an article about “How to create a textbook” would first have to exist before the workflow could be started to actually create the textbook “Introduction to Computer Systems.” If data categories are added to different sections of the knowledge article, we can then properly align milestones (create the table of contents), activities (write a chapter), tasks (write a section of the chapter/create a graphic), and sub-tasks (write a subsection) with other applications used for managing work, like Jira or Honeybook.
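To make that concrete, here is a hypothetical sketch of how data categories on the “How to create a textbook” article could map onto units of work that a tool like Jira could ingest. The category labels and nesting are assumptions for illustration.

```python
# Hypothetical mapping from sections of a "How to create a textbook" wiki
# article to units of work. The category labels are illustrative.
HOW_TO_CREATE_A_TEXTBOOK = {
    "milestone": "Create the table of contents",
    "activities": [
        {
            "activity": "Write a chapter",
            "tasks": [
                {"task": "Write a section of the chapter",
                 "sub_tasks": ["Write a subsection"]},
                {"task": "Create a graphic", "sub_tasks": []},
            ],
        },
    ],
}

def flatten_work_items(article: dict) -> list[str]:
    """Walk the categorized article and emit work items a workflow tool could ingest."""
    items = [f"milestone: {article['milestone']}"]
    for act in article["activities"]:
        items.append(f"activity: {act['activity']}")
        for task in act["tasks"]:
            items.append(f"task: {task['task']}")
            items.extend(f"sub-task: {st}" for st in task["sub_tasks"])
    return items

for item in flatten_work_items(HOW_TO_CREATE_A_TEXTBOOK):
    print(item)
```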

Step 3: Make the workflow actionable

Once the workflow for creating the textbook is established in the wiki layer, we can pull each component (milestone, activity, task, sub-task) into a workflow tool similar to Jira. If this workflow tool operates on top of the Decentralized Autonomous Nation, it can personalize the matching of content providers (workers) to the specific milestones, activities, tasks, or sub-tasks that each provider is most capable of completing. This matching is enabled by user profiles with validated skills and verified identities (among other things), which also allows each user ID to stay attached to its assigned task all the way through to the published state.
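The full flow attributes this matching to an AI engine running on the Decentralized Autonomous Nation; as a simple stand-in, the sketch below scores candidates by the overlap between a profile’s validated skills and a task’s required skills. The profile fields and scoring rule are assumptions.

```python
from dataclasses import dataclass

@dataclass
class WorkerProfile:
    user_id: str           # verified identity; stays attached to the task through publication
    validated_skills: set  # skills validated on the worker's profile

def match_score(worker: WorkerProfile, required_skills: set) -> float:
    """Fraction of a task's required skills this worker's profile covers."""
    if not required_skills:
        return 0.0
    return len(worker.validated_skills & required_skills) / len(required_skills)

def best_matches(workers, required_skills, top_n=3):
    """Rank candidate content providers for one milestone/activity/task."""
    ranked = sorted(workers, key=lambda w: match_score(w, required_skills), reverse=True)
    return ranked[:top_n]

task_skills = {"computer architecture", "technical writing"}
workers = [
    WorkerProfile("alice", {"computer architecture", "technical writing", "python"}),
    WorkerProfile("bob", {"graphic design"}),
]
print([w.user_id for w in best_matches(workers, task_skills)])  # ['alice', 'bob']
```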

Step 4: Distribute the tasks to workers

Workers have the option to accept or reject a content creation task that the AI engine matches them with at the milestone, activity, task, or sub-task level. These accept/reject decisions, as well as management of the assigned job, should take place within a CRM tool, like Honeybook, that accepts inputs from all work-related blockchain applications. At the same time, newly assigned workers should be pulled into a collaboration tool, like Quip, that allows all content creators working on the same milestone or activity to interact, create, and iterate upon an agreed-upon, cohesive content section or table of contents.
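A minimal sketch of that accept/reject flow as a small state machine, assuming a task offer surfaces in a CRM-style tool; the states, hooks, and names here are illustrative, not any real tool’s API.

```python
from enum import Enum

class TaskState(Enum):
    OFFERED = "offered"
    ACCEPTED = "accepted"
    REJECTED = "rejected"

class TaskOffer:
    """A content-creation job offered to a matched worker via a CRM-style tool."""
    def __init__(self, task: str, worker_id: str):
        self.task = task
        self.worker_id = worker_id
        self.state = TaskState.OFFERED

    def accept(self):
        self.state = TaskState.ACCEPTED
        # In the full flow, acceptance would also pull the worker into the
        # shared collaboration space (e.g., a Quip-style document).

    def reject(self):
        self.state = TaskState.REJECTED
        # A rejected offer would go back to the matching engine for the next candidate.

offer = TaskOffer("Write a section of Chapter 1", worker_id="alice")
offer.accept()
print(offer.state)  # TaskState.ACCEPTED
```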

Step 5: Review and Edit

Once content is created, the Decentralized Autonomous Nation is leveraged again to match a list of capable content reviewers to each milestone, activity, or task. Content is sent for approval using a mechanical-turk-style service (e.g., Gems), providing an earning opportunity for reviewers while leveraging staking and trust mechanisms to ensure quality content from the creator. Content providers are required to stake a portion of their earnings as a quality guarantee, and each has a reputation score tracked in the system. If the content is accepted by the reviewer, the reputation score increases, thereby increasing trust in that worker. If the content is rejected by the reviewer, the content creator has one chance to retry the task and takes a ding to their reputation score. If the content is rejected a second time, the system automatically reassigns the job to a new content creator.
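Here is a sketch of those staking and reputation rules in Python. The stake fraction, the reputation deltas, and what happens to the stake after a second rejection are not specified above, so those numbers and the forfeiture rule are assumptions.

```python
STAKE_FRACTION = 0.20  # illustrative: share of earnings staked as a quality guarantee
MAX_ATTEMPTS = 2       # a first try plus the single retry described above

class CreatorAccount:
    def __init__(self, user_id: str, reputation: float = 50.0):
        self.user_id = user_id
        self.reputation = reputation

def review_outcome(creator: CreatorAccount, attempt: int, approved: bool,
                   earnings_iq: int) -> str:
    """Apply the accept/reject consequences described above; all numbers are assumptions."""
    stake = int(earnings_iq * STAKE_FRACTION)
    if approved:
        creator.reputation += 1.0  # approval builds trust in the worker
        return f"approved: {stake} IQ stake released with {earnings_iq} IQ earnings"
    creator.reputation -= 2.0      # each rejection dings the reputation score
    if attempt < MAX_ATTEMPTS:
        return "rejected: creator gets one retry"
    return f"rejected twice: {stake} IQ stake forfeited, job reassigned to a new creator"

alice = CreatorAccount("alice")
print(review_outcome(alice, attempt=1, approved=False, earnings_iq=500))  # one retry left
print(review_outcome(alice, attempt=2, approved=True, earnings_iq=500))   # retry approved
```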

Step 6: Pull it all together in a wiki

Once each content area is approved, all the pieces of the textbook are pulled back together and published to Everipedia in wiki format. Each section is tied to the user IDs of its content creator and approver, and those identities are logged in the wiki as contributors. Those contributors are then paid out in IQ Tokens distributed from the bounty/budget discussed in Step 1. Once the approved content for the textbook is assembled in wiki format, it can easily be translated into other languages and iterated upon over time. The result is a living textbook on “Introduction to Computer Systems” that is always up to date and openly accessible to students across the globe.
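A sketch of this final payout step, splitting the pooled bounty across the creator and approver IDs logged on each section. The equal split is an assumption; no payout formula is specified here.

```python
def distribute_bounty(bounty_iq: int, sections: list[dict]) -> dict:
    """Split the pooled IQ Token bounty across logged contributors.

    Each section records the creator and approver user IDs, mirroring how
    the wiki logs contributors. The equal per-contribution split is an
    assumption made for illustration.
    """
    contributors = []
    for section in sections:
        contributors.append(section["creator_id"])
        contributors.append(section["approver_id"])
    share = bounty_iq // len(contributors)
    payouts = {}
    for user_id in contributors:
        payouts[user_id] = payouts.get(user_id, 0) + share
    return payouts

sections = [
    {"title": "Chapter 1", "creator_id": "alice", "approver_id": "carol"},
    {"title": "Chapter 2", "creator_id": "bob", "approver_id": "carol"},
]
print(distribute_bounty(11_500, sections))  # {'alice': 2875, 'carol': 5750, 'bob': 2875}
```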

Step 7: Leverage articles in a personalized learning tool

If the textbooks and educational content are assigned the proper data categories in each section of the wiki (as described in Step 2), then the wiki can be pulled into a personalized learning application, in the same way that Salesforce Service Cloud pulls knowledge articles into a workflow. By doing this, we can start to build out new workflows, or learning paths, in the application layer, which can then be categorized and aligned with pieces of relevant learning content.
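As a toy example, a personalized learning tool could assemble a path by filtering wiki sections on their data categories, much as a workflow tool pulls in knowledge articles by category. The section data and query below are hypothetical.

```python
# Hypothetical categorized wiki sections from the finished textbook.
WIKI_SECTIONS = [
    {"title": "What is a CPU?", "categories": {"hardware", "beginner"}},
    {"title": "Memory hierarchies", "categories": {"hardware", "intermediate"}},
    {"title": "Processes and threads", "categories": {"operating-systems", "intermediate"}},
]

def build_learning_path(wanted: set) -> list[str]:
    """Return section titles whose data categories overlap the learner's goals."""
    return [s["title"] for s in WIKI_SECTIONS if s["categories"] & wanted]

print(build_learning_path({"hardware"}))
# ['What is a CPU?', 'Memory hierarchies']
```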

By using this model for educational content creation, we can start to publish entire textbooks in a matter of days. The agility of this model allows for real-time updates to content, removing the need to continually publish revised versions of textbooks. As the education database continues to grow, we can start to think about new ways to distribute that information to places like Africa that are in desperate need of educational resources. We’re almost there. In fact, there have already been instances of entire villages with no electricity accessing a wiki database via a $40 Raspberry Pi computer. The possibilities are endless once the right type of learning content, with the proper data category labeling, is accessible in a free, global knowledge base.

In my next article, I will go into greater detail on the personalized learning application layer, while exploring ways we can combine skills-based education with task-based work to create a very iterative form of on-the-job learning.

