Replies: 3 comments
-
Sounds reasonable; I see no real technical hurdles. I believe we could do this already by combining a Bayes network fed with a(ny) decentralized reputation algorithm and user input. Do note that users having to manually trust other people was the downfall of PGP's web of trust.
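To make that concrete, here is a minimal Python sketch of what fusing a decentralized reputation score with the user's own input could look like, assuming the reputation algorithm outputs a score in [0, 1]; the function name and the prior-strength parameter are purely illustrative, not any existing API:

```python
# Hypothetical sketch: fuse a decentralized reputation score with the
# user's own positive/negative interactions via a Beta-Bernoulli update.

def fused_trust(reputation_score, good_interactions, bad_interactions,
                prior_strength=10):
    """Return P(peer is trustworthy) as the posterior mean of a Beta.

    reputation_score      -- output of any decentralized reputation
                             algorithm, assumed to lie in [0, 1]
    good/bad_interactions -- the user's own observations of this peer
    prior_strength        -- how many pseudo-observations the reputation
                             prior is worth (an assumed tuning knob)
    """
    # Treat the reputation score as a Beta prior worth `prior_strength`
    # observations, then add the user's direct evidence.
    alpha = reputation_score * prior_strength + good_interactions
    beta = (1 - reputation_score) * prior_strength + bad_interactions
    return alpha / (alpha + beta)

# A peer the network rates 0.8, after 3 good and 2 bad direct interactions:
print(fused_trust(0.8, good_interactions=3, bad_interactions=2))  # ~0.73
```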
-
If you look at this from an 18th-century German philosopher's perspective, truth and reality are different concepts, and they are contradicted by humans themselves. In that sense, trust based on one's perceived reality would again reduce to a consensus directive established by humans, which remains under the control of that consensus. Just an exercise to think about...
-
To further what @hbiyik said: trust and truth are fairly independent things.

From experience and personality, our values are oriented into how we frame the world (our worldview). Because the world is more complicated than our formulation of it, we simplify our worldview into axioms, pillars that provide cohesion and consistency within our projection; their constituents include archetypes, aspects, personas, and sub-personalities. Our interactions with what should be an unknown world are then filtered by what we know and understand (our axiomatic pillars), as well as by the priority our current state places on the array of axioms. When we are hangry, we filter our world differently than when we are in a state of nirvana.

Political divergence is typically based on very different axioms about how one constructs what is true. For instance, take these two different constructions towards interpersonal (e.g. gender) politics: "I am to you what I assert to you that I am; respect me" vs "you are to me what I assert that you are to me; respect me". Each side trusts those who can either signal cohesion with their existing frame, or expand it compatibly and beyond. Those who are beyond and incompatible produce dissonance, incomprehension, and to some extent frustration and rage. In a functional democracy, we not only trust those we share the same beliefs with, but also those who can build bridges with the people our worldview does not naturally intersect with.

There is certainly opportunity for a formal network of trust signals to play large roles in our world, such as how Stellar uses trust networks to provide immediate and cheap financial transactions, or for verifying identity, such as voting and passports, beyond government-granted certifications that are exploitable by imitation and fraud.

Truth, though, is a lot harder. There are hard truths ("this happened"), but the "what does it mean?" truths are inherently subjective and of great variety; trust can play a role in bubbling up better answers, but it certainly can't determine the answers itself.

One of the cobra effects that could occur in such a system: how do newly born people earn trust? How do people redeem themselves from prior mishaps? How does someone who is a great person at work, but a terrible person at home, rectify these differences? I think a fundamental aspect of trust is, first and foremost, the contextual question: what does this signal of trust mean? Trust in what? I can trust my wife as a compassionate person with my life, but do I trust her with a responsibility she has no experience in, such as being the president of a country? I can also trust both a libertarian friend and a marxist friend.

Perhaps a different take on the goal would be a concretised relationship between units of knowledge and the knowledge founded upon them. Say 1000 descendant units of understanding were based upon the conceptualisation that "the earth is flat"; we can then either (1) throw out that entire lineage, or (2) evaluate how so much lineage came to be considered true: what was the actual pattern they identified as real? Does it fit into another pattern of understanding? Can it be recategorised as congruent with newly discovered patterns?

Such is often the case with criminal proceedings when new technologies unveil contradictory new interpretations of old evidence (modern DNA testing exonerating falsely convicted people whom the understanding of circumstantial evidence was against; they were apprehended within the vicinity of the crime). This is of particular concern to how we formulate our confidence in our understandings of the world. We can attribute outcome A to causation B because of pattern Z, but perhaps patterns X and Y had far more influence than pattern Z, and we simply did not know of them yet. Perhaps we also vote for politicians who assert bill Q because of pattern Z, but it doesn't play out as they expected because patterns X and Y had yet to be discovered, so they implement bill W to rectify, and so on. It would be nice if the policies that govern our world were hedged on observed patterns and their inputs, outputs, and probabilities, such that if a bill did not play out as planned, rather than adding an infinitely recursive amount of band-aids in an attempt to control the unknown, we could instead remove band-aids and formulate policies that are directly accurate in their comprehension of patterns.

At least in decentralised tech, what trust seems to signify is merely "I assert this entity is a real entity", or "I assert this entity is reasonably trustworthy to me towards the intentions of this specific application", and "if they violate my pledge to them, then I also accept a decrease in my own ability to assert trustworthiness". When it comes to politics, and perhaps this is a better abstraction for trust in general, including for decentralised tech and all its various applications, trust seems to just mean "I assert that this person respects my interests more than someone I do not trust", and distrust means "I assert that this person does not respect my interests". To that extent, trust merely seems to be a one-way staked assumption of shared interest, with mutual trust being a two-way staked assumption of shared interest (cooperation).
-
I have had some thoughts about the direction in which Tribler's development might move and what it might look like in ten years, and I would like to discuss them.
Fake news is becoming the most serious problem in the modern information world. It is extremely difficult for an ordinary person to distinguish fake news from real news because fake news is supported by a large number of fabricated facts and references to specialists who may be real or fictitious.
The current approach to dealing with fake news is centralized censorship, performed either by artificial intelligence from Google or Facebook or by human moderators. But this approach cannot be considered satisfactory due to its opacity: both artificial intelligence and humans can make mistakes, misunderstand the censored content, and be biased. Instead of censorship, an alternative might be to build a trust graph that allows readers to draw their own conclusions about whether the information can be trusted.
In my opinion, the free Internet lost a lot after Google and Facebook decided that it was not profitable for them to share content with the outside world or to encourage reading content outside their systems, and stopped supporting RSS.
Tribler could become a system of free, distributed content aggregation and commenting, allowing users to build a trust graph for the disseminated information.
I see it this way: there are people whom I, as a user of the system, trust. They can be scientists, doctors, economists, politicians, journalists, etc. In Tribler's interface, I press a button indicating that I trust these people. They, in turn, trust other people. This allows the system to build a trust graph.
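For illustration, here is a minimal sketch of how such declared trust edges could be stored and queried; the dictionary layout, the names, and the depth cap are my assumptions, not Tribler's actual design:

```python
from collections import deque

# Hypothetical sketch: each "I trust X" button press adds a directed edge.
trust_edges = {
    "me":          {"popularizer"},
    "popularizer": {"biologist"},
    "biologist":   {"indian_authors"},
}

def reachable_trust(graph, start, max_depth=3):
    """Return everyone reachable from `start` within `max_depth` trust
    hops, mapped to the hop count at which they were first reached."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_depth:
            continue  # do not expand beyond the depth cap
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return seen

print(reachable_trust(trust_edges, "me"))
# {'me': 0, 'popularizer': 1, 'biologist': 2, 'indian_authors': 3}
```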
Consider the following example: in my news feed inside Tribler, I see a link to an article claiming that a certain drug effectively treats COVID. This article is trusted by a well-known biologist (whom I myself may not know). A well-known popularizer of science trusts this biologist, and I trust this popularizer.
The article is a popular presentation of a research paper by several Indian authors (in Tribler, there are links to the scientific work and to its authors). Some Chinese specialists trust this scientific work, while some Italian specialists do not. These specialists are themselves trusted by other scientists, whom I trust along the chain. Two experts have written critical reviews of the scientific work, and there are others who trust (or distrust) these reviews.
Tribler allows me to visually see the most important part of this trust graph and decide for myself whether I can trust this article. Thus, Tribler need not resort to strict censorship but can let users decide for themselves whether the news is trustworthy.
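A rough sketch of how the decision support in this example could be quantified, assuming each endorser's vote is discounted by their distance from me in the trust graph (the actor names and the decay constant are hypothetical, and hop counts are as produced by the breadth-first sketch above):

```python
# Hypothetical sketch: score an article by summing trust/distrust
# endorsements, each discounted by how far the endorser is from "me".

DECAY = 0.5  # assumed per-hop discount; a real system would tune this

def article_score(endorsements, hop_counts):
    """endorsements: list of (endorser, +1 trust / -1 distrust) pairs."""
    score = 0.0
    for endorser, vote in endorsements:
        if endorser in hop_counts:              # ignore strangers
            score += vote * DECAY ** hop_counts[endorser]
    return score

hops = {"biologist": 2, "chinese_expert": 3, "italian_expert": 3}
endorsements = [("biologist", +1), ("chinese_expert", +1),
                ("italian_expert", -1)]
print(article_score(endorsements, hops))  # 0.25: weakly positive
```

The score only summarizes how the endorsements sit within my personal trust graph; the final judgment stays with the user, in line with the non-censorship goal above.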
The Tribler development team has a lot of scientific and practical experience in solving problems related to building a trust graph, and we could build an open platform for creating and analyzing a trust graph.
An example of how relevant the problem of fake news is to the world today can be seen in the ongoing discussion of the results of the elections in the United States. The country is effectively divided into two camps, each presenting both real and fake arguments in defense of its point of view, and for an ordinary person it may be extremely difficult to assess the validity of those arguments. This shows a real need for a system that allows users, in an open and understandable form, to make statements about the truth or falsity of particular news items and to support those statements with a visual trust graph.
Trust in such a system does not have to be calculated in a centralized manner. It should be possible for each person to build a personalized trust graph that takes into account the authoritative opinions that matter specifically to that user.
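One way such a personalized, decentralized calculation could work is a personalized-PageRank-style random walk that restarts at the accounts the user explicitly trusts, so every user computes a different ranking locally; a sketch under those assumptions (the damping factor, iteration count, and example graph are illustrative):

```python
# Hypothetical sketch: personalized-PageRank-style trust, computed
# locally from the user's own seed set, with no central authority.

def personalized_trust(graph, seeds, damping=0.85, iterations=50):
    """graph: {node: set of nodes it declared trust in};
    seeds: the accounts this user trusts directly.
    Returns a per-node trust score (power iteration; mass lost at
    dangling nodes is ignored for simplicity)."""
    nodes = set(graph) | {n for out in graph.values() for n in out}
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(restart)
    for _ in range(iterations):
        nxt = {n: (1 - damping) * restart[n] for n in nodes}
        for node, out in graph.items():
            if out:
                share = damping * rank[node] / len(out)
                for target in out:
                    nxt[target] += share
        rank = nxt
    return rank

graph = {"alice": {"bob", "carol"}, "bob": {"carol"}, "carol": {"dave"}}
print(personalized_trust(graph, seeds={"alice"}))
# Another user with different seeds would rank everyone differently,
# which is exactly the personalization this paragraph calls for.
```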
Over time, the created platform could replace the current approach to the publication of scientific articles. Instead of publishing in a limited number of scientific journals that force readers to pay for access, scientists could post publications and critical responses to them directly in Tribler, using the trust graph to determine the potential value of a publication.
Given the trend towards avalanche-like growth of fake news in recent years, I do not exclude that in ten years, the average user will not even want to read information for which they cannot compute a trust rating based on their personalized trust graph. The problem is urgent and real, and Tribler can become a generally accepted platform for storing and processing a distributed, open trust graph for all relevant pieces of information.