Thursday, December 31, 2020

System Architecture for "Analytical framework for Social media Content Moderation through Crowdsourcing"


1.     Register contributors
A Facebook page will be created to increase the visibility of the app and to find contributors. OpenID will be used to register contributors via Google, Facebook, etc. Any social media user will be allowed to register, but only those who can read Sinhala and are from Sri Lanka will be selected as contributors. Those who do not use social media will not be allowed to contribute as crowd participants. A minimal sketch of this eligibility check is given below.
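As an illustration only, here is a minimal Python sketch of the registration-time eligibility check. It assumes a hypothetical Contributor record populated from the OpenID sign-in and a short sign-up form; the field names are placeholders, not the real schema.

from dataclasses import dataclass

@dataclass
class Contributor:
    # Fields assumed to come from the OpenID profile and a sign-up form.
    user_id: str
    country: str              # ISO code, e.g. "LK" for Sri Lanka
    reads_sinhala: bool
    uses_social_media: bool

def is_eligible(c: Contributor) -> bool:
    # Anyone may register, but only Sri Lankan social media users
    # who can read Sinhala are selected as contributors.
    return c.uses_social_media and c.country == "LK" and c.reads_sinhala

print(is_eligible(Contributor("u1", "LK", True, True)))   # True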

2.     Eliminate contributors
Contributors who do not qualify at the pre-selection stage, fail quality control, or fail the trustworthiness checks will be removed from the platform.

3.     Pre-Selection of contributors
The pre-selection process may be qualification-based (age, demographics, Sinhala and Singlish literacy, etc.), context-specific, trustworthiness-based, persona-based, or open (without considering any of these). An illustrative sketch of these strategies is given below.
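The sketch below treats each strategy as an interchangeable filter; the criteria, thresholds, and candidate fields are invented for illustration and are not the project's actual rules.

def qualification_based(c):
    return c.get("age", 0) >= 18 and c.get("sinhala_literate", False)

def context_specific(c, context="hate_speech"):
    return context in c.get("familiar_contexts", [])

def trustworthiness_based(c, threshold=0.6):
    return c.get("trust_score", 0.0) >= threshold

def persona_based(c, wanted=("student", "journalist")):
    return c.get("persona") in wanted

def open_selection(c):
    # "Without considering any of these": everyone passes.
    return True

def pre_select(candidate, strategies):
    # A candidate passes only if every chosen strategy accepts them.
    return all(rule(candidate) for rule in strategies)

candidate = {"age": 24, "sinhala_literate": True, "trust_score": 0.7}
print(pre_select(candidate, [qualification_based, trustworthiness_based]))   # True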

4.     Question Selection
Implement a question generation mechanism and use it to produce questionnaires that build a hate speech corpus: contributors annotate posts so that the presence and intention of hate speech can be detected and classified. A sketch of such a generator is given below.
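This sketch shows one way a post could be turned into annotation questions; the templates and intent labels are placeholders rather than the project's final taxonomy.

INTENT_LABELS = ["insult", "threat", "sarcasm", "irony", "none"]   # assumed labels

def generate_questions(post_id, post_text):
    # Turn one flagged post into a small questionnaire for annotators.
    return [
        {"post_id": post_id, "type": "binary",
         "text": "Does the following post contain hate speech?\n\n" + post_text},
        {"post_id": post_id, "type": "single_choice",
         "text": "If yes, what is the intention behind it?",
         "options": INTENT_LABELS},
    ]

for q in generate_questions("p42", "<post text in Sinhala or Singlish>"):
    print(q["text"])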

5.     Fire Questions
Fire questions to the pre-selected contributors using a question-firing mechanism such as a decision tree. Categories of such questions include checking the credibility of information shared on social media, detecting fake profiles, and capturing user opinion and behaviour. Advanced questions, such as the example below, are generated after aggregating the responses from contributors; a sketch of the firing mechanism follows the example.
E.g. This content is flagged as hate speech, but x number of people have shared it. Why do you think people keep sharing it?
·        Because it is fun
·        Sarcasm
·        Irony
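A minimal sketch of a decision-tree style firing mechanism, assuming an aggregated share count is available for each post; the thresholds and follow-up wording are invented.

def next_question(post):
    # Branch on the aggregated state of the post and return the next question.
    if not post["flagged_as_hate"]:
        return "Does this post contain hate speech?"
    if post["share_count"] > 100:                    # illustrative threshold
        return ("This content is flagged as hate speech, but "
                + str(post["share_count"]) + " people have shared it. "
                "Why do you think people keep sharing it? (fun / sarcasm / irony)")
    return "Which category of hate speech does this post belong to?"

print(next_question({"flagged_as_hate": True, "share_count": 250}))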
6.     Assign Rewards (Extrinsic Motivators)
Decide rewards and design badges for contributors based on the levels they have completed, their trustworthiness, etc. A sketch of one possible badge rule is given below.
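Purely as an illustration, badges could be derived from completed levels and a trust score; the badge names and thresholds here are assumptions.

def assign_badges(levels_completed, trust_score):
    # Illustrative thresholds; the real scheme may differ.
    badges = []
    if levels_completed >= 5:
        badges.append("Level Master")
    if trust_score >= 0.8:
        badges.append("Trusted Contributor")
    return badges

print(assign_badges(6, 0.85))   # ['Level Master', 'Trusted Contributor']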

7.     Incorporate Intrinsic Motivators
Identify features that appeal to the intrinsic motivation of contributors who want to help build a better cyberspace.

8.     Rewarding Contributors/Gaming Experience
The platform will give contributors a gaming experience so that they stay engaged with the cause. Intrinsic and extrinsic motivators will be embedded in this gaming experience.

9.     Model the trustworthiness of a contributor
A mechanism will be implemented to assess the trustworthiness of each contributor and to assign a trustworthiness badge. Responses from contributors who hold a trustworthiness badge will be given a higher weight when the quality of a response is assessed. A sketch of such a model is given below.
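One simple way to model trustworthiness is agreement with the eventual consensus for each answered question; the sketch below makes that assumption and uses an invented badge threshold.

def trust_score(history):
    # history: list of (contributor_answer, consensus_answer) pairs.
    if not history:
        return 0.5                          # neutral prior for new contributors
    agreed = sum(1 for answer, consensus in history if answer == consensus)
    return agreed / len(history)

def response_weight(score, badge_threshold=0.8):
    # Badge holders' responses count more in quality assessment.
    return 2.0 if score >= badge_threshold else 1.0

score = trust_score([("hate", "hate"), ("not_hate", "hate"), ("hate", "hate")])
print(score, response_weight(score))        # approx. 0.667 1.0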

10.  Aggregation of contributions
Evaluate the human-judgement results for each post and store them (ignored, hate speech or not, category of the hate, etc.) in order to aggregate the contributions and to generate advanced questions at a later stage. A sketch of a weighted aggregation rule is given below.
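Aggregation could, for example, be a weighted vote over the stored labels, using weights such as those from the trustworthiness model above; this is a sketch, not the project's final aggregation rule.

from collections import defaultdict

def aggregate(responses):
    # responses: list of (label, weight) pairs for one post.
    totals = defaultdict(float)
    for label, weight in responses:
        totals[label] += weight
    # Return the label with the highest total weight.
    return max(totals, key=totals.get)

print(aggregate([("hate_speech", 2.0), ("not_hate", 1.0), ("hate_speech", 1.0)]))
# hate_speech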

11.  Suggest Preventive measures
After analysing a post, provide suggestions such as removing the whole post or removing only a part of it. Use the adaptive capability of the UX as a preventive measure in social media communication. A sketch of such a suggestion rule is given below.
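Purely as an illustration, the suggested action could be derived from the aggregated verdict and any flagged spans; the policy encoded here is an assumption.

def suggest_action(verdict, offending_spans):
    # Illustrative policy: full removal, partial removal, or no action.
    if verdict == "hate_speech" and not offending_spans:
        return "remove the whole post"
    if verdict == "hate_speech":
        return "remove only the flagged parts: " + ", ".join(offending_spans)
    return "no action needed"

print(suggest_action("hate_speech", ["<offensive phrase>"]))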

12.  Accessibility of peer contribution
Implement a mechanism to regenerate questions based on how others responded, without revealing who the other contributors are. A sample question and a sketch of the regeneration are given below.

E.g. Sample question
12 contributors or 20% of the contributors have said this post contains hate speech. Do you agree with this statement?
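A sketch of regenerating such a question from anonymous peer statistics, assuming only aggregate counts are stored and contributor identities are never exposed.

def peer_feedback_question(post_id, agree_count, total_asked):
    # Only aggregate counts are exposed; contributor identities stay hidden.
    pct = round(100 * agree_count / total_asked)
    return (str(agree_count) + " contributors (" + str(pct) + "% of those asked) "
            "have said post " + post_id + " contains hate speech. "
            "Do you agree with this statement?")

print(peer_feedback_question("p42", 12, 60))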

