Procedures and Methods for Cross-community Online Deliberation

In this paper I introduce a model of self-regulated mass online deliberation, and apply it first to a context of cross-border deliberation involving translation of contributions between participating languages, and then to a context of cross-community online deliberation for dispute resolution, e.g. between opposing ethnic or religious communities. In such a cross-border or cross-community context, online deliberation should preferably progress as a sequence of segmented phases, each followed by a combining phase. In a segmented phase, each community deliberates separately, and selects its best contributions to be presented to all other communities. Selection is made by using the proposed mechanism of mutual moderation and appraisal of contributions by the participants themselves. In the subsequent combining phase, the selected contributions are translated (by volunteering or randomly selected participants among those who have specified appropriate language skills) and presented to the target segments for further appraisal and commenting. My arguments in support of the proposed mutual moderation and appraisal procedures remain mostly speculative, as the whole subject of mass online self-regulatory deliberation still remains largely unexplored, and there exists no practical realisation of it.

Public online deliberation on a given subject matter typically takes the form of a "one-room" discussion, where every opinion or proposal advanced by any participant is made immediately available to all others, thus providing for a common and unified discussion space. Special incentives may even be provided in such a one-room model, pushing narrow-minded participants to get acquainted with different or opposing views. In this way, one can expect to weaken the "self-seclusion" effect, whereby a group of deliberants concentrates on one point of view while paying no attention to any other. Confrontation due to lack of mutual understanding would thus be ended or appeased in many cases, which is indeed the prime purpose of any deliberation.
Not in every situation, however, can such expectations of appeasing confrontation, i.e. by providing one common deliberation space from the very beginning, be considered well-justified. There are many cases, typically those of a long-term contention between two or more neighbouring ethnic or religious communities, when every attempt to start by creating one common deliberation space for them would be unproductive. Rather, every community should start by holding an internal deliberation "in a separate room", not necessarily to elaborate one common position, but to have the various views within the community become better articulated, well-weighed and crystallised. In this way the disputing communities would become better prepared for the next stage or stages of deliberation, when those crystallised opinions will be re-discussed in a larger, inter-community context.
The same problem appears, and the same solution could even more easily be applied, in a context where a cross-community deliberation is difficult to carry out not because of mutual accusations between communities, but simply because they speak different languages. This is the case when inter-community contentions can be considered no more severe than social contentions within one given nation. However, one cannot expect a fair and productive deliberation if participants speaking one language as their mother tongue are forced to use another language, be it the language of another community or some "common language", e.g. English. In such a case, as I suggest in this paper, deliberation could start in several "rooms" or "linguistic segments" in parallel, and then continue in a more "unified" way through translation of a reduced number of the "best" or "most representative" contributions from their native language into the other participating languages.
Indeed, there is no clear separation line between the two above cases, that of a cross-community deliberation for dispute resolution and that of a "simple" multi-language deliberation. For disputing communities are typically also linguistically divided; and vice versa, linguistically separated communities often have less mutual understanding than people speaking the same language. This is an additional reason to study both cases jointly, simply as cross-community deliberation.
I will greatly simplify my task by considering written online deliberation only; the reasons for this simplification are set out in the following section. On the other hand, an open deliberation may always become populous by attracting a very large number of active participants, on the order of several thousand or even tens of thousands (as has already happened, probably for the first time ever, in the August 2010 Russian experiment with public online hearings of a new draft law on the Federal Police). The mere possibility of having tens of thousands of contributions to deal with makes our task rather difficult.
So, the whole context is that of a large number of participants, possibly speaking different mother tongues, and deliberating by exchanging their contributions in writing over Internet.
In the following sections I start by introducing, in Section 1, a segmented multi-stage model of deliberation, as potentially the most suitable in a cross-community context. Then, in Sections 2 and 3, I present arguments in favour of self-regulation within a deliberation, based on the proposed method of mutual moderation and appraisal. These arguments are valid for both "one-room" and segmented models. Section 4 contains an overview of my basic procedures for self-regulated online deliberation, that would make it efficient and productive. These are supplemented in Section 5 by additional procedures specific to the segmented multi-stage cross-language deliberation model. Then, in Section 6, I attempt to apply my model and procedures to the most complex case of cross-community deliberation for dispute resolution.

Segmented Multi-Stage Deliberation Model
Consider a somehow divided community, or two or more neighbouring or coexisting communities, especially of different ethnic or religious groups, which have several points of discord rooted in their past history and hence difficult to rationalise. Written online deliberation can provide a propitious occasion for such communities to achieve a more unified view and a more peaceful stance, or at least a better mutual understanding. But it may also become a new battlefield, where different factions advance offending utterances and in turn feel offended by others. The outcome would depend on the ability and authority of the deliberation facilitators. Their role can be that of "translating" the opinions and proposals of each faction or community to all the others, by smoothing overly sharp expressions and points of view, and by filtering out the most intolerable ones. Facilitators can also add some explanatory comments or change some terms in a communication or contribution when passing it from its "native" community to another one. Yet the role of those facilitators should be largely accepted by all the disputant communities; otherwise there is no chance that any agreement can be reached and followed. In the following sections I will discuss this problem in more detail and will propose a solution based on self-regulation and self-facilitation.
Clearly, a "one-room" deliberation model, allowing every member of one community to directly address at any time the opposing community, would be counter-productive in such a case.Instead, every community should first deliberate in their own closed circle, trying to rationalise their passionate opinions before starting any discussion with the opposite community.I call such a model a segmented multi-stage deliberation.It can be seen as an online deliberative equivalent of the so-called consociationalism in ordinary "offline" politics.Of course, in every segmented deliberation such a segmented phase should be followed by a combining phase (or, more generally, there should be an alternate sequence of segmented phases and combining phases); otherwise it would remain a set of separate deliberations whose results could only be aggregated by some external authority.
In a less controversial environment, when a cross-border and/or cross-language deliberation on an issue of common importance happens not to be aggravated by mutual accusations, the role of facilitators becomes less important, and we can even consider in some cases that no procedural barriers or restrictions should be placed between those separate "community discussion rooms" at the beginning of a deliberation. Removing those procedural barriers, however, would not make participants able and willing to communicate directly across natural barriers, which may consist, e.g., of their different legal systems and, first of all, of their different languages. For if we consider that everybody should use one and the same common language (e.g. English) in a deliberation, then not only do we exclude from the deliberation all those who are not fluent in that common language (especially in writing), but we also impair the deliberation capabilities of those more educated participants who understand the common language but are much more eloquent in their mother tongue.
As an example, we can consider any eConsultation campaign on a subject matter of pan-European importance. Today, even if such a campaign is launched simultaneously in several member states, it tends to progress as a number of distinct actions within individual member states, with sometimes contradictory separate conclusions, because of linguistic barriers causing a severe lack of mutual understanding. This drawback would become even more apparent if an attempt were made to launch a pan-European online deliberation on a specific theme or subject matter, thus inviting citizens to contribute their own proposals and opinions and to further discuss those proposals and opinions online. Because of linguistic barriers, citizens in different EU countries would have very little information about the progress and current trends of the campaign in other countries or linguistic communities, and in particular would not get acquainted with contributions made by participants in other languages, even the most significant ones.
Obviously, full translation of all the contributions made in the course of a deliberation cannot be done systematically between all the EU languages (or, on a smaller scale, between all the participating languages). In a large and rather populous deliberation this seems both unnecessary and resource-consuming. Therefore, contributions need to be ranked according to their "quality" and/or "importance" or "popularity" within every national segment, so that only the best and/or most representative contributions are translated into other languages. Translation of contributions is indeed a labour-intensive task. Selection of contributions for translation also requires significant effort, but, even worse, it is prone to disagreements and can create resentment among participants if performed by external facilitators.
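To make this selection step concrete, here is a minimal sketch, in Python, of ranking contributions within each language segment and keeping only the top few for translation. All names here (Contribution, select_for_translation, the quality field) are hypothetical illustrations of the idea, not part of any existing system.

```python
# Illustrative sketch: within each language segment, keep only the k
# contributions with the highest aggregate quality grade; only these
# would be queued for translation into the other participating languages.
from dataclasses import dataclass

@dataclass
class Contribution:
    author: str
    language: str   # the segment (mother tongue) it was posted in
    text: str
    quality: float  # aggregate appraisal grade within its own segment

def select_for_translation(contributions, k=3):
    """Group contributions by language segment and keep the k best of each."""
    by_segment = {}
    for c in contributions:
        by_segment.setdefault(c.language, []).append(c)
    return {
        lang: sorted(cs, key=lambda c: c.quality, reverse=True)[:k]
        for lang, cs in by_segment.items()
    }

pool = [
    Contribution("a", "de", "proposal text", 4.2),
    Contribution("b", "de", "another text", 1.0),
    Contribution("c", "fr", "texte de proposition", 3.7),
]
selected = select_for_translation(pool, k=1)
```

In a real deliberation the quality grade would itself come from the mutual appraisal procedure described later in the paper; the point of the sketch is only that selection is a per-segment top-k operation, not a global one.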
Turning back to our initial discussion of online deliberation in a context of inter-community dissent, we should note that in most cases the disputing communities speak different languages; hence the "selection and translation" task should be considered in this case as well. However, in the context of inter-community dissent, the task of selecting the most appropriate contributions to be presented to the other side(s) in the dispute must be approached much more carefully, and will need additional mechanisms and procedural restrictions, as I will show later in this paper.
We arrive at the following list of requirements. An inter-community online deliberation (1) should be structured in several segments, or "discussion rooms", one per participating community; (2) should comprise at least one combining stage, or a set of (asynchronously performed) combining actions; (3) should involve a number of translators; (4) should also involve a number of appraisers, whose task is to select the most appropriate native contributions for translation or presentation to the other participating communities; (5) in the context of an inter-community dissent, it should probably also involve a number of facilitators, whose role would be to present (or edit, or modify) the selected native contributions to other communities in such a way as to keep the whole deliberation goal-oriented and productive.
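The skeleton of such a deliberation, alternating segmented and combining phases over a set of community segments, can be sketched as a minimal data model. All class and field names below are hypothetical illustrations, not part of the paper's formal model.

```python
# Minimal sketch of the requirements above: one segment per community,
# and a deliberation object that alternates segmented/combining phases.
from dataclasses import dataclass, field

@dataclass
class Segment:
    community: str
    language: str
    participants: list = field(default_factory=list)

@dataclass
class Deliberation:
    segments: list
    phase: str = "segmented"  # alternates with "combining"

    def next_phase(self):
        """Advance from a segmented phase to a combining one, or back."""
        self.phase = "combining" if self.phase == "segmented" else "segmented"
        return self.phase

d = Deliberation(segments=[Segment("Community A", "fr"),
                           Segment("Community B", "de")])
```

Translators, appraisers and (where needed) facilitators would then be roles drawn from the participant lists of the segments, as discussed in the sections that follow.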
In the following section I develop my arguments in favour of a self-regulatory model for such a segmented online deliberation.Then I propose a way to implement this model by using appropriate procedures aimed at installing both restrictions and incentives compelling participants to act fairly and in good faith.
Before continuing with my arguments, we should note that I am restricting myself to the case of asynchronous written online deliberation. Hence, deliberation models that include oral online deliberation, or even synchronous written online deliberation (e.g. using "chat" or other instant messaging services), are not considered here at all. For only when people exchange well thought-out and attentively prepared contributions can they expect to achieve at least a mutual understanding, if not a common agreement, especially when discussing a controversial subject in a populous forum. Immediate exchange of opinions does not provide for such a possibility.
There might also be a question of how we can draw separation lines between those "online discussion rooms" when real-world communities are never fully separated. This is of course a question of self-determination of every individual citizen as a member of this or that community. Cases of abuse may result from multiple registrations, so in critical contexts controlled unique registration (as for national elections) may be required. This will be discussed in Section 6 in more detail.

External Facilitators vs. Self-Regulation
As I have already pointed out, in any online deliberation, and more specifically in a segmented deliberation, there are several facilitation tasks that are typically meant to be performed by designated agents rather than by the participants themselves. These tasks may include initial moderation of contributions; their semantic sorting and sometimes also quality rating; then, summing up or merging multiple similar or compatible contributions, and editing a final report or proposal, etc. A good example is provided by the EC-funded "DEMOS" project (Lührs et al., 2001).
In a multi-language segmented deliberation, the task of translating the top-rated proposals from their native languages into the other participating languages is added to the above list. And, when such a segmented deliberation occurs in a highly controversial context, some additional moderation or facilitation tasks may be needed that are much more subtle than ordinary "moderation" in a typical online forum, the latter being rather a simple "censorship", as it consists in "banning" some contributions instead of "smoothing" them in an agreeable way.
Most if not all of today's experiments in online deliberation on political or societal issues make use of a staff of skilled or specially trained moderators/facilitators. Typically, this staff belongs to one or several organisations (universities, municipal services, NGOs) in charge of a given eParticipation project; they are therefore paid, directly or indirectly, from the project budget. Until now, however, the total number of participants in any such deliberative campaign has been quite limited, on the order of a few hundred people at most (as an example, the most populous and successful instance of the above-cited "DEMOS" project attracted 285 active participants). Hence the cost of employing a staff of facilitators has remained low enough.
Current efforts in the field of eParticipation are mostly aimed at enlarging by any means the circle of participants.I, in contrast, am not dealing with the problem of how to increase numbers of participants; rather, I am studying a much awaited and desired, though still hypothetical, situation when an open online deliberation on some topical question succeeds in becoming really populous, by attracting at least several thousand active participants: that is, participants who not only read what is supplied by others, but also write their own contributions, appraise contributions made by others, and vote on proposed choices.
In such a case, employing a hired staff of facilitators would become rather impractical. First, it would involve high organizational and operational costs. Then, it would be seen by many participants as an external body meddling with their discussion and probably trying to direct it towards some externally defined goals. Hence, interventions by those external facilitators would very probably create distrust and disagreement. In today's eParticipation projects this problem does not seem so acute, because the few people who do participate in those projects have a clear understanding of their still-experimental character, and are rather disposed to trust the organisers; those who do not trust them simply do not participate. Incidentally, this may be seen as one of the causes of insufficient participation in most past eParticipation projects.
Finally, a large staff of facilitators would very probably become rather inefficient, because it would then have to apply its own internal procedures for serving a large number of customers (the participants), while those specific procedures have not yet been studied and developed.
These are my arguments against relying on predominantly staffed/paid facilitators.An alternative solution would be to organise the online deliberation in such a way that its participants perform those regulation and facilitation tasks mostly or entirely by themselves.I will call such an online deliberation self-regulatory.
Hereafter I propose some arguments in support of such a self-regulatory model. At this point my arguments remain rather speculative: firstly because I am not a specialist in the appropriate fields of sociology, social psychology and sociolinguistics (e.g. discourse analysis); but also because the question of how the discourses of participants (and more generally, their deliberative actions, including voting and assigning appraisal grades) are influenced by a given system of imposed and enforced rules, restrictions and incentives seems not to have been studied yet, for such an advanced system of rules for online deliberation is still to be developed and tested in full-size experiments.
Surely, some partial results can be drawn from the existing experience with Wikipedia and other collaborative projects, and also with various Internet forums, social networks and even online games. However, while Wikipedia is strongly regulated in a top-down fashion (which is now widely criticised, and may be seen as demonstrating the weakness of the non-self-regulatory model), in most other contexts regulation is rather loose, promoting the superficial attractiveness of a given Internet product to the detriment of social responsibility and overall productivity. For example, in a typical discussion forum moderators just delete inadmissible posts and ban persistent offenders, while no restrictions or incentives are installed to prevent unlimited sabotage of any discussion thread. We can conclude that the existing "field data" are of a mostly negative nature: what is going wrong, and for what conjectural reasons.

Arguments in Support of Self-regulation
In this section I develop my main arguments in favour of self-regulation, namely, why a self-regulatory open online deliberation may become possible at all, and why it is indeed the best model for an efficient and effective deliberation. Most of the section discusses the self-regulation issue in the "one-room" deliberation context, while cross-community segmented online deliberation is discussed at the end of the section as a special, more complex case.
First of all, I assume that citizens who join a deliberation on a subject matter pertaining to the common good are themselves motivated by the common good. This is indeed a strongly "Habermassian" belief, miles away from any "social choice" considerations. However, I am not alone in holding this belief. It has been suggested (see e.g. Dryzek, 2000, where he further cites Fearon, 1998 and Goodin, 1992) that a person deciding to speak or act in a public space quite naturally takes an attitude which is the most appropriate for the public space, that is, well-intentioned, purposeful and concerned with the common good; and even if such an attitude is at first a mere pretence, with time it becomes more entrenched in that person's mind. In an online deliberation, where participants act under pseudonyms as "virtual persons" emancipated from their known or supposed characteristics as physical persons, such an attitude would become even easier to take; a bad guy is playing a good guy's role in an online game. It is pleasant and costs nothing, to begin with; then, it may eventually bring some social rewards, moral or even material ones, if we provide for assigning such rewards to the "best" participants.
Yet, to be fairly rewarded, the participant as a physical person must have proof that he or she is indeed the person who acts under a given pseudonym, and that all contributions and actions performed under that pseudonym indeed belong to that virtual, and hence physical person.This problem can be solved by using a digital signature that brings sufficient proof of authorship.
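As a toy illustration of this authorship-proof idea, the following sketch uses a simple hash commitment in place of a real digital signature (a production system would use public-key signatures, where authorship can be verified without revealing any secret). The function names and the commitment scheme itself are illustrative assumptions, not the paper's proposal.

```python
# Toy sketch: a participant holds a secret tied to their pseudonym and tags
# each contribution with a hash of (secret + contribution). Revealing the
# secret later proves that all so-tagged contributions share one author.
# Note: unlike a real digital signature, revealing the secret "burns" it.
import hashlib

def commit(secret: bytes, message: bytes) -> str:
    """Tag a contribution; only the holder of `secret` could produce this tag."""
    return hashlib.sha256(secret + message).hexdigest()

def verify(secret: bytes, message: bytes, tag: str) -> bool:
    """Once the secret is revealed, check that the tag matches the contribution."""
    return commit(secret, message) == tag

secret = b"my-pseudonym-secret"
msg = b"my contribution text"
tag = commit(secret, msg)
```

The design choice being illustrated is only that the link between a virtual person and their contributions can be made cryptographically checkable, so that rewards can safely be attributed to the physical person behind the pseudonym.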
Of course, all these considerations cease to apply when the question being discussed matters to me personally: it is one thing to decide whether our region needs a new highway, and quite another to decide whether a new highway should or should not pass two hundred metres from my home. That is why general questions (e.g. draft laws) are much more appropriate for public deliberation than overly particular ones, as the latter may strike heavily at somebody's vital interests. General matters can be discussed in a "Habermassian" mood, while overly particular matters may often require dispute resolution by aggregation of immutable personal preferences.
Going further, we can assume that a well-intentioned and purposeful participant would willingly perform a number of auxiliary or service tasks in the deliberation forum, provided either that those potentially burdensome tasks are fairly distributed among all participants, or that one can accumulate special credits for performing such tasks.
In this way, the task of initial moderation (i.e. censorship) of all contributions can be performed by randomly distributing all freshly posted contributions among all currently available participants; those who are often unavailable (offline) can declare in advance their readiness to perform such task(s) at a given date and time.
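The random-distribution step can be sketched in a few lines of Python. The data shapes and function name below are hypothetical illustrations; the one constraint that clearly follows from the model is that a contribution's author should never moderate their own contribution.

```python
# Sketch: forward each new contribution to one randomly chosen participant
# who is currently available (online, or pre-declared for this time slot),
# excluding the contribution's own author.
import random

def assign_moderator(contribution_id, participants, author, rng=random):
    """Pick one random available participant, never the author themselves."""
    eligible = [p for p in participants
                if p["available"] and p["name"] != author]
    if not eligible:
        raise RuntimeError("no available moderator")
    return rng.choice(eligible)["name"]

people = [
    {"name": "alice", "available": True},
    {"name": "bob", "available": False},
    {"name": "carol", "available": True},
]
moderator = assign_moderator(1, people, author="carol")
```

Randomness is doing real work here: because assignments cannot be predicted or chosen, neither authors nor moderators can arrange favourable pairings.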
Next comes a more interesting task, that of attentively reading and appropriately appraising a number of others' contributions, not only those corresponding to my current "political preference", but also those with which I rather disagree. As I stated in the first lines of this paper, in a "one-room" deliberation there should be no self-seclusion; everybody should be aware of others' opinions. The problem can be solved by further distributing every new contribution, after it has successfully passed the initial moderation stage, to a few randomly selected participants for reading and appraisal; in the next section I outline the proposed solution in more detail. Other participants can indeed also appraise the contribution on a voluntary basis.
As an important secondary effect of this process of mutual appraisal, participants themselves can be rated according, e.g., to the overall quality of their own contributions and/or to some other parameters of their deliberative actions. In this way, a smooth dynamic hierarchy among participants is created and maintained by the system; it should be stressed that this hierarchy is the result of the participants' actions toward each other, and is not influenced by any external factors such as the social positions or merits of participants as physical persons in real life. Participants would therefore more easily accept such a hierarchy, and would have an incentive to act in an appropriate way on the forum (these arguments are developed in more detail in Velikanov, 2010d and 2010e).
Moreover, by granting higher rated participants an accordingly higher weight, with which every one of their appraisals or voting actions will be counted, I expect to bring much more stability into the deliberation model, as ill-intentioned participants, even acting conjointly, would never attain high ratings and hence would vote with only a minimum weight.
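The stabilising effect of weighted counting can be shown with a minimal sketch. The weights and grades below are invented numbers chosen only to illustrate the mechanism; the function name is hypothetical.

```python
# Sketch: each appraiser's grade is multiplied by their current weight, so
# low-rated participants (e.g. a colluding group of ill-intentioned
# newcomers) influence the aggregate only minimally.
def weighted_total(appraisals):
    """appraisals: list of (grade, weight) pairs; grades may be negative."""
    return sum(grade * weight for grade, weight in appraisals)

# Three colluding newcomers (weight 1) downvote a contribution that two
# well-rated participants (weight 4) have graded positively:
total = weighted_total([(-1, 1), (-1, 1), (-1, 1), (+2, 4), (+1, 4)])
```

Despite being outnumbered, the two high-weight appraisals dominate the total, which is the stability property argued for above.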
Summing up all the above considerations, I conclude that self-regulation would probably be possible in an (asynchronous written) online deliberation, provided that an appropriate mutual appraisal procedure is applied that both restrains participants from unfair, ill-intentioned or self-secluding behaviour, and rewards them for being fair, purposeful and attentive to others' arguments.

Now, passing on to the more complicated case of a segmented deliberation aimed at resolving inter-community disputes, I would like to put forward the following general considerations. First, such a deliberation simply will not happen unless there is a clear understanding (or a desire, or an expectation) among members of both communities that some progress in their mutual relations could and should be made, right now or in the nearest future.
Second, there should be an original approach, or some fresh ideas or considerations capable of creating such expectations; alternatively, people could just expect that such fresh ideas might appear in the course of their deliberation.
Third, there should be some personalities who "embody" those fresh ideas or original considerations; or, similarly, such personalities could be expected to emerge in the course of the deliberation.
Fourth, it is essential that the members of each community neither expect to win, nor agree to give up, nor be merely ready to bargain; instead, they should expect to achieve a better understanding of their own position as well as that of the other side, and to arrive at a new solution based on that new understanding.
And finally, I anticipate that they would be happy to know that they are discussing those contentious issues among themselves and on their own initiative, rather than under the aegis of some peace-making body or supranational government.
The above considerations indeed reflect yet again my Habermassian belief; I do not claim, however, that conflicting communities anywhere on the globe, or their individual members, are ready to accept those considerations as today's requirements.I simply state that a self-regulated online deliberation between those involved in a conflict may probably become the best instrument for bringing a real and durable inter-community agreement, an instrument much more efficient than any externally managed negotiations.Such an instrument is therefore worth being implemented, experimented with, and popularised.
The following sections contain a brief description of my proposed self-regulation procedures for mass online deliberation, starting with a homogeneous case, then passing to a segmented cross-language deliberation, and finally arriving at the case of a segmented cross-community deliberation for dispute resolution.

Basic Self-Regulatory Procedures for Online Deliberation
In this section I present the basic characteristics of the proposed model for online deliberation. I have already presented the model in more detail at a series of conferences earlier this spring (Velikanov, 2010a, 2010b, and 2010c), so this is just a short summary with references to those earlier papers. The model, as described hereafter, may seem too complicated for an "average participant"; but the latter need not know its internal mechanics, as the user interface can be made reasonably simple and intuitive.
To implement the above-discussed principles of ordered political online deliberation, a set of software-implemented procedures is needed, able to withstand quite populous eParticipation campaigns. Any existing methods and tools for automated semantic analysis of texts, as described elsewhere in the literature, could certainly be applied here for contribution sorting and grouping; though, in my view, all such tools can only play a subordinate role, facilitating mostly "manual" work by human participants. So, I put the emphasis on behavioural procedures rather than on semantic tools. Here are the major steps and characteristics of the proposed method:

1. A "theme" (or a problem, or a subject matter) is submitted for discussion, presumably in a top-down way (participant-defined themes could probably be considered at a later stage). A short preliminary discussion may be held at this point, aimed at better focussing the theme. Then, expert information on the theme is solicited from experts in the field (presumably from academic circles). Expert information should present known facts, rather than the experts' own opinions or proposals.
2. Then people start a deliberation (in writing, over the Internet), by contributing their proposals/solutions, exchanging critical comments on those proposals, merging and further developing them in one or more competing proposals, and finally voting on a limited set of the resulting common proposals.This multi-step procedure is software-controlled.
3. In the course of the deliberation, participants perform mutual moderation and appraisal of each other's contributions, without employing an external staff of paid moderators and/or editors. (In contrast, the experts who provide initial information would probably be paid for their work.) A new contribution is first forwarded to one randomly selected participant, who will act as a moderator by accepting or rejecting it according to some set of formal admissibility rules.
4. If accepted, the contribution is forwarded to three other randomly selected participants for an obligatory blind quality appraisal. Appraising a contribution means assigning it a quality grade. Basically, this procedure is quite similar to the blind peer review widely used by scientific journals and conference chairs to select good-quality papers (the idea of applying it to contributions in an online deliberation was suggested in Stodolsky, 2002). I suggest, however, enhancing it in the following way: if the total of those three appraisal grades is high enough, then the contribution is submitted for further appraisal to another set of randomly selected participants, and so on iteratively, up to some number of stages. In this way, a really good contribution is rapidly promoted to the top of the list, where it will certainly be seen by almost every participant, while a poor contribution will not require too much appraisal effort from the community of deliberants.

5. At the end of this obligatory appraisal stage (or perhaps simultaneously with it), the contribution is made accessible to all other participants, who can read and appraise it on a discretionary basis; in this way, the contribution may collect supplementary points (which can be, as with every appraisal grade, either positive or negative). However, I assign more importance to appraisal points assigned by the randomly selected appraisers than to those assigned by volunteers at will. In this way I expect to combat the "claque effect", when the author organises his/her friends to give him support.
6. Participants are also invited to assess their degree of agreement with every contribution they read. They are urged to appraise a contribution's quality independently of whether they agree with the opinions expressed in it.

7. In the minds of most appraisers, however, these two parameters of a contribution, its quality level and the degree of the appraiser's agreement with it, which are theoretically orthogonal, will remain heavily correlated. To de-correlate them further, positive quality grades assigned in case of agreement (and, inversely, negative quality grades assigned in case of disagreement) could be counted by the system as less significant than grades in the cases "good-quality contribution, though I don't agree with it" and "badly expressed idea, though I agree with it".

8. Quality grades assigned to a contribution by individual appraisers are then aggregated (totalled and/or averaged) by the system. Agreement grades assigned by participants can also be aggregated, though I propose to use them in a more subtle way. Namely, the system can algorithmically create clusters of contributions that are mostly supported by the same clusters of participants, and hence probably contain similar or compatible opinions or proposals. In this way, participants can easily orient themselves within a potentially large quantity of contributions by paying more attention to the most representative ones, i.e. those that have obtained the "locally highest" aggregate quality grades within their respective opinion clusters (and not just to the few "globally best" or "globally most supported" ones, as is typically done).

9. Every participant has an activity counter where points are added for every moderation/appraisal action performed; these points can then be "spent" when posting one's own contribution. In this way, you have to work for the community (by listening to other community members) in order to be heard by the community.

10. The author of a rejected or negatively appraised contribution can resort to arbitration; similarly, a "vigilant reader" can call for arbitration when considering some contribution unduly accepted or too highly rated. The moderator/appraiser is the defendant in both cases. The "arbitration court" would consist of three randomly selected participants. Its decision should be considered final, and should not only reverse the initial moderator's decision if so decided, but also impose a "penalty" on the guilty side.

11. A trust or reputation count is assigned to every participant; it receives some number of additional points with every passed contribution, and loses yet more points with a rejected one. It also increments with every moderation/appraisal action not reversed by arbitration, while a negative arbitration decision subtracts several penalty points from the guilty side's reputation.

CC: Creative Commons License, 2009.
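The de-correlation rule of point 7 above can be sketched as a grade-weighting function. The discount factor is an illustrative tuning parameter, not a value taken from the model:

```python
def weighted_quality_grade(quality_grade, agreement, discount=0.5):
    """Discount a quality grade that points the same way as the
    appraiser's agreement (both positive or both negative), since such
    grades are more likely contaminated by opinion. Grades that go
    against the appraiser's own agreement ("good contribution, though I
    disagree") count in full."""
    same_direction = quality_grade * agreement > 0
    return quality_grade * (discount if same_direction else 1.0)
```

The weighted grades, rather than the raw ones, would then feed the aggregation of point 8.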
12. Each participant is assigned a rating, calculated from the aggregate appraisal grades of all his/her contributions (e.g. an average of the totals over all contributions). The rating of a participant is a dynamic characteristic: it can gradually increase, but it can decrease as well (e.g. after posting a negatively appraised contribution). It has no upper bound, though it may have a negative lower bound (a value below which the participant is banned from posting new contributions).
13. Another characteristic of a participant is his/her weight, defined as a bounded function of his/her rating, growing from 1 up to some maximum value. This maximum value is set dynamically by the system depending on the actual "activity level" of the forum, so that it is low at first and then typically grows with the number of participants and the total number of their contributions. The weight of a participant is a multiplying factor applied to each of his/her subsequent votes and/or appraisals of others' contributions. The weight is also a dynamic characteristic: it can both increase and decrease. In this way, an open, dynamic hierarchy is created and maintained among participants.
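Points 12 and 13 can be illustrated by a minimal weight function. The linear slope and saturation shape are assumptions; the text only requires a bounded function growing from 1 up to a system-set maximum:

```python
def participant_weight(rating, max_weight, slope=0.1):
    """Weight as a bounded, non-decreasing function of rating: starts
    at 1 for zero or negative ratings, grows linearly, and saturates at
    `max_weight` (which the system raises as forum activity grows)."""
    return min(max_weight, max(1.0, 1.0 + slope * rating))
```

The returned value would multiply each of the participant's subsequent votes and appraisal grades.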
14. Participants are provided with appropriate tools for finding similar or compatible proposals and for easily commenting on the similarities found. They are encouraged to accept the borrowing of their ideas by others and/or to form working groups or teams for further collaboration on those proposals and ideas. Incentives for doing so may take the form, e.g., of additional points added to their ratings (here I refer to an open online collaborative development model, which will be described in detail in a paper still under preparation).
15. Participants act under unique pseudonyms. There exist well-known methods of preserving participants' confidentiality while enforcing a one-to-one correspondence between physical persons (citizens) and virtual participants, e.g. methods based on using national identity card numbers for one-way ciphering during electronic registration (see e.g. Cameron, 2009).
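One way to realise the one-way ciphering of point 15 is to derive each pseudonym as a keyed hash of the national identity card number. HMAC-SHA-256 and the campaign-level secret are illustrative choices of mine, not prescribed by the model:

```python
import hashlib
import hmac

def derive_pseudonym(national_id, campaign_secret):
    """One-way derivation of a stable pseudonym from a national identity
    card number, keyed by a campaign-level secret so that pseudonyms
    cannot be reversed or precomputed by outsiders. The same person
    always maps to the same pseudonym, enforcing one registration per
    citizen without revealing the underlying identity."""
    mac = hmac.new(campaign_secret.encode(), national_id.encode(),
                   hashlib.sha256)
    return mac.hexdigest()[:16]  # truncated for display purposes
```

Registration would store only the derived pseudonym, never the identity number itself.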
16. A digital signature on every contribution, together with its time-stamping by the system, provides for maintaining contributions' authorship and priority. In this way, contributors can securely collect points received for their valuable contributions. On the other hand, they are not able to spam anonymously, and their "bad points" are stored by the system along with the "good" ones.
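Point 16 can be approximated, for illustration only, by a keyless fingerprint binding author, text and time-stamp; a real deployment would use a proper digital signature scheme (e.g. Ed25519) rather than a bare hash:

```python
import hashlib

def contribution_fingerprint(pseudonym, text, timestamp):
    """Tamper-evident fingerprint binding the author's pseudonym, the
    contribution text and the system time-stamp. Any later change to
    the text or the claimed posting time yields a different digest, so
    authorship and priority can be checked after the fact."""
    payload = f"{pseudonym}|{timestamp}|{text}".encode()
    return hashlib.sha256(payload).hexdigest()
```

The system would store the fingerprint alongside the contribution at posting time and recompute it on any dispute.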

Self-regulatory procedures for Segmented Cross-Language Online Deliberation
As stated above, an eParticipation campaign covering several distinct communities speaking different languages (e.g. a pan-European campaign, or a cross-border campaign concerning the Danube basin development and management) should comprise a segmented phase followed by a combining phase, or even an alternating sequence of several segmented and combining phases. In a segmented phase, deliberation takes place separately in each segment ("discussion room"), allowing participants to read and contribute in their respective native tongues. In a combining phase, participants within their native segments are invited to read, appraise and comment on contributions coming from other segments, along with their "native" contributions.
Those "foreign" contributions need to be translated from their respective native languages into all other participating languages.In my model, translation of contributions is preferably done by participants themselves; it represents another task imposed by the whole community of participants on its individual members.
Yet it appears rather difficult, if possible at all, to translate all contributions posted within every national segment into all other participating languages. Only the "best" or the "most representative" native contributions should thus be translated and offered to other communities. In my model, this selection is made by the participants themselves within their native segments in a segmented phase of the deliberation. The latter is organised, in every individual segment, according to the self-regulatory procedures for the "one-room" deliberation described in the previous section.
Contributions thus selected are then randomly distributed for translation among those participants who are currently active and who are capable of translating from the given language into another language (presumably their mother tongue).To that end, every participant declares, at registration time, to/from which language(s) he/she can translate, and the system keeps track and makes use of this information when performing random selection of translators.
When considering mutual appraisal of contributions, I assign more importance to randomly selected appraisers than to volunteers, to combat the "claque effect", as mentioned in the previous section. In contrast, translation of contributions could perhaps even better be done by volunteers than by randomly assigned participants: agreement with the author, or sympathy for him/her, would not create a claque effect here. Therefore, every contribution selected for translation is first proposed for voluntary translation; then, if no volunteers are found, it is randomly assigned (separately for every target language). As a last resort, if no translators are available for a given language, translation can be done by paid staff, if available. In more detail, this translation procedure may look as follows:

(1) Dispatching a native contribution to the other participating language segments, and launching a "request for translation". Where one or more volunteer translators come forward, one (or two) of them are randomly selected and perform the translation.
(2) Where no volunteers are found, the translation task is randomly assigned, within a given language segment, to one (or two) bilingual participant(s). Performing translation tasks, whether voluntarily or by assignment, may optionally result in some rewards for translators (e.g. points added to their "activity count"; see the previous section).
(3) In those cases where no translators are found among participants, translation is performed by hired staff, though my model aims at minimising such interventions.Also, in some difficult cases double translation (e.g. from Finnish to English then to Greek) may be considered.
(4) The translated contribution is thus made available to other participating language communities for reading, appraising and commenting, on a level playing field with native contributions.
(5) An additional step may consist of translating the "best-valued" native comments on a translated contribution back into the contribution's native language and into the other participating languages. This can be done by applying steps (1) to (4) above to the selected comments.

Now, let us return to the starting point of such a cross-language eParticipation campaign, with reference to point (1) of the previous section. At the beginning, the theme of a new campaign is specified, either top-down (eConsultation style) or bottom-up (ePetition style). Then a starter information package on the theme (comprising e.g. one or several expert surveys) is provided by the campaign staff in every language of the EU or of the countries concerned by the campaign. There may sometimes appear several mutually contradictory surveys prepared by national experts from different EU countries; these have every reason to be translated into the other languages to enhance inter-community understanding. Note that, when the campaign budget is low, translation of expert surveys can also be done by volunteering or randomly assigned participants, e.g. chapter by chapter.
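The volunteer-first fallback chain of steps (1) to (3) can be sketched as follows; all names, including the "staff" sentinel for paid-staff translation, are illustrative assumptions:

```python
import random

def assign_translators(volunteers, bilinguals, n=1, rng=None):
    """Select translators for one contribution and one target language:
    prefer randomly chosen volunteers (step 1), then randomly assigned
    bilingual participants who declared the needed language pair at
    registration (step 2), and fall back to paid staff (step 3)."""
    rng = rng or random.Random()
    if volunteers:
        return rng.sample(volunteers, min(n, len(volunteers)))
    if bilinguals:
        return rng.sample(bilinguals, min(n, len(bilinguals)))
    return ["staff"]  # last resort: hired campaign staff
```

The system would call this once per (contribution, target language) pair, drawing the candidate lists from the language skills declared at registration time.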
The sequence of phases in my model is as follows ("S" stands for "segmented", "C" for "combining"):

S1: Registration of participants (separately in every segment); the system creates a common database where participants' profiles (including their language skills) are registered, together with their assigned digital signatures
C1: Formulation of the campaign's theme or subject matter, or problem(s) to solve
S2: Preparation of expert surveys
C2: Translation of the expert surveys into all the participating languages
S3: Segmented deliberation, including mutual appraisal, which results in selecting the best contributions
C3: Distribution of the best native contributions for translation; their translation into all languages
S4: Segmented deliberation, in which the translated contributions are appraised along with the native ones within every segment
C4: Same as step C3, but applied to the best comments selected in step S4 within each segment

The diagram below shows the cross-language flow of contributions according to my segmented model (where steps C1-S2-C2 are shown combined, and step C4 is not shown). Small black circles represent native contributions, white circles represent their translations. The French, German and Dutch languages are chosen by way of example only. Horizontal panes with dotted borders represent specific functions that are applied to individual contributions progressing in the bottom-up direction.
Figure 1: Cross-language flow of contributions

I conclude this section with one important note. Translation of the "best-rated" contributions and their "injection" into other segments for further deliberation need not necessarily be done in a separate "combining phase". We can consider instead a "smoother" procedure, in which every contribution can be proposed for translation as soon as it attains some level of quality or degree of support, according to its appraisal results within its native segment. In this way, translation tasks may be more evenly distributed in time, and participants would see freshly translated contributions appearing at any moment, in the same way as fresh native contributions may appear. This alternative approach, however, does not provide for "selection" of the best contributions; rather, contributions are judged in absolute terms (not "the best", but "sufficiently good"), and hence their number would be variable. Also, since a freshly translated contribution may appear at any moment, it may go unnoticed; it should therefore undergo a new "peer review" (obligatory quality appraisal) within every receiving segment, making the whole procedure longer and more burdensome.
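The "smooth" variant can be sketched as a simple absolute-threshold test applied after each appraisal; the threshold values are illustrative assumptions:

```python
def ready_for_translation(aggregate_grade, n_appraisals,
                          min_grade=5, min_appraisals=3):
    """'Smooth' variant of translation triggering: a contribution is
    proposed for translation as soon as it is judged sufficiently good
    in absolute terms (enough appraisals, high enough aggregate grade),
    rather than being selected as one of 'the best' at the end of a
    segmented phase."""
    return n_appraisals >= min_appraisals and aggregate_grade >= min_grade
```

The minimum number of appraisals guards against triggering translation on a single enthusiastic grade.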

Segmented Model for Cross-Community Deliberative Dispute Resolution
Now I will try to apply the aforementioned segmented cross-language model and procedures to those cases where distinct communities, not necessarily speaking different languages, attempt to resolve their dispute(s) by participating in a common deliberation. As I have already stated, one or more segmented phases would be beneficial or even necessary for each of those communities to discuss a contentious topic "among one's own people". By comparison, complex diplomatic negotiations always comprise separate confidential meetings; yet in the case of an online deliberation, which is potentially open to every member of each of the disputing communities, confidentiality of such an intra-community discussion (a "closed discussion room") simply cannot be implemented.
Deliberation within every community segment can therefore be watched at will by members of all other communities. Moreover, as the limits of those communities are never strictly defined, there may always appear participants who register within the opposing community, aiming at influencing or even obstructing its internal deliberation. I do not expect, however, that such "planted" participants could noticeably weaken or endanger the deliberation model, for the following reasons.
First, if we apply some method of controlled registration, eliminating the possibility of multiple registrations by the same person, then every person has a choice: either to register within the community that he/she considers his/her "own", presumably for some positive participation, or to register within the opposing community. In the latter case, such a "foreign" participant would not always be willing only to disturb the others' discussion; he/she could instead be willing to bring some peaceful ideas or reasonable considerations into an otherwise too heated discussion within the opposing community.
If such a "peacemaker" succeeds in influencing the opposing community's stance, the whole community would perhaps be grateful to him/her afterwards. In all other cases, his/her action would not succeed, thanks to the mutual moderation mechanism; hence it would bring no harm to the deliberation. If, however, many people do the same in a coordinated fashion, by supporting each other within the "foreign" online community, they would at the same time weaken their own community by "deserting" it. Thus the only case where the "real" affiliation of a participant to a community is better checked at registration time is when the community is so largely outnumbered a minority that the opposing majority community can "delegate" a large number of "planted" participants to distort that minority's opinion, while still remaining a large majority themselves.

Now, if we put aside those cases of "planted" participants and consider deliberation within each segment as genuine, we still cannot exclude the cases where a contribution within one segment is cited, commented on or even mocked within the opposing segment. This may cause discomfort, but people should get used to it: they deliberate in a room, other people comment in the street, but they do not pay attention to those comments.

Now, after having discussed various cases of "irregular" cross-community behaviour, let us turn to the "regular" case, when each community progresses its deliberation within its own segment without being disturbed by intruders. Each community collectively elaborates its proposals, opinions, ideas and considerations, and collectively selects the best contributions, i.e. the contributions expressing those proposals and ideas in the most eloquent manner, by using the mutual appraisal mechanism. The best contributions are then translated if necessary, and submitted to the opposing community/ies as an "official input".
Let us consider the practical case of two neighbouring ethnic communities which decide to make an effort to settle their long-standing dispute. There would certainly be, among the participants on either side, both peacemakers and belligerents; there would be those who wish to recall old grievances and those who wish to recall independent historical accounts or statistical data, etc. Obviously, all those different stances, opinions, feelings and dispositions should not only be discussed within the corresponding community, but also presented to the opposing community, in order to keep the deliberation inclusive and, finally, to achieve sound results. Yet each of the opposing communities would be mostly interested in seeing those contributions that express the opinions of a majority or of a significantly large minority of their opponents, and which have been considered, within each category, the most eloquent and representative ones.
Here I can propose two alternative methods. The first consists in elaborating a (universal or case-specific) taxonomy of contribution categories, reflecting not only their content but also their intent, e.g. "to bring forward a compromise", "to recall our claims", "to admit some level of our own guilt or faults", etc. Participants themselves mark their contributions as belonging to a specific category; selection of the "best" ones is then made separately within each category. The second method makes use of contribution clustering: contributions mostly supported by the same set of participants very probably express similar ideas (feelings, positions, …). Clusters thus represent unnamed categories of contributions, and the best contribution in each cluster can be seen as representing the main idea of the whole category.
The first method represents explicit classification conducted by authors; the second is implicit classification resulting from appraisers' actions.In fact, both methods can be used together, thus achieving a finer classification.
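The second, implicit method can be sketched as a greedy grouping of contributions by the overlap of their supporter sets. The Jaccard threshold and the single-pass strategy are illustrative simplifications; a production system would likely use a proper clustering algorithm:

```python
def cluster_by_supporters(supporters, threshold=0.5):
    """Implicit classification: group contributions whose sets of
    supporting participants overlap strongly (Jaccard similarity at or
    above `threshold`). `supporters` maps each contribution id to the
    set of participants who expressed agreement with it."""
    clusters = []  # each cluster is a list of contribution ids
    for cid, supp in supporters.items():
        for cluster in clusters:
            rep = supporters[cluster[0]]  # compare with cluster's founder
            union = supp | rep
            jaccard = len(supp & rep) / len(union) if union else 0.0
            if jaccard >= threshold:
                cluster.append(cid)
                break
        else:
            clusters.append([cid])  # no close cluster: start a new one
    return clusters
```

Within each resulting cluster, the contribution with the highest aggregate quality grade would then stand as the representative of that unnamed category.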
In order to deliver sound results, the model of cross-community deliberation for dispute resolution may need several phases of separate deliberation within each community, each such segmented phase being followed by a combining phase in which each community is confronted with the next set of opinions and proposals from its adversaries. The whole process may take a long time, though not as long as several centuries of feud and hostilities.

Conclusions
Self-regulated written online deliberation may become an ideal instrument for citizens' participation in law- and policy-making at every level, including the international (e.g. pan-European) level. It could also become a very useful tool for cross-community dispute resolution. Yet any mass deliberation, even a written asynchronous online deliberation, should follow strict behavioural procedures in order to be efficient and productive while remaining open and fair. In my previous papers, such a set of procedures was proposed and discussed for the case of "homogeneous" deliberation in one language. In this paper I have shown how the model can be enhanced to cover the case of "segmented" deliberation in several languages, and in particular inter-community dispute resolution. To that end, additional procedures are introduced for multi-language translation and cross-community moderation of contributions by the participants themselves. My considerations still remain mostly speculative, pending appropriate software realisation and testing in a large-scale pilot project.
[Figure 1, pane labels (bottom-up): participants register under pseudonyms and obtain their digital signatures (authorship); the theme of a new deliberation campaign is published, and introductory expert information is prepared and made available in each EU language; participants write and upload their contributions in their respective languages (German, French, Dutch in the example); contributions undergo initial moderation and compulsory quality appraisal by randomly selected participants within their respective national segments; contributions can then be read and appraised by the whole community of participants within their respective national segments; contributions obtaining the highest grades are proposed, in their original language, to other national segments in search of volunteers for their translation; if no volunteers are found, those best contributions are assigned for translation to randomly selected bilingual participants, or to the (paid) campaign staff; translated contributions can then be read and appraised along with native contributions within every national segment.]