Honest user, model

Related decisions are where one represents security parameters (see Section 5.2.4, Initialization), how active attacks on honest users are modeled (Section 5.4.2), and the formalization of availability of service (Section 5.2.7). [Pg.44]

Note that the types of attacks mentioned in Section 2.5 now belong to the model of the behaviour of the honest users, and the types of success belong to the specification. [Pg.46]

Section 5.4.1 describes how the attackers are connected to the rest of a system derived from the scheme, and Section 5.4.2 models the behaviour of the honest users. Thus, after these two sections, the actual systems are known whose runs have to be considered. Section 5.4.3 shows the usual combinations of the assumptions about the computational abilities of the attackers and the notions of fulfilling the requirements. Section 5.4.4 contains theorems and proof sketches one could hope to work out if Sections 5.3 and 5.4 were completely formal. Some are general, others specific to signature schemes. [Pg.109]

The formal reason for modeling the honest users is that one cannot define runs of the system without them: One needs some source for the user inputs. A more intuitive reason for modeling the honest users is that this is where the active attacks described in Section 2.5 come in: One must decide to what extent the behaviour of honest users might be influenced by the attackers. [Pg.112]

Essentially, one obtains the best degree of security if one universally quantifies over the behaviour of the honest users: No matter what the honest users do, the requirements are fulfilled. Such a model automatically covers all conceivable active attacks, because behaviours resulting from an influence by an attacker are just behaviours, too. It is a rather natural model, too: for instance, how could one know anything about how an honest user selects the messages she authenticates? (This is in contrast to the behaviour of correct entities, which act according to programs.) [Pg.112]
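In ad-hoc notation (a sketch of the idea, not a formula from the text): if runs(Sys, H, A) denotes the runs of the system Sys under honest-user behaviour H and attacker strategy A, and the requirement Req is taken as a set of permitted interface-event sequences (as in Section 5.2.1), this strongest notion has the shape

```latex
\forall H \;\; \forall A : \quad
  \mathit{runs}(\mathit{Sys}, H, A)\big|_{\mathrm{interface}} \subseteq \mathit{Req}
```

where the restriction to interface events reflects that only those are classified by the requirement.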

In the first model, the honest users are integrated into the attacker strategy, as shown in Figure 5.15. (However, the correct entities are still correct.) This is called the model with direct access, because the attackers have direct access to the access points under consideration. [Pg.113]

Figure 5.15. Model of an active attack on two honest users, version with direct access.
In the second model, there are explicit entities representing honest users. As shown in Figure 5.16, they have special ports where they communicate with the attacker, i.e., where the attacker can influence them. This is called the model with indirect access. [Pg.113]
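A toy sketch of the two wirings may help; all entity, port, and strategy names below are illustrative, not the text's formalism. "Interface events" are simply the values that reach the correct part of the system.

```python
def run_direct_access(attacker_strategy, rounds):
    """Model with direct access (Figure 5.15): the honest users are
    integrated into the attacker strategy, which therefore writes to
    the access points of the interface itself."""
    return [attacker_strategy(i) for i in range(rounds)]

def run_indirect_access(attacker_strategy, honest_user, rounds):
    """Model with indirect access (Figure 5.16): an explicit honest-user
    entity sits at the access points; the attacker can only send
    influence on the user's special port."""
    return [honest_user(attacker_strategy(i)) for i in range(rounds)]

# Example: a user who authenticates her own message no matter what the
# attacker suggests; the universal quantification covers this behaviour
# just as it covers fully compliant ones.
stubborn = lambda influence: ("authenticate", "my own message")
print(run_direct_access(lambda i: ("authenticate", f"evil {i}"), 2))
print(run_indirect_access(lambda i: f"evil {i}", stubborn, 2))
```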

In the second model, the computational abilities of the entities representing the honest users have to be restricted in the same way as those of the attackers. Apart from this restriction, there will be normal universal quantifiers over their programs in the same place in the formulas in Section 5.4.3 as the quantifier over the attacker strategy. [Pg.113]
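Schematically (again my rendering, not a quotation of Section 5.4.3, with Poly standing for the class of polynomial-time programs and the fulfilment notion left abstract), the honest-user quantifier sits next to the attacker quantifier:

```latex
\forall H \in \mathrm{Poly} \;\; \forall A \in \mathrm{Poly} : \quad
  (\mathit{Sys}, H, A) \text{ fulfils } \mathit{Req}
  \quad \text{(in a sense defined in Section 5.4.3)}
```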

Neither model mentions any preconditions on user behaviour, because the requirements fully deal with those already: The models in this section mean that no matter how the honest users behave, the requirements are fulfilled (in some sense to be defined in Section 5.4.3), and the requirements say that if the users take certain elementary precautions, such as using initialization correctly, the system guarantees them some other properties. [Pg.114]

Linking behaviour above and below the interface. After the previous paragraph, it may seem strange that the behaviour of the honest users in the model with direct access has been linked to activities below the interface. However, the attacker can base its influence on details of the corrupted entities, and then the behaviour of the honest users may depend on those, too. Still, the behaviour remains independent of internal details of the correct entities. [Pg.114]

Equivalence for integrity requirements. For requirements in the sense of Section 5.2.1, where sequences of interface events are classified into permitted ones and undesirable ones, one can easily see that the two models are equivalent (both with and without computational restrictions on honest users and attackers): [Pg.114]

First consider a situation in the first model. It is described by an attacker strategy A1. Then an attacker can achieve exactly the same sequences of interface events in the second model by using the strategy A2 = A1, if the honest users only pass messages on between the attacker and the interface, and these particular honest users... [Pg.114]
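A self-contained sketch of this direction of the argument (names illustrative): with pass-through honest users and A2 = A1, the indirect model reproduces exactly the interface-event sequence that A1 produces in the direct model.

```python
def pass_through_user(influence):
    # only relays messages between the attacker and the interface
    return influence

A1 = ["initialize", "authenticate m1", "authenticate m2"]  # some fixed strategy A1
direct   = [event for event in A1]             # A1 acts on the interface itself
indirect = [pass_through_user(e) for e in A1]  # A2 = A1 behind relaying users
assert direct == indirect   # identical sequences of interface events
```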

Formally, one can see that the equivalence proof cannot be used with privacy requirements: If one combines an attacker strategy A2 and the strategies of the honest users from the model with indirect access into an attacker strategy A1 in the model with direct access, the attacker gains knowledge that only the honest users had in the model with indirect access. [Pg.115]

Restricted attacks. In the model with indirect access, one can model restricted attacks if one needs them. For instance, passive attacks are modeled by honest users that do not react to anything from the outside world, but choose their inputs, e.g., the messages they authenticate, according to some predefined probability distribution. This is unrealistic in most situations. A better example might be that honest users choose some initialization parameters carefully and without influence from outside, but may be influenced later. [Pg.115]
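Both variants are easy to state as honest-user strategies in the indirect-access wiring; the following sketch uses hypothetical message and parameter values.

```python
import random

def passive_user(influence, rng=random.Random(0)):
    """Passive attack: the user ignores all outside influence and draws
    inputs from a predefined distribution."""
    del influence                    # nothing from the outside world is used
    msgs = ["m_a", "m_b", "m_c"]     # hypothetical message space
    return ("authenticate", rng.choice(msgs))   # fixed distribution

def partly_influenced_user(influence, phase):
    """The 'better example': initialization parameters are chosen
    carefully and without outside influence; later inputs may follow
    the attacker."""
    if phase == "init":
        return ("initialize", 12345)    # value illustrative, uninfluenced
    return ("authenticate", influence)  # later behaviour may be steered

print(passive_user("ignored influence"))
print(partly_influenced_user("attacker's choice", phase="later"))
```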

The model of honest users with direct access is used. [Pg.118]

Furthermore, the port structure of an attacker that can communicate with such a system is known from the access of attackers to system parts and the model of honest users. Thus one has a class... [Pg.118]

I also expect that computational restrictions only make sense in combination with allowing error probabilities, at least in models where the complexity of an interactive entity is regarded as a function of its initial state alone or where honest users are modeled as computationally restricted. Then the correct part of the system is polynomial-time in its initial state and therefore only reacts to parts of polynomial length of the input from an unrestricted attacker. Hence, with mere guessing, a computationally restricted attacker has a very small but non-zero chance of doing exactly what a certain unrestricted attacker would do, as far as the correct entities see it. Hence, if a requirement is not fulfilled information-theoretically without error probability, such a restricted attacker also has a small probability of success. [Pg.121]
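A back-of-the-envelope rendering of this argument, in my notation: let k be the security parameter, let q(k) be a polynomial bounding the length of the attacker input that the polynomial-time correct part actually reads, and let A* be an unrestricted attacker against which the requirement fails. A restricted attacker that simply guesses those q(k) bits uniformly reproduces A*'s behaviour, as seen by the correct entities, with probability at least 2^{-q(k)}, so

```latex
\Pr[\text{restricted attacker succeeds}]
  \;\ge\; 2^{-q(k)} \cdot \Pr[A^{*} \text{ succeeds}] \;>\; 0 .
```

Hence error-free fulfilment against computationally restricted attackers would already imply error-free information-theoretic fulfilment.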

