
Honest user

D. The dynamic behaviour of the system with an attacker from the class and certain honest users. [Pg.41]

Related decisions are where one represents security parameters (see Section 5.2.4, "Initialization"), how active attacks on honest users are modeled (Section 5.4.2), and the formalization of availability of service (Section 5.2.7). [Pg.44]

Some parts of a behaviour are of particular interest. One is the restriction of the actions to those at the interface, i.e., the service. Another is the restriction to a given set of access points, because that may be what a set of honest users sees from the system. The view of an attacker is the restriction of a run to everything the attacker... [Pg.46]

Note that the types of attacks mentioned in Section 2.5 now belong to the model of the behaviour of the honest users, and the types of success belong to the specification. [Pg.46]

Not all requirements on cryptologic schemes can be expressed as predicates on event sequences, but fortunately, all minimal requirements on signature schemes can. Others require certain distributions, e.g., the service of a coin-flipping protocol, or privacy properties, i.e., they deal with the information or knowledge attackers can gain about the sequence of events at the interface to the honest users; see [PfWa94]. [Pg.56]
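To make the idea of a requirement as a predicate on event sequences concrete, here is a minimal sketch in Python; the Event type and the particular predicate are invented for illustration and are not the book's formal definitions.

```python
# Minimal sketch of a requirement as a predicate on interface-event
# sequences; the Event type and the predicate are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    port: str     # access point where the event occurs
    action: str   # e.g. "sign" or "accept"
    message: str

def unforgeability(run: list[Event]) -> bool:
    """Holds iff every message accepted at the recipient's access point
    was earlier signed at the signer's access point."""
    signed: set[str] = set()
    for event in run:
        if event.port == "signer" and event.action == "sign":
            signed.add(event.message)
        elif event.port == "recipient" and event.action == "accept":
            if event.message not in signed:
                return False
    return True

assert unforgeability([Event("signer", "sign", "pay 10"),
                       Event("recipient", "accept", "pay 10")])
assert not unforgeability([Event("recipient", "accept", "forged")])
```

A distribution requirement, such as the output of a coin-flipping protocol being uniform, cannot be captured by such a predicate on a single sequence, which is exactly the distinction drawn above.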

Deciding on the weakest possible minimal requirements is not trivial with respect to the preconditions on the behaviour of the honest users. For instance, the effectiveness of authentication clearly needs a precondition that the signer and the recipient input the same message, but it is not so clear how the users must use initialization for their requirements to be fulfilled. The conflicting factors are ... [Pg.62]

The reason for this is technical: there are schemes where the complexity of initialization depends on N. The fact that all users input N into their own entities ensures that no attacker can trick entities of honest users into arbitrarily long computations. [Pg.70]

At present, however, the security parameters are regarded as internal details of the systems and represented in the initial state of each entity. The prevailing argument was: The goal is to define general notions like "a scheme Scheme fulfils a requirement Req computationally in the parameter k", where both Scheme and Req are variables. This means that a sequence of systems Sys_k derived from Scheme fulfils a sequence of requirements Req_k. If k is a distinguished part of the entities, it is easy to define Sys_k formally. If k were an input parameter, it would formally be a parameter of the honest users and not of the system. This is not too bad, because a formal representation of honest users is needed anyway. However, one has to be able to talk about an honest user with parameter k, as far as a certain requirement is concerned. This seems difficult if k is one of many parameters and there can be different k's in different initializations. [Pg.71]
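A small sketch of this design choice, with an invented Entity type: because k is part of the initial state, one member Sys_k of the family of systems is obtained simply by constructing all entities with the same k.

```python
# Sketch only; Entity and its fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Entity:
    k: int          # security parameter, fixed in the initial state
    state: bytes = b""

    def init(self) -> None:
        # Initialization cost is a function of the entity's own k, so no
        # attacker input can trick it into arbitrarily long computation.
        self.state = bytes(self.k)

def make_system(k: int, n_entities: int) -> list[Entity]:
    """One member Sys_k of the sequence of systems derived from a scheme."""
    return [Entity(k) for _ in range(n_entities)]

for entity in make_system(k=10, n_entities=3):
    entity.init()
```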

Black arrows show the strong requirements for the low degree of security, i.e., for normal situations. (It is assumed that honest users behave sensibly in the details that are not shown in the figure, e.g., the signer disavows if she has not authenticated.) Grey arrows show transitions that are additionally possible in extreme situations. Transitions without any arrow are excluded with both degrees of security. [Pg.93]

One could also consider normal confidentiality requirements in the interest of all participants in a transaction, e.g., that attackers do not learn anything about the messages that honest users authenticate for each other. In general, one would use combinations of normal signature schemes and secrecy schemes inside the system, but, as mentioned under Directedness of Authentication in Section 5.2.8, this will not always be trivial. [Pg.103]

Section 5.4.1 describes how the attackers are connected to the rest of a system derived from the scheme, and Section 5.4.2 models the behaviour of the honest users. Thus, after these two sections, the actual systems are known whose runs have to be considered. Section 5.4.3 shows the usual combinations of the assumptions about the computational abilities of the attackers and the notions of fulfilling. Section 5.4.4 contains theorems and proof sketches one could hope to work out if Sections 5.3 and 5.4 were completely formal. Some are general, others specific to signature schemes. [Pg.109]

With all the integrity requirements on signature schemes, it is tolerated that all corrupted entities collude, as almost always in cryptology. Hence, when a certain requirement is considered, all entities not belonging to the interest group are replaced by one big entity. This entity may be completely malicious, i.e., it does its best to cheat the honest users. An example is shown in Figure 5.14. [Pg.110]
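The following sketch illustrates this collusion convention; the inbox/outbox interface is a hypothetical simplification, not the formal model.

```python
# Sketch of merging all corrupted entities into one big entity.
from typing import Callable

Strategy = Callable[[list[str]], list[str]]  # inbox -> outbox

def merge_into_attacker(strategies: dict[str, Strategy]) -> Strategy:
    """Replace several corrupted entities by a single strategy. Since it
    has one shared view, it can correlate the former entities' behaviour
    arbitrarily (here it simply runs them all on the pooled inbox)."""
    def big_entity(inbox: list[str]) -> list[str]:
        out: list[str] = []
        for strat in strategies.values():
            out.extend(strat(inbox))  # every part sees the whole inbox
        return out
    return big_entity

attacker = merge_into_attacker({
    "corrupt_1": lambda inbox: ["replay:" + m for m in inbox],
    "corrupt_2": lambda _inbox: ["forged message"],
})
print(attacker(["pay 10"]))  # ['replay:pay 10', 'forged message']
```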

The formal reason for modeling the honest users is that one cannot define runs of the system without them: one needs some source for the user inputs. A more intuitive reason for modeling the honest users is that this is where the active attacks described in Section 2.5 come in: one must decide to what extent the behaviour of honest users might be influenced by the attackers. [Pg.112]

Essentially, one obtains the best degree of security if one universally quantifies over the behaviour of the honest users: no matter what the honest users do, the requirements are fulfilled. Such a model automatically covers all conceivable active attacks, because behaviours resulting from an influence by an attacker are just behaviours, too. It is rather a natural model, too: for instance, how could one know anything about how an honest user selects the messages she authenticates? (This is in contrast to the behaviour of correct entities, which act according to programs.)... [Pg.112]
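As a rough illustration (all names are invented, and a finite sample can of course only approximate a universal quantifier), a test harness can check a requirement against arbitrary user behaviours; an attacker-influenced behaviour is just another element of the same class.

```python
# Illustrative harness only; it shows where the quantification over
# honest-user behaviour sits, not a real proof of security.
import random

def run_system(user_strategy, steps: int = 5) -> list[str]:
    # Toy system: the correct entities faithfully log a "sign" event
    # for whatever message the user inputs.
    return [f"sign:{user_strategy(i)}" for i in range(steps)]

def requirement(run: list[str]) -> bool:
    # Toy integrity requirement as a predicate on the event sequence.
    return all(event.startswith("sign:") for event in run)

user_behaviours = [
    lambda i: "fixed message",
    lambda i: str(random.random()),       # unpredictable user
    lambda i: "attacker-chosen message",  # fully influenced user
]
assert all(requirement(run_system(u)) for u in user_behaviours)
```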

In the first model, the honest users are integrated into the attacker strategy, as shown in Figure 5.15. (However, the correct entities are still correct.) This is called the model with direct access, because the attackers have direct access to the access points under consideration. [Pg.113]

Figure 5.15. Model of an active attack on two honest users, version with direct access.
In the second model, there are explicit entities representing honest users. As shown in Figure 5.16, they have special ports where they communicate with the attacker, i.e., where the attacker can influence them. This is called the model with indirect access. [Pg.113]

Correct entities and access points about which requirements are made are white; the attacker is dark; the white entities above the interface represent the honest users. [Pg.113]

In the second model, the computational abilities of the entities representing the honest users have to be restricted in the same way as those of the attackers. Apart from this restriction, there will be normal universal quantifiers over their programs in the same place in the formulas in Section 5.4.3 as the quantifier over the attacker strategy. [Pg.113]
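A toy sketch of the two wirings, with invented types: in the direct-access model the attacker strategy itself writes the inputs at the honest users' access points, while in the indirect-access model an explicit user entity sits between the attacker's special port and the interface and decides the actual input.

```python
# Toy sketch of the two attack models; all types and names are invented.
from typing import Callable

Attacker = Callable[[str], str]   # observation -> next message
User = Callable[[str, str], str]  # (attacker hint, own view) -> next input

def step_direct(attacker: Attacker, view: str) -> str:
    # Direct access: the attacker itself writes the input
    # at the honest user's access point.
    return attacker(view)

def step_indirect(attacker: Attacker, user: User, view: str) -> str:
    # Indirect access: the attacker only sends a hint over the
    # special port; the user entity decides the actual input.
    hint = attacker(view)
    return user(hint, view)

independent_user: User = lambda hint, view: "my own message"  # ignores hints
adv: Attacker = lambda view: "attacker-chosen message"

print(step_direct(adv, ""))                      # attacker's choice
print(step_indirect(adv, independent_user, ""))  # user's own choice
```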

Neither model mentions any preconditions on user behaviour, because the requirements fully deal with those already: the models in this section mean that no matter how the honest users behave, the requirements are fulfilled (in some sense to be defined in Section 5.4.3), and the requirements say that if the users take certain elementary precautions, such as using initialization correctly, the system guarantees them some other properties. [Pg.114]

Separation of users and entities. The separation between users and their entities in this framework is useful here, because it automatically excludes some particularly stupid user behaviour. For instance, one would otherwise have to explicitly exclude honest users that tell the attacker their secret keys, because no signature scheme could possibly protect them. Now, the formal users simply have no access to the secret information in the entities. This is a reasonable restriction: e.g., we have no idea what letters a user writes, but her letters might be expected to be independent of the implementation of the signature scheme. [Pg.114]

Linking behaviour above and below the interface. After the previous paragraph, it may seem strange that the behaviour of the honest users in the model with direct access has been linked to activities below the interface. However, the attacker can base its influence on details of the corrupted entities, and then the behaviour of the honest users may depend on those, too. Still, the behaviour remains independent of internal details of the correct entities. [Pg.114]

Equivalence for integrity requirements. For requirements in the sense of Section 5.2.1, where sequences of interface events are classified into permitted ones and undesirable ones, one can easily see that the two models are equivalent (both with and without computational restrictions on honest users and attackers) ... [Pg.114]

First consider a situation in the first model. It is described by an attacker strategy A_1. Then an attacker can achieve exactly the same sequences of interface events in the second model by using the strategy A_2 = A_1, if the honest users only pass messages on between the attacker and the interface and these particular honest users... [Pg.114]
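A minimal sketch of this direction of the equivalence, with invented toy dynamics: running a strategy in the direct-access model, and running the same strategy in the indirect-access model through pass-through users, yields identical interface-event sequences.

```python
# Sketch of the equivalence direction quoted above; the run dynamics
# are a toy invention, self-contained for illustration.
from typing import Callable

Attacker = Callable[[str], str]

def run_direct(attacker: Attacker, rounds: int) -> list[str]:
    view, events = "", []
    for _ in range(rounds):
        inp = attacker(view)  # attacker writes the interface input
        events.append(inp)
        view = inp            # toy feedback from the interface
    return events

def run_indirect(attacker: Attacker, user, rounds: int) -> list[str]:
    view, events = "", []
    for _ in range(rounds):
        inp = user(attacker(view), view)  # user sits between them
        events.append(inp)
        view = inp
    return events

pass_through = lambda hint, view: hint  # forwards attacker to interface
a1: Attacker = lambda view: f"msg({len(view)})"
assert run_direct(a1, 3) == run_indirect(a1, pass_through, 3)
```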

Formally, one can see that the equivalence proof cannot be used with privacy requirements: if one combines an attacker strategy A_2 and the strategies of the honest users from the model with indirect access into an attacker strategy A_1 in the model with direct access, the attacker gains knowledge that only the honest users had in the model with indirect access. [Pg.115]

Restricted attacks. In the model with indirect access, one can model restricted attacks if one needs them. For instance, passive attacks are modeled by honest users that do not react to anything from the outside world, but choose their inputs, e.g., the messages they authenticate, according to some predefined probability distribution. This is unrealistic in most situations. A better example might be that honest users choose some initialization parameters carefully and without influence from outside, but may be influenced later. [Pg.115]
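The passive-attack example might be sketched as follows (the messages and the distribution are invented): a user entity that draws its inputs from a fixed distribution and ignores everything arriving from the attacker.

```python
# Sketch of a passive attack as a user entity in the indirect-access
# model; messages and weights are illustrative only.
import random

def passive_user(attacker_hint: str, rng: random.Random) -> str:
    del attacker_hint  # no influence from the outside world
    messages = ["order A", "order B", "order C"]
    return rng.choices(messages, weights=[0.5, 0.3, 0.2])[0]

rng = random.Random(0)
print([passive_user("ignored attacker input", rng) for _ in range(4)])
```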

Attacks above and below the interface. One formal definition of an active attack might be that it is any attack where the attacker lets his entities deviate from their prescribed programs. The behaviour of the honest users and the influence of the attackers on them have nothing to do with active attacks in this sense, because no programs for users exist. Instead, this type of active attack was treated in Section 5.4.1, which considered the access of attackers to parts below the interface. [Pg.115]

Finally, note that an active attacker may not only influence the inputs of the honest users, but also see the outputs. For example, the poor recipient who was made to test lots of junk signatures might tell everybody around him: "Have you also had such troubles with e-mail this morning? I received 105 authenticated messages and only 2 were correct." The attacker might even answer, "No, I haven't; let me see if I can help", to find out which were the two that passed the test. [Pg.116]

In the current application, the connection-control access point is treated like an access point of an honest user, because it is mentioned in the requirements and the entity that belongs to it, i.e., the switch, is assumed to be correct. Hence the attacker has direct or indirect access to it, precisely as to the access points of the honest users. [Pg.116]

The model of honest users with direct access is used. [Pg.118]

Furthermore, the port structure of an attacker that can communicate with such a system is known from the access of attackers to system parts and the model of honest users. Thus one has a class... [Pg.118]

I also expect that computational restrictions only make sense in combination with allowing error probabilities, at least in models where the complexity of an interactive entity is regarded as a function of its initial state alone or where honest users are modeled as computationally restricted. Then the correct part of the system is polynomial-time in its initial state and therefore only reacts to a part of polynomial length of the input from an unrestricted attacker. Hence with mere guessing, a computationally restricted attacker has a very small chance of doing exactly what a certain unrestricted attacker would do, as far as it is seen by the correct entities. Hence if a requirement is not fulfilled information-theoretically without error probability, such a restricted attacker has a small probability of success, too. [Pg.121]

Note that the error probability in Part c) is only taken over key generation (in the functional version), and not over the messages. Hence, as anticipated in Section 7.1.3, arbitrary active attacks have been hidden in the quantifier over the message sequence: if the keys are not in Bad_g(N), there is no message sequence for which authentication is not effective; hence it does not matter whether the attacker or an honest user chooses the messages, and whether adaptively or not. [Pg.171]

Correctness: All honest users obtain the same result y. The result is distributed according to f(x_1, ..., x_n), where x_i is the intended input if the i-th user is honest, and otherwise a value chosen independently of the x_j's of the honest users. [Pg.207]

Privacy: The attacker should not learn anything about the inputs of honest users from the protocol execution that he could not have inferred from the result y and his own inputs alone. [Pg.207]
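Both properties together describe the behaviour of an ideal trusted party. The following sketch (names invented; the distribution over any randomness of f is ignored) shows the intended input selection and result delivery: corrupted inputs are fixed without seeing the honest ones, and every user learns only y.

```python
# Sketch of the ideal behaviour behind Correctness and Privacy;
# illustrative names, not a protocol implementation.
from typing import Callable

def ideal_evaluation(f: Callable[..., int],
                     honest_inputs: dict[int, int],
                     corrupted_inputs: dict[int, int],
                     n: int) -> int:
    # corrupted_inputs model values "chosen independently of the x_j's
    # of the honest users": they are fixed before honest inputs are seen.
    xs = [honest_inputs.get(i, corrupted_inputs.get(i, 0)) for i in range(n)]
    return f(*xs)  # all users learn y = f(x_1, ..., x_n) and nothing more

y = ideal_evaluation(lambda a, b, c: a + b + c,
                     honest_inputs={0: 3, 2: 4},
                     corrupted_inputs={1: 7},
                     n=3)
print(y)  # 14
```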

