large databases. The problem of mining association rules may be decomposed into two phases:

1. Discover the large itemsets, that is, the sets of items that have transaction support s above a predetermined minimum threshold.

2. Use the large itemsets to generate the association rules for the database that have confidence c above a predetermined minimum threshold.

The overall performance of mining association rules is determined primarily by the first step. After the large itemsets are identified, the corresponding association rules can be derived in a straightforward manner. Efficient counting of large itemsets is thus the focus of most mining algorithms, and many efficient solutions have been designed for this step. The Apriori algorithm provided one early solution to the problem, and it will be explained in greater detail in this chapter. Subsequent algorithms built upon Apriori are refinements of this basic solution, and they are explained in a wide spectrum of articles, including the texts recommended in Section 10.10.

10.2 ALGORITHM APRIORI

The algorithm Apriori computes the frequent itemsets in the database through several iterations. Iteration i computes all frequent i-itemsets (itemsets with i elements). Each iteration consists of two phases: candidate generation, followed by candidate counting and selection.
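A minimal sketch of this iterative structure in Python is given below. The names (apriori, min_support) are illustrative rather than taken from the text, and the a priori pruning refinement discussed shortly is omitted here for clarity:

def apriori(transactions, min_support):
    # Sketch of the Apriori iteration: generate, count, select.
    transactions = [frozenset(t) for t in transactions]
    items = {item for t in transactions for item in t}
    candidates = [frozenset([i]) for i in items]  # C1: all 1-itemsets
    frequent, k = {}, 1
    while candidates:
        # Count phase: one full pass over the database per iteration.
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        # Select phase: keep candidates meeting the minimum support threshold.
        L_k = {c: n for c, n in counts.items()
               if n / len(transactions) >= min_support}
        frequent.update(L_k)
        # Generate phase: unite two frequent k-itemsets sharing k - 1 items.
        prev = list(L_k)
        candidates = list({a | b for i, a in enumerate(prev)
                           for b in prev[i + 1:] if len(a | b) == k + 1})
        k += 1
    return frequent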

In the first phase of the first iteration, the generated set of candidate itemsets contains all 1-itemsets, that is, all items in the database. In the counting phase, the algorithm counts their support by scanning the whole database. Finally, only the 1-itemsets (items) with support s above the required threshold are selected as frequent. Thus, after the first iteration, all frequent 1-itemsets are known.

What are the itemsets generated in the second iteration? In other words, how does one generate the 2-itemset candidates? Basically, all pairs of items are candidates. Based on the knowledge about infrequent itemsets obtained in previous iterations, the Apriori algorithm reduces the set of candidate itemsets by pruning, a priori, those candidate itemsets that cannot be frequent. The pruning is based on the observation that if an itemset is frequent, all its subsets must be frequent as well; equivalently, if any subset of an itemset is infrequent, the itemset itself cannot be frequent. Therefore, before entering the candidate-counting step, the algorithm discards every candidate itemset that has an infrequent subset, as sketched below.
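The pruning test amounts to checking that every (k−1)-subset of a candidate is itself frequent. The helper names below (has_infrequent_subset, prune) are hypothetical, not from the text:

from itertools import combinations

def has_infrequent_subset(candidate, frequent_k):
    # True if some (k-1)-subset of the k-itemset candidate is not frequent.
    k = len(candidate)
    return any(frozenset(sub) not in frequent_k
               for sub in combinations(candidate, k - 1))

def prune(candidates, frequent_k):
    # Discard candidates that cannot be frequent by the Apriori property.
    return [c for c in candidates if not has_infrequent_subset(c, frequent_k)]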

Consider the database in Table 10.1. Assume that the minimum support is s = 50%, so an itemset is frequent if it is contained in at least 50% of the transactions; in our example, that means at least two of the four transactions in the database. In each iteration, the Apriori algorithm constructs a candidate set of large itemsets, counts the number of occurrences of each candidate, and then determines the large itemsets based on the predetermined minimum support s = 50%.

In the first step of the first iteration, all single items are candidates. Apriori simply scans all the transactions in a database DB and generates a list of candidates. In the next step, the algorithm counts the occurrences of each candidate and, based on the threshold s, selects the frequent itemsets. All these steps are given in Figure 10.2. Five 1-itemsets are generated in C1 and, of these, only four are selected as large in L1 because their support count is at least two, that is, s ≥ 50%.
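Since Table 10.1 is not reproduced in this excerpt, the short sketch below assumes a four-transaction database that is consistent with the counts quoted in the text (T1 = {A, C, D}, T2 = {B, C, E}, T3 = {A, B, C, E}, T4 = {B, E}); the variable names are illustrative:

from collections import Counter

# Assumed contents of Table 10.1 (consistent with the supports in the text).
DB = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]

# Count phase for C1: occurrences of every single item.
C1 = Counter(item for t in DB for item in t)   # A:2, B:3, C:3, D:1, E:3

# Select phase: keep items contained in at least 50% of transactions.
L1 = {item for item, n in C1.items() if n / len(DB) >= 0.5}
print(sorted(L1))   # ['A', 'B', 'C', 'E'] -- D (support 1 of 4) is discarded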

Figure 10.2. First iteration of the Apriori algorithm for a database DB. (a) Generate phase; (b1) count phase; (b2) select phase.

To discover the set of large 2-itemsets, because any subset of a large itemset must also have minimum support, the Apriori algorithm uses L1*L1 to generate the candidates. The operation * is defined in general as

Lk * Lk = {X ∪ Y, where X, Y ∈ Lk and |X ∩ Y| = k − 1}

For k = 1 the operation represents a simple concatenation. Therefore, C2 consists of the |L1| · (|L1| − 1)/2 2-itemsets generated by the operation L1*L1 as candidates in the second iteration. In our example, this number is 4 · 3/2 = 6. Scanning the database DB with this list, the algorithm counts the support for every candidate and in the end selects the large 2-itemsets L2 for which s ≥ 50%. All these steps and the corresponding results of the second iteration are given in Figure 10.3.

Figure 10.3. Second iteration of the Apriori algorithm for a database DB. (a) Generate phase; (b1) count phase; (b2) select phase.
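A sketch of this join for k = 1, continuing the assumed database above; lk_join is a hypothetical helper implementing the * operation:

from itertools import combinations

def lk_join(L_k, k):
    # L_k * L_k: unite two frequent k-itemsets that share k - 1 items.
    return {a | b for a, b in combinations(L_k, 2) if len(a | b) == k + 1}

L1 = [frozenset(i) for i in "ABCE"]
C2 = lk_join(L1, 1)
print(len(C2))   # 6 candidates: |L1| * (|L1| - 1) / 2 = 4 * 3 / 2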

The set of candidate itemsets C3 is generated from L2 using the previously defined operation L2*L2. In practice, from L2, two large 2-itemsets with the same first item, such as {B, C} and {B, E}, are identified first. Then, Apriori tests whether the 2-itemset {C, E}, which consists of the second items of the sets {B, C} and {B, E}, constitutes a large 2-itemset or not. Because {C, E} is a large itemset by itself, we know that all the subsets of {B, C, E} are large, and so {B, C, E} becomes a candidate 3-itemset. There is no other candidate 3-itemset from L2 in our database DB. Apriori then scans all the transactions and discovers the large 3-itemsets L3, as shown in Figure 10.4.

Figure 10.4. Third iteration of the Apriori algorithm for a database DB. (a) Generate phase; (b1) count phase; (b2) select phase.
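Continuing the sketches above, joining L2 and then applying the a priori prune leaves {B, C, E} as the only candidate 3-itemset, matching the text:

# L2 from the running example; lk_join and prune are the helpers sketched earlier.
L2 = [frozenset(s) for s in ("AC", "BC", "BE", "CE")]
C3 = prune(lk_join(L2, 2), set(L2))
print(C3)   # [frozenset({'B', 'C', 'E'})] -- {A,B,C} and {A,C,E} were pruned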

In our example, because no candidate 4-itemset can be constructed from L3, Apriori ends the iterative process.

Apriori counts not only the support of all frequent itemsets, but also the support of those infrequent candidate itemsets that could not be eliminated during the pruning phase. The set of all candidate itemsets that are infrequent but whose support is counted by Apriori is called the negative border. Thus, an itemset is in the negative border if it is infrequent, but all its subsets are frequent. In our example, analyzing Figures 10.2 and 10.3, we can see that the negative border consists of itemsets {D}, {A, B}, and {A, E}. The negative border is especially important for some improvements in the Apriori algorithm such as increased efficiency in the generation of large itemsets.
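The negative border for the running example can be verified with a small sketch; the list of counted-but-infrequent candidates follows the text, and the rest is illustrative:

from itertools import combinations

# All frequent itemsets found in the example (L1, L2, and L3).
frequent = {frozenset(s) for s in
            ["A", "B", "C", "E", "AC", "BC", "BE", "CE", "BCE"]}
# Candidates that were counted but turned out infrequent.
counted_infrequent = [frozenset("D"), frozenset("AB"), frozenset("AE")]

border = [c for c in counted_infrequent
          if len(c) == 1
          or all(frozenset(s) in frequent for s in combinations(c, len(c) - 1))]
print(border)   # the three itemsets named in the text: {D}, {A,B}, {A,E}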

10.3 FROM FREQUENT ITEMSETS TO ASSOCIATION RULES

The second phase in discovering association rules, based on all the frequent i-itemsets found in the first phase using the Apriori or some other similar algorithm, is relatively simple and straightforward. For a rule of the form {x1, x2, x3} → x4 to hold, both itemsets {x1, x2, x3, x4} and {x1, x2, x3} must be frequent; the confidence of the rule is then computed as the ratio of their supports, c = s({x1, x2, x3, x4})/s({x1, x2, x3}), and the rule is accepted only if c is above the predetermined minimum confidence threshold.
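A sketch of this second phase on the running example follows; the supports are the counts from the figures, while min_confidence is an illustrative threshold not set in the text:

from itertools import combinations

# Support counts (out of 4 transactions) for the frequent itemsets above.
support = {frozenset(s): n for s, n in
           [("A", 2), ("B", 3), ("C", 3), ("E", 3),
            ("AC", 2), ("BC", 2), ("BE", 3), ("CE", 2), ("BCE", 2)]}

min_confidence = 0.9   # illustrative threshold
for itemset in (s for s in support if len(s) > 1):
    for r in range(1, len(itemset)):
        for lhs in map(frozenset, combinations(itemset, r)):
            rhs = itemset - lhs
            c = support[itemset] / support[lhs]   # c = s(X ∪ Y) / s(X)
            if c >= min_confidence:
                print(sorted(lhs), "->", sorted(rhs), f"(confidence {c:.2f})")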
