# Data Mining Exercises (Chapters 7 and 8)

Chapter 7 exercises:

9. (a) List all the 4-subsequences contained in the following data sequence:

< {1,3} {2} {2,3} {4} >,

assuming no timing constraints.

(b) List all the 3-element subsequences contained in the data sequence for part (a) assuming that no timing constraints are imposed.

(c) List all the 4-subsequences contained in the data sequence for part (a) (assuming the timing constraints are flexible).

(d) List all the 3-element subsequences contained in the data sequence for part (a) (assuming the timing constraints are flexible).
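When working through parts (a)–(d) by hand, it can help to cross-check against a brute-force enumeration. The sketch below is a hypothetical helper (not part of the exercise): it generates every distinct subsequence with exactly k items by choosing a subset of the items in each element and dropping empty elements, ignoring timing constraints.

```python
from itertools import chain, combinations

def subsequences_with_k_items(data_seq, k):
    """Enumerate the distinct subsequences of `data_seq` that contain
    exactly k items in total, assuming no timing constraints."""
    results = set()

    def powerset(s):
        s = sorted(s)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

    def recurse(i, current, count):
        if count > k:
            return
        if i == len(data_seq):
            if count == k:
                results.add(tuple(current))
            return
        # take a (possibly empty) subset of the items in element i
        for subset in powerset(data_seq[i]):
            if subset:
                recurse(i + 1, current + [subset], count + len(subset))
            else:
                recurse(i + 1, current, count)

    recurse(0, [], 0)
    return sorted(results)

# Data sequence from part (a): < {1,3} {2} {2,3} {4} >
seq = [{1, 3}, {2}, {2, 3}, {4}]
for s in subsequences_with_k_items(seq, 4):
    print(s)
```

Each result is printed as a tuple of element tuples, e.g. `((1, 3), (2, 3))` stands for the subsequence < {1,3} {2,3} >.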

15. Describe the types of modifications necessary to adapt the frequent subgraph mining algorithm to handle:

(a) Directed graphs

(b) Unlabeled graphs

(c) Acyclic graphs

(d) Disconnected graphs

For each type of graph given above, state which steps of the algorithm are affected (candidate generation, candidate pruning, or support counting), and describe any further optimizations that can improve the algorithm's efficiency.

18. (a) If support is defined in terms of the induced subgraph relationship, show that the confidence of the rule g1 → g2 can be greater than 1 if g1 and g2 are allowed to have overlapping vertex sets.

(b) What is the time complexity of determining the canonical label of a graph that contains |V| vertices?

(c) The core of a subgraph can have multiple automorphisms. This increases the number of candidate subgraphs obtained by merging two frequent subgraphs that share the same core. Determine the maximum number of candidate subgraphs that can be obtained due to automorphism of a core of size k.

(d) Two frequent subgraphs of size k may share multiple cores. Determine the maximum number of cores that can be shared by the two frequent subgraphs.
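As a sanity check for part (c), the automorphisms of a small candidate core can be counted by brute force. The sketch below is an illustrative helper of my own (not the textbook's algorithm): it tests every permutation of the vertex set and keeps those that map the edge set onto itself.

```python
from itertools import permutations

def count_automorphisms(vertices, edges):
    """Brute-force count of automorphisms of an undirected graph:
    vertex permutations that map the edge set onto itself."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for perm in permutations(vertices):
        mapping = dict(zip(vertices, perm))
        if {frozenset((mapping[u], mapping[v])) for u, v in edges} == edge_set:
            count += 1
    return count

# A 4-cycle, a possible core of size 4, has 8 automorphisms
# (the dihedral group D4: 4 rotations x 2 reflections).
print(count_automorphisms([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 8
```

The factorial cost of enumerating permutations makes this usable only for the small cores that appear during candidate generation.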

Chapter 8 exercises:

3. Many partitional clustering algorithms automatically determine the number of clusters, and this is often claimed as an advantage. List two situations in which this is not the case.

8. Consider the mean of a cluster of objects from a binary transaction data set. What are the minimum and maximum values of the components of the mean? What is the interpretation of components of the cluster mean? Which components most accurately characterize the objects in the cluster?
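For concreteness, here is a small sketch with made-up binary transaction data (rows are transactions, columns are items). Each component of the cluster mean is the fraction of transactions in the cluster that contain the corresponding item, so it always lies between 0 and 1.

```python
import numpy as np

# Hypothetical binary transaction data for one cluster:
# 1 means the transaction contains the item, 0 means it does not.
cluster = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 0],
    [1, 0, 1, 0],
])

# Component j of the mean is the fraction of transactions
# in the cluster that contain item j.
mean = cluster.mean(axis=0)
print(mean)  # [1.   0.25 0.75 0.  ]
```

Components near 0 or 1 (items that almost no or almost all transactions contain) characterize the cluster most sharply.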

11. Total SSE is the sum of the SSE for each separate attribute. What does it mean if the SSE for one attribute is low for all clusters? Low for just one cluster? High for all clusters? High for just one cluster? How could you use the per-attribute SSE information to improve your clustering?
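The decomposition the exercise refers to can be computed directly. The sketch below (a hypothetical helper on made-up data) builds a per-cluster, per-attribute SSE table whose grand total equals the usual total SSE; here attribute 0 separates the clusters while attribute 1 is uniform noise, so attribute 1 dominates the SSE in every cluster.

```python
import numpy as np

def per_attribute_sse(X, labels):
    """Entry [c, j] is the SSE of attribute j within cluster c;
    summing the whole table gives the total SSE."""
    clusters = np.unique(labels)
    sse = np.zeros((len(clusters), X.shape[1]))
    for row, c in enumerate(clusters):
        pts = X[labels == c]
        centroid = pts.mean(axis=0)
        sse[row] = ((pts - centroid) ** 2).sum(axis=0)
    return sse

# Toy data (assumed for illustration): attribute 0 separates the two
# clusters cleanly, attribute 1 is uniform noise everywhere.
rng = np.random.default_rng(0)
X = np.column_stack([
    np.concatenate([rng.normal(0, 0.1, 50), rng.normal(5, 0.1, 50)]),
    rng.uniform(0, 10, 100),
])
labels = np.array([0] * 50 + [1] * 50)
print(per_attribute_sse(X, labels))
```

Inspecting such a table shows which attributes carry the cluster structure and which only add noise, which is one way to use the per-attribute SSE to improve a clustering.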

18. Suppose we find K clusters using Ward’s method, bisecting K-means, and ordinary K-means. Which of these solutions represents a local or global minimum of the total SSE? Explain.
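A minimal sketch (pure NumPy; the function name and toy data are mine) illustrating the point behind this exercise: ordinary K-means only converges to a local minimum of the SSE, so different initializations can yield solutions of different quality.

```python
import numpy as np

def lloyd(X, centroids, iters=20):
    """Plain Lloyd's (K-means) iteration; returns final centroids and SSE."""
    centroids = centroids.astype(float).copy()
    for _ in range(iters):
        # squared distance from every point to every centroid
        d = ((X[:, None] - centroids[None, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        centroids = np.array([X[labels == c].mean(axis=0) if (labels == c).any()
                              else centroids[c] for c in range(len(centroids))])
    d = ((X[:, None] - centroids[None, :]) ** 2).sum(axis=-1)
    return centroids, d.min(axis=1).sum()

# Three groups on a line but only K = 2 centroids: the two starting
# points below converge to solutions with different total SSE, so at
# least one of them is only a local minimum.
X = np.array([[0.], [1.], [10.], [11.], [20.], [21.]])
_, sse_a = lloyd(X, X[[0, 2]])  # start at 0 and 10
_, sse_b = lloyd(X, X[[0, 4]])  # start at 0 and 20
print(sse_a, sse_b)  # sse_a < sse_b
```

None of the three methods in the exercise guarantees a global minimum; experiments like this one make the local-minimum behavior of K-means easy to see.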