
Concept map engineering: methods and tools based on the semantic relation approach

  • Development Article

Educational Technology Research and Development

Abstract

The purpose of this study is to develop a better understanding of technologies that use natural language as the basis for concept map construction. In particular, this study focuses on the semantic relation (SR) approach to drawing rich and authentic concept maps that reflect students’ internal representations of a problem situation. The following discussions are included: (a) elaborate classifications of concept map approaches that use natural language responses (e.g., student essays); (b) the SR process of eliciting concept maps, established using studies on domain ontology; and (c) a more effective way to identify key concepts and relations in a concept map generated by the SR approach. By comparing the SR approach with other promising concept map technologies that constrain the analytical process in various ways, this study suggests that the SR approach is likely to draw richer and more authentic concept maps. In addition, this study suggests that a certain combination of graph-related metrics be used to filter key concepts from an SR concept map drawn from a written text of 350–400 words. The methods suggested in the study could be used to design an automated assessment technology for complex problem solving and to develop adaptive learning systems.
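As a rough illustration of the kind of graph-based filtering described above, the following Python sketch ranks the concepts of a small concept map by a simple combination of two common graph metrics (degree and betweenness centrality). The metric mix, the weighting, and the example edges are assumptions made for demonstration only; they do not reproduce the study's actual filtering procedure.

```python
# Illustrative sketch only: rank concepts in a concept map by a simple
# combination of graph metrics (degree + betweenness centrality).
# The metric mix and the example edges are assumptions for demonstration,
# not the study's reported procedure.
import networkx as nx

def rank_key_concepts(edges, top_n=5):
    """Return the top_n concepts by combined degree and betweenness centrality."""
    graph = nx.Graph()
    graph.add_edges_from(edges)
    degree = nx.degree_centrality(graph)
    betweenness = nx.betweenness_centrality(graph)
    combined = {node: degree[node] + betweenness[node] for node in graph}
    return sorted(combined, key=combined.get, reverse=True)[:top_n]

# Hypothetical concept pairs extracted from a student's written response.
edges = [
    ("teacher", "technology"), ("teacher", "training"),
    ("student", "tablet PC"), ("student", "internet"),
    ("technology", "tablet PC"), ("technology", "internet"),
    ("instruction", "teacher"), ("instruction", "technology"),
]
print(rank_key_concepts(edges, top_n=3))
```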



Acknowledgments

The problem-solving task used in this article is based on a case described by Robert Reiser for use in his Trends and Issues in ID&T course at Florida State University.

Author information

Corresponding author

Correspondence to Minkyu Kim.

Appendices

Appendix A

Case study

Directions: Read the case study described below and then prepare a response to each of the questions that follow.

Assume that you have been involved in evaluating a media implementation project in an urban inner-city middle school. At the beginning of the school year, all of the students assigned to four subject-area teachers (math, language arts, social studies, and science) in the seventh grade at the middle school were given tablet PCs (laptop computers also equipped with a stylus/pen and a touchscreen that can be written on) and were also given wireless internet access at home and in school for an entire year.

The students took the tablet PCs home every evening and brought them into classes every day. The teachers were also provided with tablet PCs 24/7 (24 h a day, every day of the week) for the entire year. The teachers and students were trained on how to use the tablet PCs. Moreover, all of the curriculum materials (textbooks, workbooks, student study guides, teacher curriculum guides, some activities, tests, etc.) were installed on the tablet PCs or were accessible through the tablet PCs.

Your job as one of the evaluators for the project was to examine how this innovation (providing teachers and students with tablet PCs 24/7) changed the way instruction was presented in the classrooms of the four teachers. Results indicated that the innovation had very little effect on the manner in which instruction took place in the teachers’ classrooms.

A written response of at least 350 words is required for each question.

  1. Based on what you have learned about the use of technology in education, describe what concepts, issues, factors, and variables are likely to have contributed to the fact that the introduction of the tablet PCs had very little effect on the instructional practices that were employed in the classes.

  2. Describe the strategies that could have been employed to help mitigate the factors that you think contributed to the minimal effect the tablet PCs had on instructional practices. When answering this question, use the concepts, factors, and variables you described in question 1, or add other assumptions and information that would be required to solve this problem.

Appendix B

Rules for determining pairs of concepts from complex lexico-syntactic patterns

Complex lexico-syntactic patterns and the pairs of concepts R(Ci, Cj) they yield:

  • Pattern: N0 is N1 and N2
    (e.g., technology is hardware and software)
    Pairs: (N0, N1); (N0, N2); (N1, N2)

  • Pattern: N0 of N1 and N2 verb N3
    (e.g., the use of technology and access to internet allow students to…)
    Pairs: (N0, N1); (N0, N2); (N0, N3); (N2, N3)

  • Pattern: N0 such as N1, N2, …, Nn
    (e.g., classroom technologies such as laptops, internet, and electronic whiteboard)
    Pairs: (N0, N1); (N0, N2); …; (N0, Nn)

  • Pattern: Such N0 as N1, N2, …, Nn
    (e.g., such new technologies as Web 2.0, cloud computing, and mobile internet)
    Pairs: (N0, N1); (N0, N2); …; (N0, Nn)

  • Pattern: Np are N1, N2, …, Nn or other N0
    (e.g., magnetism is positive or negative)
    Pairs: (Np, N1); (Np, N2); …; (Np, Nn); (Np, N0)

  • Pattern: N0 include N2 and N3
    (e.g., internal representation includes conceptual structure and linguistic semantic structure)
    Pairs: (N0, N2); (N0, N3)

  • Pattern: N0, especially N1, verb…
    (e.g., supportive environments, especially leadership support, are the most important)
    Pairs: (N0, N1)

  • Pattern: N1 of N2 in N3 of N4
    (e.g., the use of technology in the classrooms of participating schools)
    Pairs: (N1, N2); (N3, N4); (N1, N3)

  • Pattern: By -ing N1 and N2, Np verb N3 (a)
    (e.g., by using the Internet and Smartphone, students can access learning materials anytime, anywhere)
    Pairs: (Np, N1); (Np, N2); (Np, N3)

  • Pattern: N1 provide N2 with N3
    (e.g., the Internet provides us with…)
    Pairs: (N1, N2); (N1, N3)

  • Pattern: N1 and N2 verb N3
    (e.g., teachers and students are not used to using a computer)
    Pairs: (N1, N2); (N1, N3); (N2, N3)

  • Pattern: N1 verb that-clause
    (e.g., the witness hated that the boy attacked the victim)
    Pairs: (N1, the first N in the that-clause) (c)

  • Pattern: N1 between N2 and N3
    (e.g., discrepancy between boys and girls)
    Pairs: (N1, N2); (N1, N3)

  • Pattern: N1 that N2 verb N3 (b)
    (e.g., teachers maintain the belief that these efforts will have positive results)
    Pairs: (N1, N2); (N2, N3)

Notes:
  (a) Subordinate clause in which the subject is omitted
  (b) Conjunction clause
  (c) Connects N1 to the that-clause
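As a rough illustration of how such lexico-syntactic patterns can be operationalized, the sketch below implements a single pattern from the table (N0 such as N1, N2, …, Nn) with a naive regular expression over raw text. It is an assumption-based demonstration only: the function names and splitting heuristics are hypothetical, and the patterns above presuppose syntactic information (verbs, clauses, noun-phrase boundaries) that a surface regular expression cannot fully provide.

```python
# Minimal sketch of pattern-based concept-pair extraction for the single
# pattern "N0 such as N1, N2, ..., Nn". A naive regular expression stands in
# for a proper syntactic analysis; the names and heuristics here are
# illustrative assumptions, not the tool used in the study.
import re

def split_enumeration(text):
    """Split an enumeration like 'N1, N2, and Nn' into its members."""
    members = []
    for part in text.split(","):
        # a part may still contain "X and Y" or start with a conjunction
        for piece in re.split(r"\s+\b(?:and|or)\b\s+", part):
            piece = re.sub(r"^\s*(?:and|or)\b\s*", "", piece).strip(" .")
            if piece:
                members.append(piece)
    return members

def pairs_from_such_as(sentence):
    """Extract (N0, Ni) pairs from the pattern 'N0 such as N1, N2, ..., Nn'."""
    match = re.search(r"(?P<head>[\w\s]+?)\s+such as\s+(?P<tail>.+)",
                      sentence, re.IGNORECASE)
    if not match:
        return []
    head = match.group("head").strip()
    return [(head, member) for member in split_enumeration(match.group("tail"))]

print(pairs_from_such_as(
    "classroom technologies such as laptops, internet, and electronic whiteboard."))
# [('classroom technologies', 'laptops'), ('classroom technologies', 'internet'),
#  ('classroom technologies', 'electronic whiteboard')]
```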


About this article

Cite this article

Kim, M. Concept map engineering: methods and tools based on the semantic relation approach. Education Tech Research Dev 61, 951–978 (2013). https://doi.org/10.1007/s11423-013-9316-3
