Blog: Practice Assignment

Blurb;

  • This article is fascinating. It is 14 years old, and there are so many implications of doing this kind of thing in this age; back then, though, this was very, very interesting.

Did the abstract tell you the three things I said it should? If not, what did it tell you? 

  • What is the topic?
    • This is an evaluation of the user experience of two different mobile applications at the prototype stage
      • These seem to be early applications of the kind that would go on to appear on later cell phones (I find this very interesting)
  • What is it that the researchers have done in this?
    • They evaluated the user experience in made-up test scenarios.
  • What have they figured out from this?
    • They found that this is quite difficult to do because of the different factors each person brings. Their overall conclusion was that many different methods are needed to test user experience.

 

 

What seems to be the research question(s) they were trying to answer?

  • They appear to be testing the user's experience with different interactive applications, including an application with a location device that can see where the person is and an application that can learn the habits of its users.

What method(s) did they use to answer the question(s)?

  • They set up a test in an office-like environment, used during the work day, with people interacting with others in the same test.
  • Another test was set up in a home-lab environment consisting of a kitchen, office, bedroom, etc.

How credible do you think the paper is? (hint: look at who authors are and where and when it is published also compare what they were asking with what they did)

I believe these two authors are credible: both are affiliated with the University of Oulu, they have LinkedIn profiles, active Facebook pages and other social media accounts, and the paper is indexed on Google Scholar. What they are doing is also something I find very fascinating.

Did you agree, or not, with what they wrote in their conclusion? Why?

  • I believe what they wrote, as at this stage they were trying to gather information on a learnable application. I think the world wasn't quite ready for this type of thing, but damn, it's exciting.

Briefly, describe two things that you learned from the paper.

  • I learned that they were testing applications capable of learning 14 years ago.
  • I learned that they didn't get too far with this; there was a need for more in-depth testing.

In no more than 250 of your own words (i.e. a paraphrase), describe what the paper is about

The short version is "something that excites me", but it has to be in more detail than that. So what does the paper describe? It starts by noting how different today's world is from what we were used to from mobile devices 14 years ago, and then states that its aim is to clarify how user experience and device preference can be evaluated.

This was done because the world was rapidly developing into one that would become dependent on mobile devices, and the researchers were unsure how people would perceive different applications and devices.

They go on to describe their tests and the conclusions they drew from them.

Finally, the paper describes what their work means, what their conclusions are, and what further work needs to be done.


Why do we look for ‘Academic’ Articles?

This week's blog, albeit a bit late, is on searching for academic papers. For this we need to find two academic papers and answer the following questions:

  • the title
  • the authors (usually with an email address and affiliation)
  • the abstract
  • the introduction
  • a review of other papers relevant to the topic (a literature review)
  • a description of what the research was and what the researchers did
  • the results of what they did
  • a discussion about what the results mean
  • a conclusion
  • a list of references

So here I will go and find two academic papers; I'm assuming these will be in the IT field. I managed to find Google Scholar, a search function for academic papers, which is interesting.

[Screenshot: Google Scholar search]

 

 

Empirical Evaluation of User Experience in two Adaptive Mobile Application Prototypes

The title

Empirical Evaluation of User Experience in two Adaptive Mobile Application Prototypes

The authors (usually with an email address and affiliation)

  • Leena Arhippainen University of Oulu, P.O. Box 3000, 90014 University of Oulu, Finland leena.arhippainen@oulu.fi
  • Marika Tähti University of Oulu, P.O. Box 3000, 90014 University of Oulu, Finland marika.tahti@oulu.fi

The abstract

Today’s applications such as ubiquitous systems are more and more aware of user’s habits and the context of use. The features of products and the context of use will affect the human’s experiences and preferences about the use of device. Thus, user experience in user-product interaction has been regarded as an important research topic in the mobile application design area. The purpose of this paper is to clarify how user experience can be evaluated in adaptive mobile applications. The user experience evaluations were performed through interviews and observation while test users were using PDA-based adaptive mobile application prototypes. As a result, this paper presents the analysis of the test methods for further user experience evaluations.

CR Categories: J.m [Computer Applications]: Miscellaneous; Experimentation; Human Factors.

The introduction

In the recent years, the use of different mobile products such as mobile phones and Personal Digital Assistant (PDA) devices has increased rapidly. Moreover, ubiquitous computing has become a popular topic in research and design areas. Nowadays, systems are more and more aware of their context of use. [Dey and Abowd 1999; Weiser 1991] In order to be useful, ubiquitous applications need to be designed so that the user’s needs and preferences and the context of use have been taken into account [Consolvo et al. 2002]. However, the evaluation of pervasive computing systems and their influences on users is quite difficult because the evaluation will require analysis of real users in a real context [Bellotti et al. 2002]. In addition, in continuous interaction research, test users should have a fully operational, reliable, and robust tool [Bellotti et al. 2002]. Evaluation with an incomplete prototype will not give a realistic test result. Nevertheless, preliminary tests in early phases of product development are necessary to perform in order to achieve information about the end user’s preferences and needs. In the recent years, in the Human-Computer Interaction (HCI) research area the capturing of user experience has been seen as an important and interesting research issue. In general, user experience has been captured with techniques like interviews, observations, surveys, storytelling, and diaries among others [Johanson et al. 2002; Nikkanen 2001]. However, in the HCI research area the understanding of user experience and its evaluation has not been established. One reason for this may be shortcomings in the definition of user experience and its relation to usability issues. Also, future proactive environments and adaptive mobile devices bring new aspects to the field of user experience research. The aim of the paper is to study how user experience can be evaluated in adaptive mobile applications. 
User experience research and its methods are briefly presented in Chapter 2. Adaptive mobile prototypes and user experience evaluations are described and methods analyzed in Chapter 3. The results of the paper are presented in Chapter 4. Finally, the research is concluded and further work discussed in Chapter 5.

A review of other papers relevant to the topic (a literature review)

BELLOTTI, F., BERTA, R., DEGLORIA, A. AND MARGARONE, M. 2002. User Testing a Hypermedia Tour Guide. IEEE Pervasive Computing, 33-41.
BUCHENAU, M. AND FULTON SURI, J. 2000. Experience Prototyping, in Proceedings of the DIS 2000 seminar, Communications of the ACM, 424-433.
CONSOLVO, S., ARNSTEIN, L. AND FRANZA, B. R. 2002. User Study Techniques in the Design and Evaluation of a Ubicomp Environment. In the Proceedings of UbiComp 2002, LNCS 2498, Springer-Verlag, Berlin, 73-90.
DEWEY, J. 1980. Art as Experience, New York: Perigee, (reprint), 355.
DEY, A. K. AND ABOWD, G.D. 1999. Towards a Better Understanding of Context and Context-Awareness. GVU Technical Report GIT-GVU-99-22. Georgia Institute of Technology.
FLECK, M., FRID, M., KINDBERG, T., O’BRIEN-STRAIN, E., RAJANI, R. AND SPASOJEVIC, M. 2002. From Informing to Remembering: Ubiquitous Systems in Interactive Museums. IEEE Pervasive Computing 1/2, 17-25.
FORLIZZI, J. AND FORD, S. 2000. The Building Blocks of Experience: An Early Framework for Interaction Designers, in Proceedings of the DIS 2000 seminar, Communications of the ACM, 419-423.
GARRETT, J. J. 2002. The Elements of User Experience. User-Centered Design for the Web. New Riders, 208.
HILTUNEN, M., LAUKKA, M. AND LUOMALA, J. 2002. Mobile User Experience, Edita Publishing Inc. Finland, 214.
JOHANSON, B., FOX, A. AND WINOGRAD, T. 2002. The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. IEEE Pervasive Computing 1/2, 67-74.
NIKKANEN, M. 2001. Käyttäjän kokemusta kartoittavien tutkimus- ja suunnittelumenetelmien käyttö tuotekehitysprosessissa. Licentiate’s thesis. University of Helsinki, 102.
PALEN, L. AND SALZMAN, M. 2002. Voice-mail Diary Studies for Naturalistic Data Capture under Mobile Conditions, CSCW, New Orleans, Louisiana, USA, November 16-20, 87-95.
RANTAKOKKO, T. AND PLOMP, J. 2003. An Adaptive Map-Based Interface for Situated Services, in Proceedings of the Smart Objects Conference, Grenoble, France.
WEISER, M. 1991. The Computer for the 21st Century. Scientific American 265(3), 94-104.

Description of what the research was and what the researchers did and the results of what they did

[Screenshot: the research description and results section of the first academic paper]

a discussion about what the results mean

This chapter is divided into two parts. Firstly, the benefits and challenges of the interview and observation methods from the viewpoint of user experience research are summarized. Secondly, the suitability of interviews and observations for user experience research is discussed.

4.1 Benefits and Challenges

Interview is a good method for user experience evaluation, because then the test situation can be like a “chat session” with the test user. It gives the possibility to create a calm and nice atmosphere in the test situation. This is also an easy way to get information about the user’s background (age, education), prior experiences, expectations and motivation, etc.

However, there are some interesting challenges for the interviewers to clarify. Firstly, questions related to user experience should be formulated very carefully so that the users can understand them easily. Secondly, usually the user can express his/her opinions about a device and its characteristics, but verbally describing his/her feelings about the device is more difficult. In that kind of a situation, the interviewer can try to “read between the lines” when the user speaks about his/her experiences. Nevertheless, this challenge may require using some other methods as well. Observation also gave information about user experience. However, researchers need to interpret the user’s facial expression, body movements and gestures carefully, because the personality of the user will affect how they behave. For example, one test person said that she is very nervous, but her outward appearance was really calm. Moreover, humans make gestures very differently, for instance while one moves his or her eyebrows a lot, the other can move his/her eyes only a little. These two user experience evaluations elicited that a comprehensive observation will require video recording. In the first evaluation, video recording was not used, and thus only some facial expression was captured. However, the second evaluation was video recorded but still some challenges occurred. The first thing in video recording in user experience research is that it must not influence the user and his/her experiences. This is an interesting challenge. However, in order to collect the user’s facial expressions, gestures and actions on the screen, the video recording should be organized from different perspectives, for instance, from the front of the user’s face, the top of the screen and a little bit farther away so that the user is in the picture. In order for the observation to be reliable, a tool or a method for interpreting different gestures and emotions is required. 
4.2 Suitability for user experience research

The picture (Figure 1) presented in Chapter 2 illustrates what different factors affect user experience in user-product interaction. In evaluations, some factors can change; for instance, in the user experience evaluation presented in this paper, the user was one part that changed. The device, social and cultural factors and the context of use were the same. Consequently, when the user changes, interaction and user experience change as well (grey areas) (Figure 5). User experience factors can be captured via interviews or observations on a particular level. Factors, which did not appear in the evaluations, are underlined in the picture (Figure 5) and marked as NE (Not Emerged in the evaluations) in the table (Table 1). However, this paper does not deny that those factors could not be captured via interviews and observations. The evaluations elicited that some user experience factors can be gathered via both of the methods. For example, the user can comment on the product’s functions and say that they are easy to understand and learn. However, when he/she uses the product, the observer can perceive that he/she uses it in the wrong way. On the other hand, observation does not always bring out the user’s emotions properly, and thus interview can reveal the true emotions more easily. Hence, interviews and observations can give different information about the same factor, and thus give a more comprehensive view to user experience. This paper presents what user experience factors were captured via interviews and observations (Table 1).

a conclusion

5 Conclusion

The purpose of this paper was to define how user experience can be evaluated in adaptive mobile applications. In general, the capturing of user experience is quite difficult, because there are so many different factors in user-product interaction (Figure 1). For the evaluation, those factors should be clarified and a goal for the test defined in a test plan. This may help make the evaluation more systematic. Both the examinations illustrated that interviews and observations are appropriate methods for capturing user experience (Table 1). However, this study confirmed that several methods need to be used in order to evaluate user experience. In addition to the interviews and observations, researchers will need more efficient ways to get information about the user’s emotions and experiences, concerning for example collection and interpretation of body gestures and facial expressions. In order to collect authentic emotions, the test situation should be organized so that it is as natural as possible. As further research, more user experience evaluations will be done for different adaptive mobile devices, using different methods.

a list of references

BELLOTTI, F., BERTA, R., DEGLORIA, A. AND MARGARONE, M. 2002. User Testing a Hypermedia Tour Guide. IEEE Pervasive Computing, 33-41.

BUCHENAU, M. AND FULTON SURI, J. 2000. Experience Prototyping, in Proceedings of the DIS 2000 seminar, Communications of the ACM, 424-433.

CONSOLVO, S., ARNSTEIN, L. AND FRANZA, B. R. 2002. User Study Techniques in the Design and Evaluation of a Ubicomp Environment. In the Proceedings of UbiComp 2002, LNCS 2498, Springer-Verlag, Berlin, 73-90.

DEWEY, J. 1980. Art as Experience, New York: Perigee, (reprint), 355.

DEY, A. K. AND ABOWD, G.D. 1999. Towards a Better Understanding of Context and Context-Awareness. GVU Technical Report GIT-GVU-99-22. Georgia Institute of Technology.

FLECK, M., FRID, M., KINDBERG, T., O’BRIEN-STRAIN, E., RAJANI, R. AND SPASOJEVIC, M. 2002. From Informing to Remembering: Ubiquitous Systems in Interactive Museums. IEEE Pervasive Computing 1/2, 17-25.

FORLIZZI, J. AND FORD, S. 2000. The Building Blocks of Experience: An Early Framework for Interaction Designers, in Proceedings of the DIS 2000 seminar, Communications of the ACM, 419-423.

GARRETT, J. J. 2002. The Elements of User Experience. User-Centered Design for the Web. New Riders, 208.

HILTUNEN, M., LAUKKA, M. AND LUOMALA, J. 2002. Mobile User Experience, Edita Publishing Inc. Finland, 214.

JOHANSON, B., FOX, A. AND WINOGRAD, T. 2002. The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. IEEE Pervasive Computing 1/2, 67-74.

NIKKANEN, M. 2001. Käyttäjän kokemusta kartoittavien tutkimus- ja suunnittelumenetelmien käyttö tuotekehitysprosessissa. Licentiate’s thesis. University of Helsinki, 102.

PALEN, L. AND SALZMAN, M. 2002. Voice-mail Diary Studies for Naturalistic Data Capture under Mobile Conditions, CSCW, New Orleans, Louisiana, USA, November 16-20, 87-95.

RANTAKOKKO, T. AND PLOMP, J. 2003. An Adaptive Map-Based Interface for Situated Services, in Proceedings of the Smart Objects Conference, Grenoble, France.

WEISER, M. 1991. The Computer for the 21st Century. Scientific American 265(3), 94-104.

 

 

 

GUI Ripping: Reverse Engineering of Graphical User Interfaces for Testing

The authors (usually with an email address and affiliation)

 

Atif Memon Department of Computer Science and Fraunhofer Center for Experimental Software Engineering University of Maryland College Park, Maryland, USA atif@cs.umd.edu

Ishan Banerjee, Adithya Nagarajan Department of Computer Science University of Maryland College Park, Maryland, USA {ishan, sadithya}@cs.umd.edu

The abstract

Graphical user interfaces (GUIs) are important parts of today’s software and their correct execution is required to ensure the correctness of the overall software. A popular technique to detect defects in GUIs is to test them by executing test cases and checking the execution results. Test cases may either be created manually or generated automatically from a model of the GUI. While manual testing is unacceptably slow for many applications, our experience with GUI testing has shown that creating a model that can be used for automated test case generation is difficult. We describe a new approach to reverse engineer a model represented as structures called a GUI forest, event-flow graphs and an integration tree directly from the executable GUI. We describe “GUI Ripping”, a dynamic process in which the software’s GUI is automatically “traversed” by opening all its windows and extracting all their widgets (GUI objects), properties, and values. The extracted information is then verified by the test designer and used to automatically generate test cases. We present algorithms for the ripping process and describe their implementation in a tool suite that operates on Java and Microsoft Windows’ GUIs. We present results of case studies which show that our approach requires very little human intervention and is especially useful for regression testing of software that is modified frequently. We have successfully used the “GUI Ripper” in several large experiments and have made it available as a downloadable tool.

The introduction and A description of what the research was and what the researchers did

Graphical user interfaces (GUIs) are one of the most important parts of today’s software [13]. They make software easy to use by providing the user with highly visual controls that represent everyday objects such as menus, buttons, lists, and windows. Recognizing the importance of GUIs, software developers are dedicating large parts of the code to implementing GUIs [12]. The correctness of this code is essential to the correct execution of the overall software. A popular technique to detect defects in software is testing [3, 2, 23]. During testing, test cases are created and executed on the software. Test cases may either be created manually by a tester [10, 27, 8] or automatically by using a model of the software derived from its specifications [20]. In all our work to date [20, 17, 21, 16, 18, 19, 15, 12, 14], we have observed that software specifications are rarely in a form to be used for automated GUI testing. GUI testing requires that test cases (sequences of GUI events that exercise GUI widgets) be generated and executed on the GUI [13]. However, currently available techniques for obtaining GUI test cases are resource intensive, requiring significant human intervention. The most popular technique to test GUIs is by using capture/replay tools [10]. When using a capture/replay tool, a human tester interacts with the application under test (AUT); the capture component of the tool stores this interaction in a file that can be replayed later using the replay component of the tool. Our experience has shown that generating a typical test case with 50 events for different widgets takes 20-30 minutes using capture-replay tools. A few automated GUI test case generation techniques have been proposed [20]. However, they all require creating a model of the GUI – a significant resource intensive step that intimidates many practitioners and prevents the application of the techniques. 
In this paper, we present a technique, called GUI Ripping to reverse engineer the GUI’s model directly from the executing GUI. Once verified by the test designer, this model is then used to automatically generate test cases. GUI ripping has numerous other applications such as reverse engineering of COTS GUI products to test them within the context of their use, porting and controlling legacy applications to new platforms [22], and developing model checking tools for GUIs [6]. For space reasons, in this paper, we will provide details relevant to the testing process. GUI ripping is a dynamic process that is applied to an executing software’s GUI. Starting from the software’s first window (or set of windows), the GUI is “traversed” by opening all child windows. All the window’s widgets (building blocks of the GUI, e.g., buttons, text-boxes), their properties (e.g., background-color, font), and values (e.g., red, Times New Roman, 18pt) are extracted. Developing this process has several challenges that required us to develop novel solutions. First, the source code of the software may not always be available; we had to develop techniques to extract information from the executable files. Second, there are no GUI standards across different platforms and implementations; we had to extract all the information via low-level implementation-dependent system calls, which we have found are not always well-documented. Third, some implementations may provide less information than necessary to perform automated testing; we had to rely on heuristics and human intervention to determine missing parts. Finally, the presence of infeasible paths in GUIs prevents full automation. For example, some windows may be available only after a valid password has been provided. Since the GUI Ripper may not have access to the password, it may not be able to extract information from such windows. We had to provide another process and tool support to visually add parts to the extracted GUI model. 
We use GUI ripping to extract both the structure and execution behavior of the GUI – both essential for automated testing. We represent the GUI’s structure as a GUI forest and its execution behavior as event-flow graphs, and an integration tree [21]. Each node of the GUI forest represents a window and encapsulates all the widgets, properties and values in that window; there is an edge from node x to node y if the window represented by y is opened by performing an event in the window represented by node x, e.g., by clicking on a button. Intuitively, event-flow graphs and the integration tree show the flow of events in the GUI. We provide details of these structures in Section 2. We have implemented our algorithm in a software called the GUI Ripper. We use the GUI Ripper as a central part of two large software systems called GUITAR (http://guitar.cs.umd.edu) and DART (Daily Automated Regression Tester) to generate, execute, verify GUI test cases, and perform regression testing [15]. We provide details of two instances of the GUI Ripper, one for Microsoft Windows and the other for Java Swing applications. We then empirically evaluate the performance of the ripper on four Java applications with complex GUIs, Microsoft’s WordPad, Yahoo Messenger, and Winzip. The results of our empirical studies show that the ripping process is efficient, in that it is very fast and requires little human intervention. We also show that relative to other testing activities, ripping consumes very little resources. We also observe that automated testing would not be possible without the help of the GUI Ripper. The specific contributions of our work include the following.

  • We provide an efficient algorithm to extract a software’s GUI model without the need for its source code.
  • We describe a new structure called a GUI forest.
  • We provide implementation details of a new tool that can be applied to a large number of MS Windows and Java Swing GUIs.
In the next section, we present a formal model of the GUI specifications that are obtained by the GUI Ripper. In Section 3, we present the design of the ripper and provide an algorithm that can be used to implement the ripper. In Section 4 we discuss the MS Windows and Java implementations of the GUI Ripper. In Section 5, we empirically evaluate our algorithms on several large and popular software. We then conclude with a discussion of related work in Section 6, and ongoing and future work in Section 7.
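The ripping traversal described above — open every window, record its widgets, properties, and values, and recurse into any window an event opens — can be sketched in a few lines. The `Window` and `Widget` classes and the tiny mock application below are hypothetical stand-ins for the low-level platform calls the paper mentions; this is a sketch of the idea, not the actual GUI Ripper implementation.

```python
# Sketch of the "GUI ripping" idea: depth-first open every window,
# record its widgets (name, properties), and add a forest edge whenever
# executing an event opens a child window. A real ripper would also have
# to track already-visited windows to avoid looping on cyclic GUIs.

class Widget:
    def __init__(self, name, properties=None, opens=None):
        self.name = name
        self.properties = properties or {}   # e.g. {"font": "18pt"}
        self.opens = opens                   # child Window this event opens, if any

class Window:
    def __init__(self, title, widgets):
        self.title = title
        self.widgets = widgets

def rip(window, forest=None):
    """Return {window title: (widget records, child window titles)} — a GUI forest."""
    if forest is None:
        forest = {}
    records, children = [], []
    for w in window.widgets:
        records.append((w.name, w.properties))
        if w.opens is not None:              # this event opens another window
            children.append(w.opens.title)
            rip(w.opens, forest)             # recurse into the child window
    forest[window.title] = (records, children)
    return forest

# Tiny mock application: a main window whose "File" button opens a dialog.
dialog = Window("Save As", [Widget("OK"), Widget("Cancel")])
main = Window("Main", [Widget("File", {"enabled": True}, opens=dialog),
                       Widget("Edit")])
print(rip(main))
```

Running this on the mock application yields a two-node forest, with an edge from "Main" to "Save As" because the "File" event opens that dialog.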

A review of other papers relevant to the topic (a literature review)

Moore [22] describes experiences with manual reverse engineering of legacy applications to build a model of the user interface functionality. A technique to partially automate this process is also outlined. The results show that a language-independent set of rules can be used to detect user interface components from legacy code. Developing such rules is a nontrivial task, especially for the type of information that we need for software testing. Systä has used reverse engineering to study and analyze the run-time behavior of Java software [26]. Event trace information is generated as a result of running the target software under a debugger. The event trace, represented as scenario diagrams, is given as an input to a prototype tool SCED [11] that outputs state diagrams. The state diagrams can be used to examine the overall behavior of a desired class, object, or method. Several different types of representations have been used to generate test information. Anderson and Fickas have used preconditions/postconditions to represent software requirements and specifications [1, 7]. These representations have been successfully used to generate test cases [24, 20]. Scheetz et al. have used a class diagram representation of the system’s architecture to generate test cases using an AI planning system [25]. There are various techniques used for testing GUIs [9, 12]. One of our earlier techniques makes use of specifications to generate test cases. In the PATHS [19, 16, 18] system we used an AI planner to generate test cases from GUI specifications. The PATHS system uses a semi-automatic approach requiring substantial test designer participation. Our GUI ripping technique is different in that we focus on generating the specifications automatically, thereby minimizing test designers’ involvement. Chen et al. [4] develop a specification-based technique to test GUIs. Users graphically manipulate test specifications represented by finite state machines (FSM).
They provide a visual environment for manipulating these FSMs. We have successfully used the GUI Ripper software in large GUI testing studies of our DART system [15]. The GUI Ripper was used to generate the GUI structure for several applications. Test cases and test oracle information (expected output) [17] were automatically generated from the extracted information.

the results of what they did and A conclusion

Automated testing of software that have a graphical user interface (GUI) has become extremely important as GUIs become increasingly complex and popular. A key step to automatically test GUI software is test case generation from a model of the software. Our experience with GUI testing has shown that such models are very expensive to create manually and software specifications are rarely available in a form to derive these models automatically. We presented a new technique, called GUI ripping, to obtain models of the GUI’s structure and execution behavior automatically. We represented the GUI’s structure as a GUI forest, and its execution behavior as event-flow graphs and an integration tree. We described the GUI ripping process, which is applied to the executing software. The process opens all the software’s windows automatically and extracts all their widgets, properties, and values. The execution model of the GUI was obtained by using a classification of the GUI’s events. Once the extracted information is verified by a test designer, it is used to automatically generate test cases. We empirically showed that our approach requires very little human intervention. We have implemented our algorithms in a tool called a “GUI Ripper” and have made it available as a downloadable tool.

A discussion about what the results mean

In the future, we will extend our implementation to handle more MS Windows GUIs, Unix, and web applications. We will also use the GUI Ripper for performing usability analysis of GUIs. It will also be extended for measuring specification conformance of GUIs.

A list of references

[1] J. S. Anderson. Automating Requirements Engineering Using Artificial Intelligence Techniques. Ph.D. thesis, Dept. of Computer and Information Science, University of Oregon, Dec. 1993.

[2] I. Bashir and A. L. Goel. Testing Object-Oriented Software, Life Cycle Solutions. Springer-Verlag, 1999.

[3] B. Beizer. Black-Box Testing: Techniques for Functional Testing of Software and Systems. John Wiley & Sons, 1999.

[4] J. Chen and S. Subramaniam. A GUI environment to manipulate FSMs for testing GUI-based applications in Java. In Proceedings of the 34th Hawaii International Conference on System Sciences, Jan 2001.

[5] T. Cormen, C. Leiserson, and R. Rivest. Introduction to Algorithms, chapter 23.3, pages 477–485. Prentice-Hall of India Private Limited, September 2001.

[6] M. B. Dwyer, V. Carr, and L. Hines. Model checking graphical user interfaces using abstractions. In M. Jazayeri and H. Schauer, editors, ESEC/FSE ’97, volume 1301 of Lecture Notes in Computer Science, pages 244–261. Springer / ACM Press, 1997.

[7] S. Fickas and J. S. Anderson. A proposed perspective shift: Viewing specification design as a planning problem. In D. Partridge, editor, Artificial Intelligence & Software Engineering, pages 535–550. Ablex, Norwood, NJ, 1991.

[8] H. Foster, T. Goradia, T. Ostrand, and W. Szermer. A visual test development environment for GUI systems. In 11th International Software Quality Week. IEEE Press, 26-29 May 1998.

[9] P. Gerrard. Testing GUI applications. In EuroSTAR, Nov 1997.

[10] J. H. Hicinbothom and W. W. Zachary. A tool for automatically generating transcripts of human-computer interaction. In Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting, volume 2 of SPECIAL SESSIONS: Demonstrations, page 1042, 1993.

[11] K. Koskimies, T. Männistö, T. Systä, and J. Tuomi. Automated support for modeling OO software. In IEEE Software, pages 87–94, Jan-Feb 1998.

[12] A. M. Memon. A Comprehensive Framework for Testing Graphical User Interfaces. Ph.D. thesis, Department of Computer Science, University of Pittsburgh, July 2001.

[13] A. M. Memon. GUI testing: Pitfalls and process. IEEE Computer, 35(8):90–91, Aug. 2002.

[14] A. M. Memon. Advances in GUI testing. In Advances in Computers, ed. by Marvin V. Zelkowitz, volume 57. Academic Press, 2003.

[15] A. M. Memon, I. Banerjee, N. Hashmi, and A. Nagarajan. DART: A framework for regression testing nightly/daily builds of GUI applications. In Proceedings of the International Conference on Software Maintenance 2003, September 2003.

[16] A. M. Memon, M. E. Pollack, and M. L. Soffa. Using a goal-driven approach to generate test cases for GUIs. In Proceedings of the 21st International Conference on Software Engineering, pages 257–266. ACM Press, May 1999.

[17] A. M. Memon, M. E. Pollack, and M. L. Soffa. Automated test oracles for GUIs. In Proceedings of the ACM SIGSOFT 8th International Symposium on the Foundations of Software Engineering (FSE-8), pages 30–39, NY, Nov. 8–10 2000.

[18] A. M. Memon, M. E. Pollack, and M. L. Soffa. Plan generation for GUI testing. In Proceedings of The Fifth International Conference on Artificial Intelligence Planning and Scheduling, pages 226–235. AAAI Press, Apr. 2000.

[19] A. M. Memon, M. E. Pollack, and M. L. Soffa. A

[12] A. M. Memon. A Comprehensive Framework for Testing Graphical User Interfaces. Ph.D. thesis, Department of Computer Science, University of Pittsburgh, July 2001. [13] A. M. Memon. GUI testing: Pitfalls and process. IEEE Computer, 35(8):90–91, Aug. 2002. [14] A. M. Memon. Advances in GUI testing. In Advances in Computers, ed. by Marvin V. Zelkowitz, volume 57. Academic Press, 2003. [15] A. M. Memon, I. Banerjee, N. Hashmi, and A. Nagarajan. DART: A framework for regression testing nightly/daily builds of GUI applications. In Proceedings of the International conference on software maintenance 2003, September 2003. [16] A. M. Memon, M. E. Pollack, and M. L. Soffa. Using a goal-driven approach to generate test cases for GUIs. In Proceedings of the 21st International Conference on Software Engineering, pages 257–266. ACM Press, May 1999. [17] A. M. Memon, M. E. Pollack, and M. L. Soffa. Automated test oracles for GUIs. In Proceedings of the ACM SIGSOFT 8th International Symposium on the Foundations of Software Engineering (FSE-8), pages 30–39, NY, Nov. 8–10 2000. [18] A. M. Memon, M. E. Pollack, and M. L. Soffa. Plan generation for GUI testing. In Proceedings of The Fifth International Conference on Artificial Intelligence Planning and Scheduling, pages 226–235. AAAI Press, Apr. 2000. [19] A. M. Memon, M. E. Pollack, and M. L. Soffa. A

[13] A. M. Memon. GUI testing: Pitfalls and process. IEEE Computer, 35(8):90–91, Aug. 2002. [14] A. M. Memon. Advances in GUI testing. In Advances in Computers, ed. by Marvin V. Zelkowitz, volume 57. Academic Press, 2003. [15] A. M. Memon, I. Banerjee, N. Hashmi, and A. Nagarajan. DART: A framework for regression testing nightly/daily builds of GUI applications. In Proceedings of the International conference on software maintenance 2003, September 2003. [16] A. M. Memon, M. E. Pollack, and M. L. Soffa. Using a goal-driven approach to generate test cases for GUIs. In Proceedings of the 21st International Conference on Software Engineering, pages 257–266. ACM Press, May 1999. [17] A. M. Memon, M. E. Pollack, and M. L. Soffa. Automated test oracles for GUIs. In Proceedings of the ACM SIGSOFT 8th International Symposium on the Foundations of Software Engineering (FSE-8), pages 30–39, NY, Nov. 8–10 2000. [18] A. M. Memon, M. E. Pollack, and M. L. Soffa. Plan generation for GUI testing. In Proceedings of The Fifth International Conference on Artificial Intelligence Planning and Scheduling, pages 226–235. AAAI Press, Apr. 2000. [19] A. M. Memon, M. E. Pollack, and M. L. Soffa. A

[14] A. M. Memon. Advances in GUI testing. In Advances in Computers, ed. by Marvin V. Zelkowitz, volume 57. Academic Press, 2003. [15] A. M. Memon, I. Banerjee, N. Hashmi, and A. Nagarajan. DART: A framework for regression testing nightly/daily builds of GUI applications. In Proceedings of the International conference on software maintenance 2003, September 2003. [16] A. M. Memon, M. E. Pollack, and M. L. Soffa. Using a goal-driven approach to generate test cases for GUIs. In Proceedings of the 21st International Conference on Software Engineering, pages 257–266. ACM Press, May 1999. [17] A. M. Memon, M. E. Pollack, and M. L. Soffa. Automated test oracles for GUIs. In Proceedings of the ACM SIGSOFT 8th International Symposium on the Foundations of Software Engineering (FSE-8), pages 30–39, NY, Nov. 8–10 2000. [18] A. M. Memon, M. E. Pollack, and M. L. Soffa. Plan generation for GUI testing. In Proceedings of The Fifth International Conference on Artificial Intelligence Planning and Scheduling, pages 226–235. AAAI Press, Apr. 2000. [19] A. M. Memon, M. E. Pollack, and M. L. Soffa. A

[15] A. M. Memon, I. Banerjee, N. Hashmi, and A. Nagarajan. DART: A framework for regression testing nightly/daily builds of GUI applications. In Proceedings of the International conference on software maintenance 2003, September 2003. [16] A. M. Memon, M. E. Pollack, and M. L. Soffa. Using a goal-driven approach to generate test cases for GUIs. In Proceedings of the 21st International Conference on Software Engineering, pages 257–266. ACM Press, May 1999. [17] A. M. Memon, M. E. Pollack, and M. L. Soffa. Automated test oracles for GUIs. In Proceedings of the ACM SIGSOFT 8th International Symposium on the Foundations of Software Engineering (FSE-8), pages 30–39, NY, Nov. 8–10 2000. [18] A. M. Memon, M. E. Pollack, and M. L. Soffa. Plan generation for GUI testing. In Proceedings of The Fifth International Conference on Artificial Intelligence Planning and Scheduling, pages 226–235. AAAI Press, Apr. 2000. [19] A. M. Memon, M. E. Pollack, and M. L. Soffa. A

[16] A. M. Memon, M. E. Pollack, and M. L. Soffa. Using a goal-driven approach to generate test cases for GUIs. In Proceedings of the 21st International Conference on Software Engineering, pages 257–266. ACM Press, May 1999. [17] A. M. Memon, M. E. Pollack, and M. L. Soffa. Automated test oracles for GUIs. In Proceedings of the ACM SIGSOFT 8th International Symposium on the Foundations of Software Engineering (FSE-8), pages 30–39, NY, Nov. 8–10 2000. [18] A. M. Memon, M. E. Pollack, and M. L. Soffa. Plan generation for GUI testing. In Proceedings of The Fifth International Conference on Artificial Intelligence Planning and Scheduling, pages 226–235. AAAI Press, Apr. 2000. [19] A. M. Memon, M. E. Pollack, and M. L. Soffa. A

[17] A. M. Memon, M. E. Pollack, and M. L. Soffa. Automated test oracles for GUIs. In Proceedings of the ACM SIGSOFT 8th International Symposium on the Foundations of Software Engineering (FSE-8), pages 30–39, NY, Nov. 8–10 2000. [18] A. M. Memon, M. E. Pollack, and M. L. Soffa. Plan generation for GUI testing. In Proceedings of The Fifth International Conference on Artificial Intelligence Planning and Scheduling, pages 226–235. AAAI Press, Apr. 2000. [19] A. M. Memon, M. E. Pollack, and M. L. Soffa. A

[18] A. M. Memon, M. E. Pollack, and M. L. Soffa. Plan generation for GUI testing. In Proceedings of The Fifth International Conference on Artificial Intelligence Planning and Scheduling, pages 226–235. AAAI Press, Apr. 2000. [19] A. M. Memon, M. E. Pollack, and M. L. Soffa. A

[19] A. M. Memon, M. E. Pollack, and M. L. Soffa. A planningbased approach to GUI testing. In Proceedings of The 13th International Software/Internet Quality Week, May 2000. [20] A. M. Memon, M. E. Pollack, and M. L. Soffa. Hierarchical GUI test case generation using automated planning. IEEE Transactions on Software Engineering, 27(2):144–155, Feb. 2001. [21] A. M. Memon, M. L. Soffa, and M. E. Pollack. Coverage criteria for GUI testing. In Proceedings of the 8th European Software Engineering Conference (ESEC) and 9th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE-9), pages 256–267, Sept. 2001. [22] M. M. Moore. Rule-based detection for reverse engineering user interfaces. In Proceedings of the Third Working Conference on Reverse Engineering, pages 42–8, Monterey, CA, 8–10 Nov. 1996. IEEE. [23] R. M. Poston. Automating Specification-Based Software Testing. IEEE Computer Society, Los Alamitos, 1 edition, 1996. [24] M. Scheetz, A. V. Mayrhauser, E. Dahlman, and A. E. Howe. Generating goal-oriented test cases. [25] M. Scheetz, A. V. Mayrhauser, R. France, E. Dahlman, and A. E. Howe. Generating test cases from an oo model with an ai planning system. In Proceedings in the Twenty-Third Annual International Computer Software and Applications Conference, March 2000. [26] T.

[20] A. M. Memon, M. E. Pollack, and M. L. Soffa. Hierarchical GUI test case generation using automated planning. IEEE Transactions on Software Engineering, 27(2):144–155, Feb. 2001. [21] A. M. Memon, M. L. Soffa, and M. E. Pollack. Coverage criteria for GUI testing. In Proceedings of the 8th European Software Engineering Conference (ESEC) and 9th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE-9), pages 256–267, Sept. 2001. [22] M. M. Moore. Rule-based detection for reverse engineering user interfaces. In Proceedings of the Third Working Conference on Reverse Engineering, pages 42–8, Monterey, CA, 8–10 Nov. 1996. IEEE. [23] R. M. Poston. Automating Specification-Based Software Testing. IEEE Computer Society, Los Alamitos, 1 edition, 1996. [24] M. Scheetz, A. V. Mayrhauser, E. Dahlman, and A. E. Howe. Generating goal-oriented test cases. [25] M. Scheetz, A. V. Mayrhauser, R. France, E. Dahlman, and A. E. Howe. Generating test cases from an oo model with an ai planning system. In Proceedings in the Twenty-Third Annual International Computer Software and Applications Conference, March 2000. [26] T.

[21] A. M. Memon, M. L. Soffa, and M. E. Pollack. Coverage criteria for GUI testing. In Proceedings of the 8th European Software Engineering Conference (ESEC) and 9th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE-9), pages 256–267, Sept. 2001. [22] M. M. Moore. Rule-based detection for reverse engineering user interfaces. In Proceedings of the Third Working Conference on Reverse Engineering, pages 42–8, Monterey, CA, 8–10 Nov. 1996. IEEE. [23] R. M. Poston. Automating Specification-Based Software Testing. IEEE Computer Society, Los Alamitos, 1 edition, 1996. [24] M. Scheetz, A. V. Mayrhauser, E. Dahlman, and A. E. Howe. Generating goal-oriented test cases. [25] M. Scheetz, A. V. Mayrhauser, R. France, E. Dahlman, and A. E. Howe. Generating test cases from an oo model with an ai planning system. In Proceedings in the Twenty-Third Annual International Computer Software and Applications Conference, March 2000. [26] T.

[22] M. M. Moore. Rule-based detection for reverse engineering user interfaces. In Proceedings of the Third Working Conference on Reverse Engineering, pages 42–8, Monterey, CA, 8–10 Nov. 1996. IEEE. [23] R. M. Poston. Automating Specification-Based Software Testing. IEEE Computer Society, Los Alamitos, 1 edition, 1996. [24] M. Scheetz, A. V. Mayrhauser, E. Dahlman, and A. E. Howe. Generating goal-oriented test cases. [25] M. Scheetz, A. V. Mayrhauser, R. France, E. Dahlman, and A. E. Howe. Generating test cases from an oo model with an ai planning system. In Proceedings in the Twenty-Third Annual International Computer Software and Applications Conference, March 2000. [26] T.

[23] R. M. Poston. Automating Specification-Based Software Testing. IEEE Computer Society, Los Alamitos, 1 edition, 1996. [24] M. Scheetz, A. V. Mayrhauser, E. Dahlman, and A. E. Howe. Generating goal-oriented test cases. [25] M. Scheetz, A. V. Mayrhauser, R. France, E. Dahlman, and A. E. Howe. Generating test cases from an oo model with an ai planning system. In Proceedings in the Twenty-Third Annual International Computer Software and Applications Conference, March 2000. [26] T.

[24] M. Scheetz, A. V. Mayrhauser, E. Dahlman, and A. E. Howe. Generating goal-oriented test cases. [25] M. Scheetz, A. V. Mayrhauser, R. France, E. Dahlman, and A. E. Howe. Generating test cases from an oo model with an ai planning system. In Proceedings in the Twenty-Third Annual International Computer Software and Applications Conference, March 2000. [26] T.

[25] M. Scheetz, A. V. Mayrhauser, R. France, E. Dahlman, and A. E. Howe. Generating test cases from an oo model with an ai planning system. In Proceedings in the Twenty-Third Annual International Computer Software and Applications Conference, March 2000. [26] T.

[26] T. Systa. Dynamic reverse engineering of java software. Technical report, University of Tampere, Finland, Box 607, 33101 Tampere, Finland, 2001. http://www.fzi.de/Ecoop99- WS-Reengineering/papers/tarjan/ecoop.html. [27] A. Walworth. Java GUI testing. Dr. Dobb’s Journal of Software Tools, 22(2):30, 32, 34, Feb. 1997.

[27] A. Walworth. Java GUI testing. Dr. Dobb’s Journal of Software Tools, 22(2):30, 32, 34, Feb. 1997.

Credible Evidence

We were asked to search for both Digital Citizenship and Virtualization Technology, then answer the following questions. This needs to be done for three sources on each subject.

  • How we found it
  • when it was written
  • who it was written by (expert, undergrad student,….)
  • where it was published or what type of ‘thing’ it is (book, article, blog)
  • what others have said about it (reviews)
  • whether others have used the information in their own work (citations)
  • how it is written (style)

 

Digital Citizenship

http://elearning.tki.org.nz/Teaching/Digital-citizenship

  • how we found it
    • I typed ‘digital citizenship’ into Google, and this came up
  • when it was written
    • The page does not say when it was written, but it seems fairly current, so I would guess within the last year.
  • who it was written by (expert, undergrad student,….)
    • There is no explicit byline, but it appears that Sean Lyons wrote it.
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • This is an article on a website, accompanied by a short video.
  • what others have said about it (reviews)
    • The site maintains social media accounts, but I haven’t seen any comments on their Facebook page.
  • whether others have used the information in their own work (citations)
    • The site is aimed at individuals and groups alike and is designed as an educational resource, so I would assume others have used its information.
  • how it is written (style)
    • As a short blurb.

 

http://core-ed.org/legacy/thought-leadership/ten-trends/ten-trends-2013/digital-citizenship

  • how we found it
    • I typed ‘digital citizenship’ into Google, and this came up
  • when it was written
    • This was written in 2013; I assume it has been revised since then to keep it up to date.
  • who it was written by (expert, undergrad student,….)
    • I am uncertain who wrote it, as all it credits is EDtalks, so I assume it was originally written by someone and revised repeatedly since.
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • This was originally written for the website, and because it is an educational piece I would assume it is also available as a pamphlet.
  • what others have said about it (reviews)
    • There are no reviews that I can see on the page.
  • whether others have used the information in their own work (citations)
    • This is an educational site, so I assume its material is used in teaching.
  • how it is written (style)
    • In lists and paragraphs.

 

http://www.digitalcitizenship.nsw.edu.au/parent_Splash/index.htm

  • how we found it
    • I typed ‘digital citizenship’ into Google, and this came up
  • when it was written
    • October 24, 2014, though I think it has been revised since.
  • who it was written by (expert, undergrad student,….)
    • A Computer Fundamentals, Computer Science and IT Integrator from Camilla, GA
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • An article on the web, but as this is an educational piece I assume it is also printed as a pamphlet.
  • what others have said about it (reviews)
    • Other articles speak very highly of this educational piece.
  • whether others have used the information in their own work (citations)
    • This is an educational piece, so I would assume it is used in the classroom.
  • how it is written (style)
    • As an educational piece: a step-by-step guide for students.

 

 

Virtualization Technology

http://www.vmware.com/solutions/virtualization.html

  • how we found it
    • I typed ‘virtualization technology’ into Google, and this came up
  • when it was written
    • This is not stated, but because the article comes from a major virtualization vendor I would assume it is very recent, within the last year.
  • who it was written by (expert, undergrad student,….)
    • It appears to have been written by an expert, judging by what is said and how it is said, but no name is associated with the page.
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • This was written as a blog post on the company’s web page, so it is an article.
  • what others have said about it (reviews)
    • Comments have been locked down, so no one has been able to comment on the page.
  • whether others have used the information in their own work (citations)
    • I do not know, though unless they run educational classes on their product I would say no.
  • how it is written (style)
    • As a blog post on the VMware page.

 

http://searchservervirtualization.techtarget.com/definition/virtualization

  • how we found it
    • I typed ‘virtualization technology’ into Google, and this came up
  • when it was written
    • The last revision of this post was in October 2016.
  • who it was written by (expert, undergrad student,….)
    • This was written by Margaret Rouse; I cannot see any qualifications listed for her.
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • A post on the TechTarget website
  • what others have said about it (reviews)
    • Others describe it as an insightful website with a lot of information.
  • whether others have used the information in their own work (citations)
    • This has no citations that I can find.
  • how it is written (style)
    • As a blog post on TechTarget

 

https://software.intel.com/en-us/articles/the-advantages-of-using-virtualization-technology-in-the-enterprise

  • how we found it
    • I typed ‘virtualization technology’ into Google, and this came up
  • when it was written
    • March 5, 2012
  • who it was written by (expert, undergrad student,….)
    • Thomas Wolfgang Burger is the owner of Thomas Wolfgang Burger Consulting. He has been a consultant, instructor, writer, analyst, and applications developer since 1978.
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • An article written for Intel
  • what others have said about it (reviews)
    • There is a forum people can write to for information, but there are no specific comments on this web page.
  • whether others have used the information in their own work (citations)
    • Not that I can find.
  • how it is written (style)
    • It has been written as an article.

Meta-Analysis

What is meta-analysis? Here I will try to explain what it is. Here I go…

“Meta-analysis is the statistical procedure for combining data from multiple studies. When the treatment effect (or effect size) is consistent from one study to the next, meta-analysis can be used to identify this common effect. When the effect varies from one study to the next, meta-analysis may be used to determine the reason for the variation.”  (MetaAnalysis.Php)

What is it? (Short description of how it works)

So this is a different type of analysis: it takes the data from various studies and pools it together into one output, combining the individual answers into a single, weighted statistical answer.
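As a rough sketch of how this pooling works, a simple fixed-effect meta-analysis weights each study’s effect size by the inverse of its variance, so more precise studies count for more. The numbers below are made up purely for illustration:

```python
# Minimal fixed-effect meta-analysis sketch (all numbers hypothetical).
# Each study reports an effect size and its variance; the pooled estimate
# weights each study by the inverse of its variance.

def pooled_effect(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three made-up studies of the same treatment effect:
effects = [0.40, 0.55, 0.30]    # effect size reported by each study
variances = [0.04, 0.09, 0.02]  # variance (uncertainty) of each

estimate, var = pooled_effect(effects, variances)
print(round(estimate, 3), round(var, 3))  # → 0.361 0.012
```

Note how the pooled estimate lands closest to the most precise study (the one with the smallest variance), and the pooled variance is smaller than any single study’s: combining studies gives a more certain answer.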

What kinds of questions/problems might it be useful for?

I believe this could be helpful for something like a government-based study of the housing crisis, i.e., population and household usage. They could take the number of people in each house and average it, and also look at how much power and water an average house consumes and break it down per person. From there they could design a solution based on what each person needs, instead of, e.g., turning off the water to one particular place.

How could it be used in IT research  (try to think of an example)?

I personally believe this could be used to pool studies of how people use technology, so we can learn more about user behaviour and use that knowledge to create new products and make money from them.

What are the strengths of the approach?

This gives an excellent, unbiased overview of the field you are looking at. Because it pools a lot of information from many studies into one, any single biased opinion carries far less weight.

What are the weaknesses of the approach?

It doesn’t allow for one-off differences. For example, if one particular group within the analysis needed something completely different, a meta-analysis would average that difference away.

 

Blog; 2

Research methods;

In class we were asked what ontology and epistemology are, and what they mean in research terms. These are massive words that I had no prior knowledge of; they are difficult to spell too. Luckily, Clare has them in her blog notes.

So what are these big words? First I will look into each subject on its own, then relate them back to each other.

Ontology;

First of all, I looked into what this word means, taking the following from Oxford Dictionaries:

“NOUN
  • 1 (mass noun) The branch of metaphysics dealing with the nature of being.
  • 2 A set of concepts and categories in a subject area or domain that shows their properties and the relations between them.
    ‘what’s new about our ontology is that it is created automatically from large datasets.’
    ‘We’re using ontologies to capture and analyze some of the knowledge in our department.’
Origin: Early 18th century: from modern Latin ontologia, from Greek ōn, ont- ‘being’ + -logy.”

So, what this is saying is that ontology is a body of knowledge derived from metaphysics: ‘meta’ means referring to itself, and physics is the study of nature, matter, and what some consider to be the energy of both.

 

So, from what I can take from this, ontology is constantly questioning what we know about topics: we go back to what we have previously been taught and keep questioning it, seeking further and more evolved knowledge, because what we know is constantly evolving and growing. The questions are typically about what something came from, what it is made of or evolved from, and where it is going.
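In the second, computing sense of the word above (a set of concepts and the relations between them), an ontology can be sketched as a simple data structure. The domain and concept names here are hypothetical, just to illustrate the idea:

```python
# Toy ontology: concepts linked by "is-a" relations (hypothetical domain).
is_a = {
    "laptop": "computer",
    "desktop": "computer",
    "computer": "device",
    "phone": "device",
}

def ancestors(concept):
    """Walk the is-a chain to list every broader category of a concept."""
    chain = []
    while concept in is_a:
        concept = is_a[concept]
        chain.append(concept)
    return chain

print(ancestors("laptop"))  # → ['computer', 'device']
```

This captures exactly what the dictionary definition describes: the concepts, their categories, and the relations between them, which is also why ontologies can be built automatically from large datasets.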

How is it relatable to research?

This is what research is built on; it underlies the actual method of research we do. In research we are constantly re-asking ourselves what we believe to be true, and what actually is true.

 

What is epistemology?

NOUN (mass noun, Philosophy)
  • The theory of knowledge, especially with regard to its methods, validity, and scope, and the distinction between justified belief and opinion.
Origin: Mid 19th century: from Greek epistēmē ‘knowledge’, from epistasthai ‘know, know how to do’.
 

This doesn’t tell me much about it, so I looked further into the topic, taking the following from Wikipedia:

Epistemology (from Greek ἐπιστήμη, epistēmē, meaning ‘knowledge’, and λόγος, logos, meaning ‘logical discourse’) is the branch of philosophy concerned with the theory of knowledge.[1]
Epistemology studies the nature of knowledge, justification, and the rationality of belief. Much of the debate in epistemology centers on four areas: (1) the philosophical analysis of the nature of knowledge and how it relates to such concepts as truth, belief, and justification,[2][3] (2) various problems of skepticism, (3) the sources and scope of knowledge and justified belief, and (4) the criteria for knowledge and justification.
The term ‘epistemology’ was first used by Scottish philosopher James Frederick Ferrier in 1854.[a] However, according to Brett Warren, King James VI of Scotland had previously personified this philosophical concept as the character Epistemon in 1591.[5]

 

So, as I understand it, epistemology is the study of our knowledge itself, summed up in one sentence: ‘How do we know what we know?’ It examines the nature of knowledge, how we came across it, what we base our knowledge on, and how valid those grounds are, e.g., logical reasoning, our thoughts, our memories of the knowledge, and our emotional attachment to these ideas.

To elaborate, this includes our biased knowledge: beliefs drawn from memory, emotion, and logical reasoning, any of which could be correct or wrong.

How is it relatable to research?

This is relatable because we need to look at the sources our knowledge comes from, and how we got that knowledge, to see how true it is.

 

So how are these related to each other, and to research?

So epistemology is the study of how we know something and what affects our knowledge, while ontology is what we know. The two are intertwined; you cannot have one without the other. To my mind epistemology comes first, since we need to look at how we know things to see how objectively true they are, and ontology, the things we know, comes second.

So, looking back at research, we can think of the knowledge we are gaining as ontology, and the way we gain it as epistemology: we go out trying to find more knowledge that is correct (ontology), and the way we do it and figure that knowledge out is epistemology.

[Image: justified true beliefs equal knowledge]

Bibliography:

Ontology

Epistemology

 

First RES701 blog post

  • What do you think ‘research’ is?
    • I think research is the investigation we will do for our project; this can mean looking online, getting books from the library, or asking different people what information they have on the topic at hand.
  • Do you think you will ever need research skills?
    • Yes. To quote Mark, “you don’t know what you don’t know”, and so we will need research skills to gain information on the things we don’t know.
  • What do you think a research journal is and who is it written for?
    • Even before researching what it is, I think it’s a journal written as someone researches a topic they would like to know more about, and that it is written for oneself: the writer keeps it as they go along to help retain some of the knowledge they have gathered.
  • What is plagiarism?
    • Something we don’t do! Plagiarism is a big word for copying someone else’s work and passing it off as your own.
  • Why is it important to avoid it?
    • Simply put, because I want to pass. If you plagiarise and get found out, you fail and get kicked out, and I want to pass the paper; I want to work hard and earn a bachelor’s degree.