Blog: Languages that will work on the Raspberry Pi, and how easy they will be to learn.

So, now that I have figured out that I am going to be working with a Raspberry Pi, I should look into the languages it will support and, further, how to learn those languages.

For this to happen we first need to work out which operating systems are supported. These are as follows:

  • Raspbian
  • Ubuntu MATE
  • Snappy Ubuntu
  • Pidora
  • Linutop
  • SARPi
  • Arch Linux ARM
  • Gentoo Linux
  • Kali Linux

These are some of the operating systems supported by the Raspberry Pi. Now let's look at some of the languages these support:

  • Raspbian
  • Linux

Now I looked up the "For Dummies" version, and this is what I found:

  • Scratch
  • Python
  • HTML5
  • JavaScript
  • jQuery
  • Java
  • C (the C programming language)
  • Perl
  • Erlang

Now these are interesting. First off, Erlang and Perl sound like some long-lost elvish cities yet to be rediscovered, but they appear reasonable enough. Let's look into these.

  • Scratch
    • This seems more of a foundational language from which to learn others. There are quite a lot of games to help you learn Scratch itself, but not a lot of coding tutorials from what I can see. Raspberry Pi has its own community-run training series on the language, which is a really cool learning resource but not much help in my case. So on to the next one.
  • Python
  • HTML5
  • JavaScript
  • jQuery
    • This seems to be a library built on top of JavaScript and, from what I understand, will run off a single code file.
    • “jQuery is a concise and fast JavaScript library that can be used to simplify event handling, HTML document traversing, Ajax interactions and animation for speedy website development. jQuery simplifies HTML’s client-side scripting, thus simplifying Web 2.0 application development. jQuery is a free, open-source and dual-licensed library under the GNU General Public License. It is considered one of the favorite JavaScript (JS) libraries available today. As of 2012, it is used by more than half of the Web’s top sites.”
  • C (the C programming language)
    • From my understanding of this language, there are different variations. This is one for when you have mastered the previous, simpler languages. It is used for writing firmware, which from what I understand is software that runs directly on hardware.
    • “C belongs to the structured, procedural paradigms of languages. It is proven, flexible and robust and may be used for a variety of different applications. Although high-level, C and assembly language share many of the same attributes. Some of C’s most important features include:
      • Fixed number of keywords, including a set of control primitives, such as if, for, while, switch and do while
      • Multiple logical and mathematical operators, including bit manipulators
      • Various assignments may be applied in a single statement.
      • Function return values are not always required and may be ignored if unneeded.
      • Typing is static. All data has a type but may be implicitly converted.
      • Basic form of modularity, as files may be separately compiled and linked
      • Control of function and object visibility to other files via extern and static attributes.”
    • Below are some websites to look into this further if needed.
  • Perl
    • This seems like a complicated language, though a stable, cross-platform one. It seems far too complicated for me, but below is some more information in case I choose to use this language.
    • “Perl is a general-purpose programming language originally developed for text manipulation and now used for a broad range of tasks including system administration, web development, network programming, GUI development, and more.”
  • Erlang
    • From what I can see, this is even more complicated than Perl. This is not what I am after, and therefore not something I am going to use.
    • Erlang is a programming language used to build massively scalable soft real-time systems with requirements on high availability. Some of its uses are in telecoms, banking, e-commerce, computer telephony and instant messaging. Erlang’s runtime system has built-in support for concurrency, distribution and fault tolerance.

Why do we look for ‘Academic’ Articles?

This week's blog, albeit a bit late, is on searching for academic papers. For this we need to find two academic papers and answer the following questions:

  • the title
  • the authors (usually with an email address and affiliation)
  • the abstract
  • the introduction
  • a review of other papers relevant to the topic ( a literature review)
  • a description of what the research was and what the researchers did
  • the results of what they did
  • a discussion about what the results mean
  • a conclusion
  • a list of references

So here I will go and find two academic papers; I'm assuming these will be in the IT field. I managed to find Google Scholar, a tool for finding academic papers, which is interesting.

[Screenshot: Google Scholar search results]


Empirical Evaluation of User Experience in two Adaptive Mobile Application Prototypes

The title

Empirical Evaluation of User Experience in two Adaptive Mobile Application Prototypes

The authors (usually with an email address and affiliation)

  • Leena Arhippainen University of Oulu, P.O. Box 3000, 90014 University of Oulu, Finland leena.arhippainen@oulu.fi
  • Marika Tähti University of Oulu, P.O. Box 3000, 90014 University of Oulu, Finland marika.tahti@oulu.fi

The abstract

Today’s applications such as ubiquitous systems are more and more aware of user’s habits and the context of use. The features of products and the context of use will affect the human’s experiences and preferences about the use of device. Thus, user experience in user-product interaction has been regarded as an important research topic in the mobile application design area. The purpose of this paper is to clarify how user experience can be evaluated in adaptive mobile applications. The user experience evaluations were performed through interviews and observation while test users were using PDA-based adaptive mobile application prototypes. As a result, this paper presents the analysis of the test methods for further user experience evaluations.

CR Categories: J.m [Computer Applications]: Miscellaneous; Experimentation; Human Factors.

The introduction

In the recent years, the use of different mobile products such as mobile phones and Personal Digital Assistant (PDA) devices has increased rapidly. Moreover, ubiquitous computing has become a popular topic in research and design areas. Nowadays, systems are more and more aware of their context of use. [Dey and Abowd 1999; Weiser 1991] In order to be useful, ubiquitous applications need to be designed so that the user’s needs and preferences and the context of use have been taken into account [Consolvo et al. 2002]. However, the evaluation of pervasive computing systems and their influences on users is quite difficult because the evaluation will require analysis of real users in a real context [Bellotti et al. 2002]. In addition, in continuous interaction research, test users should have a fully operational, reliable, and robust tool [Bellotti et al. 2002]. Evaluation with an incomplete prototype will not give a realistic test result. Nevertheless, preliminary tests in early phases of product development are necessary to perform in order to achieve information about the end user’s preferences and needs. In the recent years, in the Human-Computer Interaction (HCI) research area the capturing of user experience has been seen as an important and interesting research issue. In general, user experience has been captured with techniques like interviews, observations, surveys, storytelling, and diaries among others [Johanson et al. 2002; Nikkanen 2001]. However, in the HCI research area the understanding of user experience and its evaluation has not been established. One reason for this may be shortcomings in the definition of user experience and its relation to usability issues. Also, future proactive environments and adaptive mobile devices bring new aspects to the field of user experience research. The aim of the paper is to study how user experience can be evaluated in adaptive mobile applications. 
User experience research and its methods are briefly presented in Chapter 2. Adaptive mobile prototypes and user experience evaluations are described and methods analyzed in Chapter 3. The results of the paper are presented in Chapter 4. Finally, the research is concluded and further work discussed in Chapter 5.

A review of other papers relevant to the topic ( a literature review)

BELLOTTI, F., BERTA, R., DEGLORIA, A. AND MARGARONE, M. 2002. User Testing a Hypermedia Tour Guide. IEEE Pervasive Computing, 33-41.
BUCHENAU, M. AND FULTON SURI, J. 2000. Experience Prototyping, in Proceedings of the DIS 2000 seminar, Communications of the ACM, 424-433.
CONSOLVO, S., ARNSTEIN, L. AND FRANZA, B. R. 2002. User Study Techniques in the Design and Evaluation of a Ubicomp Environment. In the Proceedings of UbiComp 2002, LNCS 2498, Springer-Verlag, Berlin, 73-90.
DEWEY, J. 1980. Art as Experience, New York: Perigee, (reprint), 355.
DEY, A. K. AND ABOWD, G.D. 1999. Towards a Better Understanding of Context and Context-Awareness. GVU Technical Report. GIT-GVU-99-22. Georgia Institute of Technology.
FLECK, M., FRID, M., KINDBERG, T., O’BRIEN-STRAIN, E., RAJANI, R. AND SPASOJEVIC, M. 2002. From Informing to Remembering: Ubiquitous Systems in Interactive Museums. IEEE Pervasive Computing 1/2, 17-25.
FORLIZZI, J. AND FORD, S. 2000. The Building Blocks of Experience: An Early Framework for Interaction Designers, in Proceedings of the DIS 2000 seminar, Communications of the ACM, 419-423.
GARRETT, J. J. 2002. The Elements of User Experience. User-Centered Design for the Web. New Riders, 208.
HILTUNEN, M., LAUKKA, M. AND LUOMALA, J. 2002. Mobile User Experience, Edita Publishing Inc. Finland, 214.
JOHANSON, B., FOX, A. AND WINOGRAD, T. 2002. The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. IEEE Pervasive Computing 1/2, 67-74.
NIKKANEN, M. 2001. Käyttäjän kokemusta kartoittavien tutkimus- ja suunnittelumenetelmien käyttö tuotekehitysprosessissa. Licentiate’s degree. University of Helsinki, 102.
PALEN, L. AND SALZMAN, M. 2002. Voice-mail Diary Studies for Naturalistic Data Capture under Mobile Conditions, CSCW, New Orleans, Louisiana, USA, November 16-20, 87-95.
RANTAKOKKO, T. AND PLOMP, J. 2003. An Adaptive Map-Based Interface for Situated Services, in Proceedings of the Smart Objects Conference, Grenoble, France.
WEISER, M. 1991. The Computer for the 21st Century. Scientific American 265(3), 94-104.

Description of what the research was and what the researchers did and the results of what they did

[Screenshot: excerpt from the paper covering the research description and results]

A discussion about what the results mean

This chapter is divided into two parts. Firstly, the benefits and challenges of the interview and observation methods from the viewpoint of user experience research are summarized. Secondly, the suitability of interviews and observations for user experience research is discussed.

4.1 Benefits and Challenges

Interview is a good method for user experience evaluation, because then the test situation can be like a “chat session” with the test user. It gives the possibility to create a calm and nice atmosphere in test situation. This is also an easy way to get information about the user’s background (age, education), prior experiences, expectations and motivation, etc.

However, there are some interesting challenges for the interviewers to clarify. Firstly, questions related to user experience should be formulated very carefully so that the users can understand them easily. Secondly, usually the user can express his/her opinions about a device and its characteristics, but verbally describing his/her feelings about the device is more difficult. In that kind of a situation, the interviewer can try to “read between the lines” when the user speaks about his/her experiences. Nevertheless, this challenge may require using some other methods as well. Observation also gave information about user experience. However, researchers need to interpret the user’s facial expression, body movements and gestures carefully, because the personality of the user will affect how they behave. For example, one test person said that she is very nervous, but her outward appearance was really calm. Moreover, humans make gestures very differently, for instance while one moves his or her eyebrows a lot, the other can move his/her eyes only a little. These two user experience evaluations elicited that a comprehensive observation will require video recording. In the first evaluation, video recording was not used, and thus only some facial expression was captured. However, the second evaluation was video recorded but still some challenges occurred. The first thing in video recording in user experience research is that it must not influence the user and his/her experiences. This is an interesting challenge. However, in order to collect the user’s facial expressions, gestures and actions on the screen, the video recording should be organized from different perspectives, for instance, from the front of the user’s face, the top of the screen and a little bit farther away so that the user is in the picture. In order for the observation to be reliable, a tool or a method for interpreting different gestures and emotions is required. 
4.2 Suitability for user experience research

The picture (Figure 1) presented in Chapter 2 illustrates what different factors affect user experience in user-product interaction. In evaluations, some factors can change; for instance, in the user experience evaluation presented in this paper, the user was one part that changed. The device, social and cultural factors and the context of use were the same. Consequently, when the user changes, interaction and user experience change as well (grey areas) (Figure 5). User experience factors can be captured via interviews or observations on a particular level. Factors, which did not appear in the evaluations, are underlined in the picture (Figure 5) and marked as NE (Not Emerged in the evaluations) in the table (Table 1). However, this paper does not deny that those factors could not be captured via interviews and observations. The evaluations elicited that some user experience factors can be gathered via both of the methods. For example, the user can comment on the product’s functions and say that they are easy to understand and learn. However, when he/she uses product, the observer can perceive that he/she uses it in the wrong way. On the other hand, observation does not always bring out the user’s emotions properly, and thus interview can reveal the true emotions more easily. Hence, interviews and observations can give different information about the same factor, and thus give a more comprehensive view to user experience. This paper presents what user experience factors were captured via interviews and observations (Table 1).

A conclusion

5 Conclusion

The purpose of this paper was to define how user experience can be evaluated in adaptive mobile applications. In general, the capturing of user experience is quite difficult, because there are so many different factors in user-product interaction (Figure 1). For the evaluation, those factors should be clarified and a goal for the test defined in a test plan. This may help make the evaluation more systematic. Both the examinations illustrated that interviews and observations are appropriate methods for capturing user experience (Table 1). However, this study confirmed that several methods need to be used in order to evaluate user experience. In addition to the interviews and observations, researchers will need more efficient ways to get information about the user’s emotions and experiences, concerning for example collection and interpretation of body gestures and facial expressions. In order to collect authentic emotions, the test situation should be organized so that it is as natural as possible. As further research, more user experience evaluations will be done for different adaptive mobile devices, using different methods.

A list of references

BELLOTTI, F., BERTA, R., DEGLORIA, A. AND MARGARONE, M. 2002. User Testing a Hypermedia Tour Guide. IEEE Pervasive Computing, 33-41.

BUCHENAU, M. AND FULTON SURI, J. 2000. Experience Prototyping, in Proceedings of the DIS 2000 seminar, Communications of the ACM, 424-433.

CONSOLVO, S., ARNSTEIN, L. AND FRANZA, B. R. 2002. User Study Techniques in the Design and Evaluation of a Ubicomp Environment. In the Proceedings of UbiComp 2002, LNCS 2498, Springer-Verlag, Berlin, 73-90.

DEWEY, J. 1980. Art as Experience, New York: Perigee, (reprint), 355.

DEY, A. K. AND ABOWD, G.D. 1999. Towards a Better Understanding of Context and Context-Awareness. GVU Technical Report. GIT-GVU-99-22. Georgia Institute of Technology.

FLECK, M., FRID, M., KINDBERG, T., O’BRIEN-STRAIN, E., RAJANI, R. AND SPASOJEVIC, M. 2002. From Informing to Remembering: Ubiquitous Systems in Interactive Museums. IEEE Pervasive Computing 1/2, 17-25.

FORLIZZI, J. AND FORD, S. 2000. The Building Blocks of Experience: An Early Framework for Interaction Designers, in Proceedings of the DIS 2000 seminar, Communications of the ACM, 419-423.

GARRETT, J. J. 2002. The Elements of User Experience. User-Centered Design for the Web. New Riders, 208.

HILTUNEN, M., LAUKKA, M. AND LUOMALA, J. 2002. Mobile User Experience, Edita Publishing Inc. Finland, 214.

JOHANSON, B., FOX, A. AND WINOGRAD, T. 2002. The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. IEEE Pervasive Computing 1/2, 67-74.


GUI Ripping: Reverse Engineering of Graphical User Interfaces for Testing

The authors (usually with an email address and affiliation)

 

Atif Memon Department of Computer Science and Fraunhofer Center for Experimental Software Engineering University of Maryland College Park, Maryland, USA atif@cs.umd.edu

Ishan Banerjee, Adithya Nagarajan Department of Computer Science University of Maryland College Park, Maryland, USA {ishan, sadithya}@cs.umd.edu

The abstract

Graphical user interfaces (GUIs) are important parts of today’s software and their correct execution is required to ensure the correctness of the overall software. A popular technique to detect defects in GUIs is to test them by executing test cases and checking the execution results. Test cases may either be created manually or generated automatically from a model of the GUI. While manual testing is unacceptably slow for many applications, our experience with GUI testing has shown that creating a model that can be used for automated test case generation is difficult. We describe a new approach to reverse engineer a model represented as structures called a GUI forest, event-flow graphs and an integration tree directly from the executable GUI. We describe “GUI Ripping”, a dynamic process in which the software’s GUI is automatically “traversed” by opening all its windows and extracting all their widgets (GUI objects), properties, and values. The extracted information is then verified by the test designer and used to automatically generate test cases. We present algorithms for the ripping process and describe their implementation in a tool suite that operates on Java and Microsoft Windows’ GUIs. We present results of case studies which show that our approach requires very little human intervention and is especially useful for regression testing of software that is modified frequently. We have successfully used the “GUI Ripper” in several large experiments and have made it available as a downloadable tool.

The introduction and A description of what the research was and what the researchers did

Graphical user interfaces (GUIs) are one of the most important parts of today’s software [13]. They make software easy to use by providing the user with highly visual controls that represent everyday objects such as menus, buttons, lists, and windows. Recognizing the importance of GUIs, software developers are dedicating large parts of the code to implementing GUIs [12]. The correctness of this code is essential to the correct execution of the overall software. A popular technique to detect defects in software is testing [3, 2, 23]. During testing, test cases are created and executed on the software. Test cases may either be created manually by a tester [10, 27, 8] or automatically by using a model of the software derived from its specifications [20]. In all our work to date [20, 17, 21, 16, 18, 19, 15, 12, 14], we have observed that software specifications are rarely in a form to be used for automated GUI testing. GUI testing requires that test cases (sequences of GUI events that exercise GUI widgets) be generated and executed on the GUI [13]. However, currently available techniques for obtaining GUI test cases are resource intensive, requiring significant human intervention. The most popular technique to test GUIs is by using capture/replay tools [10]. When using a capture/replay tool, a human tester interacts with the application under test (AUT); the capture component of the tool stores this interaction in a file that can be replayed later using the replay component of the tool. Our experience has shown that generating a typical test case with 50 events for different widgets takes 20-30 minutes using capture-replay tools. A few automated GUI test case generation techniques have been proposed [20]. However, they all require creating a model of the GUI – a significant resource intensive step that intimidates many practitioners and prevents the application of the techniques. 
In this paper, we present a technique, called GUI Ripping to reverse engineer the GUI’s model directly from the executing GUI. Once verified by the test designer, this model is then used to automatically generate test cases. GUI ripping has numerous other applications such as reverse engineering of COTS GUI products to test them within the context of their use, porting and controlling legacy applications to new platforms [22], and developing model checking tools for GUIs [6]. For space reasons, in this paper, we will provide details relevant to the testing process. GUI ripping is a dynamic process that is applied to an executing software’s GUI. Starting from the software’s first window (or set of windows), the GUI is “traversed” by opening all child windows. All the window’s widgets (building blocks of the GUI, e.g., buttons, text-boxes), their properties (e.g., background-color, font), and values (e.g., red, Times New Roman, 18pt) are extracted. Developing this process has several challenges that required us to develop novel solutions. First, the source code of the software may not always be available; we had to develop techniques to extract information from the executable files. Second, there are no GUI standards across different platforms and implementations; we had to extract all the information via low-level implementation-dependent system calls, which we have found are not always well-documented. Third, some implementations may provide less information than necessary to perform automated testing; we had to rely on heuristics and human intervention to determine missing parts. Finally, the presence of infeasible paths in GUIs prevents full automation. For example, some windows may be available only after a valid password has been provided. Since the GUI Ripper may not have access to the password, it may not be able to extract information from such windows. We had to provide another process and tool support to visually add parts to the extracted GUI model. 
We use GUI ripping to extract both the structure and execution behavior of the GUI – both essential for automated testing. We represent the GUI’s structure as a GUI forest and its execution behavior as event-flow graphs and an integration tree [21]. Each node of the GUI forest represents a window and encapsulates all the widgets, properties and values in that window; there is an edge from node x to node y if the window represented by y is opened by performing an event in the window represented by node x, e.g., by clicking on a button. Intuitively, event-flow graphs and the integration tree show the flow of events in the GUI. We provide details of these structures in Section 2. We have implemented our algorithm in a software called the GUI Ripper. We use the GUI Ripper as a central part of two large software systems called GUITAR (http://guitar.cs.umd.edu) and DART (Daily Automated Regression Tester) to generate, execute, verify GUI test cases, and perform regression testing [15]. We provide details of two instances of the GUI Ripper, one for Microsoft Windows and the other for Java Swing applications. We then empirically evaluate the performance of the ripper on four Java applications with complex GUIs, Microsoft’s WordPad, Yahoo Messenger, and Winzip. The results of our empirical studies show that the ripping process is efficient, in that it is very fast and requires little human intervention. We also show that relative to other testing activities, ripping consumes very little resources. We also observe that automated testing would not be possible without the help of the GUI Ripper. The specific contributions of our work include the following.

  • We provide an efficient algorithm to extract a software’s GUI model without the need for its source code.
  • We describe a new structure called a GUI forest.
  • We provide implementation details of a new tool that can be applied to a large number of MS Windows and Java Swing GUIs.
In the next section, we present a formal model of the GUI specifications that are obtained by the GUI Ripper. In Section 3, we present the design of the ripper and provide an algorithm that can be used to implement the ripper. In Section 4 we discuss the MS Windows and Java implementations of the GUI Ripper. In Section 5, we empirically evaluate our algorithms on several large and popular software. We then conclude with a discussion of related work in Section 6, and ongoing and future work in Section 7.

A review of other papers relevant to the topic ( a literature review)

Moore [22] describes experiences with manual reverse engineering of legacy applications to build a model of the user interface functionality. A technique to partially automate this process is also outlined. The results show that a language-independent set of rules can be used to detect user interface components from legacy code. Developing such rules is a nontrivial task, especially for the type of information that we need for software testing. Systa has used reverse engineering to study and analyze the run-time behavior of Java software [26]. Event trace information is generated as a result of running the target software under a debugger. The event trace, represented as scenario diagrams, is given as an input to a prototype tool SCED [11] that outputs state diagrams. The state diagrams can be used to examine the overall behavior of a desired class, object, or method. Several different types of representations have been used to generate test information. Anderson and Fickas have used preconditions/postconditions to represent software requirements and specifications [1, 7]. These representations have been successfully used to generate test cases [24, 20]. Scheetz et al. have used a class diagram representation of the system’s architecture to generate test cases using an AI planning system [25]. There are various techniques used for testing GUIs [9, 12]. One of our earlier techniques makes use of specifications to generate test cases. In the PATHS [19, 16, 18] system we used an AI planner to generate test cases from GUI specifications. The PATHS system uses a semi-automatic approach requiring substantial test designer participation. Our GUI ripping technique is different in that we focus on generating the specifications automatically, thereby minimizing test designers’ involvement. Chen et al. [4] develop a specification-based technique to test GUIs. Users graphically manipulate test specifications represented by finite state machines (FSM).
They provide a visual environment for manipulating these FSMs. We have successfully used the GUI Ripper software in large GUI testing studies of our DART system [15]. The GUI Ripper was used to generate the GUI structure for several applications. Test cases and test oracle information (expected output) [17] were automatically generated from the extracted information.

the results of what they did and A conclusion

Automated testing of software that has a graphical user interface (GUI) has become extremely important as GUIs become increasingly complex and popular. A key step to automatically test GUI software is test case generation from a model of the software. Our experience with GUI testing has shown that such models are very expensive to create manually and software specifications are rarely available in a form to derive these models automatically. We presented a new technique, called GUI ripping, to obtain models of the GUI’s structure and execution behavior automatically. We represented the GUI’s structure as a GUI forest, and its execution behavior as event-flow graphs and an integration tree. We described the GUI ripping process, which is applied to the executing software. The process opens all the software’s windows automatically and extracts all their widgets, properties, and values. The execution model of the GUI was obtained by using a classification of the GUI’s events. Once the extracted information is verified by a test designer, it is used to automatically generate test cases. We empirically showed that our approach requires very little human intervention. We have implemented our algorithms in a tool called a “GUI Ripper” and have made it available as a downloadable tool.

A discussion about what the results mean

In the future, we will extend our implementation to handle more MS Windows GUIs, Unix, and web applications. We will also use the GUI Ripper for performing usability analysis of GUIs. It will also be extended for measuring specification conformance of GUIs.


Credibility Evidence

We are to search both Digital Citizenship and Virtualization Technology, then answer the following questions. This needs to be done three times for each subject.

  • How we found it
  • When it was written
  • Who it was written by (expert, undergrad student, …)
  • Where it was published or what type of ‘thing’ it is (book, article, blog)
  • What others have said about it (reviews)
  • Whether others have used the information in their own work (citations)
  • How it is written (style)

 

Digital Citizenship

http://elearning.tki.org.nz/Teaching/Digital-citizenship

  • how we found it
    • I typed ‘digital citizenship’ into Google, and this came up
  • when it was written
    • The page does not say when it was written, though it seems fairly current, so I would guess it was within the last year.
  • Who it was written by (expert, undergrad student,….)
    • The author is not clearly stated, but it appears Sean Lyons wrote it.
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • This is an article on a website, followed up with a short video
  • what others have said about it (reviews)
    • They keep their social media updated, but I haven’t seen any comments on their Facebook page
  • whether others have used the information in their own work (citations)
    • This is a website for individuals as well as groups, designed as an educational piece to be used by many people, so I would assume so
  • how it is written (style)
    • As a short blurb

 

http://core-ed.org/legacy/thought-leadership/ten-trends/ten-trends-2013/digital-citizenship

  • how we found it
      • I typed ‘digital citizenship’ into Google, and this came up
  • when it was written
    • This was written in 2013; I assume it has been revised since then to keep it up to date
  • who it was written by (expert, undergrad student,….)
    • I am uncertain who wrote this, as all it states is EDtalks, so I assume it was originally written by someone and then revised over and over again
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • This was originally written for the website, and because it’s an educational piece I would assume it is also available as a pamphlet
  • what others have said about it (reviews)
    • There seem to be no active reviews that I can see on the page
  • whether others have used the information in their own work (citations)
    • This is an educational site, so I assume the information is used in classrooms
  • how it is written (style)
    • In lists and paragraphs.

 

http://www.digitalcitizenship.nsw.edu.au/parent_Splash/index.htm

  • how we found it
    • I typed ‘digital citizenship’ into Google, and this came up
  • when it was written
    • October 24, 2014, though I think this has been revised since then.
  • Who it was written by (expert, undergrad student,….)
    • Computer Fundamentals, Computer Science and IT Integrator from Camilla, GA
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • An article on the net, but as this is an educational piece I assume it is also printed as a pamphlet
  • what others have said about it (reviews)
    • Other articles speak very highly of this educational piece
  • whether others have used the information in their own work (citations)
    • This is an education piece, so I would assume it is used in the classroom
  • how it is written (style)
    • As an educational piece: a step-by-step guide for students.

 

 

Virtualization Technology

http://www.vmware.com/solutions/virtualization.html

  • how we found it
      • I typed ‘virtualization technology’ into Google, and this came up
  • when it was written
    • This is not stated, but because it is an article written by a major VM vendor I would assume it is very recent, within the last year
  • who it was written by (expert, undergrad student,….)
    • Judging by what has been said and the way it has been said, this was written by an expert, but there is no name associated with the page
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • This was written as a blog on the web page, so it is an article
  • what others have said about it (reviews)
    • This has been locked down, and no one has been able to comment on the page.
  • Whether others have used the information in their own work (citations)
    • I do not know, though unless they run educational classes on their product I would say no.
  • How it is written (style)
    • A blog on the VMware page.

 

http://searchservervirtualization.techtarget.com/definition/virtualization

  • how we found it
    • I typed ‘virtualization technology’ into Google, and this came up
  • when it was written
    • The original date is not given, but the last revision of this page was in October 2016
  • who it was written by (expert, undergrad student,….)
    • This was written by Margaret Rouse; she has no qualifications listed that I can see.
  • Where it was published or what type of ‘thing’ it is (book, article, blog)
    • An article on the TechTarget website
  • what others have said about it (reviews)
    • This is an insightful website that has a lot of information.
  • Whether others have used the information in their own work (citations)
    • This has no citations that I can find
  • how it is written (style)
    • This is a blog written on TechTarget

 

https://software.intel.com/en-us/articles/the-advantages-of-using-virtualization-technology-in-the-enterprise

  • how we found it
    • I typed ‘virtualization technology’ into Google and this came up
  • when it was written
    • March 5, 2012
  • who it was written by (expert, undergrad student,….)
    • Thomas Wolfgang Burger is the owner of Thomas Wolfgang Burger Consulting. He has been a consultant, instructor, writer, analyst, and applications developer since 1978
  • where it was published or what type of ‘thing’ it is (book, article, blog)
    • Article Written for Intel
  • what others have said about it (reviews)
    • There is a forum people can write to for information, but there are no specific comments for this web page
  • whether others have used the information in their own work (citations)
    • They have not
  • how it is written (style)
    • This has been written as an Article.

IoT Computers

I have decided to build an IoT device. How am I going to create this? First of all, I need to think about miniature computers; that is my next idea for a blog. What miniature computers are out there? What are their downfalls? Which one would be the best value for money for me, as a student? So see below some miniature computers that I have looked into.

 

Omega2

This is the first miniature computer; Mark talked about it during class in semester one. I first looked at the Indiegogo site: they have raised over $914,000, which is impressive when you find out the base product is only $5. What’s more impressive, they did this in under seven months.

Now to look at the actual product: it is Linux based and the size of a cherry. It runs a lot of different languages, from Ruby, C++, Python and PHP to others.

“The Omega2 is seamlessly integrated with the Onion Cloud. This allows you to remote control it from anywhere in the world with our intuitive Web UI or RESTful APIs. You can also view the status of your Omega2 in real-time, and deploy software updates to it when it is on the field.”

There are a few different docks and expansion packs that add to what you can do with it; they range in price from $5 to $500.
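The Onion Cloud quote above mentions RESTful APIs for remote control. As a rough sketch of what driving a GPIO pin over such an API might look like (the endpoint, device ID, and payload shape below are invented placeholders for illustration, not Onion's documented API):

```python
import json

# NOTE: this base URL and payload shape are hypothetical placeholders,
# not the real Onion Cloud API.
CLOUD_BASE = "https://cloud.example-onion.invalid"

def build_gpio_request(device_id, pin, value):
    """Build the URL and JSON body for a hypothetical 'set GPIO pin' call."""
    url = f"{CLOUD_BASE}/devices/{device_id}/gpio/{pin}"
    body = json.dumps({"value": value})
    return url, body

url, body = build_gpio_request("my-omega2", 0, 1)
# The pair could then be POSTed with urllib.request; no request is sent here.
print(url)
```

In practice you would look up the real endpoints in Onion's documentation and POST the body with an HTTP client such as `urllib.request`.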

C.H.I.P.

“the world’s first $9.00 Computer.”

The website for this is fantastic, clearly well developed and is really user-friendly. It has a lot of good specs as shown below.

[Screenshot: C.H.I.P. specs]

It boasts over 100 games, and you can even learn how to make your own. They have a console bundle which includes C.H.I.P., HDMI DIP, Power Brick, a controller, and PICO-8! I am really tempted to buy this, maybe when my next pay comes through.

They have different products that you can buy; all items are on back order. After some research I found a page with a review comparing it to the Pi Zero, another cheap computer. Here is the link; I think it is an interesting read. The writer is clearly in favor of this product as opposed to the Raspberry Pi device.

Intel Compute Sticks

Intel has tried to join the IoT devices. This looks seriously cool: very limited in what it can do, but it reaches a specific market. It seems to be a streaming device, something you plug into the back of the TV and can use like a Chromebook. The Intel Compute Stick is the size of a pack of gum and can transform any HDMI display into a complete computer. This is a really cool device but is not what I need.

 

 

 

Raspberry Pi

This is the original mini computer; there are four different versions, each with new additions. The newest version is a mere $5, to compete with the new and cheaper computers. It has been specifically developed to be a training device for IoT. In the education part of the website there is a section for “Noobs” and a section for people with previous knowledge, called “Raspbian.”

On the website there are third-party OSes, from LIBREELEC to a weather station, which is interesting because I have not heard of these OSes yet. There are weekly blogs from the makers of Raspberry Pi.

This is the most easily accessible item out of all of these: it is not only available in NZ, there are also different versions. These are available from the reputable sellers shown below:

[Screenshot: list of reputable resellers]

Class notes; Blog; 6

Secondary research,

  • Makes use of information given; this draws on other information.
  • Great for anything

Observational Research  Sarah/Prerna

  • 2
  • 3

Exploratory Research      Simranjit /Lisa

  • 1
  • 2
  • 3

Case Study Research       David/Luke

  • 1
  • 2
  • 3

Experimental Research  Jaydon/Zuohao

  • 1
  • 2
  • 3

Discourse Analysis         Cody/Jared

  • 1
  • 2
  • 3

Action Research              Yanglong/Kai

  • 1
  • 2
  • 3

Design Science                Toby/JingBo

  • 1
  • 2
  • 3

Argumentative                Alex/Dejan

  • 1
  • 2
  • 3

Interview                         Bhoj/Weilong

  • 1
  • 2
  • 3

Survey                              Harry/Becca

  • 1
  • 2
  • 3

Randomised Controlled Trials  Jonathan/Brandon

  • 1
  • 2
  • 3

Meta-Analysis               Katie/Amber

  • 1
  • 2
  • 3

Focus Groups                 Sihan/James

  • 1
  • 2
  • 3

 

Meta-Analysis

What is meta-analysis? Here I will try to explain what it is. Here I go…

“Meta-analysis is the statistical procedure for combining data from multiple studies. When the treatment effect (or effect size) is consistent from one study to the next, meta-analysis can be used to identify this common effect. When the effect varies from one study to the next, meta-analysis may be used to determine the reason for the variation.”  (MetaAnalysis.Php)

What is it? (Short description of how it works)

So this is a different type of analysis: it takes the information from various studies and pools it together into one output. The output combines the individual answers into one overall statistical estimate.
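As a rough sketch of that pooling step, here is the standard fixed-effect (inverse-variance) method in Python; the study numbers are made up for illustration:

```python
def pool_fixed_effect(effects, variances):
    """Fixed-effect meta-analysis: an inverse-variance weighted average.

    More precise studies (smaller variance) get larger weights, and the
    pooled estimate is the weighted mean of the individual effects.
    """
    weights = [1.0 / v for v in variances]
    total = sum(w * e for w, e in zip(weights, effects))
    return total / sum(weights)

# Three equally precise studies: the pooled effect is just the plain mean.
print(pool_fixed_effect([0.4, 0.6, 0.5], [0.04, 0.04, 0.04]))  # 0.5
```

When one study is more precise than the others, its result pulls the pooled answer toward it, which is exactly the "common effect" idea in the quote above.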

What kinds of questions/problems might it be useful for?

I believe this could be helpful for, say, a government study on the housing crisis, e.g. population and household usage. They could take the number of people in a house and get an average of that; they could also look at how much power and water an average house consumes and then average it down to each person. From there they could make a solution based on what each person needs, instead of, for example, turning off water to a particular place.
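That per-person averaging is simple arithmetic; a minimal sketch with made-up household figures:

```python
# Hypothetical survey data: (occupants, weekly water use in litres) per household.
households = [(2, 1400), (4, 2400), (1, 900)]

total_people = sum(people for people, _ in households)
total_water = sum(litres for _, litres in households)

# Pool across all households, then average down to each person.
per_person = total_water / total_people
print(round(per_person, 1))  # 671.4 litres per person per week
```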

How could it be used in IT research  (try to think of an example)?

I personally believe this could be used to gather, and then build on, knowledge of how people use technology, so we could learn what more to create and make money from.

What are the strengths of the approach?

This gives an excellent, unbiased overview of the field you are looking at. Because it pools a lot of information from many studies into one, a single biased opinion carries far less weight.

What are the weaknesses of the approach?

This doesn’t allow for one-off differences. For example, if one particular group within the analysis needed something completely different, the meta-analysis would average that difference away.

 

Blog; Class notes; 4

Credibility and Validity

Does the epistemology correlate with credibility?

What is the impact of ‘fun’ on work?

 

Does the epistemology match the ontology?

Was the method followed sufficiently rigorous?

Who did the work/research, and do you see them as credible?

Where and when was the work made public?

 

  1. So for each question below you need to decide
    1. Is this a good question? (problems might be e.g. assumptions, ambiguity, too broad)
    2. What kind of knowledge/evidence would be needed to answer it? (e.g. numbers, words, both, other…?)
    3. How would I gain/gather that knowledge/evidence? (e.g. interviews, survey, experiments)
    1. Which of these two laptops gives the best performance?

      • This question relies on the answerer’s bias toward the laptops themselves. It is also far too broad: in what respect are they asking? They need to add more of a definition when they ask what kind of performance.
      • This requires the answerer to have quite an extensive knowledge of the products, which in most cases they would not.
      • Reviews, both online and via friends
    2. Are virtual worlds like Second Life or Minecraft useful for teaching?

      • Assumptions and bias would get in the way of this question.
      • The answerer would need previous knowledge of the virtual world to give an accurate answer, and would furthermore need experience in the topic.
      • I would gain knowledge from experimenting myself, but not only that, I would also gain knowledge from someone who teaches in the virtual world.
    3. Why don’t many school students (16-18yrs old) choose to study IT at Polytechnic or University?

      • No, this is a bad question: it is too broad and would gather too much information. It needs to be defined.
      • There needs to be census information about what school-age leavers do after school.
      • I would approach the people who gather that information (though this could be a legal issue).
    4. Which ISP in NZ gives the best value for money?

      • This is a bad question; it is again too broad and needs to be defined.
      • They would need previous knowledge of all the different aspects of the ISPs themselves: plans, internet speed, etc.
      • Surveying the different companies, asking about speed, price, connectivity and downtimes.
    5. How do I feel about trying to work with slow internet speeds?

      • This is extremely biased, and does not elaborate very far. But is that needed?
      • This requires the user to know how they feel about the system.
      • Ask myself.
    6. What are the main security issues associated with ‘cloud computing’

      • The answer could be vastly different depending on different biases.
      • A vast array of knowledge on all cloud systems would be needed.
      • This would require decades of research.

    Five things; I can’t see the board, Becca is blocking it.

  2. What kind of
  3. Example of how it’s used in IT
  4. What is the structure

I can’t keep up and Becca is purposely blocking the board, so I must look later.