Forum Trends in EFL technology and educational coding: A case study of an evaluation application developed on LiveCode

The availability of user-friendly coding software is increasing, yet teachers might hesitate to use this technology to develop applications for educational needs. This paper discusses studies related to technology for educational use and introduces an evaluation application under development. Through questionnaires completed by student users and open-ended discussion between the developers, the feasibility of educators being involved in developing a new evaluation application is discussed. Results showed that user satisfaction with the application was high overall, although application updates incorporating feedback from users should be considered. Developing an evaluation application was achieved, and recommendations on design and software development are provided.

to teachers' beliefs, which are deeply rooted and more difficult to change (Ertmer, 1999).
Since 2000, mext has also encouraged curricula that develop presentation skills in classrooms, in part to provide students and future workers with skills thought necessary to keep pace with external pressures (i.e., globalization). With large class sizes, presentation evaluations are likely to be time-consuming, and language educators might feel challenged to effectively evaluate learners' presentations in an l2. Thus, with the vision that technology could be used to evaluate presentations more efficiently, one language instructor and one student programmer collaborated to develop presentation evaluation software for educational purposes.

EFL technology use
E-literacy is important for English language educators in order to adapt to increasing technology use in this digital world (Kumaravadivelu, 2013), and educators might need to be able to evaluate and correctly execute new programs (Robb, 2006; Warschauer, 2002).
Classroom-based ict might include learning management systems (e.g., Moodle, https://moodle.com/; Blackboard, blackboard.com), social network sites, and cloud-based services (e.g., Google Docs, https://www.google.com/docs/; Dropbox, www.dropbox.com). Social network systems might require setup to maintain a certain level of privacy for users and thus might not be suitable in educational settings (Thongmak, 2013). Secure, alternative, and private social networks such as Edmodo (www.edmodo.com) are designed for educational use (Duncan & Chandler, 2011; del Val, Campos, & Garaizar, 2010). Thongmak (2013) investigated the technology acceptance model (Davis, 1989), namely tam3 (Venkatesh & Bala, 2008), and found that the benefits of using ict, educating learners on selected features, and encouraging online collaboration should be emphasized. Thus, one aspect of the current paper is to highlight the benefits of ict use, including, if possible, learning to code for language educators.
Learn to code
Amiri (2000) argued that ma tesol (Teachers of English to Speakers of Other Languages) courses should include training for language teachers as both consumers and producers of computer-based materials. The University of Leeds and the University of Brighton provide such courses, titled ma in Teaching English to Speakers of Other Languages and Information and Communication Technology and ma tesol with ict, respectively; however, such courses remain rare. Moreover, Chapelle and Hegelheimer (2004) indicated that teachers should have basic programming skills. Today, there is thought to be increasing interest in programming and coding to benefit education in general. Coding allows for increased creativity, designing, and remixing in the pursuit of individual goals (Resnick, 2012). Several new free user-friendly applications for coding exist, and various application development platforms are accessible (Luterbach & Hubbell, 2015; Fox, 2013). Luterbach and Hubbell compared the cost, ease of use, stability, and adaptability of 13 platforms and discussed the diversity of applications on various online stores in a section on "Motivation for learning to develop Apps".
One trend for efl instructors is learning basic programming, and this trend is boosted by the evolution of gui-friendly, online programming platforms such as Scratch (http://scratch.mit.edu/; Resnick et al., 2009); Wix (www.wix.com), a desktop publishing-like website builder; and LiveCode (www.livecode.com), programming software with English-like syntax. The commonality of these platforms is that sophisticated trial versions are accessible, free, and user-friendly, and include professional, creative templates and online instructional videos. Non-professionals can therefore develop products with minimal effort, tailored to users' needs.

Presentations in the classroom
Educators incorporating learner presentations in the classroom must evaluate and provide feedback efficiently and effectively. In one study on providing customized feedback to learners for phonetic evaluation of recorded speech, it was noted that instructors face a challenge in manually focusing on individual strengths and weaknesses for large class sizes in limited time; thus, the development of an application was suggested as a possible solution (Uehara, 2014, February). Furthermore, Uehara (2012) indicated that automated messages could reduce the teacher workload for repetitive tasks. Individual feedback often involves the replication of standard responses to learners in different combinations, and a common form of evaluation is the use of rubrics. Rubrics provide uniform assessments and are a constructive method to communicate expectations and performance to learners (Petkov & Petkova, 2006). Technology can transform assessment practices (Quellmalz & Pellegrino, 2009), and digitalized delivery of evaluations is increasingly popular (Buzzetto-More & Alade, 2006); thus, feedback for each individual learner could be achieved by compiling pre-determined automated messages in different combinations. The use of ict could therefore greatly reduce the repetitive workload for language teachers, save time, and allow tailored feedback for each individual learner. The technology should then allow instructors to expend their time and energy on other aspects of language teaching.
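The idea of compiling pre-determined messages into individualized feedback can be sketched in a few lines of code. The criterion names, score bands, and message texts below are hypothetical illustrations, not the wording of any actual rubric:

```python
# Sketch: compile individualized feedback by combining pre-determined
# messages keyed by rubric criterion and score band. Criterion names and
# message texts are hypothetical examples.
FEEDBACK = {
    "Eye contact": {"low": "Look up from your notes more often.",
                    "high": "You maintained good eye contact."},
    "Time limit":  {"low": "Practice keeping within the time limit.",
                    "high": "You spoke within the time limit."},
}

def compile_feedback(scores):
    """Map each criterion's score (0-6) to its pre-written message."""
    lines = []
    for criterion, score in scores.items():
        band = "high" if score >= 4 else "low"   # scores of 0-3 read as negative
        lines.append(f"{criterion} ({score}/6): {FEEDBACK[criterion][band]}")
    return lines
```

Each learner then receives a different combination of the same standard responses, which is precisely the repetitive work the paragraph above suggests automating.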

Rationale
With the increased accessibility of free real-time coding platforms, and an educational goal of providing tailored feedback on presentation skills to learners in large efl classes, how feasible would it be to develop an application for language education evaluation with the support of a graduate school student programmer? What precautions and advice can the developers provide to language educators who are considering developing their own applications for the same or a similar purpose? With these concepts in mind, the following research questions are addressed in this paper.

Research questions
rq1) What presentation evaluation system can be developed by a non-programmer efl language instructor using a free real-time coding program with the aid of a graduate school programmer? rq2) How useful do students perceive the evaluation software to be, and what are their opinions of it?

Method for research question one
Between September 2014 and June 2015, one efl instructor at a national university in Japan with no previous programming experience (henceforth, instructor) and a doctoral candidate in the field of programming (henceforth, programmer) conducted weekly sessions to develop an efl educational application. The instructor was in charge of content and design, and the programmer was in charge of design and coding using LiveCode (LiveCode Ltd., n.d.). The instructor and programmer met multiple times each week for approximately two hours per session to discuss the hardware and platform to be used, the design, content, and functions, and to program and test the work in progress. The session aims were not independent of each other, and as new functions were added, the design and content were modified. Learners beta-tested the application during class, in a manner that did not disturb the objectives of the course, and the application was subsequently updated for function, design, and content accordingly. The instructor and programmer spent a total of 169 hours meeting, discussing the design, and testing the product; the programmer spent 81 hours learning the programming language and 338 hours programming. The programmer and instructor spent approximately 100 hours drafting a manual describing the development objectives and usage. The application was tested in real time in the classroom 96 times with 96 participants, of whom 66 responded to surveys related to the application, a response rate of 68.75%.
The platform chosen was LiveCode, a free programming language used to develop cross-platform applications for common mobile and desktop operating systems. The programming language can run and compile in real time, thus simplifying the coding process. LiveCode has a gui with drag-and-drop functions and English-like language to enable simple coding and the creation of original applications.
For code to be recognized by the computer's cpu, it must be converted from source code (a high-level language) into machine code (a low-level language), a process referred to as "compiling" the code. An Apple Inc. iPad was selected as the hardware due to its high specifications compared to competing products and its ease of interfacing with existing computer hardware, which both the instructor and programmer owned prior to this study. In order to develop using LiveCode, the complete development software was downloaded from http://livecode.com/download (LiveCode Ltd., n.d.), and instructions on how to develop for the iPad platform were reviewed thoroughly (http://livecode.com/guides-documentation/). The LiveCode support forum and related free online videos were also consulted, and no other programmer was involved in the development. The programmer had no knowledge of LiveCode prior to the inception of this study. The presentation evaluation application is described in the Results for research question one.

Method for research question two
Two groups of learners (N = 66) responded to an online questionnaire which described the purpose of this study and contained a 5-point Likert scale, with responses ranging from 1 (strongly disagree) to 5 (strongly agree), for 22 statements related to the design, content, speed, form and frequency, perception of presentation improvement, and understanding of the evaluation criteria. Additionally, there was one Likert-scale item, with responses ranging from 1 (very dissatisfied) to 5 (very satisfied), for one statement on the overall evaluation of the application, and finally, an open-ended question requesting comments to improve the application. The survey was bilingual (Japanese and English), and students responded to the open-ended question in their l1 or l2. The first group of learners (Group 1) were second- and third-year science and engineering university students (n = 8) who attended a 15-week English course focusing on academic presentation skills during the spring or fall semester of 2015. Each student conducted one academic presentation of five to ten minutes, with an additional two to five minutes for questions and answers, and the instructor evaluated the students based on the evaluation criteria described in the Results for rq1. Students assigned as timekeepers announced the length of each presentation, and the instructor noted the time. The evaluation results were recorded using the application and sent to the students by email after each presentation. In November 2015, Group 1 responded to the online survey. The second group of learners (Group 2) were first-year students (n = 58) who attended a 15-week Academic Spoken English course during the fall semester of 2015. Each student conducted a three-minute two-sided argument presentation. In January 2016, the instructor evaluated the students based on the criteria described in Appendix A.
Evaluations were recorded using the application, and results were forwarded by email immediately following each student's presentation. This email included a link to the same online survey received by the students in Group 1. A summary of the responses is described in the results for rq2, and a survey sample including the average response value in parentheses is shown in Appendix B.

Results for research question one: about the application
The instructor and programmer developed a new learning application to support presentation evaluation. Tentatively named GradeMe, the application can send tailored feedback and advice to multiple students across multiple classes. The complete source code and templates required to run this version of the evaluation software on LiveCode version 7.0.5, together with "GradeMe version α", an in-house document providing detailed development instructions, are available upon request. The current version is described in the following sections.
The purpose of GradeMe is to evaluate the proficiency of efl learners' academic presentations conducted in English. The application is operated using the touch panel of an iPad mini. A student presenter is selected from a pre-loaded list, then the evaluator scores the presenter on a scale of 0 to 6 following the definitions for six presentation evaluation criteria. The evaluation results are displayed on a pie chart, and the positive results (skills performed favorably) and negative results (skills requiring improvement) are displayed in a simple color-coded format: evaluation criteria headings are displayed in green for positive results and red for negative results, and results can also be displayed with descriptive text. One photo can be taken during the evaluation. The evaluator can send the full descriptive evaluation result, an image of the pie chart, one photo taken during the presentation, and two points of advice by email. The email is created automatically by systematically retrieving the selected pre-determined feedback scripts for each score and evaluation criterion, thereby reducing the evaluator's workload in compiling individual feedback for the presenter, which in the past may have been done manually. The presenter can retrieve the feedback immediately following the presentation, if email is accessible, whereas in the past feedback would have been received in the next class session or at a later date. Feedback provided directly following a presentation is likely useful for the presenter to recognize which elements of presentation skills were performed well and which can be improved, and GradeMe enables the presenter to retain such feedback for future reference.
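GradeMe itself is written in LiveCode, but the email-assembly step described above can be illustrated with a short sketch. The class-list structure, field names, and the percentage formula below are assumptions for illustration, not the application's actual code:

```python
# Illustrative sketch of assembling the automated feedback email.
# Recipient details come from a pre-loaded class list, and the body is
# compiled from pre-determined feedback scripts keyed by criterion and
# score. All names and the percentage formula are assumptions.
CLASS_LIST = {"s001": {"name": "Taro Yamada", "email": "taro@example.ac.jp"}}

def build_email(student_id, scores, scripts):
    student = CLASS_LIST[student_id]
    body = [f"Dear {student['name']},", "", "Presentation evaluation:"]
    for criterion, score in scores.items():
        body.append(f"- {criterion}: {score}/6. {scripts[criterion][score]}")
    # Assumed overall score: sum of criterion scores over the maximum possible.
    percent = round(100 * sum(scores.values()) / (6 * len(scores)))
    body.append(f"Overall score: {percent}%")
    return {"to": student["email"],
            "subject": "Presentation feedback",
            "body": "\n".join(body)}
```

The evaluator never types the message; the combination of scores selects the pre-written scripts, and the result can still be edited before sending, as the application allows.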

Setup
To start using the application, an iPad mini and a computer with Microsoft Excel were prepared. A spreadsheet (Template A) was prepared with the following details: the course name and semester, and each student's number, name in Japanese, name in romaji, and email address (see Appendix C). A second spreadsheet (Template B) was prepared with empty fields for evaluation criteria, criteria definitions, feedback, and advice (see Appendix D). The texts in Appendices E1, E2, E3, and E4 were then inserted into the relevant sections of Template B and saved as an Excel file. Due to proprietary rights, LiveCode cannot load files in Excel's native formats (e.g., .xls or .xlsx); these files must therefore be exported from Excel to a simple format that LiveCode can read, such as .txt. Files are uploaded to the application (iPad mini) by drag and drop using iTunes. In this way, users can upload class lists, feedback, and advice as required; however, one restriction is the fixed matrix of six evaluation criteria and the 7-point descriptor scale.

Application user instructions

Class and student selection. Select Spring or Fall, then select a class from the pull-down menu and select a student. When the student information is highlighted light blue, click the Grade button to evaluate the selected student (see Figure 3). Note: currently, photos can only be loaded manually using the LiveCode editor, as a server connection to host data would need to be implemented to enable this function.

Scoring. The evaluator scores the presenter on each of the six criteria; a sample evaluation is shown in Figure 4, where the final score is 77%. The percentage value can be disabled by clicking on the Disable Evaluation Percentage box. The pie chart is divided into six sections, which are filled in more with higher scores and less with lower scores (see Figure 5). Variations of the feedback display are: highlight the criteria titles of good results in green and poor results in red by clicking on the Well done or Try again button (see Figure 6); display all feedback text in one window by clicking on the Feedback button (see Figure 7); or display feedback text for selected results only (see Figure 8).

Send results by email. Click on "Send Feedback" in Card #4 to open an email. GradeMe creates an email message to the student selected in Card #2; the email address and name are retrieved from the pre-set class list. The automated message can be edited before sending if required, and could also include an image of the pie chart in Card #4 with or without the green or red highlights. It is recommended to send an image highlighting the good points or the bad points depending on the focus at that time. A sample email message for a presenter evaluated with scores of Opening statement 6, Structure and Originality 3, Language 4, Interaction 5, Confidence 5, and Time limit 6 is shown in Figure 9.

Evaluation criteria, definitions, descriptors, and advice. The evaluation criteria, definitions, descriptors, and advice for the academic presentations were developed by Uehara (2015, February) and incorporate feedback concepts adopted from the ielts (International English Language Testing System, https://www.ielts.org/) grading scale, where standardized descriptor feedback is provided. It also cites sections of the Common European Framework of Reference (cefr) for "spoken fluency" and "grammatical accuracy" at levels a1, a2, b1, and b2 (Council of Europe, 2001). From these, six criteria items were identified as necessary for academic presentations in an efl class for science and engineering students.
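As noted in the Setup section, LiveCode cannot load Excel's native .xls/.xlsx formats, so the class-list and feedback templates must first be exported to plain text. A standard-library sketch of one such conversion, rewriting a CSV export from Excel as a tab-delimited .txt file (file names and field layout are illustrative):

```python
import csv

def csv_to_txt(csv_path, txt_path):
    """Rewrite a class list exported from Excel as CSV into a
    tab-delimited .txt file that an app like GradeMe could load."""
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(txt_path, "w", encoding="utf-8") as dst:
        for row in csv.reader(src):
            dst.write("\t".join(row) + "\n")

# e.g., csv_to_txt("class_list.csv", "class_list.txt")
```

In practice the export can also be done directly from Excel's "Save As" dialog; the point is simply that the application reads a delimiter-separated text file rather than a proprietary spreadsheet format.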
With GradeMe, a score of 0-3 on any particular criterion is considered negative, meaning the presenter requires more training and practice, while a score of 4-5 means the presenter is doing well but there is room for improvement. Comparative adjectives are used to specify the competence range: 0 = the criterion has not been achieved; 1 = greatly lacking the described criterion; 2 = lacking the described criterion; 3 = slightly lacking the criterion; 4 = achieved the criterion but with room for improvement; 5 = achieved the criterion with slight room for improvement; 6 = achieved the criterion. In addition to the score description, two points of advice are provided as feedback. The advice is phrased in the imperative to indicate what skills should be maintained and what skills the speaker should improve. The application retrieves one piece of advice from the highest-scoring criterion and one from the lowest-scoring criterion. If the values are equal, the application prioritizes criteria in the following order: Language, Interaction, Confidence, Structure & Originality, Time limit, and finally, Opening Statement. The grading score competence descriptions for these six criteria are shown in Appendix E.
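The advice-selection rule just described (one piece of advice from the highest-scoring criterion, one from the lowest, with ties broken by a fixed priority order) can be sketched as follows. GradeMe implements this in LiveCode, so the Python below is only an illustration of the logic, with hypothetical function names:

```python
# The fixed tie-break priority order described in the text.
PRIORITY = ["Language", "Interaction", "Confidence",
            "Structure & Originality", "Time limit", "Opening Statement"]

def advice_criteria(scores):
    """Return the (highest, lowest) scoring criteria; on tied scores,
    the criterion that appears earlier in PRIORITY wins."""
    rank = {c: i for i, c in enumerate(PRIORITY)}
    highest = max(scores, key=lambda c: (scores[c], -rank[c]))
    lowest = min(scores, key=lambda c: (scores[c], rank[c]))
    return highest, lowest
```

For the sample scores described for Figure 9 (Opening Statement 6, Structure & Originality 3, Language 4, Interaction 5, Confidence 5, Time limit 6), the two criteria tied at 6 are resolved by the priority order, so the positive advice comes from Time limit and the corrective advice from Structure & Originality.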

Results for research question two: student surveys
On the 5-point Likert scale, participant response averages are indicated in parentheses where applicable (see Appendix B). Student comments were randomly marked s1, s2, and so forth, to mean student 1, student 2, etc., where s1 to s8 were from Group 1 and s9 to s66 were from Group 2. Responses to the open-ended statements were in English or Japanese; responses translated into English are marked l1 where applicable. Overall, for the 23 Likert-scale items, students endorsed GradeMe very positively (4.21). Participant responses to statements related to the overall design were positive (4.29); participants found the evaluation score easy to understand (4.29), and were on the positive side about whether the photo helped them check where to stand (3.91), their facial expressions (3.88), and their gestures during their own presentations (3.86).
The students evaluated for presentations were attending seminars where the aim was to teach English through English. This was indicated at the start of the course; however, multiple students still wished to receive feedback in Japanese: "I wanted to see the evaluations in Japanese" (s16 l1). The participants did, however, understand the evaluation and advice (4.30), and while there was a desire for the feedback to be in their l1 (3.71), they were equally positive about the feedback in English (3.71). Most participants were positive about receiving evaluations and advice instantly (3.95), were very content that the results were sent to their own email (4.52) and that they were the only ones able to view the results (4.55), and they strongly endorsed receiving similar feedback on more than one occasion (4.24). For items related to perceived improvement in presentation skills, although participants responded that they knew how to improve their presentation skills without receiving some form of evaluation and advice (4.08), as a result of receiving the feedback the participants perceived that they knew how to improve their own presentation skills (4.30), knew what skills to work on (4.20), knew what skills they were doing well (4.03), and felt this would help improve their presentation skills (4.30). Responses also showed that receiving the feedback instantly helped presentation improvement (4.03). One student said, "It was good to receive evaluation results instantly and the description rather than scores was refreshing…" (s18 l1).

Overall evaluation and open-ended responses
There were 54 statements from the participants in response to the open-ended question, and four categories emerged: design, technical issues, content, and positive comments. If one statement included comments relevant to multiple categories, it was counted once for each category. For example, s34 wrote, "I was able to see what I was lacking (for academic presentations)" in the l1; hence this statement was counted as 1 for content and 1 for positive comments.
Another respondent wrote, "It was helpful to receive feedback in English. I would appreciate receiving demo videos" (s59 l1); hence this statement was counted as 1 for content, 1 for design, 1 for technical issues, and 1 for positive comments. There were 14 comments related to the application design, eight related to technical issues, 12 related to content, and 33 positive comments, while 11 were interpreted as having no comment.
For design, one respondent found it difficult to understand the graph (s14). There was overlap between statements related to design and technical issues, where a couple of learners thought a traditional radar chart would be an improvement (s58). One respondent wished to see a class average (s31), though note that this would not be feasible when sending individualized evaluations following each presentation; another wanted to view scores on a longitudinal scale (s3). Three students desired either a demo video or a video of their own presentation (s15, s17, s59), and two mentioned that they did not receive comments immediately (s41, s47); it was later found that this was due to either network issues or an incorrect email address. One respondent said that the photo was not viewable on his smartphone (s26).

Opinions related to the content were split: one subgroup was satisfied with the clarity of the automated feedback and found that it helped them to understand what they lacked in presentation skills (s12, s18, s25, s27, s34, s59, s60, s64), while another subgroup wanted the feedback to be in their l1 (s4, s16, s50, s57). Despite the goal of the instructor and programmer to create general, automated feedback, two learners raised personalization as a concern and requested feedback directed more specifically to the student (s18, s33). Finally, one respondent was overwhelmed with the length of the feedback and wrote, "please more conpact" (sic) (s9). Of the 33 positive comments, 24 respondents simply wrote that they thought the application was good, easy to understand, innovative, or felt technologically advanced, while others were more descriptive, noting that it was a good application because "the previous systems which use paper will take more time than this one" (s19), "evaluation is very detail where I should improve. 
I think that point is important and want them to continue" (sic) (s60), and others wanted to receive this type of feedback more frequently (s39, s40) noting, "This was a good system. I wish it was used in the spring semester as well." (s39 l1).
Finally, the overall evaluation from the participants was very positive (4.21); one response was, "I thought this application is highly polished" (s30), while another was, "it was easy to see what changes need to be made, and so (I think) it is a very useful system" (s27 l1).

Discussion
The application itself is in working condition, as shown in rq1, and it is apparent from the survey results in rq2 that the majority of the participants were satisfied with the application; however, the current application design remains limited. The evaluation criteria and evaluation parameters are not variable, and each evaluation criterion is weighted equally. Structure and Originality should ideally be separate criteria; however, due to the limitations of the free version of LiveCode, and to program efficiently, six evaluation parameters were a feasible solution considering the time and funds dedicated to this project. The current evaluation criterion descriptions were constructively criticized on the grounds that lower-level efl learners might not understand the language, and that a simpler form could be useful. To address this, users can incorporate their own evaluation criteria and, of course, their own class lists, so customization of the application is feasible within the fixed template, though not outside it. The developers resolved the issue of the time lag in learners receiving feedback (Uehara, 2012), and the next step would be to consider solutions to store and retrieve information for multiple evaluations per presenter to enable longitudinal feedback. In addition, student observers and the presenter do not see the evaluator's selections in real time; the benefits of digitally shared viewing of the evaluation process and evaluation results should be considered. The implementation of a server connection to host data would enable record keeping and real-time updates that can be shared amongst the class near-synchronously, and this is being considered for the future. GradeMe should also be further tested in class, and studies on not only student users but also teacher users should be compiled in order to inform future updates.

Conclusion
This project was a collaboration between a language teacher with no programming skills (in charge of content and design) and a doctoral student programmer (in charge of programming and design) with a vision to create an application enabling customizable, concise feedback that can be sent instantly to learners. The recommendations for future teacher-coders are to scaffold your own learning goals for coding and, if working with a programmer, to communicate your vision, draw images, consider screen real estate, and expect to spend considerable time and start-up funds on your first project. Competent teacher-coders could also create the first version and then have a more tailored version made by professional programmers. Test frequently and keep a record of the development in the form of a development and user manual; the educational goals will become clearer this way.
Coding might become the norm for some language instructors, who might code their own instructional tools. Teaching methods might change in the same way they have changed with the use of personal computers and desktop publishing software. The growth of educational technology and user-friendly coding technology is creating an environment for instructors to befriend technology inside and outside the classroom, whether by using existing technology or by creating their own, as has been demonstrated in this paper. Furthermore, the aim of having knowledge of coding is for educators to make better judgments, not necessarily for them to become professional programmers (Godwin-Jones, 2015). In the esl context, this might mean efficient integration of call resources. From a broader perspective, understanding code implies an understanding of the internalization of methods, analysis, and the critical thought process that form the foundation of programming, and this way of thinking might help language educators apply these skills to learn or integrate new technology for instructional use. However, "technologies, by themselves, will not give us a place in heaven" (del Val, Campos, & Garaizar, 2010), and understanding the basic concepts of technology and coding might bring light to a possibly more seamless, or at least less bumpy, journey of technology use in education.

Acknowledgements
The application development was partially achieved through funds from the Support Office for Female Researchers and the Office of Education for Practical Communication at the University of Electro-Communications. We also thank Jean-Pierre Richard and the two anonymous reviewers for their many invaluable comments on earlier versions of the manuscript.

Criteria 4. Delivery
Delivery refers to the performance of the presentation in terms of having an attractive opening statement, fluency, and interaction. The speaker should not rely strongly on printed notes or on what is written on visual aids such as PowerPoint slides.

Criteria 5. Time Limit
The speaker should speak within the time limit and not go over the specified time.
The speaker should respond in full sentences in a manner that answers the questions from the audience fluently.

Open-ended question 23. Please provide any comments that may help to improve the "GradeMe" application. No word limit.

Descriptions for evaluation criterion, criterion definition, feedback and advice
Evaluation Criterion: Opening Statement Criterion Definition: An opening statement is the statement made by the presenter at the start of a presentation. The statement can be used to attract the audience to the presentation and to set the scene. An opening statement is also referred to as the hook because of its nature to attract the audience. The statement may include a citation to an academic paper or information demonstrating background knowledge.

Feedback and Advice for Opening Statement

Feedback
Score 0: There is no opening statement.
Score 1: There is an opening statement, but it does not attract the audience.
Score 2: There is an opening statement, but it attracts only a few of the audience.
Score 3: There is an opening statement, and it attracts some of the audience.
Score 4: There is an opening statement with reference to an article related to the topic under consideration, and it attracts some of the audience.
Score 5: There is an opening statement with reference to an academic paper on the topic under consideration, and it attracts the majority of the audience.
Score 6: There is an opening statement with reference to an appropriate academic paper on the topic under consideration, and it attracts the majority of the audience.

Advice
Score 0, 1, 2, 3, 4, or 5: You need an opening statement with an appropriate reference to an academic paper on the topic under consideration. The statement should be used to attract the audience to the presentation and to set the scene. An opening statement is also referred to as the hook because of its nature to attract the audience. The statement may include a citation to an academic paper or information demonstrating background knowledge.
Score 6: You have an opening statement with an appropriate reference to an academic paper on the topic under consideration and have been able to attract the majority of the audience.
Evaluation Criterion: Structure and Originality
Criterion Definition: The structure of a presentation should be straightforward and logical, and the topic should be original. It should contain the following elements: a welcoming and informative introduction; a coherent series of main points presented in a logical sequence; and a lucid and purposeful conclusion. The speaker uses transitions to aid the listeners. Originality in a presentation refers to innovative, creative, and new ideas presented by the speaker, or at the very least ideas that are not plagiarized.

Feedback and Advice for Structure and Originality
(Points checked: coherent structure; use of transitions; a welcoming and informative introduction; main points presented in a logical sequence; a lucid and purposeful conclusion; an original and innovative topic.)
Feedback:
Score 0: There is no structure. The idea is plagiarized.
Score 1: The presentation is not straightforward, logical, or original and is missing the majority of the points above.
Score 2: The presentation is not straightforward, logical, or original and is missing many of the points above.
Score 3: The presentation is not straightforward, logical, or original and is missing some of the points above.
Score 4: The presentation is fairly straightforward, logical, and original but is missing some of the points above.
Score 5: The presentation is straightforward, logical, and original but is missing a few of the points above.
Score 6: The presentation is straightforward, logical, and original and covers all of the points above.
Advice:
Scores 0-5: The presentation should be straightforward, logical, and original and cover all of the points above. Check which ones you are missing and add them. The presenter should not rely on presentation notes and should paraphrase clearly and concisely when using presentation slides. Without notes: [CEFR C2 SPOKEN FLUENCY] Your aim is to express yourself at length with a natural, effortless, unhesitating flow, pausing only to reflect on precisely the right words to express your thoughts or to find an appropriate example or explanation. [CEFR C2 GRAMMATICAL ACCURACY] You should maintain consistent grammatical control of complex language, even while attention is otherwise engaged (e.g., in forward planning, in monitoring others' reactions).
Score 6: Your presentation is straightforward, logical, and original and covers all of the points above. You do not rely on presentation notes and you paraphrase clearly and concisely when necessary. Without notes, you: [CEFR C2 SPOKEN FLUENCY] can express yourself at length with a natural, effortless, unhesitating flow, pausing only to reflect on precisely the right words to express your thoughts or to find an appropriate example or explanation; and [CEFR C2 GRAMMATICAL ACCURACY] maintain consistent grammatical control of complex language, even while attention is otherwise engaged (e.g., in forward planning, in monitoring others' reactions).

Evaluation Criterion: Interaction
Criterion Definition: Interaction in a presentation refers to the level of contact between the speaker and the audience. At the basic level, it includes making eye contact with the audience. At an advanced level, it refers to the speaker interacting with the audience by, for example, asking questions, conducting polls, and offering other participation opportunities that encourage audience engagement.

Feedback and Advice for Interaction
Feedback:
Score 0: There is no eye contact or interaction with the audience.
Score 1: The presenter makes very little attempt to make eye contact with the audience and is constantly facing away from the audience or looking at the screen. The presenter does not attempt to interact with the audience, and the audience is not engaged. If there are no questions during a Q&A session, the presenter does not try to encourage questions in any way.
Score 2: The presenter makes little attempt to make eye contact with the audience and is often facing away from the audience or looking at the screen. The presenter attempts to interact with the audience somewhat, but the audience is not engaged. If there are no questions during a Q&A session, the presenter may try to encourage questions, but unsuccessfully.
Score 3: The presenter makes some attempt to make eye contact with the audience, but it is awkward. The presenter sometimes faces the audience, but this is limited, and s/he is often looking at the screen. The presenter attempts to interact with the audience somewhat, and a few are engaged. If there are no questions during a Q&A session, the presenter encourages questions, but unsuccessfully.
Score 4: There is eye contact with the audience. The presenter faces the audience, but usually in one direction. The presenter interacts with the audience, for example by asking questions or conducting polls; some respond and some are engaged.
Score 5: There is much eye contact with the audience. The presenter faces the audience often and scans the room. The presenter interacts with the audience, for example by asking questions or conducting polls; many respond and many are engaged.
Score 6: There is consistent eye contact with the audience. The presenter continuously faces the audience and scans the room naturally. The presenter interacts with the audience on many occasions, for example by asking questions or conducting polls, and most of the audience are fully engaged.
Advice:
Scores 0-5: You need to increase eye contact with the audience. You must face the audience and scan the room naturally. You should try to interact with the audience by, for example, asking relevant questions or conducting polls, and engage the audience.
Score 6: You have consistent eye contact with the audience. You continuously face the audience and scan the room naturally. You interact with the audience on many occasions, for example by asking questions or conducting polls, and the majority of the audience are fully engaged.
Evaluation Criterion: Confidence
Criterion Definition: Confidence in a presentation refers to how confident the speaker appears while presenting. At the basic level, it includes making eye contact with the audience with a certain level of comfort. At an advanced level, the speaker appears confident (not arrogant or presumptuous) about what he or she is presenting. A confident appearance during the presentation is necessary to convince the audience that the new and innovative idea is worthwhile.

Feedback and Advice for Confidence
Feedback:
Score 0: The presenter looks very unconfident or overconfident.
Score 1: The presenter looks unconfident.
Score 2: The presenter occasionally looks confident.
Score 3: The presenter looks somewhat confident.
Score 4: The presenter looks fairly confident.
Score 5: The presenter looks confident.
Score 6: The presenter looks very confident.
Advice:
Scores 0-5: You need to look more confident (or less overconfident).
Score 6: You look very confident.

Evaluation Criterion: Time Limit
Criterion Definition: The speaker should speak within the time limit and not over the specified time.
Feedback and Advice for Time Limit
Feedback:
Score 0: There is no speech.
Score 1: The presentation is far too short (far under the time limit).
Score 2: The presentation is too short (under the time limit).
Score 3: The presentation is slightly short or too long (slightly under or over the time limit).
Score 4: The presentation is within the time limit, but there is noticeable time left to expand and provide depth.
Score 5: The presentation is within the time limit, but there is still a little time left to expand and provide depth.
Score 6: The presentation is within the time limit and ends at the right time.
Advice:
Scores 0-5: You need to continue the presentation up to the time limit, but not over it. Try to time the presentation by expanding and providing more depth.
Score 6: The presentation was within the time limit and ended at the right time.
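Each rubric above follows the same pattern: a 0-6 score maps to a feedback string, while the advice collapses scores 0-5 into one remedial message and score 6 into a congratulatory one. The sketch below illustrates how such a rubric might be stored and looked up. It is a minimal illustration in Python, not the authors' LiveCode implementation; the `RUBRIC` layout and the `evaluate` helper are the writer's own assumptions, populated here with the Time Limit criterion only.

```python
# Illustrative sketch of a score-to-feedback lookup for one criterion.
# Not the "GradeMe" LiveCode code: the data layout and helper name are
# hypothetical, mirroring the Feedback/Advice pattern of the rubric above.

RUBRIC = {
    "Time Limit": {
        # One feedback string per score, 0-6.
        "feedback": {
            0: "There is no speech.",
            1: "The presentation is far too short (far under the time limit).",
            2: "The presentation is too short (under the time limit).",
            3: "The presentation is slightly short or too long.",
            4: "The presentation is within the time limit, but there is time to expand.",
            5: "The presentation is within the time limit, but a little time remains.",
            6: "The presentation is within the time limit and ends at the right time.",
        },
        # Advice pair: one message for scores 0-5, one for score 6.
        "advice": (
            "Continue the presentation up to the time limit, but not over it.",
            "The presentation was within the time limit and ended at the right time.",
        ),
    },
}


def evaluate(criterion: str, score: int) -> tuple:
    """Return (feedback, advice) for a criterion at a given 0-6 score."""
    entry = RUBRIC[criterion]
    advice = entry["advice"][1] if score == 6 else entry["advice"][0]
    return entry["feedback"][score], advice
```

In use, a call such as `evaluate("Time Limit", 2)` would return the Score 2 feedback together with the shared Scores 0-5 advice, so adding a new criterion only requires adding one dictionary entry rather than new branching logic.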