©2001 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.



Web Based Peer Assessment: Attitude and Achievement

Sunny San-Ju Lin, Eric Zhi-Feng Liu, and Shyan-Ming Yuan


Abstract - Web-based peer assessment uses Internet resources to facilitate contact between individuals and information, to assist brainstorming among individuals, and to generate more meaningful learning in higher education. This research focuses on the attitudes of computer science students toward Web-based peer assessment using NetPeas, a Web-based system implementing two-way anonymous peer assessment, as the interactive channel and management center. In an evaluation held in spring 1999, the study recruited fifty-eight computer science undergraduates enrolled in an operating systems class at a research university in Taiwan. Attitudes toward Web-based peer assessment were measured by a post-test questionnaire covering several affective components, for example, being "satisfied" or "unsatisfied" with Web-based peer assessment. The results demonstrated that 1) significantly more students favored this new learning strategy than opposed it and 2) students with a positive attitude outperformed those with a negative attitude. Whether a positive attitude toward Web-based peer assessment brings about higher achievement or higher achievement promotes a positive attitude, teachers must attend to students' subjective feelings to make Web-based peer assessment effective.


I. Introduction

The main concept behind the Networked Peer Assessment System (hereafter NetPeas) [1, 2] is that teachers ask students to undertake and submit project work over the Internet. The work is then discussed and graded by classmates, who offer suggestions for improvement. Each student is then asked to revise the original project work in line with the peer feedback. After several rounds of this interaction, the teacher gives each piece of project work a summative grade based on his or her professional knowledge. In this set-up, the teacher plays a role akin to that of the general editor of a journal, while the participating students act as authors (for their own project work) and as reviewers (for others'). The authors have previously shown that network-based peer review is an effective learning strategy and that NetPeas is a satisfactory system [1, 2, 9]. The present study examines whether a student's attitude (affective aspect) toward peer review and the NetPeas system is related to achievement (cognitive aspect).

Web-based peer assessment proceeds in seven steps (a code sketch of the review cycle follows the list):

1. The teacher posts a main theme and asks students to write a project work.

2. Within standard parameters, the student designs a homepage, proposes survey topics and explains the reasoning behind the selection of those survey topics.

3. Students finish the survey paper, and upload the project works to be assessed by peers through the NetPeas.

4. Students appraise, grade and make proposals concerning the project work handed in by six fellow classmates.

5. The system organizes the grading and proposals of the reviewers and informs the student who submitted the project work and the teacher.

6. The students revise the original project work in line with comments made.

7. Steps 4-6 are then repeated.
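For implementers, the cycle above maps naturally onto a few data structures. The following is a minimal Python sketch of steps 4-6; the names (Review, Project, run_round) are hypothetical illustrations and not identifiers from NetPeas itself:

from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer_id: str   # anonymous assessor
    grade: int         # 1 (poor) to 10 (satisfactory)
    comments: str      # suggestions for the next revision

@dataclass
class Project:
    author_id: str     # anonymous assessee
    content: str       # the survey paper
    reviews: list[Review] = field(default_factory=list)

def run_round(project: Project, reviewers: list[str]) -> None:
    """One round of steps 4-6."""
    for reviewer in reviewers:
        # Step 4: each assigned classmate appraises, grades, and makes proposals.
        project.reviews.append(Review(reviewer, grade=7, comments="Clarify section 3."))
    # Step 5: the system organizes the feedback for the author and the teacher.
    print(f"{project.author_id}: {len(project.reviews)} reviews collected")
    # Step 6: the author revises the project in line with the comments.
    project.content += " (revised)"

project = Project("s01", "Survey of UNIX schedulers, draft 1")
run_round(project, reviewers=["s02", "s03", "s04", "s05", "s06", "s07"])
# Step 7: call run_round again for the next iteration.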

In the following section, the authors provide a brief review of existing peer assessment research in higher education, describe the evaluation methods and the participating students, outline the peer assessment features of the current research, and present Topping's peer assessment typology. The authors also discuss affective components in peer assessment.

The research design section introduces the project work requirements, the assessment criteria, and the peer assessment and attitude questionnaire scores, to give a clear and complete view of the experiment. Several statistical analyses were used to test the hypotheses, demonstrating the effect of computer science students' attitudes toward NetPeas on their achievement. Finally, the authors state conclusions and suggestions for future implementations of Web-based peer assessment.


II. Literature Review

A. Peer Assessment

Topping [3] reviewed 109 peer assessment articles published from 1980 to 1996, located in several databases: the Social Science Citation Index, the Educational Resources Information Center, and Dissertation Abstracts International. The search keywords were peer assessment, peer marking, peer correction, peer rating, peer feedback, peer review, and peer appraisal, combined with university, college, and higher education. Forty-two articles (38%) were merely descriptive or anecdotal, while sixty-seven (62%) included outcome data gathered through an orderly research process.

Topping reported the use of peer assessment in biological science, second-language writing instruction, and writing seminar classes in the higher education systems of several countries. Many peer assessment studies ask students only to evaluate and comment on peers' project work summatively, i.e., to give a final grade for others' work. Such cases involve less peer interaction than the current Web-based peer assessment and consequently evoke less peer pressure for further modification.

Overall, the reliability and validity of peer assessment have been partially verified by researchers [4, 5, 6, 7, 8]. From our own experience with Web-based peer assessment, the authors know that teachers are not necessarily idle and that their teaching load may not lessen under peer assessment. According to past research, however, students' deeper intellectual skills, such as critical thinking, monitoring, planning, comparison, and synthesis, as well as positive learning attitudes, can readily be promoted.

Topping defined peer assessment as an arrangement in which individuals consider the amount, level, value, worth, quality, or success of the products or outcomes of learning of peers of similar status. Because peer assessment allows various arrangements, Topping described this variety with a typology (Table 1). The left-hand column of Table 1 lists the features, or variables, of all possible peer assessment, and the right-hand column gives the range of arrangements for each feature.


TABLE 1
Topping's Typology of Peer Assessment Arrangements in Various Fields of Higher Education 
Variable                       Range of variation
Curriculum area/subject        All
Objectives                     1. Time saving, or 2. Cognitive/affective gains
Focus                          1. Quantitative or qualitative, 2. Summative or formative, or 3. Both
Product/output                 1. Tests/marks/grades, 2. Written presentations, 3. Oral presentations, or 4. Other skilled behaviors
Relation to staff assessment   1. Substitutional, or 2. Supplementary
Official weight                1. Contributing to final official grade, or 2. Not related to official final grade
Directionality                 1. One-way, 2. Reciprocal, or 3. Mutual
Privacy                        1. Anonymous, 2. Confidential, or 3. Non-anonymous
Peer contact                   1. Distant, or 2. Face-to-face
Assessment year                1. Same year, or 2. Cross year
Peer ability                   1. Same ability, or 2. Cross ability
Constellation: assessors       1. Individuals, 2. Pairs, or 3. Groups
Constellation: assessed        1. Individuals, 2. Pairs, or 3. Groups
Peer assessment place          1. In class, or 2. Out of class
Peer assessment time           1. Class time, 2. Free time, or 3. Informally
Requirement                    1. Compulsory, or 2. Voluntary for assessors/assessees
Reward                         1. Course credit, or 2. Other incentives or reinforcement

Following Topping's typology, the Web-based peer assessment arrangements of the current study are listed in Table 2. From Topping's point of view, valid research outcomes on network-based peer assessment are still insufficient, so its effectiveness needs further examination; our intention is to help fill this gap. (A configuration sketch encoding these arrangements follows Table 2.)

TABLE 2
Web-based Peer Assessment Arrangements Adopted by the Current Study 
Variable                       Method used in the current study
Curriculum area/subject        "Operating Systems" in computer science
Objectives                     1. To promote deeper intellectual skills and positive learning attitude, and 2. To compare teachers' ratings with students' ratings
Focus                          1. Both quantitative and qualitative, and 2. Formative assessment
Product/output                 A written survey of one or more operating systems based on current knowledge
Relation to staff assessment   Peer assessment as a substitute for part of the teacher's rating
Official grade weight          Peer assessment scores form part of the assessee's grade in the Operating Systems course
Directionality                 Students mutually assess one another
Privacy                        Both assessors and assessees are anonymous
Peer contact                   Distant peer assessment through the Internet
Assessment year                Students are sophomores to seniors
Peer ability                   Peers are of similar ability
Constellation: assessors       Groups
Constellation: assessed        Groups
Peer assessment place          Out of class (after class)
Peer assessment time           Informal time
Requirement                    Compulsory for both assessors and assessees
Reward                         Course credit for participation
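For system builders, the arrangements in Table 2 amount to a small configuration. The sketch below encodes them as a Python dictionary; the key names are illustrative assumptions, not identifiers from NetPeas:

# Hypothetical encoding of the Table 2 arrangements; field names are
# illustrative only.
PEER_ASSESSMENT_CONFIG = {
    "subject": "Operating Systems (computer science)",
    "focus": {"quantitative": True, "qualitative": True, "formative": True},
    "relation_to_staff": "substitutional",  # replaces part of the teacher's rating
    "directionality": "mutual",
    "privacy": "anonymous",                 # both assessors and assessees
    "peer_contact": "distant",              # over the Internet
    "assessors_per_project": 6,             # six classmates per project
    "place": "out of class",
    "time": "informal",
    "requirement": "compulsory",
    "reward": "course credit",
}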

B. Affective Components in Peer Assessment

Most peer assessment studies have found that students favor such an inventive assessment procedure [3]; however, Lin, Liu, Chiu, and Yuan [9] also reported that some students have negative feelings:

  1. Lin et al. [9] indicated that some students experienced greater peer pressure under peer assessment. When students found that their work would be reviewed by peers rather than by teachers, performing better became a way to avoid shame and to maintain self-esteem in front of their peers.
  2. Some students dislike peer assessment because raters are simultaneously competitors. They fear undermarking, in which peers give very low scores to others, and overmarking, in which peers give very high scores to everyone. In a small pilot test of the NetPeas system, Lin et al. [9] found a detrimental effect: when peer assessment runs over a period in which students may legitimately re-enter the assessment mode, students who receive an unexpectedly low score from peers immediately reduce the scores they previously gave to others.
  3. Students often believe that only the teacher has the ability and knowledge to evaluate work and deliver critical feedback [10]. They may doubt their peers' abilities; in particular, those who receive lower scores tend to perceive peer assessment as inaccurate.
  4. In a cultural comparison of Chinese and Latino Americans, Carson and Nelson [11] found that in peer review interaction, Chinese-speaking students' preference for group harmony kept them from critiquing peers' work.
Some researchers have suggested arrangements to promote the effectiveness of peer assessment. Topping [3] indicated the necessity of announcing and clarifying the goals of peer assessment to promote a trusting environment. Zhao [10] suggested that anonymity increases critical feedback and makes participants more comfortable offering criticism.


III. Research Design

A. Research Questions
  1. Do the majority of students take a positive attitude toward Web-based peer assessment?
  2. Is the achievement of students with positive attitude significantly better than the achievement of students with negative attitude?
  3. Are the peer and expert project scores of students with positive attitude significantly better than those of students with negative attitude?
  4. Is the feedback quality of students with positive attitude better than that of students with negative attitude?
B. Participants

Fifty-eight students in the operating systems class taught by the third author served as the participants. They were computer science undergraduates at a research university in Taiwan, consisting of sophomores, juniors, and seniors, with juniors in the majority (Table 3). The teacher announced the adoption of Web-based peer assessment only after the fourth week of class; therefore, participants entering the class were unaware that they would experience an innovative assessment.

All students participated in the course throughout the whole semester. However, 17 students omitted one or two items when filling in the attitude scale; they were conservatively treated as missing data and excluded from the hypothesis testing and cluster analysis.

C. Course Description

The main purpose of this operating systems class is to convey basic knowledge about different kinds of system programs, hardware architecture, process management, memory management, file systems, and operating systems.

D. Task and Requirements

Students were asked to survey topics related to operating systems. The survey paper had to present material beyond the textbook, and comparison, analysis, or synthesis of several operating systems was preferred. The paper had to include a title, motivation, introduction, theory, discussion, and references.

E. Achievement Measures

A student's achievement was determined by three weighted components: 40% of the achievement score came from peer assessment of the survey project (peer project score), 40% from the teachers' assessment of the survey project (expert project score), and 20% from performance in giving comments (feedback quality).
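In equation form, with each component on the scales described below:

    achievement = 0.4 × (peer project score) + 0.4 × (expert project score) + 0.2 × (feedback quality)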

1. Peer and expert project scores: Project scores, whether rated by peers or by experts, were graded on the following six criteria.

  1. Is there a high correlation between survey content and operating systems?
  2. Is the survey content complete?
  3. Are there adequate introductions of theory and background knowledge?
  4. Are discussions of operating systems clear?
  5. Are the conclusions for this assignment sufficiently robust?
  6. Are there sufficient references for this assignment?
A 10-point Likert-style scale, ranging from 1 (poor) to 10 (satisfactory) with 5 as the midpoint (average), was used to rate each project. The peer project score was the average of six peer assessors' grades; averaging was intended to increase the stability of the peer rating. The third author (the teacher) and a teaching assistant, serving as experts, separately graded each project. The inter-rater reliability of the two experts' grading, estimated by a Pearson correlation, was significantly positive (r = .78**, df = 57, p < .01).
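As a concrete check of these two computations, the snippet below averages six peer grades and estimates inter-rater reliability with a Pearson correlation; all numbers are toy data for illustration, not the study's raw scores:

from statistics import mean
from scipy.stats import pearsonr

# Peer project score: average of six assessors' grades on the 10-point scale.
peer_grades = [7, 8, 6, 7, 9, 8]   # hypothetical ratings
print(mean(peer_grades))           # 7.5

# Inter-rater reliability: Pearson r between the teacher's and the TA's
# grades over the same projects (the study reports r = .78).
teacher = [7, 5, 8, 6, 9, 4, 7, 8]
ta = [6, 5, 9, 6, 8, 5, 7, 7]
r, p = pearsonr(teacher, ta)
print(f"r = {r:.2f}, p = {p:.3f}")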

2. Feedback quality: The experts also separately graded the assessors' comments. A comment is of high quality if it offers suggestions for the next step of modifying the peer's survey project and explains the reasoning [12]. The feedback quality scores rated by the teacher and the TA were highly correlated (r = .82**, df = 57, p < .01).

F. Attitude Questionnaire

The present study developed an attitude questionnaire containing 11 questions (see the appendix) with a five-point Likert-style scale. Using this questionnaire, the authors sought to examine whether positive and negative attitude factors could be measured reliably and whether most students held a positive attitude toward Web-based peer assessment.

G. Statistic Procedures and Software

The data were analyzed with several statistical procedures, factor analysis, cluster analysis, t tests, and chi-square analysis, using SPSS 8.0. In this study, two clusters (n = 14 and n = 27) were separated, and the sample sizes were small. According to Hinkle, Wiersma, and Jurs [13], in hypothesis testing when the population variance (σ²) is unknown and samples are small, the normal distribution is inappropriate for describing the sampling distribution of the mean; under these conditions, a t test based on the t distribution, which accommodates small sample sizes, is appropriate. Following common statistical practice, the authors use one asterisk to denote a significance level of .05 and two asterisks to denote a significance level of .01.
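The reported t statistics can be reproduced directly from the summary statistics in the tables that follow. A minimal sketch with SciPy, using the positive attitude factor values from Table 6 (variances pooled over df = 39):

from scipy.stats import ttest_ind_from_stats

# Cluster 1 (n = 14) vs. cluster 2 (n = 27), positive attitude factor.
t, p = ttest_ind_from_stats(mean1=14.71, std1=2.87, nobs1=14,
                            mean2=19.04, std2=2.62, nobs2=27)
print(f"t = {t:.2f}, p = {p:.5f}")  # t = -4.86, matching the reported
                                    # -4.85** up to rounding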


IV. Results

The background information of the participants is listed in Table 3. There were far more men than women and juniors than sophomores or seniors. 

TABLE 3
Background Information of Participants 
Variable                                        n (%)
Total                                           58 (100%)
Valid (responded to attitude questionnaire)     41 (71%)
Missing (omitted attitude items)                17 (29%)
Male                                            35 (85%)
Female                                          6 (15%)
Sophomore                                       12 (29%)
Junior                                          25 (61%)
Senior                                          4 (10%)
Note: gender and class-year percentages are based on the 41 valid respondents.

Using exploratory factor analysis, the authors found that the attitude questionnaire consisted of two factors: positive and negative attitude toward Web-based peer assessment. To remove unnecessary questions, the authors deleted two of the eleven items because their factor loadings were less than 0.40. Together, the two factors explained 52% of the variance of the questionnaire. The internal consistency of the positive factor was satisfactory (Cronbach alpha = .77); however, that of the negative factor was very low (.32). These initial results showed that students took either a positive or a negative attitude toward Web-based peer assessment. The results are listed in Table 4.
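The internal consistency reported here is Cronbach's alpha. For readers replicating the analysis, a minimal NumPy sketch of the statistic follows; the response matrix is synthetic, standing in for the 41 students' item scores:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Synthetic 41 x 5 matrix of 1-5 responses (the positive factor had alpha = .77).
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(41, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(41, 5)), 1, 5)
print(round(cronbach_alpha(scores), 2))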

TABLE 4
The Result of Exploratory Factor Analysis on Attitude Questionnaire 
Question                       Positive   Negative
1                              .686
2                              .758
3                              .644
4                              .687
5                              .810
6                                         -.653
7                                          .723
8                                          .506
9                                          .649
Eigenvalue                     2.653      1.987
Variance explained (%)         29.5       22.1
Internal consistency (alpha)   .77        .32

Research Question 1: Do the majority of students take a positive attitude toward Web-based peer assessment?

A K-means cluster analysis was conducted on the scores of the positive and negative factors. Participants who omitted any attitude item were excluded from the cluster analysis. The results are shown in Table 5.

The cluster analysis grouped 14 students into cluster 1 and 27 students into cluster 2. Based on the cluster centers on the two factors, cluster 1 was named "students who do not take a positive attitude" and cluster 2 "students who take a positive attitude."
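A two-cluster K-means run of this kind can be sketched with scikit-learn; the factor-score matrix below is synthetic, standing in for the 41 students' positive/negative factor scores:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic (positive, negative) factor scores: one group high-positive/
# low-negative, the other the reverse, mimicking the two clusters.
favorable = rng.normal([0.4, -0.3], 0.3, size=(27, 2))
unfavorable = rng.normal([-0.8, 0.6], 0.3, size=(14, 2))
scores = np.vstack([favorable, unfavorable])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
print(km.cluster_centers_)      # compare with the centers in Table 5
print(np.bincount(km.labels_))  # cluster sizes (27 and 14 in the study)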


TABLE 5
Results of K-Means Cluster Analysis 
                  Cluster 1 (do not take positive attitude)   Cluster 2 (take positive attitude)
Positive factor   -.799                                        .414
Negative factor    .637                                       -.330
N                  14                                          27
%                  34.1                                        65.9

A chi-square analysis showed that significantly more students fell in cluster 2, taking a positive attitude toward Web-based peer assessment (65.9%), than in cluster 1, taking a negative attitude (34.1%; chi-square = 4.12*, df = 1, p < .05).
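This goodness-of-fit statistic can be verified from the cluster counts alone, since under the null hypothesis the 41 students would split evenly (20.5 per cluster):

from scipy.stats import chisquare

chi2, p = chisquare([14, 27])   # observed counts from Table 5
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")  # 4.12, p = .042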

The t tests (Table 6) showed that the positive attitude score of cluster 1 (mean = 14.71) was significantly lower than that of cluster 2 (mean = 19.04; t = -4.85**, p < .01), while the negative attitude score of cluster 1 (mean = 15.43) was significantly higher than that of cluster 2 (mean = 13.74; t = 2.91**, p < .01). This confirmed the cluster analysis: cluster 1 comprises students with a negative attitude, and cluster 2 contains those with a positive attitude.


TABLE 6
The t Tests on Attitude Differences between Clusters 1 and 2

H1   H1: Sum of positive factor (cluster 1) < Sum of positive factor (cluster 2)
     H0: Sum of positive factor (cluster 1) >= Sum of positive factor (cluster 2)
H2   H1: Sum of negative factor (cluster 1) > Sum of negative factor (cluster 2)
     H0: Sum of negative factor (cluster 1) <= Sum of negative factor (cluster 2)

                           Cluster 1                  Cluster 2                  t test
Positive attitude factor   Mean = 14.71, SD = 2.87    Mean = 19.04, SD = 2.62    t = -4.85**, df = 39
Negative attitude factor   Mean = 15.43, SD = 1.79    Mean = 13.74, SD = 1.75    t = 2.91**, df = 39

** p < .01
H: hypothesis; H1: alternative hypothesis; H0: null hypothesis


Research Question 2: Is the achievement of students in cluster 2 better than the achievement of students in cluster 1?

The null hypothesis for this research question stated that the achievement of students in cluster 2 is no better than that in cluster 1. The statistical results (Table 7) rejected the null hypothesis, showing that the achievement of students in cluster 2 (mean = 7.13) was significantly higher than that in cluster 1 (mean = 5.40; t = -6.42**, p < .01).


TABLE 7
The t Test on Achievement Difference between Clusters 1 and 2

H1: Student achievement (cluster 1) < Student achievement (cluster 2)
H0: Student achievement (cluster 1) >= Student achievement (cluster 2)

              Cluster 1                Cluster 2                t test
Achievement   Mean = 5.40, SD = .79    Mean = 7.13, SD = .83    t = -6.42**, df = 39

** p < .01
H0: null hypothesis; H1: alternative hypothesis


Research Question 3: Are the peer and expert project scores of students in cluster 2 better than those of students in cluster 1?

The null hypotheses for this research question stated that the peer and expert project scores of students in cluster 2 are no better than those in cluster 1. The statistical results (Table 8) rejected both null hypotheses: the peer project score of students in cluster 2 (mean = 7.25) was significantly better than that in cluster 1 (mean = 6.46; t = -2.95**, p < .01), and the expert project score of students in cluster 2 (mean = 7.56) was also significantly higher than that of cluster 1 (mean = 5.00; t = -4.36**, p < .01).
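Note that the expert-score comparison in Table 8 reports a fractional df (18.43), which indicates a Welch-type t test for unequal variances (the expert-score SDs differ markedly), whereas the peer-score comparison pools variances over df = 39. Both values can be recomputed from the table's summary statistics:

from scipy.stats import ttest_ind_from_stats

# Peer project score: pooled-variance t test (df = 39).
t_peer, _ = ttest_ind_from_stats(6.46, 0.82, 14, 7.25, 0.80, 27)

# Expert project score: Welch's t test (df comes out near 18.43).
t_exp, _ = ttest_ind_from_stats(5.00, 2.0, 14, 7.56, 1.25, 27, equal_var=False)

print(f"peer t = {t_peer:.2f}, expert t = {t_exp:.2f}")
# Approximately -2.97 and -4.37: close to the reported -2.95 and -4.36,
# with small differences due to rounding of the table's means and SDs.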


TABLE 8
The t Tests on Differences in Peer and Expert Project Scores between Clusters 1 and 2

H1   H1: Peer project score (cluster 1) < Peer project score (cluster 2)
     H0: Peer project score (cluster 1) >= Peer project score (cluster 2)
H2   H1: Expert project score (cluster 1) < Expert project score (cluster 2)
     H0: Expert project score (cluster 1) >= Expert project score (cluster 2)

                       Cluster 1                Cluster 2                 t test
Peer project score     Mean = 6.46, SD = .82    Mean = 7.25, SD = .80     t = -2.95**, df = 39
Expert project score   Mean = 5.00, SD = 2.0    Mean = 7.56, SD = 1.25    t = -4.36**, df = 18.43

** p < .01
H: hypothesis; H1: alternative hypothesis; H0: null hypothesis


Research Question 4: Is the feedback quality of students in cluster 2 better than that in cluster 1?

The null hypothesis for this research question stated that the feedback quality of students in cluster 2 is no better than that in cluster 1. The statistical results (Table 9) rejected the null hypothesis, showing that the feedback quality of students in cluster 2 (mean = 5.94) was significantly higher than that in cluster 1 (mean = 3.57; t = -3.24**, p < .01).


TABLE 9
The t Test on Feedback Quality Difference between Clusters 1 and 2

H1: Feedback quality (cluster 1) < Feedback quality (cluster 2)
H0: Feedback quality (cluster 1) >= Feedback quality (cluster 2)

                   Cluster 1                 Cluster 2                 t test
Feedback quality   Mean = 3.57, SD = 1.65    Mean = 5.94, SD = 2.46    t = -3.24**, df = 39

** p < .01
H0: null hypothesis; H1: alternative hypothesis



V. Conclusion and Suggestions

Broadly speaking, the results show wide acceptance of and support for this innovative method of Web learning. The main findings were that (1) about 66% of the students in a computer science course favored Web-based peer assessment on the written survey project and (2) students' attitudes were significantly related to their overall performance. Students who subjectively favored Web-based peer assessment performed better, whether their work was evaluated by experts or by peers, and whether on project achievement or feedback quality. Alternatively, those who achieved higher scores under Web-based peer assessment may have come to feel more positive toward this instructional method.

To promote effective peer assessment, it is critical to understand students' attitudes. Teachers can explain the advantages of peer assessment when introducing this innovative instruction. During the assessment process, teachers should watch peers' social interaction carefully for signs of negative attitudes. In addition, the authors recommend the following arrangements for educators interested in implementing Web-based peer assessment:

  1. In writing assignments, it is better to offer students the opportunity to discuss with peers. To this end, students should be provided with suitable tools, such as a Web BBS [14] or a message board, to facilitate discussion. Such discussions may promote students' positive attitudes toward Web-based peer assessment.
  2. Students should be allowed to discuss while grading and making comments. The authors believe that grading is a subjective process; future studies may adopt a review board in the peer assessment process, so that a general consensus on how to grade and what comments to provide is reached only after full discussion. Grades reached in this manner will be viewed as fairer and more persuasive.
In future studies, researchers are encouraged to investigate the interactions between students' performance and attitude. By finding out how and why students develop negative feelings or expectations about peer assessment, researchers may eventually identify more effective ways of implementing Web-based peer assessment.

There are some limitations to this study. First, the factor analysis extracted two factors from the attitude questionnaire, positive and negative attitudes toward Web-based peer assessment. Although the positive attitude factor is reliable, the negative attitude factor has quite low internal consistency. A possible reason is that the negative factor includes three negatively worded items (#7, #8, and #9 in the appendix) but one positively worded item (#6). Alternatively, the sample size (n = 58), and hence the variance, may be too small for reliability testing. For future use of this questionnaire, the authors suggest revising the items of the negative factor or running the factor analysis with a larger sample.

Second, question 5 of the attitude questionnaire contains at least two concepts: in making comments, one may (1) think reflectively and (2) improve one's own project. An answer to this question may indicate either that the student feels the project work was improved or that critical thinking may improve the project work. In future use of this questionnaire, this item should be split into two separate questions.

Third, the participants of this study were not randomly selected: in a university setting, computer science majors may enter any class offered by their department after appropriate registration. However, these students were not aware of which innovative instruction method they would experience when they enrolled. The authors therefore suggest caution in interpreting and generalizing the results.


Acknowledgments

The authors would like to thank the National Science Council of the Republic of China for financially supporting this research under Contract No. NSC89-2520-S-009-001 and NSC89-2520-S-009-004. Particular gratitude is extended to two anonymous reviewers for their valuable suggestions.


References

[1] C. H. Chiu, W. R. Wang, and  S. M. Yuan, "Web-based Collaborative Homework Review System," Proceedings of ICCE' 98, vol.2, pp.474-477, 1998.

[2] E. Z. F. Liu, C. H. Chiu, S. S. J. Lin, and S. M. Yuan, "Student participation in computer science courses via the Networked Peer Assessment System (NetPeas)," Proceedings of the ICCE' 99, vol. 1, pp. 774-777, 1999.

[3] K. Topping, "Peer Assessment Between Students in Colleges and Universities," Review of Educational Research, vol.68, pp.249-276, 1998.

[4] M. Catterall, "Peer learning research in marketing," in S. Griffiths, K. Houston, and A. Lazenblatt (Eds.), Enhancing Student Learning through Peer Tutoring in Higher Education: Section 3-Implementing, vol. 1, pp. 54-62. Coleraine, Northern Ireland: University of Ulster, 1995.

[5] C. Rushton, P. Ramsey, and R. Rada, "Peer assessment in a collaborative hypermedia environment: A case-study," Journal of Computer-Based Instruction, vol.20, pp.75-80, 1993.

[6] M. Freeman, "Peer assessment by groups of group work," Assessment and Evaluation in Higher Education, vol.20, pp.289-300, 1995.

[7] I. E. Hughes, "Peer assessment," Capability, vol.1, pp.39-43, 1995.

[8] M. Korman, and R. L. Stubblefield, "Medical school evaluation and internship performance," Journal of Medical Education, vol.46, pp.670-673, 1971.

[9] S. S. J. Lin, E. Z. F. Liu, C. H. Chiu, and S. M. Yuan, "Peer review: An effective web-learning strategy with the learner as both adapter and reviewer," manuscript submitted to IEEE Transactions on Education, 1999 (in revision).

[10] Y. Zhao, "The Effects of Anonymity on Computer-Mediated Peer Review," International Journal of Educational Telecommunications, vol.4, pp.311-345, 1998.

[11] J. G. Carson, and G. L. Nelson, "Chinese students’ perceptions of ESL peer response and group interaction," Journal of Second Language Writing, vol.5, pp.1-19, 1996.

[12] M. T. H. Chi, "Constructing self-explanations and scaffolded explanations in tutoring," Applied Cognitive Psychology, vol. 10, pp. S33-S49, 1996.

[13] D. E. Hinkle, W. Wiersma, and S. G. Jurs, Applied statistics for the behavioral sciences. Boston, MA: Houghton Mifflin.

[14] E. Z. F. Liu, and S. M.  Yuan, "Collaborative Learning via World Wide Web Bulletin Board System," Proceedings of ICCE’98, vol.1, pp.133-140, 1998.


Appendix

A. Attitude Questionnaire

Read each of the following statements, and then rate yourself on a 1-5 scale, where each rating corresponds to how well a statement describes you: 1 = Not very well; 2 = Slightly Well; 3 = Somewhat well; 4 = Well; 5 = Very Well.

1. I think Web-based peer assessment could be applied to all operating systems project work.

2. I think Web-based peer assessment can be applied perfectly to the survey writing task.

3. I am more willing to give comments because of the anonymous nature of peer assessment.

4. Inspecting others' project work on the NetPeas, I am better able to improve my own project work.

5. When I comment on others' project work on the NetPeas, I can think reflectively and improve my own project work.

6. I prefer using criticism as a way of learning.

7. I prefer teacher assessment to peer assessment, because I trust the teacher's professional knowledge. (-)

8. I prefer teacher assessment to peer assessment, because I worry about the standards of peer judgment. (-)

9. I think the rounds of peer assessment should be reduced. (-)


Author Contact Information

Sunny S. J. Lin
Center for Teacher Education
National Chiao Tung University
1001 Ta-Hsueh Road
Hsinchu, Taiwan
Phone: 011-886-3-5712121-58057
Fax: 011-886-3-573-8083
E-mail: sunnylin@cc.nctu.edu.tw

Eric Zhi-Feng Liu
Dept. of Computer and Information Science
National Chiao Tung University
1001 Ta-Hsueh Road
Hsinchu, Taiwan
Phone: 011-886-3-5712121-59265
Fax: 011-886-3-572-1490
E-mail: totem@cis.nctu.edu.tw

Shyan-Ming Yuan
Dept. of Computer and Information Science
National Chiao Tung University
1001 Ta-Hsueh Road
Hsinchu, Taiwan
Phone: 011-886-3-5712121-56631
Fax: 011-886-3-572-1490
E-mail: smyuan@cis.nctu.edu.tw


Author Biographies

Sunny San-Ju Lin is an associate professor in the Center for Teacher Education at National Chiao Tung University in Taiwan, where she teaches courses in educational psychology, educational and psychological testing and measurement, learning theories, and individual differences. She received a Ph.D. in Counseling and Educational Psychology from the University of Southern California in 1995. Her active research interests are learning through the Internet and Internet addiction.

Eric Zhi-Feng Liu was born on November 11, 1972 in Tainan, Taiwan, Republic of China. He received the B.S. degree from National Chiao Tung University in 1996 and the M.S. degree in Computer and Information Science from National Chiao Tung University in 1999, and he is pursuing the Ph.D. degree in Computer and Information Science at National Chiao Tung University.

Shyan-Ming Yuan received the B.S.E.E. degree from National Taiwan University in 1981, the M.S. degree in Computer Science from the University of Maryland, Baltimore County in 1985, and the Ph.D. degree in Computer Science from the University of Maryland, College Park in 1989. He joined the Electronics Research and Service Organization, Industrial Technology Research Institute, as a research member in October 1989. Since September 1990, he has been with the Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, where he served as an associate professor and was promoted to professor in June 1995.

