UCOSP

Undergraduate Capstone Open Source Projects

Archive for October, 2009

Ingres Geospatial students’ grading scheme

Posted by iwawong on 2009/10/31

Total grade: 100%

*Assessment for team grade: 40%*
– Pro-rated for meeting attendance: 10%
– Evidence of participation in the Ingres community – mailing list posts: 5%
– Evidence of participation in the Ingres community – IRC interactions: 5%
– Blogging about one’s status on one’s own blog or on the UCOSP blog as the term goes on: 10%
– Peer evaluation among the UCOSP student team at the end of term, plus an email to Andrew explaining how you rate yourself and why: include 3 positive items that you felt went well and at least 3 things you felt you could have done better: 10%

 

*Assessment for individual grade: 60%*

Team Member(s): Eva Wong, Scott Bishop
Area: Ingres [c code]
Mentor(s): Charles Thibert, Alex Trofast, Ray Fan, J Hankinson
Goal(s): Improve robustness and error path checking.

Success measure(s): 5 patch contributions from each student by the end of term in December. Contributions must pass code inspection and approval from the team. Patches will be committed to the code repository by Alex Trofast and Chuck Thibert. Each student will also submit a screencast video showing how to pull the Ingres source code and compile it.

For each patch (10%): documentation (2%), coding (8%). For the video: 10%.

 

Team Member(s): Kelvin Harry
Area: OpenLayers plugin for Drupal [php code]
Mentor(s): Andrew Ross, OpenLayers plugin community
Goal(s): Fix any 5 bugs in the OpenLayers plugin for Drupal – either bugs our team found or existing ones.
Success measure(s): Looking for 5 patch contributions in the bug tracking system. Contributions must pass code inspection and approval from the team. Patches will be committed to the code repository by the OpenLayers plugin for Drupal team. Submit a video explaining what the Drupal OpenLayers plugin is, with a short demo of it in action.

For each bug-fix patch (10%): documentation (2%), coding (8%). For the video: 10%.

 

Team Member(s): Mary Mootoo, Sarah Danaher, Evan James Bowling
Area: Overall Integration [considerable testing, some minor coding in php or c as needed to help clear backlogs]
Mentor(s): Andrew Ross, various others
Goal(s): Identify 5 different bugs that have not yet been reported and submit them to the bug tracking system. Coordinate with the developer assigned to each bug to ensure they have enough information and can reproduce the issue. Submit a screencast demo of the technology (Drupal + maps) in action.

For each bug raised: 10%. For the video: 10%.

Deadline for the patches and video for everyone: 2009-12-11.

Posted in Ingres | 1 Comment »

Grading Scheme for RoboCup

Posted by kyokko on 2009/10/31

Team Grade: 40%

Performance of soccer team – 10%

  • Fails to beat original team – 0/10
  • Wins occasionally – 4/10
  • Consistently wins by narrow margin – 8/10
  • Consistently wins by large margin – 10/10

Report – 10%

A written report explaining everything that was accomplished, the remaining problems, and a TODO for people who might be working with our code. The mark should reflect completeness, justification, and clarity of writing.

Documentation – 10%

Comments in code and design document.

Screencast Demo – 10%

Demo on YouTube or similar service showing a client running.


Individual Grades: 60%

  • Participation – 10%
  • Testing – 10%
  • Completion of Tasks (Code Sprint, Individual Tasks) – 40%

Participation – 10%

  • Writing requirements: blog posts, minutes, etc.
  • Meeting attendance, communication skills, etc.
  • Peer evaluations

Each team member had some tasks assigned. For this project it was hard to find completely non-overlapping problems for each member to work on, and tasks occasionally got reassigned for various reasons. Our individual grading schemes take that into account.

Testing/Performance – 10%

Note: RoboCup is a real-time running application, so for most of the functions it is impossible to create unit tests. We use the Debug UI and the Human Controlled Player to evaluate the performance of the client.

For the people working on incorporating Machine Learning (ML) algorithms for high-level decision-making, testing follows the standard procedure: create a held-out test dataset and use it to assess how well the trained model generalizes to new data.
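To make that procedure concrete, here is a minimal sketch of the held-out evaluation, assuming the parsed server logs have already been flattened into feature rows with one action label per row. The function and parameter names are hypothetical, and scikit-learn’s decision tree merely stands in for whichever DT implementation the team actually uses:

```python
# Sketch only: `features` and `labels` are assumed to come from the team's
# server-log parser; scikit-learn is an assumption, not the project's code.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def evaluate_decision_tree(features, labels):
    # Hold out 25% of the examples; the model never sees them during
    # training, so accuracy on this split estimates how well the trained
    # model generalizes to new data.
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0)
    model = DecisionTreeClassifier(max_depth=8)  # depth is one parameter to vary
    model.fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))
```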

Tasks from code sprint weekend – 10%

  • Ioana/Chani – Debug UI
  • Patrik/Yulia – Human Controlled Player ver.1
  • Alex – Java Monitor

Individual tasks – 30%

These are the projects each person has been working on since the Code Sprint. Since we all focused on different tasks, the individual grading schemes reflect the amount of effort put into each particular area.

Chani

  • Basic Actions (improvements to basic actions: kicking, dashing, turning, etc.) – 15%
  • Complex Actions (developing multi-tick actions that are based on the basic actions and allow for more complex behaviour: dashing to a position, following the ball, finding the ball when it hasn’t been seen for a while, etc.) – 15%

Ioana

  • Sight Improvements (integration of a better vision algorithm into the complex actions) – 15%
  • ML (using Decision Trees (DT) or Neural Networks (NN) for better decision-making) – 15%

Alex

  • ML (experiments with various parameters for DTs and bugfixes to parser) – 15%
  • Actions (debugging and improving basic actions and multi-tick actions, in particular better passing) – 15%

Yulia

  • ML testing and experiments (creating a dataset for evaluating the ML algorithm’s performance, various experiments with different training data and parameters) – 15%
  • ML infrastructure (parser for server logs, Human Controlled Player ver. 2, script to feed data to the ML algorithm) – 15%

Patrik

  • Basic actions (debugging and improving algorithms for basic actions: kicking, turning, seeing, etc.) – 15%
  • Complex actions (multi-tick actions that are composed of basic actions and enable more complex behaviour: interception, dribbling to a position, chasing the ball, etc.) – 15%

Posted in RoboCup | 1 Comment »

Marking Scheme for ElmCity project

Posted by Nikita Pchelin on 2009/10/30

Mark Breakdown

1. Individual (50%)

a) 40% – participation:

  • 20% code contributions
  • 10% discussions on friendfeed and google code wiki throughout the term
  • 10% weekly punchlines and how they were met (trackable through the “status” portion)

b) 10% – written component

2. Team (50%)

a) 30% Features and Functionality:

  • all the existing functionality (plugins, event filtering, timezone management) is kept up to date and still works as expected (tests don’t fail), and existing bookmarks on Delicious produce valid iCal feeds
  • a generalized parser that can recognize at least a minimum set of information for each event (date & time, title, and a link) on each of the pages listed here. If some of the pages are odd and cannot be parsed by the generalized plug-in, we must fall back to writing a site-specific plug-in. If time permits (outside the 30%), we are targeting the pages listed here too. (A rough sketch of the generalized-parser idea follows this list.)
  • the product has to be ready to be shipped. That is, we have to have a document that describes all the third-party libraries (and software) one may need to yank a fresh copy from Google Code and start working with the code. We can include general steps (e.g. have MySQL and phpMyAdmin installed and working), but whatever concerns the project itself must be specific (e.g. which file do I edit to change the location of the calendar folder?)
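As a rough illustration of the generalized-parser idea (not the project’s actual code: BeautifulSoup, dateutil, and every name below are assumptions made for this sketch), one simple heuristic is to treat any link with a recognizable date in its surrounding text as an event and extract the minimum triple of date & time, title, and link:

```python
# Hypothetical sketch of a generalized event parser. Pages whose markup
# defeats this heuristic are exactly the ones that would need a
# site-specific plug-in, as described above.
from urllib.request import urlopen
from bs4 import BeautifulSoup               # assumed dependency
from dateutil import parser as dateparser   # assumed dependency

def parse_events(url):
    soup = BeautifulSoup(urlopen(url).read(), "html.parser")
    events = []
    for a in soup.find_all("a", href=True):
        # Look at the list item, paragraph, or table cell around the link.
        context = a.find_parent(["li", "p", "td"])
        if context is None:
            continue
        try:
            # fuzzy=True lets dateutil pick a date out of surrounding prose.
            when = dateparser.parse(context.get_text(" ", strip=True), fuzzy=True)
        except (ValueError, OverflowError):
            continue  # no recognizable date near this link: not an event
        events.append({"title": a.get_text(strip=True),
                       "date": when,
                       "link": a["href"]})
    return events
```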

b) 10% the system has clear documentation (comments inside the files, a high-level description of the system) that could potentially allow other people to contribute to the project. Tests have been implemented (unit and integration) and run smoothly.

c) 10% written report + YouTube presentation of the service that describes what work has been done throughout the term, demonstrates the abilities of the project, and touches on the future of the project (i.e. if we were to continue, what things are missing that we’d want to implement in the future)

Posted in ElmCity | 2 Comments »

Survey on IDE usage

Posted by maximecaron2 on 2009/10/30

I just compiled the results from the survey I ran about Integrated Development Environments (IDEs) for the Eclipse4Edu project.
The survey was first set up by Dennis Acosta, but I did a French version of the form. For now, these results contain only the feedback from students at the Université de Sherbrooke.

Surprisingly, more people use the debugger and refactoring features than I would have thought. Also, most people said they would not use their IDE’s built-in tutorials.

I would also appreciate your opinion on all this.

Slides of the Results

Posted in Eclipse4Edu | 2 Comments »

MarkUs developers’ status update, October 30th 2009

Posted by Gabriel Roy-Lortie on 2009/10/30

And here is the status update for our last week’s worth of effort.

Posted in MarkUs | Leave a Comment »

Basie weekly meeting: Oct. 28, 2009

Posted by John Peters on 2009/10/30

The Basie team had its meeting yesterday (chat log). The main topic of this week’s meeting was coming up with a marking rubric for the UCOSP students.

We spent some time going over one of the papers mentioned earlier – Turning Student Groups into Effective Teams (PDF) – focusing on the fourth section, on peer ratings. Everyone seemed to agree with the second approach detailed on page 9 of the PDF (page 17 in the journal). Students’ participation marks should not just be based on the code they committed to the final project, but also on their “team citizenship”: how well they fulfilled their responsibilities, helped others out, etc.

We decided that each team (most of the UCOSP students are split into pairs and working on one feature for the team) would post their marking scheme to the developers’ mailing list. Bill Konrad took the initiative during the meeting and posted the scheme for his ‘team’. Bill categorized his work on Basie into five roles: Participant, Worker, Tester, Enabler (all work on Basie is peer-reviewed), and Supporter (this is more Bill-specific). We all really liked his role-based scheme, and something similar might work for the other UCOSP projects.

A number of other things were discussed – it’s all in the log if you’re interested. One idea worth mentioning is having a demo, such as a screencast on YouTube, showing all the changes made to Basie this term. This might not work for the projects that need little in the way of user interfaces, but it would be neat to have some of the other teams do this too.

Posted in Basie, Status | 1 Comment »

Eclipse4Edu Grading Scheme

Posted by dennisacosta on 2009/10/30

Team component – 45%
– Each member of the team is working on a piece of what we’ve identified to be the minimum feature set for a streamlined Scheme Perspective in Eclipse.
– 25% for the actual deliverables, based on how much we were able to implement as a team and the quality of our code/product. Each student will receive the same grade. We will attempt the “5-minute screencast on YouTube” if time permits.
– 20% for team participation. Each student will provide Dwight with comments about the working relationship with other members of the team.

Individual component – 35%
– We feel that what matters is not how much code we are able to commit, but how much effort we put into the project. We struggled with how to quantify effort, but after great debate, we decided that a written report chronicling our experiences would be a good way to express this.
– 20% for other deliverables: Based on the quality of our code/feature.
– 15% for the written report: Each student will write a report summarizing their contributions over the term, the problems encountered, and how they tried to solve them.

Participation component – 20%
– How active each student was in the open-source and UCOSP communities. This can be measured quantitatively since each student is logging this information and providing it to Dwight in status updates.
– 10% for Eclipse4Edu activity: Mailing list postings, forum postings, bug comments, bug additions, call-ins, virtual meetings, status updates.
– 10% for UCOSP activity: Blog entries, blog comments, code sprint.

Posted in Eclipse4Edu | 8 Comments »

Thunderbird weekly meeting: Oct. 28, 2009

Posted by Jay Schmidek on 2009/10/30

The Thunderbird team had its weekly meeting. Our meeting notes and status updates are here.

Posted in Status, Thunderbird | Leave a Comment »

WikiDev marking scheme

Posted by eleni on 2009/10/29

Mark Breakdown

  1. 10% Course Participation
  2. 15% Team Participation
  3. 45% Deliverables
  4. 30% Weekly Progress

Descriptions

  1. Course Participation. Participation as marked by Greg for things like posting to the blog, responding to comments, attending code sprints, etc.
    • Rationale: This is an online, collaborative course. General communication certainly plays a part, and is integral to being a contributing member of the online community. It should not weigh too much, however; as a course, the focus must be on getting real work done.
  2. Team Participation. Participation in team meetings on IRC, use of the wiki and bug tracker, etc., as marked by Eleni.
    • Rationale: A program created in a vacuum is never the right program. Coordination of effort, buy-in from the target users (Eleni and team), and generally keeping developers on the same page are all a must if the project is to succeed. Another aspect where I want your participation is with respect to ideas on how WikiDev could be made more useful. Please review an instance of each of the different pages offered and comment (on the discussion page) on how to improve its content, its layout, and its relations to other pages.
  3. Deliverables. The completion of target goals set at the beginning of the course and again at the end of October, as measured by Eleni.
    • Rationale: By “deliverables”, we mean the “product” eventually produced by the team throughout the term. It is always satisfying to see projects get completed, with well-designed code and through a solid process. I would like to assign 20% to the team as a whole and 25% to each individual.
  4. Weekly Progress. A measure of progress / effort put in every week, as measured by Eleni, with input from each member, as reported in the weekly progress reports and in individual post mortem reflections.
    • Rationale: By “weekly progress”, we mean a measure of the effort put in. Through our communications, I am aware of the work involved in each accomplishment, and when there is no product to show for the effort, I can still appreciate the work involved in learning everything you have to in order to get things done.

Posted in WikiDev | 1 Comment »

MarkUs last meeting (October 23rd 2009) minutes

Posted by Gabriel Roy-Lortie on 2009/10/29

The last MarkUs developers’ meeting minutes are now available here.

Posted in MarkUs | Leave a Comment »