tP: v1.4
Deadline for all v1.4 submissions is Mon, Apr 12th 2359 unless stated otherwise.
Penalty for late submission:
-1 mark for missing the deadline (up to 2 hours of delay).
-2 for an extended delay (up to 24 hours late).
Penalty for delays beyond 24 hours is determined on a case by case basis.
Submit to the LumiNUS folder we have set up, not to your project space.
Follow submission instructions closely. Any non-compliance will be penalized, e.g., wrong file name/format.
Do not update the code during the 14 days after the deadline. Get our permission first if you need to update the code in the repo during that freeze period.
Submissions:
To convert the UG/DG/PPP into PDF format, go to the generated page in your project's github.io site and use this technique to save as a pdf file. Using other techniques can result in poor quality resolution (will be considered a bug) and unnecessarily large files.
Ensure hyperlinks in the pdf files work. Your UG/DG/PPP will be evaluated using PDF files during the PE. Broken/non-working hyperlinks in the PDF files will be considered as bugs and will count against your project score. Again, use the conversion technique given above to ensure links in the PDF files work.
Try the PDF conversion early. If you do it at the last minute, you may not have time to fix any problems in the generated PDF files (such problems are more common than you think).
The icon indicates team submissions. Only one person needs to submit on behalf of the team, but we recommend that others help verify the submission is in order.
We will not accept requests to limit late penalties of team submissions to one person even if the delay was one person's fault. That is, the responsibility (and the penalty) for team submissions is to be shared by the whole team rather than burdening one person with it.
The icon indicates individual submissions. When uploading files to LumiNUS, please upload your individual files yourself. Reason: Penalties related to submission time/format are calculated automatically based on the uploader's identity.
v1.4 or v1.4b
[team ID][product name].jar, e.g., [TIC4002-F18-2][Contacts Plus].jar
Admin tP → Deliverables → Executable
Admin tP Constraints → Constraint-File-Size
The file sizes of the deliverables should not exceed the limits given below.
Reason: It is hard to download big files during the practical exam due to limited WiFi bandwidth at the venue:
Product (i.e., the JAR/ZIP file): 100MB (Some third-party software -- e.g., Stanford NLP library, certain graphics libraries -- can cause you to exceed this limit)
Documents (i.e., PDF files): 15MB/file (Not following the recommended method of converting to PDF format can cause big PDF files. Another cause is using unnecessarily high resolution images for screenshots).
Adding/updating @@author annotations after the deadline will be considered a late submission. Note that the quality of the code attributed to you accounts for a significant component of your final score, graded individually.
Admin tP → Deliverables → Source Code
[TEAM_ID][product Name]UG.pdf, e.g., [TIC4002-F18-2][Contacts Plus]UG.pdf
Admin tP → Deliverables → User Guide
In UG/DG, using hierarchical section numbering and figure numbering is optional (reason: it's not easy to do in Markdown), but make sure it does not inconvenience the reader (e.g., use section/figure title and/or hyperlinks to point to the section/figure being referred to). Examples:
In the section Implementation given above ...
The main content you add should be in the docs/UserGuide.md file (for ease of tracking by grading scripts).
Should cover all current features.
Ensure those descriptions match the product precisely, as the UG will be used by testers (inaccuracies will be considered bugs).
Optionally, it can also cover future features. Mark those as Coming soon.
It is not necessary for the UG to contain every nitty-gritty detail about the product behavior. Some rarely needed information can be omitted from the UG, if the user is expected to know that information already or if the user is kept informed in other ways. For example, if a certain invalid input is unlikely to be used anyway, it is fine to not specify it in the UG, as long as the product is able to give an informative error message when that invalid input is used.
Beware of overusing screenshots. While it is good to have screenshots in the UG, note that they are hard to maintain. For example, if a future version changes the GUI slightly, it will require all your screenshots to be updated. Here are some tips:
Also note the following constraint:
Admin tP Constraints → Constraint-File-Size
The file sizes of the deliverables should not exceed the limits given below.
Reason: It is hard to download big files during the practical exam due to limited WiFi bandwidth at the venue:
Product (i.e., the JAR/ZIP file): 100MB (Some third-party software -- e.g., Stanford NLP library, certain graphics libraries -- can cause you to exceed this limit)
Documents (i.e., PDF files): 15MB/file (Not following the recommended method of converting to PDF format can cause big PDF files. Another cause is using unnecessarily high resolution images for screenshots).
[TEAM_ID][product Name]DG.pdf, e.g., [TIC4002-F18-2][Contacts Plus]DG.pdf
Admin tP → Deliverables → Developer Guide
The main content you add should be in the docs/DeveloperGuide.md file (for ease of tracking by grading scripts). Keep the .puml files in the docs/diagrams folder.
Include an Effort section that evaluators can use to estimate the total project effort.
Get the notation details right in UML diagrams, e.g., 0..1 vs 1, composition vs aggregation.
Simplify diagrams where possible (e.g., you can show the *Command classes using a placeholder XYZCommand). You can use ref frames to break sequence diagrams into multiple diagrams.
These class diagrams seem to have a lot of member details, which can get outdated pretty quickly:
In this negative example, the text size in the diagram is much bigger than the text size used by the document:
It will look more 'polished' if the two text sizes match.
delete command
[TEAM_ID][Your full Name as Given in LumiNUS]PPP.pdf, e.g., [TIC4002-F18-2][Leow Wai Kit, John]PPP.pdf. Use - in place of / if your name has it, e.g., Ravi s/o Veegan → Ravi s-o Veegan (reason: Windows does not allow / in file names).
Admin tP → Deliverables → Project Portfolio Page
At the end of the project each student is required to submit a Project Portfolio Page.
Team-tasks are the tasks that someone in the team has to do.
Examples of team-tasks
Here is a non-exhaustive list of team-tasks:
Keep in mind that evaluators will use the PPP to estimate your project effort. We recommend that you mention things that will earn you a fair score e.g., explain how deep the enhancement is, why it is complete, how hard it was to implement etc.
docs/team/github_username_in_lower_case.md
e.g., docs/team/goodcoder123.md
To convert the UG/DG/PPP into PDF format, go to the generated page in your project's github.io site and use this technique to save as a pdf file. Using other techniques can result in poor quality resolution (will be considered a bug) and unnecessarily large files.
Ensure hyperlinks in the pdf files work. Your UG/DG/PPP will be evaluated using PDF files during the PE. Broken/non-working hyperlinks in the PDF files will be considered as bugs and will count against your project score. Again, use the conversion technique given above to ensure links in the PDF files work.
Try the PDF conversion early. If you do it at the last minute, you may not have time to fix any problems in the generated PDF files (such problems are more common than you think).
Content | Recommended | Hard Limit |
---|---|---|
Overview + Summary of contributions | 0.5-1 | 2 |
[Optional] Contributions to the User Guide | 1 | |
[Optional] Contributions to the Developer Guide | 3 | |
Update the product website content (Ui.png, AboutUs.md, etc.) on GitHub. Ensure the website is auto-published.
Admin tP → Deliverables → Product Website
When setting up your team repo, you would be configuring the GitHub Pages feature to publish your documentation as a website.
Ensure Ui.png matches the current product.
Some common sense tips for a good product screenshot: make sure Ui.png represents your product in its full glory.
Examples
Reason: Distracting annotations.
Reason: Not enough data. Should have used real profile pictures instead of placeholder images.
Reason: Screenshot not cropped cleanly (contains extra background details).
The purpose of the profile photo is for the reader to identify you. Therefore, choose a recent individual photo showing your face clearly (i.e., not too small) -- somewhat similar to a passport photo. Given below are some examples of good and bad profile photos.
If you are uncomfortable posting your photo due to security reasons, you can post a lower resolution image so that it is hard for someone to misuse that image for fraudulent purposes. If you are concerned about privacy, you may use a placeholder image in place of the photo in module-related documents that are publicly visible.
Admin tP → Deliverables → Demo
[TEAM_ID][product Name].mp4, e.g., [TIC4002-F18-2][ContactsPlus].mp4 (other video formats are acceptable, but use a format that works on all major OSes).
Here is an example:
Hi, welcome to the demo of our product FooBar. It is a product to ensure the user takes
frequent standing-breaks while working.
It is for someone who works at a PC, prefers typing, and wants to avoid prolonged periods
of sitting.
The user first sets the parameters such as frequency and targets, and then enters a
command to record the start of the sitting time, ... The app shows the length of the
sitting periods, and alerts the user if ...
...
Mr aaa is not a realistic person name. Use realistic demo data, e.g., at least 20 data items; trying to demo a product using just 1-2 sample data items creates a bad impression.
Admin → tP → PE Overview
The upfront objective of the PE is to increase the rigor of project grading. Assessing most aspects of the project involves an element of subjectivity. As the project counts for a large percentage of the final grade, it is not prudent to rely on evaluations of tutors alone, as there can be significant variations between how different tutors assess projects. That is why we collect more data points via the PE, so as to minimize the chance of your project being affected by evaluator bias.
PE mainly evaluates your testing skills, done as the following two parts:
The above two can lead to a high-rigor, outcome-based evaluation of your testing skills: based on how well you achieve the objectives of testing, as opposed to indirect measures such as the number of test cases. The alternative is to rely solely on other easy-to-measure metrics (e.g., the number of test cases, test coverage, test LoC, etc.), which we don't think is right, given how important the testing aspect is. The ultimate objective of the PE is not even the higher rigor of grading. Because of the PE, you will realize that any bugs are very likely to be detected, which means you will work extra hard to avoid bugs; and THAT is the real benefit.
Problem: There is no way we can carry out the above-mentioned two-part evaluation at a high level of rigor if using tutors as testers, or using an automated testing script. For example, some tutors might not have the motivation to try hard enough to find bugs, and it will be hard to find tutors willing to spend many hours testing products so near to their own exams.
Solution: Get the two parts of the evaluation to feed each other by getting students to test each other's products.
The fact that you are testing products created by your classmates and objecting to bugs reported by your classmates can make this a rather 'unpleasant' experience. You might feel like you are being pitted against each other, or as if you are forced to bring each other down. But as you read above, it is a necessary evil for this evaluation to be even possible. Given the actual goal is to get you to create products with very few bugs, we think switching off the 'collaborative learning' mode for just a few days is a price worth paying to achieve that goal. After all, the PE is an evaluation activity (not a learning activity) and happens after the regular learning period is over.
You are not taking marks from someone else -- at least, don't think of it that way. The point of contention is 'is this really a bug?' which is independent of the people involved. Furthermore, the reward for detecting a bug and the penalty for having a bug in your code are calculated independently.
Still, none of us likes it when others point out problems of our work. Some of us don't even like pointing out problems of others' work. But we just have to learn not to take bug reports personally. Another important lesson is to learn how to report bugs in a way that doesn't feel like you are attacking or trying to sabotage the dev team.
PE also evaluates aspects other than testing, e.g., your product evaluation skills, effort estimation skills, etc. When evaluating those aspects in particular, they are not graded solely based on peer ratings. Rather, PE data are cross-validated with tutors' grades to identify cases that need further investigation. When peer inputs are used for grading, they are usually combined with tutors' grades with appropriate weight for each. In some cases, ratings from team members are given a higher weight compared to ratings from other peers, if that is appropriate.
Grading:
Admin tP Grading → Notes on how marks are calculated for PE
severity.High > severity.Medium > severity.Low > severity.VeryLow
The three bug types (type.FunctionalityBug, type.DocumentationBug, type.FeatureFlaw) are counted for three different grade components. The penalty/credit can vary based on the bug type. Given that you are not told which type has a bigger impact on the grade, always choose the most suitable type for a bug rather than try to choose a type that benefits your grade.
n bugs found in your feature; it is a big feature consisting of a lot of code → 4/5 marks
n bugs found in your feature; it is a small feature with a small amount of code → 1/5 marks
Admin → tP → PE-D/PE Preparation
Ensure that you have accepted the invitation to join the GitHub org used by the module. Go to https://github.com/nus-tic4002-AY2021S2 to accept the invitation.
Ensure you have access to a computer that is able to run module projects e.g. has the right Java version.
Download the latest CATcher and ensure you can run it on your computer. You should have done this when you smoke-tested CATcher earlier in the week.
If not using CATcher
Issues created for PE-D and PE need to be in a precise format for our grading scripts to work. Incorrectly formatted responses will have to be discarded. Therefore, you are not allowed to use the GitHub interface for PE-D and PE activities, unless you have obtained our permission first.
ped
pe
Bug Severity labels:
severity.VeryLow: A flaw that is purely cosmetic and does not affect usage, e.g., a typo/spacing/layout/color/font issue in the docs or the UI. Only cosmetic problems should have this label.
severity.Low: A flaw that is unlikely to affect normal operations of the product. Appears only in very rare situations and causes a minor inconvenience only.
severity.Medium: A flaw that causes occasional inconvenience to some users, but they can continue to use the product.
severity.High: A flaw that affects most users and causes major problems for users, i.e., makes the product almost unusable for most users.
When applying these to documentation bugs, replace user with reader.
Type labels:
type.FunctionalityBug: A functionality does not work as specified/expected.
type.FeatureFlaw: Some functionality missing from a feature delivered in v1.4 in a way that the feature becomes less useful to the intended target user for normal usage, i.e., the feature is not 'complete'. In other words, an acceptance-testing bug that falls within the scope of v1.4 features. These issues are counted against the product design aspect of the project.
type.DocumentationBug: A flaw in the documentation, e.g., a missing step, a wrong instruction, typos.
Have a good screen grab tool with annotation features so that you can quickly take a screenshot of a bug, annotate it, and post it in the issue tracker.
You can use Ctrl+V to paste a picture from the clipboard into a text box in a bug report.
[Optional] Have a good screen recording tool if you plan to use screen recording clips as part of your bug reports. Ensure that your screen recording tool can create small files as CATcher doesn't allow files bigger than 10Mb.
As the CATcher support for uploading screen recordings is new and limited, use it only if strictly necessary -- use screenshots for other cases.
Download the product to be tested.
Testing tips
Use easy-to-remember patterns in test data. For example, if you use 12345678 as a phone number while testing and it appears as 2345678 somewhere else in the UI, you can easily spot that the first digit has gone missing. But if you used a random number instead, detecting that bug won't be as easy. Similarly, if you use Alice Bee, Benny Lee, Charles Pereira as test data (note how the names start with letters A, B, C), it will be easy to detect if one goes missing, or they appear in the incorrect order.
Go wide before you go deep. Do a light testing of all features first. That will give you a better idea of which features are likely to be more buggy. Spending equal time for all features or testing in the order the features appear in the UG is not always the best approach.
Admin tP → Practical Exam
PE Phase 1 will be conducted under exam conditions. We will be following the SoC's E-Exam SOP, combined with the deviations/refinements given below. Any non-compliance will be dealt with similarly to a non-compliance in the final exam.
Bonus marks for high accuracy rates!
You will receive bonus marks if a high percentage (e.g., >70%) of your bugs are accepted as reported (i.e., the eventual type.* and severity.* of the bug match the values you chose initially and the bug is accepted by the team).
Test the product and report bugs as described below. You may report both product bugs and documentation bugs during this period.
Download the product to be tested (the file to use ends with _inner.zip).
Run the java -version command to ensure you are using Java 11. Launch the JAR using the java -jar command rather than double-clicking (reason: to ensure the jar file is run with the same Java version that you verified above). Use double-clicking as a last resort.
The User Guide is available at https://{team-id}.github.io/tp2/UserGuide.html.
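For reference, a minimal command-line sketch (the JAR file name below is only an example; substitute the actual file you downloaded):

```
# Check which Java version is on the PATH; it should report version 11.
java -version

# Launch the app with that same Java runtime (quotes needed because of the spaces/brackets).
java -jar "[TIC4002-F18-2][Contacts Plus].jar"
```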
Admin tP Grading → Functionality Bugs
These are considered functionality bugs:
Behavior differs from the User Guide
A legitimate user behavior is not handled e.g. incorrect commands, extra parameters
Behavior is not specified and differs from normal expectations e.g. error message does not match the error
Admin tP Grading → Feature Flaws
These are considered feature flaws:
The feature does not solve the stated problem of the intended user i.e., the feature is 'incomplete'
Hard-to-test features
Features that don't fit well with the product
Features that are not optimized enough for fast-typists or target users
Admin tP Grading → Possible UG Bugs
These are considered UG bugs (if they hinder the reader):
Use of visuals
Use of examples:
Explanations:
Neatness/correctness:
Admin tP Grading → Functionality Bugs
These are considered functionality bugs:
Behavior differs from the User Guide
A legitimate user behavior is not handled e.g. incorrect commands, extra parameters
Behavior is not specified and differs from normal expectations e.g. error message does not match the error
Missing functionality can be reported as type.FeatureFlaw. The dev team is allowed to reject bug reports framed as mere suggestions and/or lacking a convincing justification as to why the omission of that functionality is problematic.
Admin tP Grading → Feature Flaws
These are considered feature flaws:
The feature does not solve the stated problem of the intended user i.e., the feature is 'incomplete'
Hard-to-test features
Features that don't fit well with the product
Features that are not optimized enough for fast-typists or target users
Admin tP Grading → Possible UG Bugs
These are considered UG bugs (if they hinder the reader):
Use of visuals
Use of examples:
Explanations:
Neatness/correctness:
TIC4002 PE Dry run
TIC4002 PE
Issues created for PE-D and PE need to be in a precise format for our grading scripts to work. Incorrectly formatted responses will have to be discarded. Therefore, you are not allowed to use the GitHub interface for PE-D and PE activities, unless you have obtained our permission first.
ped
pe
Add a severity.* label to the bug report. Bug reports without a severity label are considered severity.Low (lower severity bugs earn lower credit).
Bug Severity labels:
severity.VeryLow: A flaw that is purely cosmetic and does not affect usage, e.g., a typo/spacing/layout/color/font issue in the docs or the UI. Only cosmetic problems should have this label.
severity.Low: A flaw that is unlikely to affect normal operations of the product. Appears only in very rare situations and causes a minor inconvenience only.
severity.Medium: A flaw that causes occasional inconvenience to some users, but they can continue to use the product.
severity.High: A flaw that affects most users and causes major problems for users, i.e., makes the product almost unusable for most users.
When applying these to documentation bugs, replace user with reader.
Add a type.* label to the issue.
Type labels:
type.FunctionalityBug: A functionality does not work as specified/expected.
type.FeatureFlaw: Some functionality missing from a feature delivered in v1.4 in a way that the feature becomes less useful to the intended target user for normal usage, i.e., the feature is not 'complete'. In other words, an acceptance-testing bug that falls within the scope of v1.4 features. These issues are counted against the product design aspect of the project.
type.DocumentationBug: A flaw in the documentation, e.g., a missing step, a wrong instruction, typos.
Admin tP Grading → Possible UG Bugs
These are considered UG bugs (if they hinder the reader):
Use of visuals
Use of examples:
Explanations:
Neatness/correctness:
Admin tP Grading → Possible DG Bugs
These are considered DG bugs (if they hinder the reader):
Those given as possible UG bugs ...
These are considered UG bugs (if they hinder the reader):
Use of visuals
Use of examples:
Explanations:
Neatness/correctness:
Architecture:
UML diagrams:
Code snippets:
Problems in User Stories. Examples:
Problems in Use Cases. Examples:
Problems in NFRs. Examples:
Problems in Glossary. Examples:
Evaluate based on the User Guide and the actual product behavior.
Criterion | Unable to judge | Low | Medium | High |
---|---|---|---|---|
target user | Not specified | | | Clearly specified and narrowed down appropriately |
value proposition | Not specified | The value to target user is low. App is not worth using | Some small group of target users might find the app worth using | Most of the target users are likely to find the app worth using |
optimized for target user | | Not enough focus for CLI users | Mostly CLI-based, but cumbersome to use most of the time | Feels like a fast typist can be more productive with the app, compared to an equivalent GUI app without a CLI |
feature-fit | | Many of the features don't fit with others | Most features fit together but a few may be possible misfits | All features fit together to form a cohesive whole |
Evaluate based on fit-for-purpose, from the perspective of a target user.
For reference, the AB3 UG is here.
Evaluate based on fit-for-purpose from the perspective of a new team member trying to understand the product's internal design by reading the DG.
For reference, the AB3 DG is here.
Estimate the team's project effort on a scale of [0..20], e.g., if you give 8, that means the team's effort is about 80% of that spent on creating AB3. We expect most typical teams to score near to 10. Take into account the DG's Effort section, if any.
Deadline: Mon, Apr 19th 2359
This phase is for you to respond to the bug reports you received.
Bonus marks for high accuracy rates!
You will receive bonus marks if a high percentage (e.g., >80%) of bugs are accepted as triaged (i.e., the eventual type.*, severity.*, and response.* of the bug match the ones you chose).
Duration: The review period will start around 1 day after the PE and will last for 2-3 days (exact times will be announced later). However, we recommend finishing this task ASAP, to minimize cutting into your exam preparation work.
We recommend doing the bug review as a team, as some of the decisions need team consensus.
Instructions for Reviewing Bug Reports
The penalty for a minor bug (e.g., -0.15; an indicative value only, as the actual value depends on the severity, type, and the number of assignees) is unlikely to make a difference in your final grade, especially given that the penalty applies only if you have more than a certain number of bugs.
For example, in a typical case a developer might be assigned 5+ severity.VeryLow bugs before the penalty even starts affecting their marks.
Accordingly, we hope you'll accept bug reports graciously (rather than fight tooth-and-nail to reject every bug report received) if you think the bug is within the ballpark of 'reasonable'. Those minor bugs are really not worth stressing/fighting over.
Use the Sync button at the top to force-sync your view with the latest data from GitHub.
Choose the TIC4002 PE session. It will show all the bugs assigned to your team, divided into three sections:
Issues Pending Responses - Issues that your team has not processed yet.
Issues Responded - Your job is to get all issues to this category.
Faulty Issues - e.g., bugs marked as duplicates of each other, or causing circular duplicate relationships. Fix the problem given so that no issues remain in this category.
You must use CATcher. You are strictly prohibited from editing PE bug reports using the GitHub Web interface as it can render bug reports unprocessable by CATcher, sometimes in irreversible ways, and can affect the entire class. Please contact the prof if you are unable to use CATcher for some reason.
If the bug is a duplicate of another bug report, tick the A Duplicate of tick box. A duplicate takes its type.* and severity.* from the original (and the original should be marked response.Accepted).
Response Labels:
response.Accepted: You accept it as a bug.
response.NotInScope: It is a valid issue but not something the team should be penalized for, e.g., it was not related to features delivered in v1.4.
response.Rejected: What the tester treated as a bug is in fact the expected and correct behavior (from the user's point of view), or the tester was mistaken in some other way.
response.CannotReproduce: You are unable to reproduce the behavior reported in the bug after multiple tries.
response.IssueUnclear: The issue description is not clear. Don't post comments asking the tester to give more info; the tester will not be able to see those comments because the bug reports are anonymous.
Only the response.Accepted bugs are counted against the dev team. While response.NotInScope bugs are not counted against the dev team, they can earn a small amount of consolation marks for the tester. The other three do not affect the marks of either the dev team or the tester, except when calculating bonus marks for accuracy.
Type labels:
type.FunctionalityBug: A functionality does not work as specified/expected.
type.FeatureFlaw: Some functionality missing from a feature delivered in v1.4 in a way that the feature becomes less useful to the intended target user for normal usage, i.e., the feature is not 'complete'. In other words, an acceptance-testing bug that falls within the scope of v1.4 features. These issues are counted against the product design aspect of the project.
type.DocumentationBug: A flaw in the documentation, e.g., a missing step, a wrong instruction, typos.
Bug Severity labels:
severity.VeryLow: A flaw that is purely cosmetic and does not affect usage, e.g., a typo/spacing/layout/color/font issue in the docs or the UI. Only cosmetic problems should have this label.
severity.Low: A flaw that is unlikely to affect normal operations of the product. Appears only in very rare situations and causes a minor inconvenience only.
severity.Medium: A flaw that causes occasional inconvenience to some users, but they can continue to use the product.
severity.High: A flaw that affects most users and causes major problems for users, i.e., makes the product almost unusable for most users.
When applying these to documentation bugs, replace user with reader.
severity.VeryLow) of type.FunctionalityBug. Some exceptions are below:
Low or Medium.
type.FunctionalityBug or type.FeatureFlaw. But if it is the UG that needs to be updated, it is a type.DocumentationBug.
type.FeatureFlaw, and cannot be categorized as response.NotInScope.
John Doe and john doe are likely to be the same person. Similarly, extra white space (e.g., the user typed an extra space between the two names) is unlikely to mean they are two different persons. Typically, it is best if you can give a warning in such near-match cases so that the user can make the final decision.
type.FeatureFlaw bugs. However, detecting more complex cases of potential duplicates can be considered as NotInScope, especially if they are hard to implement and expected to be rare.
1234 5678 (HP) 1111-3333 (Office) -- blocking that input might not add any value but allowing it does.
type.FeatureFlaw bug.
type.FeatureFlaw bug too.
type.FeatureFlaw unless making it more specific will take a lot more effort, in which case there is a chance to argue it to be response.NotInScope.
type.FeatureFlaw as it is expected that the input formats will be optimized to get things done fast. Some examples: using very long keywords when shorter ones will do, making keywords case-sensitive when there is no need for it, or using hard-to-type special characters in the format when it is possible to avoid them.
Low or Medium depending on how much inconvenience they cause to the reader.
severity.VeryLow type.DocumentationBug bugs (even if it is in the actual UI), which carry a very tiny penalty.
severity.Low DocumentationBug if the NFR was unreasonable in the first place. Otherwise, it can be a type.FeatureFlaw bug.
severity.VeryLow).
Use the Assignees field to assign the issue to that person(s). There is no need to actually fix the bug though; it's simply an indication/acceptance of responsibility. If there is no assignee, we will distribute the penalty for that bug (if any) equally among all team members, e.g., if the penalty is -0.4 and there are 4 members, each member will be penalized -0.1.
As far as possible, choose the correct type.*, severity.*, response.*, assignees, and duplicate status even for bugs you are not accepting. Reason: your non-acceptance may be rejected in a later phase, in which case we need to grade it as an accepted bug.
If a bug's 'duplicate' status was rejected later (i.e., the tester says it is not really a duplicate and the teaching team agrees with the tester), it will inherit the type/severity/assignees from the 'original' bug that it was claimed to be a duplicate of.
Justify your response. For all of the following cases, you must add a comment justifying your stance. Testers will get to respond to all those cases, which will also be double-checked by the teaching team in later phases.
Admin tP Grading → Grading bugs found in the PE
severity.High > severity.Medium > severity.Low > severity.VeryLow
The three bug types (type.FunctionalityBug, type.DocumentationBug, type.FeatureFlaw) are counted for three different grade components. The penalty/credit can vary based on the bug type. Given that you are not told which type has a bigger impact on the grade, always choose the most suitable type for a bug rather than try to choose a type that benefits your grade.
n bugs found in your feature; it is a big feature consisting of a lot of code → 4/5 marks
n bugs found in your feature; it is a small feature with a small amount of code → 1/5 marks
Start: Within 1 day after Phase 2 ends.
While you are waiting for Phase 3 to start, comments will be added to the bug reports in your /pe
repo, to indicate the response each received from the receiving team. Please do not edit any of those comments or reply to them via the GitHub interface. Doing so can invalidate them, in which case the grading script will assume that you agree with the dev team's response. Instead, wait till the start of the Phase 3 is announced, after which you should use CATcher to respond.
Deadline: Thu, Apr 22nd 2359
e.g., you reported the bug as severity.High and the team changed it to severity.Low, but now you think it should be severity.Medium.
Admin PE → Phase 2 → Additional Guidelines for Bug Triaging
Admin tP Grading → Grading bugs found in the PE
Use CATcher (TIC4002 PE session). For each issue in the Issues Pending Responses section: if you disagree with the team's response, tick the I disagree tick box, enter your justification for the disagreement, and click Save. If you agree, click Save without any other changes, upon which the issue will move to the Issues Responded section.
You must use CATcher. You are strictly prohibited from editing PE bug reports using the GitHub Web interface as it can render bug reports unprocessable by CATcher, sometimes in irreversible ways, and can affect the entire class. Please contact the prof if you are unable to use CATcher for some reason.