COMP285 (Computer Aided Software Development)
Refactoring is where we improve our code quality while maintaining the same functionality. This yields better:
- Readability
- Testing
- OO Structure
- Reusability
- Efficiency
- Extensibility
Code Smell
These are issues with the code that aren’t bugs. They are bad coding practices:
- Duplicated Code
- We should use generalised classes that we can extend if we need similar, but different, functionality.
- Long Methods
- We should split long methods into functions to aid code reuse.
- Long Argument Lists
- Collections of data should be wrapped in a type.
- Data that requires validation should have its own type so that invalid data can never be passed to a function.
- Large Classes
- Make sure we are using super-classes properly.
- Generalise functions (pull-up), or specialise (push-down).
- When generalising methods see if making an interface would be appropriate.
- Excessively Long Identifiers
- Excessively Short Identifiers
- Inappropriate Intimacy
- Where one class writes data directly into another class's attributes.
- We should avoid having any `public` attributes.
To test numeric functions we should use cases like so:
\[\vert F_{\text{calc}}(x) - F_{\text{true}}(x) \vert < \text{tolerance}\]
We can calculate $F_{\text{true}}(x)$ using an arbitrary precision maths package. This is because there are inherent losses with floating-point numbers, so our calculated value may be slightly off (within reason).
Tolerance
We can calculate the tolerance with the following function:
\[\text{Tolerance} \leq \text{Cn}(x) \times \text{ULP}\]
Where:
- $\text{ULP}$ - Unit of least precision, which is dependent on the data type used.
- $\text{Cn}(x)$ - Function of the condition number where the greater the condition number, the greater the effect of loss in precision.
Types of Function Test
Golden Value
These are particular values which are proven mathematically such as:
- $\sin(\pi) = 0$
- $\log(1) = 0$
We can test these using the tolerance calculation above.
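A minimal sketch of such a test, assuming JUnit 4 and an arbitrarily chosen tolerance (ideally it would be derived from the condition number and ULP as above):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GoldenValueTest {
    // Tolerance chosen for illustration; ideally derive it from Cn(x) * ULP.
    private static final double TOLERANCE = 1e-12;

    @Test
    public void sinOfPiIsZero() {
        // assertEquals(expected, actual, delta) passes when
        // |expected - actual| <= delta, i.e. |Fcalc(x) - Ftrue(x)| < tolerance.
        assertEquals(0.0, Math.sin(Math.PI), TOLERANCE);
    }

    @Test
    public void logOfOneIsZero() {
        assertEquals(0.0, Math.log(1.0), TOLERANCE);
    }
}
```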
Identities
This is a case where we use multiple functions in conjunction to produce an output:
\[\sin(x)^2+\cos(x)^2=1\]
Using identities may hide compensated error (where the functions are equally wrong in the opposite direction).
Inverse Function Tests
These are similar to identities:
\[x = (\sqrt x)^2\]
Unlike identities, they test for accuracy: in the previous example, $\sin(x)=1$ and $\cos(x)=0$ would still pass the identity test even though both values are wrong.
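A minimal sketch of identity and inverse-function tests, assuming JUnit 4; the test point and tolerance are arbitrary:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class IdentityAndInverseTest {
    private static final double TOLERANCE = 1e-12;

    @Test
    public void sinSquaredPlusCosSquaredIsOne() {
        double x = 0.7; // arbitrary test point
        double s = Math.sin(x);
        double c = Math.cos(x);
        // Note: this would still pass if sin and cos were wrong in
        // compensating directions (e.g. sin(x)=1 and cos(x)=0).
        assertEquals(1.0, s * s + c * c, TOLERANCE);
    }

    @Test
    public void squareOfSquareRootReturnsInput() {
        double x = 2.0;
        double root = Math.sqrt(x);
        assertEquals(x, root * root, TOLERANCE);
    }
}
```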
Binary Number Representations
8-bit Float
Consider that we are storing a decimal number in an 8-bit float:
| sign | exponent | significand |
|------|----------|-------------|
| 0    | 0 0 0 0  | 0 0 0       |
We can calculate the error when storing 0.1: this value is stored perfectly.
Or calculate the error when storing 0.25: we cannot store 25 in the 3-bit significand so the closest digit is 3. This gives an error of 0.05.
Fixed Point Binary
We can also use a fixed point representation like so:
| $2^2$ | $2^1$ | $2^0$ | $2^{-1}$ | $2^{-2}$ | $2^{-3}$ | $2^{-4}$ | $2^{-5}$ |
|-------|-------|-------|----------|----------|----------|----------|----------|
| 1     | 0     | 1     | 0        | 0        | 1        | 1        | 0        |
This can also be written as:
\[1101\ .\ 0011_2\]
We can calculate the error of storing 0.1 in 8 bits:
\[0\ .\ 0001\ 1010_2 = 0.1015625\]
We have to round to fit this in 8 bits.
\[\vert0.1 - 0.1015625\vert = 1.5625{\times}10^{-3}\]
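To see this loss in practice, here is a sketch using Java's BigDecimal to print the exact value that a double actually stores for 0.1:

```java
import java.math.BigDecimal;

public class FloatError {
    public static void main(String[] args) {
        // new BigDecimal(double) exposes the exact binary value that was stored.
        BigDecimal storedDouble = new BigDecimal(0.1);
        System.out.println("0.1 as a double is actually: " + storedDouble);

        // The absolute error between the decimal we wanted and what was stored.
        BigDecimal error = storedDouble.subtract(new BigDecimal("0.1")).abs();
        System.out.println("Error: " + error);
    }
}
```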
Rounding in Binary
- If the next digit is 0 then round down (store as truncated).
- If the next digit is 1 then round up.
Version control tools allow us to keep track of changes to code and roll back any part, of any file, to any point in history:
- SVN
- Git
- Perforce (Commercial)
Documentation
This module will look at SVN; the documentation can be found here:
For documentation on git see here:
Orthogonal Testing
This is where each range of data (not every combination of data) is tested. We are trying to test only a single attribute at a time.
Modes of Error
The mode of an error is how many parts of a function have errors in them. Consider that we have the following function that counts the days until a certain date:
countDaysTill(int day, int month, int year)
We could have bugs appear when we check different variables of the input:
- Bug in `year` - Single Mode Fault
- Bug in `day` and `year` - Double Mode Fault
Exhaustive testing of some single mode faults is sometimes possible.
Visualising Orthogonal Testing
When there is a large amount of data to test we can visualise this as a 3D graph (for 3 variables). We can then use this to see where the errors occur on the input.
Statistical Testing
This is where we give a random input to the program and see if the results are correct.
This can be a good method when there are so many inputs that you can’t do other methods of testing.
Statistical Testing Example
Consider that a bug affects 0.01% of positions, evenly distributed, and we test 10,000 positions randomly. What is the chance that we miss the bug?
- The chance of each individual test missing the bug is:
\[1-(0.0001)=0.9999\]
- Therefore the chance of not finding the bug at all is:
\[0.9999^{10000}\approx0.37=37\%\]
We can also work this backwards, with logs, to find the defect density.
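A sketch of this calculation in Java (the observed miss probability used for the reverse calculation is illustrative):

```java
public class StatisticalTesting {
    public static void main(String[] args) {
        double defectDensity = 0.0001;   // bug affects 0.01% of positions
        int tests = 10_000;

        // Probability that every random test misses the bug.
        double missProbability = Math.pow(1.0 - defectDensity, tests);
        System.out.printf("Chance of missing the bug: %.1f%%%n", missProbability * 100);

        // Working backwards with logs: given an observed miss probability,
        // estimate the defect density that would produce it.
        double observedMiss = 0.37;
        double estimatedDensity = 1.0 - Math.exp(Math.log(observedMiss) / tests);
        System.out.printf("Estimated defect density: %.6f%n", estimatedDensity);
    }
}
```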
Testing with Identities
To ensure that we don’t have buggy tests we can test with identities like so:
\[\sqrt x \times \sqrt x = x\]
We can do this to ensure that our tests are as simple as possible.
Coding by Contract
This is the practice of constraining the input of a method:
- You can do this by throwing an exception if the input is outside the range.
- You can use this to reduce the amount of tests that you do.
We may want to use validation testing to ensure that exceptions are produced when they are supposed to.
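A minimal sketch, assuming JUnit 4.13's assertThrows and a hypothetical daysInMonth() method, of a contract check plus the validation test for it:

```java
import static org.junit.Assert.assertThrows;
import org.junit.Test;

public class ContractTest {
    // Hypothetical method that constrains its input range by contract.
    static int daysInMonth(int month) {
        if (month < 1 || month > 12) {
            throw new IllegalArgumentException("month must be 1-12, got " + month);
        }
        int[] days = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
        return days[month - 1];
    }

    @Test
    public void rejectsOutOfRangeMonth() {
        // Validation test: the exception must be produced when it is supposed to be.
        assertThrows(IllegalArgumentException.class, () -> daysInMonth(13));
    }
}
```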
Generating Useful Bug Reports
We should always include the following information so that our bug reports are useful:
- Date, Product Name, Platform
- Description
- Logs
- Version of Software
- How to recreate the bug.
- Testing shows the presence of errors; not their absence.
- Exhaustive testing is not practical in most cases.
- Test early and regularly to avoid bug masking and multiple defect relations.
- Errors are not evenly distributed:
- 20% of modules generally contain 80% of errors.
Pesticide Paradox
Unless tests change they often become invalid - as the functionality changes so should the tests:
- If we continue to use old tests, only bugs associated with the old specification are removed.
Context Dependency
We should focus our testing on the most critical type of test, for example:
- Medical System - Safety Testing
- Website - Performance and Load Testing
Types of Testing
We can include the following types of tests:
- Verification (Against the Specification)
- Validation (Right for the Customer)
- Performance
- Security
- Usability
- Regulatory Testing
- Statistical Testing
Generating Reports
Ant has three types of formatter for the `<junit>` task: `brief`, `plain` and `xml`.
If we set `usefile="true"` then each test case will generate its own file. We can also set `printsummary="false"` if we don't want this output on the command line.
You can also use multiple formatters at the same time:
<target name="test-xml" depends="test-compile">
<junit haltonfailure="true" printsummary="false">
<classpath refid="test.classpath"/>
<formatter type="brief" usefile="false"/>
<formatter type="xml"/>
<test todir="${test.data.dir}" name="org.eclipseguide.persistence.FilePersistenceServicesTest"/>
</junit>
</target>
Generating HTML Test Reports
From the XML report we can generate HTML reports using XSLT. We can do this using the <junitreport>
task:
<junitreport todir="${test.data.dir}">
<fileset dir="${test.data.dir}">
<include name="TEST-*.xml"/>
</fileset>
<report format="frames" todir="${test.reports.dir}"/>
</junitreport>
This should be run after the <junit>
task.
This aggregates all `TEST-*.xml` files into an HTML report that is human-readable.
Generating All Reports on Failure
By using haltonfailure="yes"
the tests will stop and not print reports if there is a test failure. We can use the following methods to generate our reports anyway:
<target name="test" depends="test-compile">
<junit printsummary="no" haltonfailure="no" errorProperty="test.failed" failureProperty="test.failed">
<classpath refid="test.classpath"/>
<formatter type="brief" usefile="false"/>
<formatter type="xml"/>
<batchtest todir="${test.data.dir}">
<fileset dir="${build.test.dir}" includes="**/*Test.class"/>
</batchtest>
</junit>
<junitreport todir="${test.data.dir}">
<fileset dir="${test.data.dir}">
<include name="TEST-*.xml"/>
</fileset>
<report format="frames" todir="${test.reports.dir}"/>
</junitreport>
<fail message="Tests failed. Check log and/or reports." if="test.failed"/>
</target>
This uses the failureProperty
and errorProperty
to change the default behaviours.
<batchtest>
We can run a set of test files instead of individually declaring them with <test>
:
<target name="test-batch" depends="test-compile">
<junit printsummary= "no" haltonfailure="no">
<classpath refid="test.classpath"/>
<formatter type="brief" usefile="false"/>
<formatter type="xml"/>
<batchtest todir="${test.data.dir}">
<fileset dir="${build.test.dir}" includes="**/*Test.class"/>
</batchtest>
</junit>
</target>
By naming all our test classes with the ending Test
, we can find them easily and put them in a fileset
.
Testing with setUp() & tearDown()
We use setUp() and tearDown() to ensure that the tests don't interfere with each other. We can implement this, in our testing class, like so:
package org.example.antbook.junit;
//import JUnit4 classes:
import static org.junit.Assert.*;
import org.junit.Test; //
import org.junit.Before; //import org.junit.* also would work
import org.junit.After; //
public class setUpTearDownTest{
@Before //Runs before each @Test method
public void setUp(){ System.out.println("setUp sets up a fixture"); }
@After //Runs after each @Test method
public void tearDown(){ System.out.println("tearDown releases fixture"); }
@Test
public void testA(){
System.out.println("testA runs");
assertTrue("MULTIPLICATION FAILED!!!", 4 == (2 * 2));
}
@Test //Each method annotated by @Test runs
public void testB(){
System.out.println("testB runs");
assertSame("ADDITION FAILED!!!", 4, 2 + 2);
}
@Test
public void SomeTestC(){
System.out.println("SomeTestC runs");
assertSame("ADDITION FAILED!!!", 5, 2 + 2);
}
}
Tests won’t run if there is not an @Test
annotation before them.
Running Tests
We can run tests from the command line like so:
java -cp build/test \
org.junit.runner.JUnitCore \
org.example.antbook.junit.setUpTearDownTest
You will need to have your code compiled before this.
Test Suites
We can run many JUnit test cases by grouping them into a suite:
package org.example.antbook;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;
@RunWith(value=Suite.class)
@SuiteClasses(value=
{
org.example.antbook.junit.SimpleTest.class, org.example.antbook.junit.setUpTearDownTest.class, org.eclipseguide.persistence.FilePersistenceServicesTest.class
}
)
public class AllTests{}
Ant has its own method of grouping tests so we shouldn’t use this when using Ant.
<junit>
in Ant
Ant has its own task for running a selection of tests for a set of files:
<target name="test-brief" depends="test-compile">
<junit>
<classpath refid="test.classpath"/>
<test name="org.eclipseguide.persistence.FilePersistenceServicesTest"/>
<test name="org.example.antbook.junit.SimpleTest"/>
</junit>
</target>
We can provide more information, and control whether the build stops when a test fails, using the following build file:
<target name="test-brief" depends="test-compile">
<junit haltonfailure="false" printsummary="true">
<classpath refid="test.classpath"/>
<test name="org.eclipseguide.persistence.FilePersistenceServicesTest"/>
<test name="org.example.antbook.junit.SimpleTest"/>
</junit>
</target>
You may not want to use haltonfailure
if you want to see all the tests that fail.
Directory Structure
| Folder | Description |
|--------|-------------|
| ch04 | Base directory `basedir="."` |
| ch04\src | Source directory `${src.dir}` |
| ch04\test | Test directory `${src.test.dir}` containing deeper JUnit test classes. |
| ch04\build | Build directory `${build.dir}` |
| ch04\build\classes | For compiled source files `${build.classes.dir}` |
| ch04\build\test | For compiled JUnit test cases `${build.test.dir}` |
| ch04\build\data | For test reports in XML format. |
| ch04\build\report | For test reports in HTML format (new directories data and report to be considered later). |
JUnit Build File Structure
When adding testing to an Ant build we can use the following structure:
- `test-init` target - Initialise the testing directory structure with `<mkdir>` (the last three folders are automatically generated).
- `test-compile` target - Compile the test code using `<javac>`.
- `test` target - Execute the tests with `<junit>` or `<java>`.
- `test-reports` target - Use `<junitreport>` and `<report>` to generate test reports.
Creating Filesets
Filesets are a group of files represented like so:
<fileset dir="src"
includes="**/*.java"
id="source.fileset"/>
- `dir` - Mandatory attribute to denote a root directory for the fileset.
- `includes` - Which files from this directory to include.
- `id` - A reference which can be used to refer to the fileset.
We can also use the following syntax:
<fileset dir="lib">
<include name="*.jar"/>
</fileset>
`exclude` can also be used to remove files, as shown in the sketch below. By default all the files in the `dir` are included.
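For instance (an illustrative fileset; the directory and exclude pattern are arbitrary):

```xml
<!-- Include all Java sources but leave out the generated ones. -->
<fileset dir="src" includes="**/*.java">
  <exclude name="**/generated/**"/>
</fileset>
```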
Some patterns are excluded by default; you can view them at https://ant.apache.org/manual/dirtasks.html#defaultexcludes and disable this behaviour by using:
<fileset dir="..." defaultexcludes="no"/>
Using Filesets
We can use filesets like so:
<copy todir="backup">
<fileset refid="source.fileset"/>
</copy>
javac
Task
We can refer to the javac
task manual to see how we can use compiler flags: https://ant.apache.org/manual/Tasks/javac.html
We can combine this with our knowledge of Ant properties to make the following build task:
<path id="compile.classpath">
<pathelement location="${lucine.jar}"/>
<pathelement location="${jtidy.jar}"/>
</path>
<javac destdir="${build.classes.dir}"
debug="${build.debug}"
srcdir="$(src.dir)"
includeAntRuntime="no"
>
<include name="**.*.java"/>
<classpath refid="compile.classpath"/>
</javac>
This can make large build files with lots of repetition easy to change.
path
We can use path
in place of location using the following syntax:
<classpath path="bulid/classes:lib/some.jar"/>
<classpath>
<pathelement path="bulid/classes:lib/some.jar"/>
</classpath>
Both `;` and `:` are allowed as separators.
Datatypes
The following datatypes are available in Ant:
- Paths - An ordered list of files and directories.
- Classpath is a variant of this.
- Filesets - A collection of files rooted from a specified directory.
- Patternsets - A collection of file matching patterns.
- Filtersets
Properties
This is a way of defining variables that you can use in your build:
- They are defined as key-value pairs.
- They are immutable.
Built-in Properties
You can see a list of built-in system properties at the following link: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/System.html#getProperties().
Storing Locations in Properties
To allow for processing of file-paths you should use the location
attribute instead of value
:
<property name="dir" location="somedir"/>
This will allow prepending of the basedir
and formatting the path for the OS.
Setting Properties
Setting Properties via Task
You can set, and use, properties by using the <property>
task like so:
<property name="build.debug" value="on"/>
<javac srcdir="src" debug="${build.debug}"/>
Setting Properties via Command-Line
You can set properties on the command line using the -D<property>=<value>
syntax:
$ ant -Dbuild.debug=off -f build.xml
You can also include a property file:
$ ant -propertyfile build.properties
-D
flags take precedence over the property file.
Setting Properties from File
You can create a file, such as build.properties
to hold your values in an ini
style:
build.debug=off
build.dir=build
output.dir=${build.dir}/output
This properties file can then be loaded using the following task:
<property file="build.properties"/>
Setting Properties from Environment Variables
We can store all the environment variables in a property called env
like so:
<property environment="env"/>
We can then access them using their name:
<echo message="The PATH is: ${env.PATH}">
Global Properties
There is no concept of scope for properties but if you want them to be available for multiple targets, you should assign them under <project>
:
<project name="test">
<property name="..." value="..."/>
<target>
...
</target>
</project>
This ensures they are set before any target runs.
<available>
We can set a property when a resource is available by using this task:
<property name="project.jar" value="./dist/project.jar"/>
<available file="${project.jar}"
type="file"
property="project.jar.present"
/>
<echo message="file ${project.jar} is present=${project.jar.present}"/>
This method can work on:
- Classes in a Classpath
- Files & Directories
- Resource Files (`.jar` files)
The man page is here: https://ant.apache.org/manual/Tasks/available.html.
Conditional Execution
We can use if
and unless
to execute tasks depending on a property:
<target name="build-module-A" if="module-A-present"/>
<target name="build-module-B" unless="module-A-present"/>
- Unset properties are falsy.
- Everything else is truthy.
References
We can save several attribute definitions using a reference:
<path id="compile.classpath">
<pathelement location="${lucene.jar}"/>
<pathelement location="${tidy.jar}"/>
</path>
We can then use these path elements elsewhere by using `refid`:
<path id="test.classpath">
<path refid="compile.classpath"/>
<pathelement location="${junit.jar}"/>
<pathelement location="${build.dir}/classes"/>
<pathelement location="${build.dir}/test"/>
</path>
We can also use datatypes other than path
.
If you are going to reuse this datatype then you can name it using a property:
<project name="ref-classpath" basedir="dist">
<path id="test.classpath">
<pathelement location="build/classes"/>
<pathelement location="src"/>
</path>
<property name="path" refid="test.classpath"/>
<echo>path is ${path}</echo>
</project>
This makes a standalone datatype.
This will produce an output like so:
$ ant
Buildfile: /home/ben/ref-classpath/build.xml
[echo] path is /home/ben/ref-classpath/dist/build/classes:/home/ben/ref-classpath/dist/src
BUILD SUCCESSFUL
Total time: 0 seconds
Notice how this takes into account the basedir
.
Boundary Testing
This is where we test either side of a boundary and on the value of the boundary itself, as in the sketch below.
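A minimal sketch of boundary tests, assuming JUnit 4 and a hypothetical isValidMark() method with a valid range of 0-100:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class BoundaryTest {
    // Hypothetical method under test: valid marks are 0-100 inclusive.
    static boolean isValidMark(int mark) {
        return mark >= 0 && mark <= 100;
    }

    @Test
    public void testLowerBoundary() {
        assertFalse(isValidMark(-1)); // just below the boundary
        assertTrue(isValidMark(0));   // on the boundary
    }

    @Test
    public void testUpperBoundary() {
        assertTrue(isValidMark(100));  // on the boundary
        assertFalse(isValidMark(101)); // just above the boundary
    }
}
```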
Orthogonal Testing
Tests should only test a single possible bug.
For example we could have two tests:
- One if the input is too long.
- One if the input is too short.
We should never have a case where both tests fail at the same time.
These two cases are called partitions.
Exhaustive Testing
If it is possible then we should test all possible inputs (7 days of the week).
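For example, a sketch that exhaustively tests a hypothetical isWeekend() helper over all seven days (JUnit 4 and java.time assumed):

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import java.time.DayOfWeek;
import org.junit.Test;

public class ExhaustiveDayTest {
    // Hypothetical method under test.
    static boolean isWeekend(DayOfWeek day) {
        return day == DayOfWeek.SATURDAY || day == DayOfWeek.SUNDAY;
    }

    @Test
    public void testAllSevenDays() {
        // Only 7 possible inputs, so every one is tested explicitly.
        assertFalse(isWeekend(DayOfWeek.MONDAY));
        assertFalse(isWeekend(DayOfWeek.TUESDAY));
        assertFalse(isWeekend(DayOfWeek.WEDNESDAY));
        assertFalse(isWeekend(DayOfWeek.THURSDAY));
        assertFalse(isWeekend(DayOfWeek.FRIDAY));
        assertTrue(isWeekend(DayOfWeek.SATURDAY));
        assertTrue(isWeekend(DayOfWeek.SUNDAY));
    }
}
```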
Standard Directory Names
| Directory Name | Description |
|----------------|-------------|
| src | Source Files |
| build/classes or bin | Intermediate Output |
| dist | Distributable Files |
All files apart from src
are generated and can be deleted.
Packages
This is a method of grouping files together with their scope. The package is named using a reversed fully-qualified domain name, such as `uk.bweston.utils`. This creates a matching directory structure, for example `src/uk/bweston/utils/`. To let Java know about the package we declare it in our source files:
// src/uk/bweston/utils/Main.java
package uk.bweston.utils;
public class Main {
public static void main(String args[]) {
// do stuff
}
}
The package declaration says where to put the file in the build/classes
folder.
We can compile packages like so:
javac -d ./build/classes src/uk/bweston/utils/Main.java
This saves the generated files in ./build/classes
.
Ant & Structured Build
Ant will use the package declaration to complete dependency checking. This saves time by not recompiling unchanged files.
This won’t check for changed parent or imported classes.
We can tell Ant that source files depend on each other by using the `<depend>` task, as sketched below.
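A minimal sketch of how `<depend>` might be wired into a compile target (the depcache location is an assumption; directory names follow the structured build below):

```xml
<target name="compile" depends="init">
    <!-- Remove out-of-date .class files whose dependencies have changed. -->
    <depend srcdir="src"
            destdir="build/classes"
            cache="build/depcache"
            closure="yes"/>
    <javac srcdir="src"
           destdir="build/classes"
           includeAntRuntime="no"/>
</target>
```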
Structured build.xml
We can put this all together in the following build.xml
:
<?xml version="1.0" ?>
<project name="structured" default="archive">
<target name="init">
<mkdir dir="build/classes"/>
<mkdir dir="dist"/>
</target>
<target name="compile" depends="init">
<javac srcdir="src"
destdir="build/classes"
includeAntRuntime="no"/>
</target>
<target name="archive" depends="compile">
<jar destfile="dist/project.jar"
basedir="build/classes"/>
</target>
<target name="clean" depends="init">
<delete dir="build"/>
<delete dir="dist"/>
</target>
</project>
`default` sets the default target.
`clean` depends on `init` as files must exist in order to delete them.
To run targets other than the default we can name the target we want, e.g. `ant clean`. We can also name multiple targets in a space-separated list.
You can have multiple dependencies on a target:
<target name="all" depends="archive,clean"/>
This will execute the dependencies in order.
Executing Using build.xml
We can write a target like so to run the compiled program:
<target name="execute" depends="compile">
<java
classname="uk.bweston.util.Main"
classpath="build/classes">
<arg value="a"/>
<arg value="b"/>
<arg file="."/>
</java>
</target>
This is the same as running:
java -cp build/classes uk.bweston.utils.Main a b .
To compile our source code with Ant we need the source file:
public class Main {
public static void main(String args[]) {
for(int i=0;i<args.length;i++) {
System.out.println(args[i]);
}
}
}
This should be placed in a folder called src
to match the srcdir
variable in the build file.
and an Ant build.xml
:
<?xml version="1.0"?>
<project name="firstbuild" default="compile" >
<target name="compile">
<javac srcdir="src" includeAntRuntime="no"/>
<echo>compilation complete!</echo>
</target>
</project>
This places the .class
files in the src
directory.
Ant will compile all the files in the src
directory and sub-directories.
Ant XML
Ant expects a build file with the following hierarchy:
<?xml version="1.0"?>
<project>
<target>
<task/>
</target>
</project>
We can use the syntax of <tagname/>
to make the tag self closing.
Tasks
We can reference the following link for the tasks available in Ant:
http://ant.apache.org/manual/index.html
Ant Terminology
In Ant, each build file contains one project. A large project may include:
- Smaller sub-projects.
- A master build file that can coordinate the builds of sub-projects.
Each Ant project contains multiple targets that represent stages in the build process. These could be:
- Compiling
- Testing
- Deploying to a Server
Targets can have dependencies so that you can control the order of operations.
Each target is composed of a number of tasks that complete the actual work.
Ant Example
The dependency tree of an Ant project may look something like the following:
flowchart TD
subgraph init
mkdir1[mkdir] --> mkdir2[mkdir]
end
subgraph compile
javac
end
subgraph doc
javadoc
end
subgraph deploy
jar --> ftp
end
init --> compile & doc
compile & doc --> deploy
This tree can be realised in the following XML that should be named build.xml
:
<?xml version="1.0" encoding="UTF-8"?>
<project name="OurProject" default="deploy">
<target name="init">
<mkdir dir="build/classes" />
<mkdir dir="dist" />
</target>
<target name="compile" depends="init">
<javac srcdir="src" destdir="build/classes" includeAntRuntime="no" />
</target>
<target name="doc" depends="init">
<javadoc destdir="build/classes" sourcepath="src" packagenames="org.*" />
</target>
<target name="deploy" depends="compile,doc">
<jar destfile="dist/project.jar" basedir="build/classes" />
<ftp server="${server.name}" userid="${ftp.username}" password="${ftp.password}">
<fileset dir="dist" />
</ftp>
</target>
</project>
The depends
attribute defines the tree.
Property Files
We can include property files with additional parameters like so:
$ ant -propertyfile ftp.properties
These files are structured like so:
server.name=ftp.texas.austin.building7.eblox.org
ftp.username=kingJon
ftp.password=password
Automated Testing
Automated testing is important as it allows us to easily test code quickly and repeatedly.
Unit Testing
This usually exercises all the methods in the public interface of an isolated, independent class.
- This verifies that a unit of code behaves as expected.
There is no strict definition for a unit, however it could be:
- A method in a class working in isolation.
- A flow of control when calling a method which covers a given set of paths.
- This is what you would use the cyclomatic count for.
- A class.
In object oriented programs, the unit under test should be at least one class in isolation.
In Java you can use JUnit to write unit tests.
Integration Testing
Integration testing ensures that multiple units produce the correct output when they are working together.
You can also use JUnit to write integration tests.
Functional Testing
This ensures that the whole system behaves as expected.
This can also be called acceptance testing, to verify, for the customer, that the system is complete.
There is no universal acceptance testing tool as it depends heavily on what system you are developing.
Web Testing Frameworks
These are functional testing tools, specifically for web interfaces. You can:
- Make requests to an external website.
- Inspect the responses to ensure that they are correct.
There are several web testing frameworks:
- HTTPUnit
- Low level, simplistic, web API.
- Poor JavaScript support.
- HTMLUnit
- Better JavaScript support.
- Better document level support.
- JWebUnit
- A wrapper for HTMLUnit and Selenium.
These testing frameworks have the following issues:
There are two main performance testing frameworks for Java:
- JUnitPerf:
- Does unit performance testing.
- It decorates existing JUnit tests so that they fail if running times are exceeded.
- JMeter:
- Provides functional performance testing:
- Can measure web server response times.
- Is site agnostic.
- Can’t run JavaScript.
Continuous Integration
This is the automatic process of building the complete system for every commit. This has the advantages of:
- Allows the customer and testers to see the progress.
- Integration bugs are reduced.
- Tests are run frequently.
- Reduces integration pain by doing it all the time.
Ant
Ant is a build tool for Java that makes:
- Building
- Environment Customisation
- Testing
- Deployment
possible in a single command that can be automated.
Software development methodologies are a collection of procedures, techniques, principles and tools that help developers to build computer systems.
Traditional Methodologies
The main traditional approaches are:
These methodologies are very rigid:
- First complete a functional specification.
- Then the software is developed in several distinct, waterfall-like phases.
This has the following issues:
- Difficult to adapt to changing customer requirements.
- Design errors are:
- Hard to detect.
- Expensive to correct.
For more notes on the waterfall model see the link.
Agile Methodologies
Agility in software development means:
- Adaptability
- Ability to respond quickly to change in the environment.
- Eliminate surprises from changed requirements.
- Risk Reduction
- Less chance of validation errors.
Emphasises an iterative process:
- Build some well-defined set of features.
- Repeat with another set of features.
Value customer feedback:
- Quick feedback.
- Sometimes with a client on-site.
Code-centric:
Testing in Agile
The only way to validate software is through testing. Testing can be:
- Functional - Specific yes or no tests based on the functional specification.
- Non-functional - Stress testing, usability and security testing.
SCRUM
This is an agile approach where:
- Each iteration of software development is called a sprint.
- Each sprint delivers working code or a partial product.
- Each phase requires a set of tests.
- Testing is integrated.
There are a few phases:
- Specification
- Development which can be:
- Specification
- Design
- Coding
- Each iteration tests:
- New functions
- All old functions for regression.
- Testing is extensive but shouldn't be burdensome:
- Automated testing is ideal.
Test Driven Development
This is putting testing first on the development process:
- Before implementing a piece of code, such as a Java method, start by writing down a test which this method should pass.
- The test is like a goal.
- First state a goal, then do the steps to achieve that goal.
Goals can be:
Tests should be written first so that they are based wholly on the specification and no assumptions are made.
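An illustrative sketch of the test-first flow, using a hypothetical Celsius-to-Fahrenheit converter (not from any real specification), with JUnit 4:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TemperatureConverterTest {
    // Written first: these tests fail until the method is implemented.
    @Test
    public void convertsFreezingPoint() {
        assertEquals(32.0, TemperatureConverter.celsiusToFahrenheit(0.0), 1e-9);
    }

    @Test
    public void convertsBoilingPoint() {
        assertEquals(212.0, TemperatureConverter.celsiusToFahrenheit(100.0), 1e-9);
    }
}

// Only after the tests exist is the simplest implementation written to make them pass.
class TemperatureConverter {
    static double celsiusToFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```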
eXtreme Programming
This method has the following essential practices:
- Testing
- How will you know if a feature works if you don't test it?
- How do you know if a feature still works after you refactor?
- Should be automated.
- Everything that can break must have a test.
- Continuous Integration
- Means building and testing a complete copy of the system several times per day.
- This can take a significant amount of time if left until the end of the project.
- Refactoring
- This is a technique for restructuring the internal structure of the code without changing its external behaviour or adding new features.
- Enables developers to add features while keeping the code simple.
- Each refactoring transformation does little so less is likely to go wrong.
- Relies on testing to ensure you don’t break anything.
- Planning Game
- Discussing the scope of the current iteration and priority of features.
- 40-hour Work Week
- Small Releases
- Simple Design
- Pair Programming
- A driver writes the code and an observer reviews the code.
- Collective Ownership
- No crucial dependence on one developer.
- On-Site Customer
- Metaphor
- A common language for developers and customers.
- Coding Standards
- Everyone writes code in the same standard to keep it clean and help legibility.
Common Principles
KISS:
- Keep it simple, stupid.
YAGNI:
- You ain't gonna need it.
- So don’t:
- Add functions not in the specification.
- Add too much future proofing.
These principles may discourage code flexibility and re-use.
Issues of Agile Methodologies
One issue with allowing the requirements to change is that:
- It is hard to develop a schedule.
In-Circuit Emulator (ICE)
This does all the functions of a software debugger but at the machine instruction level.
- ICE replaces the CPU in the target motherboard.
- Very useful for embedded debugging:
ICE Features
- Breakpoint before instruction execution.
- Breakpoint on complex conditions:
- Write particular data value to a memory location.
- Write and read from I/O.
- Hardware interrupts.
There are some differences to software debuggers:
- Faster
- Can break on a full range of hardware conditions at the CPU level.
ICE Tracing
There are two different types of trace that the ICE can do:
Logic Analyser
This tool probes an existing CPU to read the signals that are coming to and from the CPU. It has the following features:
- Can take a snapshot of code execution and include it with a trace of the code.
- Can run the code at full speed.
- Doesn’t require debug code added.
- Can debug race conditions that are speed sensitive.
- Can monitor hardware and software interaction.
- Can decompile the code to the original if the source code is loaded into the target.
This is just an oscilloscope with many channels and features.
You can’t step through code using this method.
Kernel Mode Soft Debugging
This has the same functions as ICE but uses debug features of the CPU and a driver loaded into the OS.
This can be used to debug device drivers.
Profiler
This tool calculates how much:
- Time is used by different parts of your code.
- Memory is used by different parts of code.
This can be useful for:
- Optimising code for speed.
- Optimising code for memory footprint.
Logger
This records all the activities of a program. It can have the following use cases:
- Security Logging
- Financial Logging
- Debug Logging
- Entry and exit to methods.
This has to be coded into the program with statements like so:
trace ('could not find login', Logger::DEBUG);
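In Java, a sketch of the same idea using java.util.logging (the class, logger name and messages are hypothetical):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoginService {
    private static final Logger LOGGER = Logger.getLogger(LoginService.class.getName());

    public boolean login(String username) {
        LOGGER.entering(LoginService.class.getName(), "login"); // entry to method
        boolean found = lookUpUser(username);
        if (!found) {
            LOGGER.log(Level.FINE, "could not find login: {0}", username); // debug logging
        }
        LOGGER.exiting(LoginService.class.getName(), "login"); // exit from method
        return found;
    }

    private boolean lookUpUser(String username) {
        return false; // placeholder lookup
    }
}
```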
Testing Frameworks
These are an important part of test driven development.
We can use the following unit testing software:
These both:
- Allow for testing code and production of test reports.
- Have interactions in many editors.
The following are some examples of stress testing tools:
- Mysqlslap
- Sends a heavy load to a MySQL server.
- Website Load Testing Tools:
- Many options such as:
- Can simulate complex transactions and not just random traffic.
Benefits of Stress Testing
- Can help in optimising the performance of your software.
- Can determine the maximum user load.
- Can determine what happens when the system reaches heavy load.
Reverse Engineering
This is the process of taking binaries, or object code, and converting them back to source code.
This can be used to:
- Determine how a product works.
- Determining if code is infringing IPR.
- Breaking into systems and copy protection.
In order to have niceties such as variable and function names, the program must have debugging symbols.
Object Relational Mapping (ORM)
This allows you to save objects directly to a database without having to write SQL statements.
This has the following benefits:
- Saves Programming Time
- Reduces SQL Errors
- Improves Performance:
- Code can include performance enhancements such as sharding.
- Allows validation of parameters such as database table names.
Example of ORM with Hibernate
You can represent a table like so:
public class Stock implements java.io.Serializable {
private Integer stockId;
private String stockCode;
private String stockName;
private StockDetail stockDetail;
//constructor & getter and setter methods
}
This must implement `Serializable` so that it can be stored in a backing store and sent over the network.
You can then write the data for the table in the following format:
<hibernate-mapping>
<class name="com.mkyong.stock.Stock" table="stock" catalog="mkyongdb">
<id name="stockId" type="java.lang.Integer">
<column name="STOCK_ID" />
<generator class="identity" />
</id>
<property name="stockCode" type="string">
<column name="STOCK_CODE" length="10" not-null="true" unique="true" />
</property>
<property name="stockName" type="string">
<column name="STOCK_NAME" length="20" not-null="true" unique="true" />
</property>
<one-to-one name="stockDetail" class="com.mkyong.stock.StockDetail" cascade="save-update" />
</class>
</hibernate-mapping>
Bug Fixing with Case Support
Without CASE there are limited records and the visibility of the bug fixing process is low. We can use a method like the following to take advantage of CASE tools:
- Tester finds a bug and records it on a bug management tool and assigns the bug to a programmer.
- The bug management tool sends an email to the programmer.
- The programmer logs into the bug management tool and accepts/rejects the bug and adds comments.
- The programmer pulls the latest code from a source control system.
- The programmer fixes and tests the bug using a debugger.
- The programmer commits the code with a useful comment that links to the bug ID.
- The team leader downloads the source code and makes a new build.
Computer-aided software engineering is the application of computer-assisted tools and methods in software development to ensure high-quality, defect-free software.
Software tools are used for the following reasons:
- Productivity Cost
- Accuracy
- Quality
- Safety
When CASE is used properly:
- You will become a better software engineer.
- You will work better with others.
- Your code will be tested more.
- You will do things you wouldn’t normally bother to do.
- Your life in software engineering will be slightly less stressful.
CASE tools can be:
- General Purpose:
- Specialist:
Evolution of CASE
- Machine Language
- Assembly Language
- Still no checks
- All abstraction in the programmer’s head.
- High Level Language Compilers
- Abstractions are part of the language.
- Code is capable of:
- Being structured.
- Being tested as part of compilation.
- Interpreters
- Programs can be:
- Source Code Based
- Fast turnaround time.
- No pre-compile check.
- Only checks running code.
- Intermediary Code Based
- Slower turnaround time.
- Supports pre-compile checks.
- Allow for multiple languages to be converted to the same intermediate language.
- Machine Language
- Assembler
- Compiler
- Interpreter
- IDEs
- Refactoring
- Tool Integration
- Debuggers
- Generally focused on specific programming languages.
- Logic Analysers
- Profiler
- View where time is spent running your code.
- Logger
- Testing Framework
- Stress & Security Testers
- Project Scheduler
- Bug Management
- Document source code version control.
- Wikis
- Cost estimation tools.
- UML Editors
- Code Generators
- Reverse Engineering
- Database ORM Tools
- Allow you to test on database object data.