Developer Area/Dev Team Ideas
From Mahara Wiki
Revision as of 13:52, 24 May 2011
Current wishlist ideas from the Dev Team; these may be implemented at some point as time allows.
Note: We should port stuff to here from the devteamideas page on the catalyst wiki - N
- 1 Documentation
- 2 Testing framework
- 3 Major reworking
- 3.1 Reworking of DML/Database abstraction
- 3.2 New MNET library
- 3.3 Theme subsystem work
- 3.5 Language subsystem work
- 3.6 Maharactl
- 3.7 Web services api
- 3.8 Making forums into an artefact plugin
- 3.9 Profile section reworking
- 3.10 Upgrades to bundled third party libraries
Documentation
- Developers like PHPDoc documentation; we should provide this on mahara.org (the generated docs will be too big to ship with Mahara itself)
- We should probably generate it per major version. Suggestion: api.mahara.org/1.0, api.mahara.org/1.1, ... What about api.mahara.org/stable and api.mahara.org/unstable as symlinks?
- This will require that we go through the files and change the @phpdoc tags until the structure is generated properly, and of course we will have to document all the functions that need it!
- Functions like the pieform callbacks probably shouldn't appear in the docs
- How and when shall they be generated? From git.mahara.org as a nightly cronjob?
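As a hypothetical illustration of the docblock style the generator would pick up (the function and its tags below are invented for this example, not taken from the Mahara codebase):

```php
<?php
// Hypothetical example of a fully documented function; phpDocumentor-style
// tools read these docblocks when generating the API documentation.

/**
 * Returns the display name for a user.
 *
 * @param string $firstname The user's first name
 * @param string $lastname  The user's last name
 * @return string The full display name
 */
function display_name($firstname, $lastname) {
    return trim($firstname . ' ' . $lastname);
}

echo display_name('Jane', 'Doe'); // prints "Jane Doe"
```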
We have none, but we want some. API docs are helpful but aren't exactly task-based.
Question: how do we separate them out by version (for when APIs change)?
Theme documentation
Things to cover:
- Theme inheritance, including how to write an entirely new theme
- Pieform templates
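A sketch of the kind of thing theme inheritance docs would need to explain: a child theme declares its parent in a config file and only overrides what it changes. The property names below are assumptions for illustration, not the verified Mahara theme API:

```php
<?php
// Hypothetical themeconfig.php for a child theme; the property names are
// assumptions for illustration, not the verified Mahara theme config API.
$theme = new stdClass();
$theme->displayname = 'My Custom Theme';
$theme->parent = 'raw';  // fall back to the 'raw' theme for any
                         // templates or CSS this theme doesn't override
```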
Translation documentation
Things to cover:
- Translation system basics - including why files are all over the place, how to find them, how to translate (aka editing php files)
- David's adminlang tool
- How to get language packs upstream (tie in with mentioning language based groups on mahara.org?)
- Git commands for when you do get a pack upstream
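Since translating currently means editing PHP files, these docs could open with a minimal example of what a language file looks like. The string keys below are invented for illustration:

```php
<?php
// Hypothetical excerpt of a language file; language packs are just
// PHP files defining $string entries (the keys here are invented).
$string['login'] = 'Login';
$string['loggedinusersonly'] = 'Logged-in users only';
// Values are interpolated with positional sprintf-style specifiers,
// which is part of what makes the current system inflexible:
$string['viewsownedby'] = 'Views owned by %s';
```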
The tutorial method might not work here because they're so big... but maybe it will.
Probably a low-priority thing to document, pending the search subsystem rewrite.
Database schema docs
- There are tools available to automatically generate a schema from a postgres database, we should investigate using those to provide schema docs
- We might need to add comments to each table and field (xmldb supports this)
- Not sure where they would be hosted? Probably generated in a similar way to the api docs - perhaps they should go on api.mahara.org?
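If the xmldb comments were filled in, an install.xml fragment might look something like this (the table and field names are illustrative, and the exact attribute set is an assumption about the xmldb format):

```xml
<!-- Hypothetical install.xml fragment; the COMMENT attributes are what a
     schema-doc generator could pick up. -->
<TABLE NAME="usr" COMMENT="One record per user account">
  <FIELDS>
    <FIELD NAME="id" TYPE="int" NOTNULL="true" SEQUENCE="true" COMMENT="Primary key"/>
    <FIELD NAME="username" TYPE="char" LENGTH="255" NOTNULL="true" COMMENT="Login name, unique per site"/>
  </FIELDS>
</TABLE>
```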
Testing framework
As Mahara gets bigger, it's going to be harder to make changes without breaking behaviour. Automatic testing helps improve confidence that nothing has been broken by a change.
Testing a webapp is tricky; luckily there are many tools to help. We can use Selenium tests for the frontend and unit tests for core libraries. But whatever we do, we need to set up a good framework first.
We're looking to draw on some Moodle experience here to help set up a framework. Penny and Nigel had an IRC meeting with Nico Connault at around 1100 GMT on Dec 6 to see what information he could give us.
We decided on the following:
- We need to identify a strategy for creating test data:
- Moodle currently has a full data generator (which can also be used to populate the database for demo purposes). Each test tells the generator which data it needs before the test runs, and cleans that data up afterwards. Moodle also keeps a full schema copy under a different dbprefix, and when you try to run a unit test, it checks whether that schema needs updating (very similar to the main install/upgrade strategy).
- Another strategy is to ship an SQL dump.
- We need to identify a few basic low-hanging-fruit areas in Mahara that we can easily write tests for. This shouldn't be a large area of the codebase that would require a lot of refactoring before we can write elegant tests, and it needn't even be an area where tests are the highest priority. A simple, well-contained area lets us get our hands dirty with a test framework and see how it works and can be used in the rest of the system. Some potential cases are:
- Adding/removing/moving block instances within a view - View and BlockInstance are both objects and this code is quite well abstracted
- We should expect that, in order to write clean tests covering a lot of Mahara, we will need to refactor a lot of code. A prime example is lib/groups.php, which we should refactor into a class with smaller, more self-contained methods, splitting out SQL-generation and data-fetching methods from functional ones as we write tests for it.
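The create-then-clean-up pattern described above can be sketched with plain assertions. Everything here is a stand-in: a real version would use a framework's setUp()/tearDown() against a test database rather than an in-memory array:

```php
<?php
// Minimal sketch of the create-then-clean-up test pattern; the "database"
// is just an array so the example stays self-contained.
$testdata = array();

function create_test_user(array &$db, $username) {
    $id = count($db) + 1;
    $db[$id] = array('id' => $id, 'username' => $username);
    return $id;
}

// "setUp": create only the data this test needs
$id = create_test_user($testdata, 'testuser');

// the test itself
assert($testdata[$id]['username'] === 'testuser');

// "tearDown": clean up afterwards so tests stay independent
unset($testdata[$id]);
assert(count($testdata) === 0);
```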
Major reworking
Various subsystems of Mahara may or will need a rewrite over time.
Reworking of DML/Database abstraction
DML, and the ADODB layer under it, have severe API and architecture flaws that continue to hurt Mahara.
A new database API should:
- Utilise PDO - this will be the standard way of accessing databases from PHP going forward
- Work with whatever test framework we come up with
- Not hide SQL
This needs to be discussed in more detail
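As a sketch of the "don't hide SQL" style a PDO-based layer could encourage, the example below uses an in-memory SQLite database so it stays self-contained; the table and column names are invented for illustration:

```php
<?php
// Sketch of SQL-visible database access via PDO. The SQL stays in plain
// sight; only parameter binding and error handling are abstracted.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->exec('CREATE TABLE usr (id INTEGER PRIMARY KEY, username TEXT)');

$stmt = $db->prepare('INSERT INTO usr (username) VALUES (?)');
$stmt->execute(array('bob'));

$stmt = $db->prepare('SELECT username FROM usr WHERE id = ?');
$stmt->execute(array(1));
echo $stmt->fetchColumn(); // prints "bob"
```

This also fits the testing goal above: a PDO layer pointed at sqlite::memory: gives unit tests a throwaway database.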
New MNET library
Neil wrote a nice modern MNET library. Drupal has integration with it. Moodle might get it in Moodle 2.0, so Mahara should look to use it too.
A good thing about this is that the new library can test itself and all MNET work can be focused in one place.
Theme subsystem work
- Reworking theme support to modularise CSS/JS files
- Cut down size of default theme
- Make certain parts easier to override
- Correct caching of static files
- The file artefact plugin absolutely requires it
- The blog artefact plugin absolutely requires it
Language subsystem work
The current system is modelled on Moodle's: PHP files containing language strings. It works OK, but is very inflexible about how it allows variable interpolation.
Nigel thinks we should investigate the possibility of moving to the open-source-software standard gettext method of translation. Failing that, we should look at how to provide better ways of turning data from the application into language strings. Note that this is not a simple problem of providing flexible string interpolation - please read this article for more information about translation system issues and potential solutions.
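As one sketch of "better ways of turning data from the application into language strings", named placeholders avoid some of the ordering problems of positional %s specifiers (useful when translations reorder the variables). The function below is invented for illustration and is not a complete answer to the issues the article describes:

```php
<?php
// Hypothetical named-placeholder interpolation; unknown placeholders
// are left intact rather than raising an error.
function interpolate($template, array $vars) {
    return preg_replace_callback('/\{(\w+)\}/', function ($m) use ($vars) {
        return isset($vars[$m[1]]) ? $vars[$m[1]] : $m[0];
    }, $template);
}

echo interpolate('{user} commented on {title}',
                 array('user' => 'Bob', 'title' => 'My view'));
// prints "Bob commented on My view"
```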
Investigate David Mudrak's language translator
We should look at it and see how it works, what it provides, and if it's good we should advertise it/help with maintainership etc.
Maharactl
Maharactl is a hypothetical script that can drive a Mahara website.
The first prototype was a Perl script using Perl modules to provide an API to the database. We would probably need a solution that utilised existing code if this idea were to work properly.
This depends on sane core APIs, and may need the core to be decoupled from the web based interface before this is even remotely possible.
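A toy sketch of what a maharactl command dispatcher might look like. The command names are invented, and a real version would bootstrap Mahara's core libraries before doing anything, which is exactly the decoupling problem noted above:

```php
<?php
// Hypothetical maharactl dispatcher; commands and their behaviour are
// invented stand-ins for calls into sane core APIs.
function maharactl_dispatch(array $argv) {
    $commands = array(
        'create-user' => function ($args) { return "created user {$args[0]}"; },
        'list-users'  => function ($args) { return "listing users"; },
    );
    $cmd = isset($argv[1]) ? $argv[1] : null;
    if (!isset($commands[$cmd])) {
        return "usage: maharactl <command> [args]";
    }
    return $commands[$cmd](array_slice($argv, 2));
}

echo maharactl_dispatch(array('maharactl', 'create-user', 'bob'));
// prints "created user bob"
```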
Web services api
See Maharactl section - need core APIs.
Note: We have an XMLRPC dispatcher in place which is used for MNET.
Making forums into an artefact plugin
The concept of interaction plugins was born before the work in Mahara 1.1 to make shared group artefacts.
Now we have such artefacts, it makes sense to implement forums as an artefact plugin, which can only be used in groups. This is opposed to something like the file artefact plugin, where both users and groups have file sections. (Note - there is no API to say whether a given artefact type can be used in a group or not - it's up to the artefact plugin itself to export the appropriate pages and code to do this).
Under such a model, groups might own posts and discussions but users would get republish rights on their own posts and discussions they had participated in. This would allow users to embed forum discussions and posts into views.
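A very rough sketch of forum posts as an artefact type under that model. The base class here is a stand-in stub, not Mahara's real ArtefactType API, and the ownership/republish details are only the idea from the paragraph above:

```php
<?php
// Stand-in for the real artefact base class, just to make the shape clear.
abstract class ArtefactTypeStub {
    protected $owner;  // under this model, a group id rather than a user id
    public function __construct($owner) { $this->owner = $owner; }
    abstract public function render_self();
}

class ArtefactTypeForumPost extends ArtefactTypeStub {
    private $body;
    public function __construct($groupid, $body) {
        parent::__construct($groupid);
        $this->body = $body;
    }
    // The group owns the post; the author would get republish rights so
    // the post can be embedded in their own views.
    public function render_self() {
        return '<div class="forumpost">' . htmlspecialchars($this->body) . '</div>';
    }
}

$post = new ArtefactTypeForumPost(42, 'Hello world');
echo $post->render_self();
```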
Profile section reworking
From an e-mail discussion:
> > o Profile section reworking (make profile data NOT artefacts)
> why do you want to do that? i sell that all the time (everything is an
> artefact) and people really like it.
Well I think this comes about after discussions I've had with RichardM.
Basically, back when views were done as view templates, everything as an
artefact was good and made sense because that's how things got into views.
But the blocktype concept does a great job of abstracting that. You can see
blocks we have now that have 0, 1 or more than one artefact in them.
Given that, the concept of an "artefact" begins to fade a little - at least
from a user's point of view. We used to say "an artefact is something that
can be put in a view" - but that statement does not tell the whole story
any more. Many artefacts can only be put in views in certain ways (e.g.
profile fields). Some have a special blocktype for them but many do not.
Plenty of things that are _not_ artefacts can be put in views.
However, "artefact" still makes sense in the mahara codebase. It's a bit
like Drupal's "node". Lots of things in mahara are artefacts, and thus are
provided by artefact plugins. It makes sense, for example, that forum posts
and discussions are even artefacts. I think it would be great if we can
refactor forum to be an artefact plugin so that posts and discussions can
be put in views.
Anyways, that's a really long way of telling the background to why I think
profile data should not be artefacts. At the end of the day, they're really
one tiny bit of metadata that goes with a user's profile - they're not an
"evidence of learning" in the same way blogs, files, forum posts are. They
can only be put into views in a couple of blocktypes right now and arguably
a bunch of them shouldn't be able to be put in views at all. For example,
why have "first name" and "last name" in the profileinfo blocktype? "Name"
makes more sense - but people know their name and can put it in anyway...
similar arguments apply for many of the other profile fields.