Wednesday, 26 May 2010

Statistical Evaluation

The WRN project team is currently looking in some depth at evaluating our project activities. We are concerned with gathering qualitative data about our activities (stories, opinions and narratives from our users) alongside more quantitative statistical measures about repositories and their use across Wales.

Surprisingly, collecting the statistical side of our evaluation data - something we originally envisaged as the quick and easy stuff to generate - has proved quite problematic. Establishing a baseline set of measures has been difficult, with varying data coming out of everyone's systems and a lack of consistency in obtaining measures for central recording purposes. Even the most basic measure of all, i.e. how many deposits are recorded each quarter in each repository, can be difficult to obtain, and we are only just managing to record this measure accurately in 100% of the repositories across Wales.

So, while we hear lots of stories about the power of statistics and the help they can offer in making a case for a repository, it seems that we still have some work to do to convince people it is worth the effort of setting up robust statistical measures. We thought we'd try to address this by providing information about a selection of basic options open to most repository managers. The following information, from the Digital Repositories InfoKit, provides an overview of some of the most commonly employed methods of collecting statistics:

Any WRN partner interested in reviewing their statistics and collection methods, or needing assistance in setting up any of the tools mentioned here, should contact the project team via the usual email at

Monday, 17 May 2010

CRIS Event Cafe Society Write Up - Group 4: Data Quality

At the JISC/ARMA Repositories and CRIS event 'Learning How to Play Nicely', held at the Rose Bowl, Leeds Met University, on Friday 7th May, the afternoon was dedicated to a cafe society discussion session. Four topics were explored by delegates, and over the course of four blog posts we are disseminating the facilitator reports from each session.

Please use the comment option below to contribute or comment on these discussion topics.

Group 4 - Data Quality
Facilitator: Simon Kerridge, ARMA

The issue to be discussed was Data Quality, framed as “How do we ensure data quality in our systems? What are the best methods for getting data out of legacy systems?” However, a number of related issues also cropped up in the discussions.

The time was split into four 30-minute slots, with delegates attending as many as they liked. Some issues were raised on many occasions and others less often; most are presented here.

Unique Identifiers - (for many, perhaps all data items) was considered to be a big issue. Examples included:
• PersonId: institutions rarely use a single id; the various systems (eg HR, CRIS, IT, PGR and others) generally use different ids. Moreover the HR system, which might seem like the obvious primary source, might have multiple entries for the same person (if they have more than one contract) and, worse, usually only has entries for paid staff – there are many examples of unpaid people involved in research.
• FunderId: many expressed problems with de-duplicating similar-looking funders. It was thought that the funders themselves could/should provide a unique reference.
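In the absence of a shared FunderId, a common stop-gap is to normalise funder names to a canonical key before comparing them. The sketch below is purely illustrative - the function name, the punctuation rules and the filler-word list are all invented for this example, not taken from any particular repository system:

```python
import re

def normalise_funder(name: str) -> str:
    """Reduce a funder name to a rough canonical key for de-duplication."""
    key = name.lower().strip()
    key = re.sub(r"[.,&/()-]", " ", key)              # drop common punctuation
    key = re.sub(r"\b(the|of|for|and)\b", " ", key)   # drop filler words
    key = re.sub(r"\s+", " ", key).strip()            # collapse whitespace
    return key

# Two superficially different entries collapse to the same key:
a = normalise_funder("The Wellcome Trust")
b = normalise_funder("Wellcome Trust.")
print(a == b)  # True
```

A real de-duplication pass would combine a key like this with manual review, since aggressive normalisation can also merge genuinely distinct funders.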

Authority Lists
• Even if an institution could de-duplicate all its own data and use a single id internally, other institutions would likely not use the same scheme, so exchange of data would be problematic. This could be resolved by an agreed independent authority (for example, the HESA staff id), but no such authority exists for, say, funders. Such an authority was thought to be something that would be extremely useful.
• A national policy on national data (eg FunderId) was seen as desirable
• Scopus / WoS / PubMed were seen as possible partial authority lists for publications (and authors), but they contain differing information, do not cover the whole spectrum and indeed are not worth using in some subject areas

Data Quality
• Many places have a feedback loop (eg monthly reports showing academic staff what has been added to their profile).
• Use carrots and sticks, eg only allow publications from the IR/CRIS to be used on internal promotions or for the annual report
• One stick method that was generally liked was the Norwegian system, where a prerequisite for receiving public funding for a research project is that all of the authors' publications (where possible) have been submitted to an open access repository
• Good enough is good enough
• Data should be re-used where possible, but only where it is appropriate; sometimes systems can be developed organically to meet too many requirements and end up not doing any of them well
• Try to think about potential future use of data and collect what you might need – but don’t go overboard. For example one institution has additional classification for all publications using the library of congress system, but so far has not used that meta data
• Have processes in place to check data quality on input and as a secondary check to ‘approve’ the data – one institution has a ‘checked by Carol’ flag!
• In general self-archiving was not favoured, due to the lack of quality and copyright checking
• There is some good software available for data quality checking against publications (using Scopus / WoS / PubMed data) and for data aggregation
• One institution uses Levenshtein string comparison to help identify possible duplicate entries
• The RAE / REF was seen as a good driver for increasing data and data quality
• Periodic data maintenance and cleansing is essential, but often not undertaken – data quality is unsexy!
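The Levenshtein comparison mentioned above can be sketched in a few lines. This is the textbook dynamic-programming form, shown purely to illustrate how edit distance flags near-duplicate records; it is not the implementation used by the institution in question:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert, delete,
    substitute) needed to turn string a into string b."""
    prev = list(range(len(b) + 1))           # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Titles within a small edit distance can be flagged as possible duplicates:
t1 = "Repositories and CRIS: learning to play nicely"
t2 = "Repositories and CRIS: learning to play nicely."
print(levenshtein(t1, t2))  # 1
```

In practice a threshold would be scaled to the string length (e.g. flag pairs whose distance is under a few percent of the title length) rather than using a fixed cut-off.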

Data Sharing
• Authority lists would make this much easier – surely some work can be done in this area?
• Two institutions recounted the data fusion issues involved in making a joint submission to the RAE. It did simplify a later choice of IR: the second institution simply plumped for the same system as the first

Parallel Systems
• Many reported using parallel systems within their institutions as the data in the (normally) central system was simply not trusted by all the users.

• It was universally agreed that problems tended to occur where an issue was not given a high enough priority by the institution. For example, if a DVC took an interest in the quality of data in the IR then resources were made available to improve the processes and data quality.

Legacy Systems
• Often resources were made available for moving data from a legacy system to a new one
• However, migration was often seen as solving data quality issues once and for all, whereas in reality data quality is an ongoing issue – and often not resourced as such

Primary Data Source
• It was agreed that there is not one system for all of an institution's data needs. Indeed that might not be desirable, as individual systems tend to meet different requirements.
• However, it should be known where the primary data resides, understanding that for a single record (eg information about staff) this might not all be in one system

Summary (the facilitator's view of the discussions)
Overall the discussions were very open and positive. Many participants took away ideas for use in their own institutions. Most were also sure that they would not find it easy to get the resource required to do a proper job of improving their data quality. Some systems were reportedly working very well; others were not. In general the former were the result of new developments, whereas the latter tended to be systems that had been in use for a while. Hopefully this is the result of better new technology being used to support processes; however, it seems more likely that systems are neglected once they are seen as embedded and working.

CRIS Event Cafe Society Write Up - Group 3: Stakeholder Engagement

At the JISC/ARMA Repositories and CRIS event 'Learning How to Play Nicely', held at the Rose Bowl, Leeds Met University, on Friday 7th May, the afternoon was dedicated to a cafe society discussion session. Four topics were explored by delegates, and over the course of four blog posts we are disseminating the facilitator reports from each session.

Please use the comment option below to contribute or comment on these discussion topics.

Group 3 - Stakeholder Engagement
Facilitator: William J Nixon, Glasgow University

The afternoon session of the “Repositories and CRIS” event was an opportunity to bring Research Office and repository staff together across a range of topics, to draw lessons from other institutions, raise issues and share experiences. The focus of the discussion was “Stakeholder Engagement”, with the questions: “Who are the main stakeholders and how do we engage them? What do academics think?”. Over the sessions the focus was on researchers, research office and repository staff – but we acknowledged that there were many other stakeholders for our systems, including funding bodies, University management, JISC, HEFC and RMAS amongst others.

The café approach to these sessions enabled attendees to stay for as long as they wanted, to move on to other sessions, and in some cases to return. Many of the initial attendees stayed across the first two sessions. The sessions had a good mix of research office and repository staff, both attending and contributing.

Key themes
• Key stakeholders – who are they?
• Workflows – what comes first, the CRIS or the IR?
• Carrots and sticks

Key stakeholders
In each session, there was an opportunity for staff to identify themselves as either research office or repository staff which was a useful starting point.

The initial discussions in each of the sessions (some of which overlapped) considered who the key stakeholders are, with a particular focus on academic staff. It quickly became clear that it was insufficient to talk about researchers as a homogeneous group, and there was some discussion around unpacking them, guided not by discipline or research itself but by the nature of their funding and the length of their post. We identified:
• Researchers
• Contract staff
• PhD staff
These roles have created a shifting landscape, not only for researchers themselves and their work/funding but for the CRIS/IR staff who support them.
The discussions here then turned to how much these staff know about, or are aware of, the CRIS or the repository, in order to set a baseline for engagement. It was felt, certainly for IRs, that there was still insufficient awareness of these “invisible services”.

One approach which some institutions have begun to take is to build sessions on the IR and CRIS into new research staff's training. The opportunity to embed this information into existing courses was felt to be very valuable.

At other institutions IR staff have been invited to be involved in Research Staff meetings and conferences.

Other Library and research office staff were also recognised as stakeholders and, as these CRIS and IR services have matured beyond a “project set-up”, it is necessary to inform and engage them too.

Some institutions have worked to inform and update their subject Librarian staff to act as advocates for the IR and for open access; others though preferred to manage this through the smaller repository team who they felt were better able to answer the range of queries which academic colleagues would ask. These include copyright, versioning issues and funder compliance.

Workflows and scope – what comes first, the CRIS or the IR?
There was some discussion, particularly around researchers and their publications, about what should come first: a record in the CRIS, fed through to an IR as appropriate, or a publication deposited or entered directly into the IR. A third option was an additional publications database which was part of neither the CRIS nor the IR.

In some institutions the CRIS is, or will be, used to store the publication data while the repository is only used to hold the full text. A key challenge for one institution was the move to a CRIS for managing its publications, with the expectation that research staff would manage their own publications. This was in contrast to the mediated service which the Library had provided [but which was felt to be unsustainable in the longer term].

Questions here ranged around who would manage the import of this data, its management (“clean-up”) and its acceptance/review. We also considered acceptable turnaround times for managing any review of the data before it became live – and how that could be used to support engagement with staff.

Different workflows and staff resources were also covered. These ranged from self-deposit/submission to a wholly mediated service just done by the Library for the IR. This seemed to be less of an issue for data for the CRIS.

The need to engage with departmental administrative staff as a stakeholder group was identified here as one solution for this issue. These staff have the local knowledge and many are in departments dealing with publications, the CRIS and web pages.

Working with them for the IR (and CRIS) is a good way forward. Some institutions have taken this forward and provide training and support for these staff for the IR in a similar fashion to that provided for the CRIS.

Repository staff in particular had concerns that, where a CRIS or IR holds a mix of full text and bibliographic data, a focus on the bibliographic data could mean the importance of the full text is lost.

The comment was also made that the “repository is a set of services” not just an entity in itself – and one which can take on a range of roles including digital preservation, research assessment and open access.

Different institutions approached this differently, and it was felt that there was no right answer or one size fits all: different institutions and the needs of different stakeholders will dictate the workflows, but the need to engage staff at all levels is crucial. It was felt that this was most effective when the CRIS and IR could demonstrate value-added services [“carrots”].

Carrots and sticks
There was a considerable amount of discussions around the “carrots and sticks” for depositing material into the repository, or dealing with it in a CRIS. Did these help or hinder the engagement with stakeholders? Some of this flowed from the concerns over the sustainability of the mediated approach to deposit, the range of content which may be accepted to the IR and its public availability [a need for a dark archive?].

Carrots (and value adds):
• Increased visibility in Google
• Re-use of content in the IR (or CRIS) in personal websites etc
• The inclusion of citation data from Google Scholar, Scopus, Web of Knowledge
• Business intelligence opportunities
• Inherent value of discovery/availability
• Adding value to the research agenda

Sticks:
• Publications policies
• Funder mandates
• Professional development and review documentation

Final comments
This was a dynamic and rolling discussion throughout the afternoon, with a good mix of repository and research staff across a wide range of stakeholder and engagement issues. This short report provides a flavour of the key themes which emerged and were explored across the four 30-minute sessions. In addition to those already detailed, other issues raised included questions about where research data should be held, when, and by whom.

It was clear that research office and repository staff are engaging with a wide range of stakeholders in a variety of different ways, with varying degrees of success. Increased co-operation, co-ordination and a shared understanding of the work each group is doing would benefit everyone.

CRIS Event Cafe Society Write Up - Group 2: DIY v. Commercial Solutions

At the JISC/ARMA Repositories and CRIS event 'Learning How to Play Nicely', held at the Rose Bowl, Leeds Met University, on Friday 7th May, the afternoon was dedicated to a cafe society discussion session. Four topics were explored by delegates, and over the course of four blog posts we are disseminating the facilitator reports from each session.

Please use the comment option below to contribute or comment on these discussion topics.

Group 2 - DIY v. Commercial Solutions
Facilitator: Anna Clements, EuroCRIS

Format of discussion
Each member introduced themselves, explaining what system/s they had at the moment – IR, CRIS or both; whether DIY or commercial; and, if not already commercial, whether they were considering going commercial. Most had an IR but very few had a CRIS. The group then discussed criteria and other issues to think about when choosing DIY v commercial – not in priority order:

Institutional requirements - These differ depending on size, and particularly on how research-active the institution is [or would like to be], i.e. DIY may be fine for smaller, less research-intensive institutions, but larger, more research-intensive institutions may find it easier to justify investment in a commercial solution

Cost - Need to include total cost, i.e. the cost of in-house development and maintenance over the lifetime of the system. Senior managers often think a DIY system is ‘free’ as they don’t see the cost of internal resource.
Also consider the total cost across the sector if we are all reinventing the wheel – one commercial supplier estimates they have spent at least 12 person-years developing their product. Consider too the benefits of a collaborative approach to development, where several institutions work with a commercial supplier to build/improve a product collectively and therefore share costs and benefit from a better overall product

Control/Scope Creep - Two views on this :
1. DIY allows full control and so get exactly what you want – whereas commercial may deliver 75%
2. DIY ends in continual scope creep as difficult to say no internally – whereas with commercial products boundaries are clearer

Link to internal systems - Is DIY better here? The bigger issue is that there should be a buffer between each source system and the CRIS. For example, St Andrews has a data warehouse which acts as a data broker between the source systems [e.g. Human Resources, Registry, Finance] and the CRIS. If a change is made in a source system, the views in the data warehouse can be reconfigured to match the new source system while remaining unchanged as far as the CRIS is concerned. Without such a buffer there is the problem of reconfiguring links whether DIY or commercial -> for the latter, therefore, it is important that the architecture of any commercial solution can cope with such changes without major rewrites -> include this in your tender requirements.
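The buffer idea can be sketched abstractly: if the CRIS only ever reads through a view, a source-system schema change is absorbed by updating the mapping alone. All the field names below are invented for illustration; a real data warehouse would express the same thing as database views rather than Python dictionaries:

```python
# Each view maps a stable CRIS-facing field name to the field name used
# by the source system (hypothetical names throughout).
hr_view_v1 = {"person_id": "staff_no", "surname": "last_name"}
# After an HR system upgrade renames its columns, only the mapping changes:
hr_view_v2 = {"person_id": "employee_ref", "surname": "family_name"}

def project(record: dict, view: dict) -> dict:
    """Apply a view mapping (CRIS field -> source-system field) to a record."""
    return {cris_field: record[src_field] for cris_field, src_field in view.items()}

old_hr = {"staff_no": "S123", "last_name": "Jones"}
new_hr = {"employee_ref": "S123", "family_name": "Jones"}

# The CRIS-facing record is identical either way; the CRIS never notices
# that the source schema changed:
print(project(old_hr, hr_view_v1) == project(new_hr, hr_view_v2))  # True
```

The design point is that the mapping lives in one place (the warehouse), so a source-system upgrade is a one-line change there instead of a rewrite in the CRIS.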

Understand your data - Related to the point above. DIY or commercial, you need to understand what data you have in which systems at the institutional level; which is the golden source where there are multiple; and what keys/ids you can use to relate data together when pulling it all into the CRIS. This could mean considerable investigation, data tidying and work to review/improve data flows and related procedures to ensure good quality data going forward. At St Andrews we have found that such work, leading on from the CRIS, is beginning to feed through into an overall improvement in information management at the University. The CRIS is ideal for this because so many stakeholders within the University are involved [Researchers, Schools, Library, HR, Registry, Finance, Research Policy/Management, Senior Management] and NEED to be involved, whether as users of the CRIS or as data providers for it. One benefit of a commercial system could be that it insists on better quality data via business rules, such as always having a primary key (!), than a DIY solution would.

Product coverage - Be clear what each commercial product offers compared to your requirements. An example is whether you are looking just for a publications management system or a full-blown CRIS with links to students, staff, projects, events and activities

Switching from DIY to commercial - At least two institutions are trying a simple DIY solution first to find out what exactly is needed, with a view to switching to a commercial product later. The disadvantage of this approach is that it may later be difficult to persuade senior management to, as they see it, throw away the internal investment.

Open source CRIS? - The question was asked whether there is a third way ;) - not commercial, not DIY alone, but DIY together, i.e. an open source solution. Why hasn't that been done? Possible reasons: no academic interest in pursuing this [unlike open access]; the CRIS is seen as a management information tool, like Finance or HR, rather than a tool for individual academics.

CRIS Event Cafe Society Write Up - Group 1: Drivers

At the JISC/ARMA Repositories and CRIS event 'Learning How to Play Nicely', held at the Rose Bowl, Leeds Met University, on Friday 7th May, the afternoon was dedicated to a cafe society discussion session. Four topics were explored by delegates, and over the course of four blog posts we are disseminating the facilitator reports from each session.

Please use the comment option below to contribute or comment on these discussion topics.

Group 1 - Drivers
Facilitator: Andy Mc Gregor, JISC

This session was designed to explore the issues that are driving the development of research management systems, processes and policies in universities.
This document reports on the issues raised during that session by the many people who joined in over the two-hour course of the discussion.
During the session we looked at the drivers, then considered the ways that institutions were choosing to address those issues and finally used these approaches to develop a rough and ready action plan for institutions wishing to look at research management.

REF – the Research Excellence Framework was a clear priority for many of those present.

Efficiencies – many people felt that a joined-up and embedded research management system would stop effort being duplicated and make some tasks much easier than they are at present, freeing staff time to be spent on other tasks.

Funding – a good research management system could help institutions understand, monitor and manage research funding more effectively and enable them to target bids for funding in a more managed way.

Funder mandates – many funders are mandating the storage of research outputs and research data; a research management system could help institutions comply with such mandates.

Legal compliance – a research management system could help institutions manage compliance with data protection and freedom of information requirements in a more efficient and joined up way, greatly reducing staff time that needs to be spent on these tasks.

Business information – the information held by a research management system could provide valuable information about the operation of the institution such as identifying successful research clusters, or areas for potential collaboration. This would enable the institution to provide more focused support to researchers.

Business processes – the research management system could help institutions refine some of the processes and workflows for research and administrative tasks. This would make it easier for researchers to manage the administrative part of their research. It could also make it easier for researchers to fulfil obligations to funders and could support a more effective link between institutional and funder information and systems.

Benefiting research – a research management system could use the information about the institution's research to provide useful services to researchers. This could be something like a directory of expertise or a service to explore research happening in other institutions.

Open access – open access to research outputs can provide greater access to the literature for a researcher as well as enabling a greater number of people to access their research outputs. While this is an important driver, to some extent it is a result of some of the other drivers.

Collaboration (communities of practice) – a well managed research management system could help support researchers in finding suitable people to collaborate with and support the identification of communities of practice. This is an area where research management systems could link effectively with virtual research environments.

Knowledge exchange – having details of an institution's research on an easy-to-use website could help with knowledge exchange with business and with other nations.

In thinking about ways to address these drivers it is important to focus on the key reasons that an institution needs to implement a research management system. There is a danger that focusing too closely on one specific driver could produce a system that is only good for that particular purpose and does not meet the wider needs of the institution. This is especially serious when thinking about the REF, as specifying a solution too closely aligned to the REF may produce a system that is not suitable for future research assessment purposes.

While it was clear from the discussion that the impetus for the development or revamp of research management in institutions was coming from senior managers, it was also clear that it was members of the research office, library, and IT departments of institutions that were steering the specific nature of the implementation in each institution.

Responding to Drivers
Once the drivers were identified, the group moved on to discussing how the drivers could be addressed and what tasks were important in setting up a research management system. To help structure this session and to ensure that the tasks were grounded in the reality of the institutional setting, we categorised each task into three cost categories: tasks that would not require extra funding and could be accomplished with existing resources; tasks that would cost a moderate amount of money (e.g. £10,000-£50,000); and tasks that would cost in excess of £100,000.

No cost tasks

Building relationships – it was clear from the whole day that building effective relationships is a key success criterion in developing a research management system in an institution. Effective relationships between senior managers, researchers, research managers, librarians, IT and other relevant teams are an essential early task that can be achieved without any extra resources. However, maintaining those relationships may take a lot of time and effort and therefore may need some extra resources.

Embedding the system in the institutional processes – to ensure successful uptake of any system, a number of people suggested that the system needed to be embedded in the institutional processes that affect researchers such as assessment and promotion. The group disagreed on whether this was a no cost or moderate cost task with some people feeling that the relationship building and advocacy/training that would be required would push this into the moderate cost bracket. However it was also noted that once the initial hump of getting the system embedded into institutional practice was surmounted then it could make complying with institutional requirements easier and quicker for researchers and therefore lower institutional costs.

Moderate cost tasks

Planning – obviously there is a fairly large planning overhead in implementing a research management system in an institution. This often involves a range of staff and is quite time-consuming, and so comes at a cost to an institution.

Publicity and advocacy – it is highly likely that any new research management system would require researchers to change their working practices; therefore significant advocacy and publicity would be required to make sure researchers were aware of the system and how it would affect and help them. This is a resource-intensive process in terms of staff effort and some materials costs, and therefore would require a moderate amount of resources dedicated to it.

Training – a related task to publicity and advocacy is training of researchers and administrators in the use of the system.

Understanding institutional requirements and systems – before an effective research management system can be designed a good understanding of institutional requirements, systems, existing processes and people involved must be developed. This will involve a range of departments and roles and could be quite time consuming but it is an essential step in designing a system that will fulfil institutional requirements.

User consultation – just as it’s important to understand institutional requirements and systems it is also vital to understand the needs and current practices of the people who will end up using this system. This is important in making sure the system meets their needs but it is also important in getting early buy in from users and in managing their expectations. This is a very important part of the planning and implementation process and the group concurred that this was worth dedicating a decent amount of resources to.

Developer time – this is essential if institutions choose to build a home-grown system. However, it is also important if institutions choose to buy a system in, as developer time will be needed to ensure the system links well with other institutional systems. This doesn't come cheap and can be a significant commitment. One group member reported that they had been told their research management system would require 400 hours of developer time, which would probably push it into the high cost bracket.

Data entry and quality checking – It is important not to underestimate the cost of data entry into the new system, both in terms of set up and in terms of ongoing cost. Even if data is bought in or cheap data entry effort is procured then there will still be an associated cost in quality checking that needs to be supported.

High cost tasks

This category was slightly more speculative than the others as many people in the group did not expect to receive high levels of funding.

Build systems – a number of people believed that this amount of money would enable their institution to build a system that could give it a competitive advantage over rival institutions. However, a note of caution was sounded here: there may not be a competitive advantage in building your own system, doing so may unnecessarily duplicate effort occurring in other institutions, and in fact there may be advantages to collaborating with other institutions to build an open source system. Competitive advantage is more likely to be realised through the effective embedding of the system and the way it is used, rather than through building a unique system.

Best of breed products – given this amount of money a number of people suggested the best way it could be used was to buy best of breed products.

Staff – getting the staffing resource correct for any research management system was identified as a key success criterion and a concern for many of the group members. They were concerned with ensuring that the right staff were employed to implement a system and that those staff were then sustained by the institution where required.

An institutional scale data review - this was a scaled-up version of the institutional requirements task mentioned under the moderate costs heading. Many group members felt that a really thorough review of institutional requirements, the data that would be managed by any system and the requirements for managing that data was a step they would ideally like to take before designing a system. Many felt that CERIF could help here.

Action plan
The final part of the session was spent discussing a possible action plan. The following headings were as far as we got. They are listed in chronological order:
1. Relationships – build relationships with all relevant stakeholders.

2. Feasibility – understand the system’s users and the high-level requirements for the system, and identify a rough cost.

3. Define institutional need and sustainability and get buy in from senior managers.

4. Produce a plan

5. Consult with users to gather requirements (this would need to start with a stakeholder analysis)

6. Analyse requirements gathered and report back to users with outline specification (it is probably desirable to make this process iterative and to continue the iterations throughout the building process).

7. Produce specification

8. Decide how to proceed and then move to tender or building process

9. Build it

10. Embed it (this process really starts with the user consultation and needs to continue throughout the project). This will include training where appropriate.

11. Communication - this is likely to run throughout the project and to involve two strands:
a. Communicating about the tasks in the project with the relevant stakeholders
b. Wider dissemination and communication related to embedding the system through advocacy, training etc.

12. Sustainability handover – this needs to include:
a. Built in review process for the software (perhaps every 4 years)
b. Ongoing support, both technical and managerial.

Wednesday, 12 May 2010

CRIS event blog write-ups

Richard Jones, Head of Repository Systems at Symplectic Ltd. and a member of the JISC Sonex working group, has written two blog posts about our JISC/ARMA Repositories and CRIS event 'Learning How to Play Nicely' held at the Rose Bowl, Leeds Met University last Friday, 7th May. Richard attended the event as an exhibitor for Symplectic.

Tuesday, 11 May 2010

Learning How to Play Nicely - Presentations online

Many thanks to our delegates, speakers and exhibitors for making last Friday's (7th May) JISC/ARMA Repositories and CRIS event 'Learning how to play nicely' such a success.

For those unable to attend the event at the Rose Bowl, Leeds Metropolitan University, or for those who would like to recap, the presentations from the day along with some recorded sessions are now available from our website at

Further outputs from the day will be made available shortly - watch this space!

Monday, 10 May 2010

Gregynog Repositories Stream - Programme now available

We have now announced the detailed programme for the forthcoming repositories stream at the 2010 Gregynog Colloquium. As you will see, there is plenty of variety on offer.

Tuesday 8th June 2010

15.30-17.00 WRN Business Meeting

Wednesday 9th June 2010

9.15 - 10.00 The power of mandates, Sue Hodges, University of Salford

10.00 - 10.30 Publications Management System at Swansea University - Alex Roberts, Swansea University

10.30 - 11.00 Research Management System at the University of Glamorgan - Leanne Beevers & Neil Williams, Glamorgan University

11.00 - 11.30 Tea

11.30 - 12.00 Developing a repository, caring, sharing and living the dream – Misha Jepson, Glyndwr University

12.00 - 12.30 Encouraging author self-deposit at Cardiff - Tracey Andrews & Scott Hill, Cardiff University

12.30 - 13.00 Using statistics as an advocacy tool - Nicky Cashman, Aberystwyth University

13.00 - 14.00 Lunch

14.00 - 14.30 Repository Advocacy: The theory - WRN staff

14.30 - 15.30 Advocacy Café Society session

Three tables will be laid out, each with a facilitator and a topic to discuss. Participants move on to a new topic every 15 minutes, with a 15-minute slot at the end to feed back and present findings. Suggested topics:
A) What are the main obstacles to gathering content in your repository?
B) What are the main misconceptions your stakeholders have when it comes to your repository?
C) Put yourself in the shoes of an objector and outline the main arguments against having a repository.

15.30 - 16.00 Tea

16.00 - 17.00 Advocacy in Action: workshop/exercise. Participants are asked to work in groups to produce some broad-brush repository promotional materials.

As in previous years, the WRN will be sponsoring places at the colloquium for up to two participants per partner institution. Further details have been sent out via the usual mailing list.

We look forward to seeing you there!

New article published

Earlier this year I was asked to write an article for Program: electronic library and information systems about the ongoing work of the Welsh Repository Network. I am pleased to say that this has now been published:

Knowles, J. (2010), Collaboration nation: the building of the Welsh Repository Network, Program: electronic library and information systems, 44(2), 98-108

Link to published article

Link to final author version in Cadair

Happy reading!