Guru of usability!
http://useit.com
(yuck)
Current Stable version (AAA):
=======================
http://activearchives.org/aaa
Listing awful: order unpredictable
A long listing, visually a lot to digest.
Not re-sortable or filterable
URL bar also used as a search bar (search for a term, search for a tag...)
uncategorized tags -> huge unusable list
Text as an interface: shortening the loop between reading and writing
e.g. drag and drop tags: it types the code for you
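A minimal sketch of that "types the code for you" behaviour, assuming a plain textarea editor and a made-up [[tag: ...]] markup (not necessarily AAA's actual syntax):

    // Drop handler that inserts tag markup at the caret, so dragging a tag
    // from the tag list writes the code for you.
    // The [[tag: ...]] syntax below is an assumption for illustration.
    function insertTagMarkup(editor: HTMLTextAreaElement, tagName: string): void {
      const markup = `[[tag: ${tagName}]]`;
      const start = editor.selectionStart;
      const end = editor.selectionEnd;
      editor.value = editor.value.slice(0, start) + markup + editor.value.slice(end);
      // Put the caret right after the inserted markup so writing can continue.
      editor.selectionStart = editor.selectionEnd = start + markup.length;
    }

    function enableTagDrop(editor: HTMLTextAreaElement): void {
      editor.addEventListener("dragover", (e) => e.preventDefault());
      editor.addEventListener("drop", (e) => {
        e.preventDefault();
        const tagName = e.dataTransfer?.getData("text/plain");
        if (tagName) insertTagMarkup(editor, tagName);
      });
    }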
Interesting confusion between reading and writing
What's our welcome text? How do you invite people to rework the material?
Idea of an infinite canvas
-> confusing at some point
the video was the center of the canvas, now that it is not resource-centric, what is the center of the canvas?
How to avoid content being "vomited"?
Oral Site version
=============
http://oralsite2.stdin.fr/
too many boxes -> too much information -> how to read?
all the boxes are at the same level.
Looking at Cinematic mode:
How to deal with reading at different paces?
Sequence
different speeds of reading: when there's a big block of text, the timeline would pause; the reader would have to click "resume" to play the rest of the timeline
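A minimal sketch of that pause-and-resume pacing (the event shape and text-length threshold are assumptions, not the Oral Site implementation):

    // Play timeline events in order; when an event carries a long block of
    // text, stop and wait for the reader to click "resume".
    interface TimelineEvent {
      time: number;     // seconds from the start of the sequence
      text?: string;    // optional block of text shown at this point
      play: () => void; // show/play whatever this event carries
    }

    const LONG_TEXT_THRESHOLD = 400; // characters; arbitrary illustration value

    function delay(ms: number): Promise<void> {
      return new Promise((resolve) => setTimeout(resolve, ms));
    }

    async function playSequence(
      events: TimelineEvent[],
      waitForResume: () => Promise<void>, // resolves when "resume" is clicked
    ): Promise<void> {
      let clock = 0;
      for (const event of events) {
        await delay((event.time - clock) * 1000);
        clock = event.time;
        event.play();
        if (event.text && event.text.length > LONG_TEXT_THRESHOLD) {
          await waitForResume(); // the reader sets the pace for long texts
        }
      }
    }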
SMIL: an attempt at timing HTML pages:
http://en.wikipedia.org/wiki/Synchronized_Multimedia_Integration_Language
2 kinds of basic building blocks: "sequence" and "parallel"
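SMIL does indeed build timing out of <seq> (children play one after another) and <par> (children play at the same time). A minimal sketch of that timing model, independent of the actual markup:

    // Duration of a <seq> is the sum of its children; of a <par>, the longest child.
    type TimingNode =
      | { kind: "clip"; duration: number }               // a single media item
      | { kind: "seq" | "par"; children: TimingNode[] }; // composite blocks

    function totalDuration(node: TimingNode): number {
      switch (node.kind) {
        case "clip":
          return node.duration;
        case "seq":
          return node.children.reduce((sum, c) => sum + totalDuration(c), 0);
        case "par":
          return node.children.reduce((max, c) => Math.max(max, totalDuration(c)), 0);
      }
    }

    // Example: an interview clip playing in parallel with a two-part slideshow.
    const piece: TimingNode = {
      kind: "par",
      children: [
        { kind: "clip", duration: 120 },
        { kind: "seq", children: [{ kind: "clip", duration: 40 }, { kind: "clip", duration: 70 }] },
      ],
    };
    console.log(totalDuration(piece)); // 120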
Footnotes:
With sarma we had this idea of stacks, piles
inline timecodes
Myriam: Need to go beyond the informational / annotational. Importance of how an interview is perceived. The feeling of being too closed demands that the viewer experience the work in a particular way. Provide a switch to the "source".
Example: Ability to hold a text while listening to an audio recording
Example: A slow-loading image helped in combination with listening to a voice.
Improve perception.
e.g. a sound interview is messy; how to publish it? Live montage (cinematic mode), but still giving out the sources (spatial mode)
In cinematic mode, landmarks (and other procedures) let you bypass the constraint of linearity.
Linearity is visually expressed by the timeline. What if you had a constellation of annotations and your sequence were a path through the many possible ones?
What text to display on top of spoken words? Transcripts, subtitles? Comments?
How does an image change your perception of the spoken words?
Joost: "It is clear how it should be!"
a pile of materials (texts, images...)
(1)
selection of materials on a timeline
stop and go is important (video controls)
Ability to flip at high speed through the full timeline
Maybe "Cinematic" mode is problematic.
"Landscape"?
alex: parallel with Travelogue/Atlas: one path is one way, but other paths are possible.
"visite guidée"
paths. One draws a path on the map.
There are many possible linear paths through the material.
search tags: automatic paths we can edit to have our personal path
a scrubber to flip through the playlist as if the playlist of different videos were one big movie
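A minimal sketch of that scrubber's core mapping, assuming the clip durations are known (the clip shape is illustrative):

    // Treat a playlist of separate videos as one long timeline and map a
    // global scrub position back to a clip plus an offset inside it.
    interface Clip {
      url: string;
      duration: number; // seconds
    }

    function locate(playlist: Clip[], globalTime: number): { clip: Clip; offset: number } | null {
      let remaining = globalTime;
      for (const clip of playlist) {
        if (remaining < clip.duration) {
          return { clip, offset: remaining };
        }
        remaining -= clip.duration;
      }
      return null; // scrubbed past the end of the playlist
    }

    // Example: scrubbing to 02:30 in a playlist of two one-minute clips and an interview.
    const playlist: Clip[] = [
      { url: "intro.webm", duration: 60 },
      { url: "walk.webm", duration: 60 },
      { url: "interview.webm", duration: 600 },
    ];
    console.log(locate(playlist, 150)); // { clip: interview.webm, offset: 30 }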
AAA tag view interesting because it proposes many ways to enter the document.
Depending on your goal, separate views/tools/modes?
* View/tool to do selection
* View/tool to do sequencing
Myriam: my interest is not to make an archive, but to engage with texts, create an environment for expanded texts
joost: use AA for presentations, courses
Myriam: what's different from a powerpoint then?
peter: students can edit, augment content during your presentation
Edepot for archipelproject
==========================
can you make a flickr interface for a media archive?
give a very simple interface for an overview of media
not just images, but videos and documents from external pages
Tagging vs filtering vs clustering
Eddie Elliot thesis: WGAS: Watch/Grab/Arrange/See:
http://mf.media.mit.edu/pubs/thesis/eddieMS.pdf
filtering/grouping/ordering/sequencing (p. 71)
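A minimal sketch of that filter/group/order/sequence chain over a tagged collection (the item fields and the grouping by year are assumptions, not the thesis's model):

    interface MediaItem {
      title: string;
      tags: string[];
      date: string;     // ISO date, e.g. "2011-05-12"
      duration: number; // seconds
    }

    function sequence(items: MediaItem[], tag: string): { item: MediaItem; start: number }[] {
      // filtering: keep only the items carrying the chosen tag
      const filtered = items.filter((i) => i.tags.includes(tag));
      // grouping: cluster by year
      const groups = new Map<string, MediaItem[]>();
      for (const item of filtered) {
        const year = item.date.slice(0, 4);
        if (!groups.has(year)) groups.set(year, []);
        groups.get(year)!.push(item);
      }
      // ordering: years ascending, then by date inside each year
      const ordered = [...groups.keys()]
        .sort()
        .flatMap((y) => groups.get(y)!.sort((a, b) => a.date.localeCompare(b.date)));
      // sequencing: lay the ordered items out on a timeline
      let clock = 0;
      return ordered.map((item) => {
        const start = clock;
        clock += item.duration;
        return { item, start };
      });
    }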
Uses:
* Teaching: making a collection
Is it a centralized platform, or a personal writing space?
Annotating a URI. The resource doesn't exist (e.g. a video for which you don't have the rights), but you agree on the URI and annotate it.
How to show visually the relation between a resource and an annotation?
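A minimal sketch of annotations that hang off an agreed URI rather than a hosted file, as in the note above (field names are illustrative, not AAA's data model):

    interface Annotation {
      target: string;                             // the agreed URI; it may not resolve at all
      fragment?: { start: number; end: number };  // optional timecode range, in seconds
      body: string;                               // the annotation text
      author: string;
    }

    const annotations: Annotation[] = [
      {
        target: "http://example.org/interview/part-2",
        fragment: { start: 125, end: 190 },
        body: "The question of the score comes back here.",
        author: "anonymous",
      },
    ];

    // Listing, filtering and sequencing can all work from the URI alone;
    // fetching the resource itself stays a separate, optional step.
    const forTarget = (uri: string) => annotations.filter((a) => a.target === uri);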
<!> Automatic attribution of image/material based on its license.
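A minimal sketch of generating such an attribution line from license metadata (fields and wording are assumptions; what each license actually requires still has to be checked):

    interface MaterialMeta {
      title: string;
      author: string;
      license: string;    // e.g. "CC BY-SA 3.0"
      sourceUrl?: string;
    }

    function attributionLine(meta: MaterialMeta): string {
      const parts = [`"${meta.title}" by ${meta.author}`, meta.license];
      if (meta.sourceUrl) parts.push(meta.sourceUrl);
      return parts.join(", ");
    }

    // "Untitled score" by A. Example, CC BY-SA 3.0, http://example.org/score
    console.log(
      attributionLine({
        title: "Untitled score",
        author: "A. Example",
        license: "CC BY-SA 3.0",
        sourceUrl: "http://example.org/score",
      }),
    );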
Canvas reordering, annotation filtering (in addition to layers)