October 20, 2020
A mixture of media
the construction of space -- 2D-3D?
it is nothing spectacular and how to look at that
why/how it is not-spectacular:
- amateur camera
- not a lot of images
dataset - what it's not anymore and what it *is*
e.g.: Lena is a hallmark of CV for many reasons, but one is that it wasn't easy to produce the images in digital format
automatisms and alignments
the role of the camera as an object in computer vision - it encodes a series of assumptions about what vision is.
regularised data
FRONTAL face dataset
the device and how to address it
other thread: what does it mean to actually look at these images, as they are not made to be looked at -- they are made to process
-> lots of time looking at the shelves: whose series of books
ecology of software can be "read" through the books in the spot where it's made operational. And the daily life of the lab and its inscription in the academic bureaucracy etc
practices that resonate : because of POV, distance, ...
- epic kitchens + first person vision (subjective gaze)
- overflow between domestic and professional environment/gestures
- scface https://www.scface.org/
interest in the testing of cameras relying on their default mode (amateur use, systematic by default): a vernacular/pop practice used to regularize objects -- which then becomes the reference for very highly sophisticated research
'to use amateur devices is an important decision'
faces made available through popular photography. You need bad pictures
layering of (algorithmic) assumptions
panorama - continuity - spatiality
scale and delegation - distant functionaries
it is not the same with more; it matters who is looking in what way / under what conditions.
scaling up to the universe. Small scale infusions into any picture
from the specific to the universal
preparing for high tech using low tech -- hallucination
the in-between unknowns and how they are produced
technical preparations for the world to world
modesty of projects - projections
things that are being done without clear intentions
openness in the datapractice; a sneaky moment / relaying a certain gaze.
not responsible? (in the sense of making decisions without responding to the paradigm because it hadn't started yet?)
or responsible in relation to something else: e-g.: classic photography
how to be response-able in relation to paradigms (regimes?) to come
behave as if ...
the 257th category -- 'noise': that which the algo should not be concerned with = randomly cut modern art-photography. Art as the images that create trouble with the baseline.
difference between LiDAR and photography to construct volumetrics (ref self driving cars)
October 9, 2020 / Conversation with Nicolas Malevé
Book announcement: http://www.data-browser.net/db09.html
Chapter: Parametric Unknowns: Hypercomputation Between the Probable and the Possible
https://possiblebodies.constantvzw.org/book/index.php?title=Parametric_unknowns
Genealogy of this conversation:
Call with Nicolas from Tarragona 24/07/2017 (scroll down on this pad): https://pad.constantvzw.org/p/possiblebodies.nicolas
Optimisation workshop case: https://pad.constantvzw.org/p/possiblebodies.optimisation + http://jararocha.blogspot.com/2019/01/optimization-and-its-miscontents.html
Conversation with Maria Dada: https://pad.constantvzw.org/p/possiblebodies.publication.mariadada
Conversation with Phil + updates: https://pad.constantvzw.org/p/possiblebodies.publication.phil-langley
Deadline: end October 2020
Volume: 3000+ words or other format/size/... (visual essay?)
Functionaries of the camera: taking on the lab; using each other's faces?
http://functionariesofthecamera.net/volumetric/export/
http://www.vision.caltech.edu/archive.html
common objects in context (COCO)-> common faces in the lab
stitched-together images
"If you borrow, please leave a note" http://functionariesofthecamera.net/volumetric/export/009-inscriptions-2.jpg
Stanford, 1990s
moment in which they had digital cameras, but not internet sharing
ref.: Lena, in the lab; the culture of the lab
the kitchen of the lab, no need for public scrutiny.
for peers, not considered having any other presence/worlding power
photography as a leveller (industrial continuum?)
these are not thought of as representations, but as ... (what?) presentation/test? -- just material for testing the algorithm
not yet CV-as-we-know-it
casual, amateur, family contexts/techniques as a training
sort-of-scientific vocabulary
wanting to make it more coherent than it is ... fitting the unknown
filling the gaps, projections, hallucinations, "blind spots"
the work of guessing; modes of relating to the unknown
you can try to apply this term "laboratory life" - ... or describe the lab as a series of practices that are to some degree open-ended
finding usefulness -- retrospective justifications.
usefulness was technically not there at the time, but was found afterwards
miss the uncertainty of relations it has with other things
indeterminacy / uncertainty /
lab as a texture of relations with other things
the detected goal of the lab (e.g.: train a device for x) cohabits with other relationalities
uncertainty -> possible / probable?
it's a practice that's not constrained by the scientific rigour you would expect
"The ubiquity of efficient operations is deeply damaging in the way it gradually depletes the world of all possibility for engagement, interporousness and lively potential."
looking backwards, what exactly depletes; what makes it damaging?
also depletion happens gradually backwards: an erasure of lively potential by the engineering apparatus
under-optimize! / sub-optimize! (Brian Massumi): the logic of war is one where you want to optimize and be in control: avoid any surprise, and want all your operations to have a defined goal, a target, and to be measurable.
targets, goals, measurability
new culture of war promoted by different agencies in the US: the opposite.
much more damage and uncertainty. Trump follows this playbook (but does not invent it)
(e.g.: Fei Fei Li's embracement of error margins for industrial sharpening?)
sloppiness of moments is at the same time generative (interporous moments) but also dangerous. Retrofitting and sedimentation is almost more violent.
Alan Blackwell (codebook, with Geoff?) is a Cambridge professor, but also in the belly of the beast.
creating categories, creating race. The racism of creating race.
scrutiny of the amount of for example racism in the algo
datasets are even worse for that: something that's liminal and outsourced
liminal, peripheral, not at the center, out of scrutiny
what is interchangeable
liveliness in/of the laboratory
there is no hiding; straightforward in accepting de-optimisation as a strategy
optimising otherwise; not for the lively potential
wordnet is horrible!
reasons for Fei-Fei Li to choose it in the first place: it is free, it is large, it functions in a certain mode of representation. Language is non-syntactical; every word is indexical. Ambiguity = synonym
never because they think of it in descriptive terms?
(no coming to terms with the worlding of choices)
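A minimal sketch (assuming Python with nltk and its WordNet corpus installed, nothing discussed above) of the structure these notes point to: every noun lives in a synset, a set of synonyms placed in a hypernym hierarchy, which is part of what made WordNet usable as a ready-made category tree for ImageNet.

from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet') once

# every sense of "apple" is a synset: a set of synonymous lemmas with a definition
for synset in wn.synsets('apple', pos=wn.NOUN):
    print(synset.name(), '::', synset.definition())
    # hypernym_paths() climbs the taxonomy up to the root synset 'entity'
    for path in synset.hypernym_paths():
        print('   ' + ' > '.join(s.lemmas()[0].name() for s in path))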
Apples are red, leaves are green, branches are brown, sky is blue and the ground is yellow.
Apples are red, leaves are green, branches are brown, sky is blue and the ground is yellow.
Mangoes are red, leaves are blue, branches are green, sky is black and the ground is yellow.
Almonds are blue, leaves are red, branches are black, sky is blue and the ground is white.
Mangoes are black, leaves are white, branches are yellow, sky is red and the ground is white.
Fugitives are blue, branches are red, sky is yellow, leaves are black and the ground is white.
---
An attempt to understand the way 3D operates with POV, from mapping from the distance/outside to the generation of the panoramic. Features, stitching, and the production of seamlessness. Hallucinatory techniques that cover the in-between.
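A minimal sketch of that seamless-panorama move, using OpenCV's high-level Stitcher (an assumption for illustration, not the tooling discussed here): features are detected and matched across overlapping photographs, the images are warped onto a common surface and blended so the in-between disappears. The filenames are placeholders.

import cv2

# placeholder filenames: any set of overlapping photographs
images = [cv2.imread(name) for name in ('left.jpg', 'centre.jpg', 'right.jpg')]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite('panorama.jpg', panorama)
else:
    print('stitching failed, status code:', status)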
From the original invitation e-mail: 2019-09-16
- we are very much hoping you would be interested in writing a contribution for a chapter currently named "Parametric unknowns: Hypercomputation between the probable and the possible". We have come back to our notes from the conversation we had already a while ago about Computer Vision and Volumetrics, and later we were thinking a lot of you again, while discussing 'Panoramic Unknowns' in the context of Optimisation and its discontents, a session Seda a.o. organised during last Transmediale (http://jararocha.blogspot.com/2019/01/optimization-and-its-miscontents.html). If you would be interested to think with us on this, your contribution could have the form of a text, or a visual essay, or a combination of text and images... or anything you consider opportune for the occasion, given the fact that the medium will be a wiki & paper publication
///////////////////////////////////////////////////
Notes for a conversation with Nicolas
http://sicv.activearchives.org/logbook/
http://sicv.activearchives.org/share/ways_of_seeing/Composed/yolox2-composed-episode-3-1-01.mp4
http://activearchives.org/wiki/Machine_Seeing_Ways_of_Seeing
https://unthinking.photography/themes/machine-vision/the-cat-sits-on-the-bed-pedagogies-of-vision-in-human-and-machine-learning
- We are generally interested in the work of contours, dissection, segmentation and boundary making in volumetric imaging. First of all we are wondering why you started to be interested in contours? What is/was the trigger to look in that specific direction and/or to that specific practice of contouring?
- We talked a bit about Slicer on the phone, and it continues to be an interesting piece of software for us to look at. Here are some notes taken during the last weeks: http://pad.constantvzw.org/p/possiblebodies.slicer For example, Slicer apparently began at the MIT Artificial Intelligence Laboratory http://www.csail.mit.edu/ and the Surgical Planning Laboratory at The Brigham and Women's Hospital, Harvard Medical School https://www.slicer.org/wiki/Slicer3:Acknowledgements -> we would like to check with you on your reading of the connections between the MIT AI lab and the history of Computer Vision?
- We wonder about the operations of labeling, naming, ontologies (at this moment, a lot through the lens of pathological anatomy) in relation to segmentation, or how ordering and boundary-making are connected (in digital practices).
- In general, do you have any understanding of the workings, histories and cultures of 'segmentation' algorithms such as GrowCut, watershed methods and Marching Cubes, which all seem to combine 2D-to-3D metaphors and techniques? If you have any thoughts on this, we would be very happy to speak about it.
- It seems that biomedical imaging industries are making more and more use of 'machine learning', and much of the learning is (still) from the Visible Human Project but also from living patients. But also in other areas, machines are learning about body (images): thinking of Michael's sicv post on (Not) Safe For Work for example.
How do you sense that machine learning is learning about bodies, and how does it produce body imaginaries and/or more direct bodily affections? Or to paraphrase your own words, how can we think productively about the fact that a generation of humans and algorithms are learning together to look at bodies? ;)
- Anything else that we did not manage to grasp but that you would be interested in speaking about with us?
-----------------------
From separating background and foreground to segmentation --
"Background and foreground, scraping and structured data. Computer Vision algorithms employed as interlocutors, to explore alternative interpretations, different orderings and seeing through other eyes of digital and digitised collections."
http://diversions.constantvzw.org/etherdump/erasing_the_background.diff.html
http://docs.opencv.org/trunk/d1/dc5/tutorial_background_subtraction.html
In medical imaging: defining organ boundaries, visually discerning anatomical elements or sometimes anomalies.
"if you don't have this reference you have to make the algorithm learn what counts as a difference: have a large amount of footage that teaches the algorithm what makes a difference" (from: diversions notes)
Specific histories/practices of those techniques in CV (Computer Vision)? --> GrowCut algorithm, watershed algorithm,
Trying to understand https://en.wikipedia.org/wiki/Marching_cubes and relation to contour, segmentation
Possible links to anatomy (animal dissection?)
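A minimal sketch of the 2D-to-3D move that marching cubes performs, using scikit-image on a synthetic volume (a field of distances forming a sphere, not medical data): from a stack of slices it extracts the isosurface at a chosen threshold as a triangle mesh, a contour lifted into a surface.

import numpy as np
from skimage import measure

# build a 64x64x64 volume whose values are the distance from the centre
grid = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.sqrt((grid ** 2).sum(axis=0))

# the "contour" is the set of voxels at distance 0.5: a spherical shell
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f'{len(verts)} vertices, {len(faces)} triangular faces')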
on machine learning:
---
Conversation 24/07/2017
NM: relationship to computer vision -- what's the model of vision that is used in CV. How do they come to talk about vision.
reading through the history of how in computer science vision is being studied
came to an experiment at Caltech in 2007, where they showed images to people for less than 400ms and asked them for a description of what they saw; actually, within 27ms many things happen, like going from a vision of abstract shapes and forms to a very detailed understanding of the number of units, specific objects, etc. They have a theory that vision is hierarchical: it temporally goes through a taxonomy -- interested not only in the theory but also in the way they did the experiment.
people are asked to try machines and explain
FS are these images always "photographic" images? --
NM - Yes, surprisingly -- this is not seen as an issue. In psychology this depth is not so much of a problem, but flat images stand for seeing -- people were shown not only photographic images but mostly drawings. -- from the internet, somehow more "real" --
in 2007 they said those images were more real than others -- never use "reality" as a word.
FS so you're looking at these ...
NM they considered that if you look at images coming from a search engine you are looking at images that are un-biased by the average
(...)
NM sift features https://es.wikipedia.org/wiki/Scale-invariant_feature_transform // by patterns, but babies don't know how to choose, so (...)
there is something very well equipped in the human brain that makes it understand in terms of lines.
the "vision community" claim there is a part of vision called "early vision" -> what you see, if you're exposed to visual stimulus, this part of vision is said not to be accessible by consciousness
[related to gestalt?]
JR: is this related to subliminal? -> well, this is not unconscious, but non conscious. Pre-conscious.
this is the black box of vision -- what computer science is trying to model for computer vision: give the machine the same tool as this part of human vision.
the experiment where a frog was exposed to all kinds of equipment with electrical signals, which showed that a frog is able to detect a fly before the information passes to the brain
cognition without consciousness
so first you need to anchor contours
very interesting things by Katherine Hayles -- interest in the non-conscious. If you look at this moment of 'vision', the complexity of what it is to "see" is reduced, but on the other hand it is nice to look into vision outside of the paradigm of consciousness, where many other ways of being in the world and looking at it can be accessed. A possibility for imaging through the eyes of other species.
if you talk about contours in terms of computation it'll be important to show where, contextualize computer vision.
FS we wanted to talk to you about contours. It came about because somehow it seemed that the heightened attention in CV to contours meets the heightened attention to segmentation and dissection in anatomy.
in the clinical situation, where the work of drawing boundaries around organs or tissues or anomalies -separate units- seems to happen in a blurry bunch of zones
it was extreme to see how the separation of organs is a collaboration between algorithms and anatomical projections.
jump from non-conscious vision, but
wondering ...
NM how you can go from the interpretation of the pattern detector to a sign or a meaning. Where the pattern detector says there is a probability and this is matching the tumor or x, it does not tell you there is a tumor. And then how the tumor has been correlated with a sufficient number of patterns.
if you do pattern detection you can do (...)
You can craft your own model in pattern detection. In computer science you can. And if that work is narrow enough... but when it has different variations it becomes incredibly complex.
then you have machine learning. Here the modelling does not come from computer science but from who did the training of the software.
number of segments is what is known
the accountability/responsibility is very hard to place/locate
in machine learning they were not using this to detect tumors or for situations where people can die, but they'd build training sets through Amazon's Mechanical Turk -- when applying it in medicine you will want to include a very well trained team to develop the training set.
FS At the hospital we visited, it seemed there's 24h of imaging happening. But they seem to be very clear that in the backup, or when treating it as accumulative, they kept using the patient as the comparison field. The folders are organized by patient. It's in research where inter-person (and inter-species?) comparison happens.
NM I've been looking at contours from the Kurenniemi project. That was my first -- the images could not be shown due to privacy issues etc., so I looked at descriptions and extracted features, algorithmically.
also wanted to work with image materials from pdf where there were drawings and text. That was the first contact with contours: asking how many horizontal lines etc.
FS so a big amount of horizontal lines would be a text
NM yes
& Hough lines -- a pattern algorithm // estimates whether points or pixels form a line that is continuous between different points. Most of the time lines are understood as continuous contrasts -- so you look for the most continuous series of points. In drawn lines that works, but at the level of pictures you might have gaps.
hysteresis??
perception and muscles when tracing a line, when there is anticipation and also constraint. So if you look at a series of pixels, the computer cannot preview any constraint, so you need to introduce a constraint: a restriction of the movement of the line. [freedom of movement/channels in 3D/robotics?]
Norbert Wiener was working with the American army on systems to defend against airplane attacks -- they had this problem of finding the probability, when they shot at a target, of what the next attack would be. So the moment they shoot, the time will vary..
not to train trajectory, but to narrow very much the possible next step by calculating (curve?)
by finding these techniques, taking into account the trajectory of the airplane and also the shooting, they also discovered the feedback loop.
anything that happens in the environment you need to include in the loop, as it informs it.
for the tracing of the line, as when the trajectory of the plane is always related to what it previously did -- this is the concept of hysteresis: when you do something to a material, it will keep a trace of this action. A certain persistence. It does not know what will happen next, but it is also a temporary memory of a pressure you exerted on the material.
so the algorithm of this line also has some of this temporal issue.
the next point that will be added to the line will get more and more (...)
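A minimal sketch connecting the two notions above, under the assumption that the term transcribed as "histericity" is hysteresis as it appears in contour detection: OpenCV's Canny detector keeps a weak edge pixel only if it connects to a strong one (hysteresis thresholding), and the probabilistic Hough transform then searches for the most continuous series of edge points, bridging gaps up to maxLineGap. 'drawing.png' is a placeholder filename.

import cv2

image = cv2.imread('drawing.png', cv2.IMREAD_GRAYSCALE)  # placeholder image

# hysteresis thresholding: 50 = weak threshold, 150 = strong threshold
edges = cv2.Canny(image, 50, 150)

# look for line segments of at least 40 pixels, tolerating 5-pixel gaps
lines = cv2.HoughLinesP(edges, rho=1, theta=3.14159 / 180,
                        threshold=80, minLineLength=40, maxLineGap=5)

print(0 if lines is None else len(lines), 'line segments detected')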
FS if we talk in terms of possible and probable...
NM yes, it's probability. because it is a discrete (...) so continuity is not part of it. Everything is central. It is side by side.
FS what would be continuity outside of the discrete
NM when you do imaging you have a bitmap, but nothing tells you whether the pages, or one pixel and the other, are continuous.
FS I'm trying to transpose this to the troubles I had with the instantaneous generation of a continuous-whole-3D model from a block of slices (at the hospital), which is in a way planes but lines in space.
...you talk about the beauty of the line looking forward and back. But in 3D I can only think of uninteresting probabilities.
NM lines are really quite related to the idea of making a sort of translation of something that happens in the world to a 2D space.
but I've got the impression that in your project there is something about 3D. Lines are not there just to separate 2 surfaces, but are also defining/connecting a 3D model.
JR -- something on [registration -- intra-calibration or intra-comparing]
(...)
NM when this process of calibration happens, does the program that makes it have a sort of model of what a human body in 3D might be?
FS we are not entirely sure
NM what I discovered when looking at how the lines are being traced, you need to (...)?
FS from looking at slicer and the VHP i suspect that the VHP is the reference.
NM it's different if you have an expectation for what direction a line is going, than drawing a line in the wild
FS every time we ask people if there are any set models, the answer is no, but it does not seem like you can import different models -- or what happens when you look at a horse through an MRI?
JR to put it more politically; how to point at the oneness of humanness in its finitude / humanness that is embedded in the algorithm.
FS also because there's always only one person in the MRI tube, but when you ask if there could be more, then they come up with jokes. Also the folders of images are ordered person by person
FS we would like to sit with one or two people, to really ask.
NM so...who to ask? :P
FS it's clear that research is very much tainted by the VHP dataset
NM for instance the question of whole ... It might be you never train anything yourself at a clinical level, never actually feel like you are feeding a machine learning system, but still you are part of the educating process, if only by using and validating the datasets.
[a missing conversation on ... temporality]
Contact at some point curator Katrina Sluis at the photographers gallery (Digital Programme) + https://unthinking.photography