to do:
------------------------------
jose informs
accepted
shepherded 
rejected

jaap henk: contact shepherds using iwpe mailing system
jose contacts the shepherd for jaap henk's paper

jose and seda will discuss mail formulation for out of scope papers

shepherded papers need to submit by the 28th of February
shepherd final review expected on the 2nd of March.
camera ready due 3rd of March

best paper candidates will have to be discussed based on shepherding results!
deirdre, norman and frank: on the 3rd (camera ready), send them a batch of three papers
to be decided by 7th of march




Review Committee:


best paper candidacy depends on the shepherding outcome!

20: Data tags: (shepherding, accept, Rigo) BP --> Jaap-Henk notifies

14: From vulnerability to privacy harms: (accept: shepherding, Joris) BP Jaap-Henk notifies

23: Automated Methodology for Extracting access control rules: accept without shepherding (they have to respond to reviewers) --> Jose notifies

11: A Critical Analysis of Privacy Design Strategies (shepherding: reviewer with strong reject) --> This is Jaap-Henk's paper. Jose will contact Shepherd and communicate decision to authors

18: Obstacles to Transparency in Privacy Engineering (accept without shepherding, check editorial guidelines when preparing the Camera Ready version) --> Jose notifies

4: Optimal Mechanisms for Differential Privacy (shepherding: Ero, privacy engineering practicality questions) Jaap-Henk notifies

10: Compliance Monitoring of Third-Party Applications (no shepherding, but address comments of the reviewers) --> Jose notifies

2: Privacy Risk Analysis Based on System Control Structures: (shepherding Kim Wuyts) Jaap-Henk notifies

12: From Privacy Impact Assessment to Social Impact Assessment (shepherding Jennifer King) Jaap-Henk notifies


REJECT!!!!! --> Jose notifies

9: People to People (P2P) Privacy in Mobile Devices: 
    22
    7
    29
    8
    6
    13: Designing Design Patterns for Privacy by Design in Smart Grids




Out of Scope Papers: (out of scope, but potentially we may need to ask for an additional privacy engineering section or something)

17: The prom problem: amusing paper, but is it privacy engineering? (maybe; the paper is not squarely in the scope, definitely not for best paper)

24: An active genomic data recovery attack: 
    In the abstract the authors write: "Our work shows that more realistic attack scenarios must be considered in the design of genetic security systems." It would be great if the authors could discuss this in their paper in some depth

5: Two factor authentication protocol: (lots of reservations; it could be that with expert reviewers the paper would have been rejected; if shepherded, it needs to be Rob Cunningham)

15: Reasoning about Privacy Properties of Architectures Supporting Group Authentication and Application 

30: Position: On Private Disclosure of Information in Health Tele-monitoring

the issue is that they need to put it into the context of privacy engineering systems
a pure protocol or an idea is not in scope
formal verification of a system by itself is not in scope; there needs to be some sort of reflection on practicality, or some effort to systematize or generalize the results. one-shot solutions, where methods, techniques and tools are applied without further reflection on that process or evaluation thereof, are more appropriate for venues where solutions are presented without a focus on engineering aspects.
How do we attract papers on privacy engineering for infrastructures? What is the engineering part? Design patterns are much more in scope.
I would have liked this paper to show how it can be introduced into standardization, or to take a protocol and talk about how to fit it into that.
Make standardization papers explicit in the call next year. And, formulate the kind of paper we are asking for.








++++++++++++++++++++++++++++++++



 Program Committee:

     1) confirm with all previous members whether they still want to join us (except those who didn't respond last year)



 Reflections on IWPE 2015

  -What went well:

      ·Audience/Community -> Number and profile (experts from both academia and industry)
      ·Shepherding process -> It's a great idea, although it requires extra work for the PC -> Members/Authors should know in advance, and plan the deadlines
      ·Discussion -> the discussion sessions at the end were very useful for this budding field; commended by the ieee organizers

  -What could have been better:

      ·Submissions -> Although we got 18 submissions and it was our first time, we should aim for more (quantity) and better (quality) manuscripts
      ·Organization -> I feel we had much pressure during the review/decision process, and I am afraid I am responsible for that as I did not foresee some of the issues we had
          -Calendar was very tight -> I suggest advancing some deadlines, so that the review/shepherding process is lighter and we can accommodate delays
          -Unexpected deadlines came about, e.g. best paper award, shepherding process, etc. -> We should update our calendar to consider them in advance
      ·Organizing & Program Committees -> Some members were absent and it was quite difficult to contact them
          -It would be nice to have someone local in the organizing committee as a backup (in case we are not able to attend)
          -There is a bit of heterogeneity among PC members -> We might consider reorganizing the PC
      ·Discussion -> I think the audience missed some more interaction and time for debate -> A panel on hot topics might have been of high interest + Birds of a Feather meetings

  -What topics might be considered:
      ·Privacy & Usability
      ·Privacy & Metrics
      ·Privacy & Big Data
      ·Privacy & IoT
      ·Data privacy (not as focused on the process but on the data)
      ·Privacy and Risk Management


    This is for Birds of a Feather, but it could also be for a panel at the workshop:

Magazine Paper:
Potential Outline (Jose will turn into an outline in a shared document)
  1) a short intro to privacy engineering including our motivation for organizing IWPE
  2) a background section on the privacy engineering landscape and its current status and limitations (though this might be difficult)
    -Research topics -> Workshop topics
  3) an overview of the papers presented and how they can contribute to addressing (some of) the limitations detected
  4) some conclusion that summarizes and analyses the discussions held during the workshop and the hot topics that arose (I think the notes we took will be very useful for this)
  5) finally, an overview of promising research directions in the field.



Suggestions of names for reviewing the paper:
*    Deirdre
    Helen?
 *   Tara
 *   Rob
 *   Susan 
    Dawn
 *   Norman
*  Claudia  


MILESTONES:
    17 July: all sections have hooks for content
    22 July: All sections have readable content
    29 July: First draft (Jose sections in good shape)
    14 August: Second draft (Seda sections in good shape)


--------------------------------------------------  NOTES FROM IWPE 15  --------------------------------------------------

2012 i was traveling around the us interviewing researchers in computer science about their conceptions of privacy and privacy technologies
my university at the time did not have an irb, so i was consulting with a sociologist, who said that i could take it easy with consent forms
that i should start my interviews with what i was intending to do, how the interview would be organized, promise them that i would use the interviews only anonymously and also provide them with an opportunity to give me feedback when i published the work.
41 of my 42 interviewees were fine with the set up: for whatever reason they trusted me
until i met this one professor who basically made me turn my tape off: without a consent form, she wouldn't have it
she came with two of her phds, it was one of the best conversations, she understood both law and technology, and the value of having a diversity of approaches to engineering privacy, as well as the tensions between these approaches.
this professor was deirdre mulligan.
her vast experience, her ability to move between advocacy and research, as well as her elastic mind are impressive and inspiring.
given that, i am not surprised to hear that she will be the PI of a new Center for Long-Term Cybersecurity, supported by the William and Flora Hewlett foundation
Deirdre K. Mulligan is an Associate Professor in the School of Information at UC Berkeley, 
co-Director of the Berkeley Center for Law & Technology, 
Chair of the Board of Directors of the Center for Democracy and Technology, 
a Fellow at the Electronic Frontier Foundation, 
and Policy lead for the NSF-funded TRUST Science and Technology Center. 
and, she is also currently chairing the organizing committee of a series of visioning workshops on privacy by design held by the computing community consortium.
Mulligan’s current research agenda focuses on information privacy and security, including exploring users' conceptions of privacy in the online environment and their relation to existing theories of privacy. Her comparative study of privacy practices in large corporations in five countries, conducted with UC Berkeley Law Prof. Kenneth Bamberger, will be published by MIT press this fall.


!!! PRIVACY: PLURAL, CONTEXTUAL, CONTESTABLE BUT NOT UNWORKABLE


nobody knows what they are talking about
that is true
it is contestable
the contestability of privacy is part of its generativity
that is part of its work in the world: just like democracy, or art, part of their job is to provide a language to have those conversations
how can we live in a world where we have plurality and maintain it, we have contextual deployment
how do we decide which 2-3 privacies are the right one
we want to be able to do our work
you may design code: i write laws: if you want to figure out what the threat model is
it is a core thing: if we don't pick the right definition of privacy, why bother

goal: develop a model for analyzing and categorizing privacy concerns and harms to connect with existing theory, and capture those that existing theories do not adequately address

starting point:
building on existing pluralist and contextualist approaches to privacy


what is privacy protecting against: picture of two people sleeping together
private activity
private space
nobody knew we were together

all reasonable positions about what privacy is for.
they all have different designs



ACTIONS:
decided to do study
collected emotional content
analyzed emotional content
displayed in manipulative way
collected information on reaction
analyzed information on reaction
decided to publish


offender
from whom were they going to protect
and who did they think was going to provide protection to them

it could be seen as an interference with decisional autonomy
interfering with my mental state: they are getting into the secure part of my brain
misrepresentation/distortion: making me look like i am more depressed than i really am
information loss: learning about manipulability: did your friends have an impact on your emotions, loss of control over information
violation of expectations: informed consent
essential to research: where did they come from, research means you are a lab rat

ethical obligations of one research activity did not bleed over into the other activity

time is very important in law
we look at prospective surveillance differently than retrospective
time has multiple dimensions: right to be forgotten (how long should information be available)
synchronous vs. asynchronous: if we are sharing an experience, i will not ask you to close your eyes, but google glass brings in a different temporality


can we go back and look at the privacy policies of websites and see what subspace of what you are showing that they are covering?

today companies are increasingly talking about lawful access as a case against which they want to protect your privacy
it has not been something that has been addressed in detail in a privacy policy
which has mostly looked at third parties
the sophistication of privacy settings in social networks has changed
those are some examples
you can also use it to look at the technical affordance: what it is supposed to do and what it actually does


patterns: for google to invade my privacy is a pattern
d: that is a privacy issue
people frame it as one
question often is: that is the price of free
the conversation about that is getting more sophisticated
you can do a better job: you can sample
you can reduce the privacy cost


collin and i have another paper: i don't have the slides for it here
map theories of privacy against each other
give you an example
people will often talk about us and europe
us: individual liberty
eu: they care about liberty but also about dignity
the justification: dignity is not necessarily a freedom to act
you can have the same objective: but different justifications
individual liberty was important, but it is important for their self-development

with harms, you can unpack privacy: let me design things 
if you don't make the next leap, people are concerned about this harm, because they think privacy is supposed to protect individual liberty and the justification is self-development or autonomy
you may draw narrow buckets without understanding this higher level objective
the goal is to take those harms and map them back to theory: because theory can be used for design
especially in the case of architecture: you need broad classes of things
not just what people view as harmful
what justification

q: i work for a data protection authority, we have a specific view on privacy
what ways do we have to act: who is in charge of improving privacy
do we need better laws, better law enforcement, better enforcement in companies, users decide
which one?

i think we need all of them: law is an incredibly important tool for clarifying what our goal should be
in germany, the law is more prescriptive
the job of the dp people is to figure out what this means
who you situate as the responsible person matters a lot: in most cases, it is the person in the company


oracle attack: 
privacy threat modeling: adam shostack
i think he decided it was not as useful as he thought it was going to be
ietf: privacy considerations document
one of the things you find is a framing of lawful access as a potential threat
in security: if you have the right to access: we might be concerned about insider abuse, but we would be interested in what you did with the information
in privacy: you may have lawful access, doesn't violate security, it would be consistent, but it violates privacy
there is

ietf/infrastructure privacy:
the analytic is designed to work at different levels
you can use it to interrogate a policy
a system, little less so
where in the stack to address privacy: great example is w3c
nick was on the DNT working group
a lot of the conversation about what is required by the standard, vs. which things are advised, and which are implementation decisions are important and really hard
maintaining flexibility in the level of infrastructure important
putting in the hooks, so that you can build the resources on top
if you don't want to think of all of the models, you will have blind spots
some of their use case generation
drm tech
xrml:
it was incredibly valuable to do the use cases
it was a hot contested issue
in the privacy area you don't get enough use cases
as people more engaged we will have more use cases
which will give us a sense of how much you can push down
i would take a conservative, david clark "tussle", minimalist approach
building things that are privacy protective

nsa guy: we have had privacy issues for years
do you think there are differences when you move online

the defaults are very different: you have to work really hard not to leave a trail
unless you run your own network, you have some relationship with someone who knows who you are
the data is standardized: we can make sense of it in different ways
now that all that stuff is coming back to the physical world
as things are more embedded: the privacy concerns are going to be all part of our physical world

there can be cases: where law enforcement has a warrant
it is not a violation of security policy 
it is a violation of privacy
but i have my own security policy, and it is still a violation of my security policy

guy:
bibliographic question
the framework you are mentioning
do you have a paper




jose presentation:
pripare:

support action: reuse what is available in the state of the art
the presentation will put the pieces together and show how we are trying to combine them


approach: that takes privacy into account during the whole software and systems engineering process
integrate: best practices:
        a method that has demonstrated results superior to other means
        you may know of different best practices: development processes, incremental, iterative processes are preferred in rapid development
        risk assessment and management
        checklists for domains with a matured community of practice

requirements cheat sheets:
        common criteria
        domain specific heuristics
        architectural styles:
        design patterns: singleton, facade, ..., privacy design patterns

best practices are scattered:
        privacy impact assessment: new dp regulation requirement
        some practices for privacy and management
        cnil: the french data protection authority guidelines
        linddun: providing a methodology for privacy risk management
        you have different privacy patterns: nick doty collected some of them
        you also have loads of pets
        there are so many of them, it is difficult to know which one to use.


even if there are people with expertise in the privacy domain
it is difficult to find guidelines to support them in the development process
the goal is to start from high level principles and transform them to system implementation

several phases:
analysis
design
implementation
verification
release
maintenance
decommission

also non-technical: organizational privacy processes, and privacy awareness

analysis approaches:

        requirements analysis
        accessibility: privacy high-level principles that can be decomposed into different layers


HOW DO YOU MAKE SURE THAT USER REQUIREMENTS ARE RESPECTED?
        especially compliance approaches take an organization centric perspective
                this is good: it can work to make organizations transparent
                but how does it deal with the requirements of users that may be in conflict with the organization's interests, which may be their business model, their architecture, inference model etc.


privacy requirements:
        user centricity
        user identifiability/anonymity

QUESTION: WHAT ARE THE ROLES OF TRUST MODELS? HOW DO YOU DEAL WITH THE TRUST OR DISTRUST OF DIFFERENT USER COMMUNITIES OR PEOPLE AFFECTED BY THE SYSTEM (THEY MAY NOT BE USERS)
QUESTION: WHAT IS THE DEFINITION/DEFINITIONS OF PRIVACY THAT YOU ARE USING? HOW DO YOU DEAL WITH THE CONTESTABILITY AND PLURALITY FOR EXAMPLE THAT DEIRDRE WAS TALKING ABOUT?

once the architecture is determined, we can move into the detailed design
we can then choose privacy controls that best fit the privacy requirements
there we want to work with a catalog of privacy controls

a privacy control is a reliable, implementable way to meet privacy requirements


privacypatterns.eu: patterns catalog
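
a toy python sketch of the catalog idea: requirements are looked up to shortlist candidate controls. the entries and names below are invented for illustration, not pripare's or privacypatterns.eu's actual content.

    # hypothetical catalog mapping privacy requirements to candidate controls
    CATALOG = {
        "data minimization": ["attribute generalization", "local processing"],
        "unlinkability": ["pairwise pseudonyms", "mix networks"],
        "transparency": ["privacy dashboard", "processing logs"],
    }

    def candidate_controls(requirements):
        # shortlist catalogued controls for each stated requirement
        return {r: CATALOG.get(r, []) for r in requirements}

    print(candidate_controls(["data minimization", "transparency"]))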

this is work in progress: we want to move towards systematic engineering: from a craft to an engineering practice

q: i am intrigued by the idea of optimizing the trade offs with the different requirements
how do you do this? do you have an example?


meiko:
interesting to see these approaches being developed
we come to a point where you come to good solutions
verification phase:        verification metrics
                                one of the project partners, inria, has some models for verifying privacy properties
                                you have all the constraints, you can formally check if it fulfills all the requirements

architectural design:
        formalization languages for privacy requirements
        is there a formalization language in development? is this feasible?

        jose: it is a feasible goal, but it is not for common developers: there are some case tools
        capriv: daniel le métayer
        he has two or three papers: describing how these formal languages apply to the description of the requirements

nick:
        i am particularly curious about 
        best practices
        patterns and privacy controls
        i am struggling with this myself
        how much of the work needs to be prescriptive vs. descriptive
        do we need in privacy engineering, best practices, things that you should do
        or, should we be saying these are common techniques: and designers can choose between them


risk assessment phase: do you look at risks associated with not getting the privacy policy right
trustworthiness: do what it is supposed to do and nothing more.
what the system is not supposed to do, from an attacker's point of view?

do you do verification after implementation: isn't that a huge economic cost: don't you want to do something earlier

jose: regarding threats: 
the german federal government has a catalogue with, i think, all of these risks


meiko:
thomas: xml security specifications
protection goals for privacy engineering:
mentions andreas pfitzmann


confidentiality we know:
multiparty computations etc. all for confidentiality
integrity: data cannot be modified in an unauthorized or undetected manner
no unauthorized entities
neither the data nor the process: you want reliability, you want to know that the system works as it was intended

implementation: crypto
hash values, access control enforcement
watchdogs, canaries
two-man rules

availability:
services are provided in a comprehensible manner
close to 100 percent availability
accessibility, responsiveness

load balancers, redundant components


privacy protection goals:
similar to CIA in the privacy arena

unlinkability:
        data minimization
        necessity and need to know
        purpose binding
        separation of power
        unobservability
        undetectability

data avoidance
access control
generalization: anonymization (see the sketch below)
abstraction
derivation
separation/isolation
avoidance of identifiers


think of it as a fence
they should be separated by something in between
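
a minimal python sketch of generalization as an unlinkability technique; the field names and coarsening rules are invented for illustration.

    # quasi-identifiers are coarsened so records become harder to link
    def generalize(record):
        out = dict(record)
        out["birthdate"] = record["birthdate"][:4]       # keep year only
        out["postcode"] = record["postcode"][:2] + "xx"  # truncate postcode
        out.pop("device_id", None)                       # avoid identifiers
        return out

    print(generalize({"birthdate": "1980-05-17", "postcode": "24103",
                      "device_id": "abc123", "diagnosis": "flu"}))
    # {'birthdate': '1980', 'postcode': '24xx', 'diagnosis': 'flu'}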

transparency:
        is defined as the property that all privacy relevant data processing, including the legal, technical and organizational setting, can be understood and reconstructed at any time

logging and reporting
user notifications
documentation
status dashboards
privacy policies
transparency service for personal data

transparency as the magnifying glass: look at arbitrary parts and see what concerns me there
the ability of an individual to check out what is going on
also the data protection authorities

intervenability:
        self determination
        user controls
        rectification or erasure of data
        notice and choice
        consent withdrawal
        claim lodging/dispute raising
        process interruption

configuration menu
help desks
stop button for processes
break glass
alert procedures
system snapshots
manual override of automated decisions
external supervisory authorities

intervenability: is a remote control
every type of activity you can do, is intervenability

confidentiality - availability:
you can't optimize them at the same time: they are mutually exclusive
integrity vs. intervenability: the user can make arbitrary changes to data
they are two extremes of the axis

unlinkability - transparency:
no linkability of data <-> full linkability of data


the six pointed star:
you can design systems in terms of these three axes
you can discuss it with cs, cryptographers, law makers
everyone will have an intuitive understanding

we use it in practice very often!
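
one way to make the star concrete, as a toy python sketch; the weights and the tension rule are invented, not from the talk.

    # each axis pairs two protection goals that pull against each other
    AXES = [("confidentiality", "availability"),
            ("integrity", "intervenability"),
            ("unlinkability", "transparency")]

    def describe(weights):
        for left, right in AXES:
            l, r = weights.get(left, 0.5), weights.get(right, 0.5)
            tension = "  <-- hard to maximize both" if l > 0.7 and r > 0.7 else ""
            print(f"{left} {l:.1f} <-> {r:.1f} {right}{tension}")

    # e.g. a surveillance-resistant messenger: strong unlinkability makes
    # transparency and intervenability harder to provide
    describe({"confidentiality": 0.9, "unlinkability": 0.9,
              "transparency": 0.2, "intervenability": 0.2})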


DATA MINIMIZATION QUESTION:
health
cybersecurity


WATERFALL MODEL:
WHEN DO YOU APPLY THIS?


q: these are mutually exclusive? what do you do?
you cannot have a system that optimizes all of them
some techniques will allow you to foster 2-3 at once, but not all of them
you always have to see, for your particular data, which of these protection goals are important and how to weigh them correctly


q: i want to relate this to the theory piece
there is a lot of interest in the technical community for usage based controls
the drm for data
which is heavy on intervenability and transparency
it feels very good if you want people to be able to control their data
if your concern is surveillance
you are much more interested in unlinkability, but that makes transparency and intervenability difficult
but it depends on your threat model
do you relate these choices to broader substantive objectives?
some of the work in voting systems: you need to be able to audit.
if you don't attach it to the integrity of the vote
you want secrecy of the ballot, but you need to think of the poll book
you need to give people principles so that they can make good choices.
how do you do that?

estimating privacy conditions:
        this methodology works for each of these levels
        you can talk about pets
        or if you discuss with a user...

thomas: 3 more dimensions:
maintainability
agility
manageability: to understand what happens in a system
those turn into trade-offs with these goals, too!
that consideration from a practical deployment perspective is very important

a: you need to bring in time


nick:
who doesn't love a good ontological debate
but i want to talk about the opposition
whether these are the correct 6
you are making bold claims about the tensions between them
in the cia triangle, sometimes confidentiality decreases availability
sometimes integrity decreases availability: sometimes you will be dropping a connection to make sure something is not meddled with
integrity and intervenability: data quality principles in the fipps, the data needs to be correct
intervenability: the fact that i can go to a system and change it increases integrity
maybe they can work together


a: integrity, is from cia
integrity can also be correctness
in that perspective, intervenability allows you a situation which is beneficial of integrity


WHERE DO YOU DEAL WITH RISKS?
IS THE IDEA TO MAKE THESE SYSTEMS COMPARABLE?

rainer hoerbe:

we want to cross the sector
federated identity management
we go beyond contexts: 

FIM related privacy risks:
        observability: metadata
        linkability: data quality thing, federated identity management, you improve data quality, but linkability is the high risk
        impersonation: because of weakness in sso mechanism

if you don't have FIM
        you are reusing identifying attributes
        just having the birthdate and first and last name
        within a population of 8.5 million
        only 5 people are not unique?


lawyers and engineers sitting at one table with business people: common understandings are difficult
technical folks: you bring up many problems with FIM
this is a strong argument not to do anything
we have the issue of ip addresses
why should we do something about unobservability
isps are logging ip addresses anyways

discussion about balancing technical and business requirements


approach to understanding requirements:
what people expect: if you invest here in more complex than off-the-shelf solutions
you need to justify that with legal frameworks?


take privacy by design rules and link it to architectural requirements:
limited linkability: conditional unlinkability: you can open it up if there is a court order
you need a clearing service.


late binding/federated credentials:
you are privacy compliant, but you will induce non-compliance somewhere else

pairwise identifiers, but people don't look at the attributes (see the sketch below)
constrained linking:
        there are many ways to link data in different privacy domains
        there are technical options to constrain those links temporarily
        and to make it short term and for one transaction: no permanent linkage in those domains
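
a minimal python sketch of pairwise identifiers as one constrained-linking option; the key handling is illustrative only (real deployments follow e.g. the saml/oidc pairwise subject identifier rules).

    import hmac, hashlib

    IDP_SECRET = b"long-random-key-held-only-by-the-idp"  # hypothetical key

    def pairwise_id(user_id, relying_party):
        # a different stable identifier per relying party, so two services
        # cannot link their records on the user identifier alone
        msg = f"{user_id}|{relying_party}".encode()
        return hmac.new(IDP_SECRET, msg, hashlib.sha256).hexdigest()[:32]

    print(pairwise_id("alice", "tax-office.example"))
    print(pairwise_id("alice", "health-portal.example"))  # different value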


thomas: can you put it within the framework of the two talks
making particular design choices in a particular context
what are the hardest trade-offs
where you were really trading privacy principles against each other
how did you make those decisions
in my experience: it is privacy vs. feasibility
strong controls are business wise not so feasible
you cannot start with the greenfield approach: companies have products and implementations
you cannot have a completely new operational stack
if you don't have log files, if you don't track ip addresses, it is a big headache for operations
it is doable..

meiko:
people are seen as a limitation on what can be done
you just need to know what you do
if you do, you need a legal basis for that



GENERAL COMMENTS:        
MAKES ME THINK THAT WE SHOULD HAVE MORE CASE STUDIES ABOUT HOW PRIVACY DECISIONS DO AND DON'T GET MADE
FIM: WHY SHOULD WE WORRY ABOUT OBSERVABILITY WHEN ISPS HAVE ALL THE DATA? A RACE TO THE BOTTOM ARGUMENT
ALSO TO UNDERSTAND BETTER THE TRADE OFFS WHICH THE PRIVACY GOALS PAPER RAISED AS A DISCUSSION POINT


A LOT OF EMPHASIS ON MAPPING: HOW DO WE MAP PRIVACY PRINCIPLES, NORMATIVE UNDERSTANDING OF PRIVACY TO (PRIVACY) ARCHITECTURE OR MECHANISMS 


ISO: the opener: a separate party, if you open a partially anonymous party, you need two parties to join
there are technical implementations of that
the log servers in the different security domain
it needs to be solved legally and also on the political level
how this segregated harvesting can be managed
surveillance law: the surveilled people have to be informed about the surveillance
these are organizational and legal controls


END OF SESSION DISCUSSION:


Q: you show that there is kind of a trade off between transparency and unlinkability
it is not really, at least in the example you gave it is not a trade off
in the case you gave, i don't want your data to be linkable from one service to another

a: if i want my data to be available to others that is availability and not transparency
you want to think about, one of the parts that we discussed in the symposium a lot
you want to have isolation between two machines: that is unlinkability
you know what the hypervisor does
and you can get more transparency about how the cloud works
there is more personal information
government data in cloud systems: we got them not to move it to amazon, but even if you have their own, on the cloud you mix data that should be separated
if there was an investigation, then they get all 5 systems on the same physical server


there are mutable and immutable policies:
mandatory or discretionary
intervenability and integrity: maybe they merge in discretionary policies

a: 
there are laws
that mandate some of the protection goals
this is mandatory by law
for those parts the policies already reflect these ideas
there are cases that are optional
you can discuss with the policy guys
whether it makes sense to do this or that based on the protection goals, if the law is optional

guy: transparency, is it a realistic goal in terms of what is being done today by companies
the complexity of the level of analysis being done by big companies
i am not sure that anyone, even authorities, has a good overview of what is happening
do you think the users themselves get a good grasp of this, isn't this a lost cause

a: even companies don't know what they are doing
of course, having to explain that to my grandmother would be a problem even if you are in the field
you can do basic boundaries
is my data moved to us and china

guy: if you encrypt data, you can't just listen to the endpoints of the data

ahmed's question:
panel: share your presentations
if we think about government and companies
one is surveillance and government
and the other is everybody else

we should work with everybody else to improve the situation

rainer:
there are some checks and balances
the snowden revelations were a wake up call
in the commercial field and private sector
the winner takes all model will change the landscape so fast
we cannot rely easily on existing privacy law


meiko:
i am careful about the surveillance part
what we have works well with schleswig holstein
it doesn't work perfectly, but it is based on other types of conflict

jose:
i like the principles, all parties should abide by them


trust:

jose:
trust could be thought of as an axis
for defining the architecture
you can put it on no one, that is a user-centric architecture
you can put it on the network provider, then there is a network-centric architecture

rainer:
our paper was developed from a pragmatic point of view
to identify the key pain issues
we pointed to segregation and control, and to not trusting one provider


meiko:
we are in the verify business not in the trust business


nick:
contestability
we are going to continue debating privacy
there is a good chance that the debate on privacy will continue
i am curious about your extensibility story: how can you accommodate changing or different views of privacy

jose:
i am not sure
probably what we have seen is that even though we started with european privacy principles
when we are decomposing into the different layers
we realized that there should be some commonalities in legislation
so that you can adapt the privacy principles
you can still reuse the lower levels

meiko:
it provides a vocabulary to start discussing
we want to look at things from different stakeholders point of view
and the huge research in privacy by design
we have a certification scheme
there is a lot of interest on that from the market
pia: that also can be performed

rainer:
we were trying to get a single coherent story to our potential buyers of our model
we don't have a pluralistic view
but we deviated
we sold the same model in different areas
we found that there is certain kind of trust implicit
because of government parties
so we removed certain controls
we used it also in a supply chain
a strong player
who was not interested in the privacy model but in compliance
we already adopted it


deirdre:
observation + example
privacy rules in the electronic environment
there was a focus on confidentiality of messages and content
identity: somebody would know it somewhere
content is now shared and there is a focus on identity
but you look at what you can infer from all the content
even if we look at the 6 pointed star
we can assess that the semantics of data speaks on the front end
today it is all about machine learning
data's meaning is going to come through its use
can we determine linkability
do we know what this data is? is it about ethnicity, maybe it is, because it is a proxy for it
so, the extensibility, evolvability becomes really important
building off of next question.


final remarks:

jose:
i liked the presentations
and to learn
and how they could feed the landscape of privacy engineering

meiko:
i am happy to see that privacy engineering is starting to be accepted by research communities around the world
sp has lots of security but little privacy
we have discussion in germany for a long time, but didn't feel this way elsewhere
i hope to help get more people into this area
snowden contributed a lot

rainer:
i hope that our practical approach in federated idm helped show how privacy engineering can be deployed in a specific area
if you think this is feasible , and fits your work, let us know!


thomas:
we are moving from a qualitative exercise of privacy to an engineering practice
similar protection goals from different people
think about the decisions between them
the challenges are cut out for all of us!!!




QUESTION: WHAT ARE THE ROLES OF TRUST MODELS? HOW DO YOU DEAL WITH THE TRUST OR DISTRUST OF DIFFERENT USER COMMUNITIES OR PEOPLE AFFECTED BY THE SYSTEM (THEY MAY NOT BE USERS)
QUESTION: WHAT IS THE DEFINITION/DEFINITIONS OF PRIVACY THAT YOU ARE USING? HOW DO YOU DEAL WITH THE CONTESTABILITY AND PLURALITY FOR EXAMPLE THAT DEIRDRE WAS TALKING ABOUT?

HOW DO YOU MAKE SURE THAT USER REQUIREMENTS ARE RESPECTED?

WHEN SHOULD YOUR METHODS BE USED?

EVALUATION OF ENGINEERING METHODS/EVALUATION OF PRIVACY:
METRICS: WE NEED RESEARCH ON METHODS THAT CAN ASSIST US IN MAKING QUALITATIVE EVALUATIONS OF PRIVACY ENGINEERING METHODS


WE WILL SHARE PRESENTATIONS




ian oliver:
holds over 40 patents
semantic technologies and privacy

five years i have been trying to communicate with lawyers
i failed
they started listening to me now
i was invited to an iapp conference in london
and they actually wanted to hear what an engineer had to say about privacy

i am a software engineer by training
mathematician by training
trying to solve why lawyers come at this with lots of problems
i developed techniques for engineers: i don't care about the lawyers

experiences and some lessons:
things i have learned from the safety critical domain


what do we need to make this a proper discipline?
where do we take good and (relevant) ideas?

we are still dominated by the privacy lawyers


what safety critical:
nokia research
we were building fault tolerant systems
all mobile networks are inherently fault tolerant: probably to the degree of aircraft
i learned about aviation and avionics design
i did a project for two years going into medicine
when i started doing that: i thought mobile tech is pretty boring


privacy is going to kill someone at some point
how do we keep the lawyers from doing that
they rely on proven technologies
airbus: uses pl/1
it is a forty year old programming language: it works
proven tools
if you go into the safety critical domain: look at the patterns of communication between people
how do they pick technology and communicate about it

an analogy:
the sterile field:
this is a brilliant analogy:
we are shifting around pieces of information
and we don't want certain pieces of information leaving certain parts of the system

picture of an operating room:
you have a large number of actors
with clear roles and jobs with clear communication patterns
because of all of this, they kill very few people (in the operating room)

long time working with data flow models:
you can use the same modeling technique in this kind of environment
by looking at material flow, you can then use it to minimize what gets to the patient

at each one of the individual points, look at what kind of material protects the next stage:
the main point is a circulating nurse
she does all the nasty jobs around the system
she protects the inner core people from the dirty people sitting out here

strict protocols prevent contamination:
what are these protocols
what are the risks

you can also use these models to prevent this?
access control/segregation

right here is an advertisement company, it is the same problem

we can draw an analogy from these fields
surgery side: material, our side: information
material flow, we have network connection
roles, protocols, contamination, risk, measurement and metrics

that was the start
so i came up with a formal method
for data flows
pii is a dreadful term


we as engineers can't work with pii
when is the last time you looked at a database and said this column is pii
lawyers like to think of location as pii
so we spent a lot of time coming up with a classification of information that sits in between

location information contains a device id
i don't know if that is lat/long
for start we need to discuss if this is pii
we developed a whole bunch of categorizations
and, we also looked at categorizations of where data was coming from
then we started annotating and building a reasonable structure on the top

when you look at your data flow method: you can look at where all the location information is flowing


and now we can reason about a system
we can also do interesting things: if we know what information is flowing here, and tell the user what the privacy policy should be.

we can also divide them into models
and reason further over data flows that cross boundaries
we can talk about security and architectural layers
jurisdictional layers: is the information flowing securely from europe to the us
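
a toy python sketch of this kind of reasoning: annotate flows with information categories and check a rule over them. the nodes, categories and the rule are invented for illustration.

    FLOWS = [
        # (source, destination, categories carried, destination jurisdiction)
        ("app", "analytics", {"location", "device_id"}, "US"),
        ("app", "backend", {"location"}, "EU"),
    ]

    def check(flows, restricted={"location"}, home="EU"):
        for src, dst, cats, jurisdiction in flows:
            leaving = restricted & cats
            if leaving and jurisdiction != home:
                print(f"flow {src} -> {dst}: {sorted(leaving)} leaves {home}")

    check(FLOWS)  # flow app -> analytics: ['location'] leaves EU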


this particular crash: required five pilots
cessnas are more complex
this was the first crash (1935) where they didn't blame the pilot
they crashed at a show and they asked the question: why did it crash
in the end it was a simple problem:
a lever in the plane, that didn't work well.


so, they invented something called a checklist: that is like being a privacy engineer
no additional pilot training: but a change in culture
you need to change the culture
first checklist for an aircraft had five items. after doing that, that type of aircraft never crashed again

preventing central line infections:
peter pronovost
90 percent of the deaths were caused by a tube that was put into the mouth
the second most effective medicine since penicillin:
        check list
        wash hands
        wear sterile clothing
        clean patient's skin
        avoid veins in arm and leg
        check line for infection

devolved responsibility:
         all are given power to stop the procedure in case of non-compliance, e.g., nurses cross-check doctors
        no impact on process
        tool improvements:
                dedicated packs for central line equipment including sterile clothing, drapes, soaps, etc.
                placement of equipment next to each patient (readiness)

checklists in surgery:
atul gawande et al.
check list manifesto
surgeons get a single sheet of paper
this probably prevents accidents in 10 percent of the cases
that is a pretty good figure
it is not complex
he realized that it needs to be independently processed: and no tick boxes!
it is a real checklist: one has to challenge another, and the other has to come with a response
there is nothing here that says whether you can or cannot do the surgery
it tells you which stage you are in
so, emphasis on communication
devolved responsibility

privacy audit checklist:
inspired by the WHO Surgical Safety Checklist

when we started reviewing, the reviews were almost ad hoc:
we decided to try and standardize that with data flow techniques
imagine yourself as doctors
communicate and build your team like a surgeon


before the reviews: we needed to know something about the system
what was the scope of the audit
auditing against a privacy policy is a very different audit
from auditing compliance with regulation
those of you who do privacy reviews: did you check if they already did a security review? in most cases, not.
who is the r&d team
what kind of information are you collecting
how about location data 
you never ask if they are collecting pii: because they always say no.
if they say yes, the lawyers get involved.
then we would ask if they are collecting location details
health data, child related data
where is the system being deployed, within the house, or going abroad

surgery room:
you have a role: you can only come out of the role if you are called out

think about your roles: there are days in which you want the privacy officers to become the software engineering expert
the person leading the review should not be doing the checklists
you empower the person: you take a student to do the checklist
how many of you have gone up to the cto and said you are doing this wrong
the main thing they realized was you have to devolve responsibility
poor student asking the surgeon: your name your job, it opened up communication
how many of you think you could do better if you could speak up
we picked arbitrary times in that flow
we told the software engineers
build it and on this day we will come and check: we made our process independent of theirs
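
a toy python sketch of such a challenge/response checklist; the items paraphrase the questions above, the mechanics are invented for illustration.

    CHECKLIST = [
        "what is the scope of the audit (privacy policy vs. regulation)?",
        "has a security review already been done?",
        "are you collecting location details?",
        "are you collecting health or child-related data?",
        "where is the system deployed: in-house or abroad?",
    ]

    def run_review(respond):
        # the (empowered) checklist reader challenges; the team must answer
        # every item aloud -- no tick boxes, every answer is recorded
        return [(item, respond(item)) for item in CHECKLIST]

    transcript = run_review(lambda item: input(item + " ").strip())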


big public failures: 
what do all safety critical system developments have in common?
snowden didn't change anything


culture and communication:
aviation safety culture
medical safety culture
engineering safety culture
software engineering and safety critical systems

we haven't gotten better because the technology has gotten better, but our attitude towards technology has changed
engineering safety culture: how we build bridges and cars


exercise left for reader:
how do we change the culture of the privacy an software communities to enable privacy engineering to emerge as a real engineering discipline?

deirdre:
engineers joining the conversation in a much more robust way is happening because lawyers lost their grasp
i mentioned that i spent time looking at different countries
different legal systems lead to different structures in companies
france: low level lawyer
germany: interdisciplinary team
some of it has to do with the kinds of regulations and incentives
germany: lots of obligations on firms: breach notification got them way more access to technical resources
if you want the safety culture and not compliance

i think legal choices are part of it


ian: if i compare the us and eu lawyers, they are very different
how do we communicate with the engineers

the move to resilience is interesting:
we need to keep the system running 
resilience in aviation: what can you do to keep the plane from crashing
then it became keeping the plane flying
many areas have moved to resiliency
resiliency at the expense of security: we have a problem
if it starts replacing privacy and security


collection vs. usage:
there has been a lot of emphasis on collection
data minimization
legal community notices: you are sending gps coordinates
we are getting all sorts of data with it
servers have logs
lawyers were shocked
the move to usage: as i read the law
privacy laws are all about usage
the systems are built in reverse
we have a lot of points of collection
mobile, the app, third parties
if you look at the organization of the company: the end dataset is a db accessible to everybody
the usage is blown over: which is opposite to how the law is written
they need to align: i don't know how to do that
to change a company so that different parts of the company no longer talk to each other
unless you go down military style separation and classification

what is resilience in privacy:
you can take a step back and say did we lose the debate when they posted things to facebook
resiliency has been a property on top of a system with the basic engineering and security properties of the system
chemical plant:
        it exploded
        and it made an impact on safety engineering
        they spent 10 years concentrating on how to build a safe plant
        and how do we make it so that it is fail safe
        that seems to be a trend in many areas


nick:
        distributed cognition in cockpits
        sociology of communication paper
        they put two airplane pilots in a simulator
        and record their conversations
        90 seconds of their conversations
        the pilot screws up three times
        the result of the paper made me think about responsibility
        they used the system, the cockpit itself to improve the communication

i am curious about whether we can do something in the artefact and technology to make it easier to communicate or to review.
        we don't have good tools
a:        we used a data flow model
        it is the right tool for modeling
        but not for communications

        the best tool i found so far is the checklist, but the culture that comes with that is something else
        the aviation people have good numbers on this
        what we should do
        if there is a privacy breach: we should take a random engineer and random lawyer and shoot them


eve maler:
        extending the power of consent with user managed access
        a standard architecture for asynchronous, centralizable, internet-scalable consent

        privacy goals vs. reality:
        aspirations:
                fipps and privacy by design speak of a larger privacy
                we can talk about decisional autonomy and freedom
                when we look at metrics: we got statistics from the iapp salary survey
                the most mature companies and governments got that way by being best in compliance
                in post snowden era
                people know: they are sick of things: they might be ready for a change
                they are cynical
                i think the culture is changing
                people have gotten used to sso and social ...

        in ordinary online contexts the consent experience is dire:
        we have opt outs
        at least it gives me some control
        opt ins: not actual choice
        then we have the cookie directive thing
        the privacy experience has been reduced to this

        the tension is going to get worse with the internet of things
        smart socks
        it will not get easier
        the tension grows, not because of the fear
        but from the greater desire to share information
        inflight ad: cpap
        it shares data: the vision of selective sharing: a positive vision
        consent models: not dictated by consent models

        oauth:
                for scoped api sharing (see the sketch after this list)
        share button:
                it is scoped and it is proactive
                you use it for collaborative editing
                the purpose of use: 
                        consent at doctor's office: you fill that out for reasons of your own
                share button is a form of consent button: they draw outside of the lines
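
        a minimal python sketch of the scoped-sharing idea: the client asks only for narrow scopes, not blanket access. the endpoint, client id and scope names are hypothetical; the query parameters are standard oauth 2.0 (rfc 6749).

            from urllib.parse import urlencode

            def authorization_url(endpoint, client_id, scopes, redirect_uri):
                # request only the scopes the client actually needs
                return endpoint + "?" + urlencode({
                    "response_type": "code",
                    "client_id": client_id,
                    "redirect_uri": redirect_uri,
                    "scope": " ".join(scopes),  # space-delimited per rfc 6749
                })

            print(authorization_url("https://idp.example/authorize", "demo-app",
                                    ["read:steps", "read:sleep"],
                                    "https://app.example/callback"))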

        consent requirements that freshen up

        choice
        relevance
        granularity: can you dictate the purpose of use
        scalability: iot
        automation: iot needs automation, consent in a machine readable fashion
        reciprocity: it is peer to peer


        how do classic and new consent mechanisms do against the requirements?

        user managed access: builds on oauth
        and enables a centralized dashboard
        after you hit share buttons from different places


        doctors will not buy this because of liability:
        binding obligations


meiko:
        companies need internal management
        we need to bring this information to the software development stage



guy zyskind:
        decentralizing privacy: using blockchain to protect personal data

the problem of protecting personal data:
        data are stored centrally
        there are no trusted third parties

        user perspective: 
        security breaches
        users don't own their data
        users can't audit

service perspective:
        cost (compliance, security audits, hiring cs phd)
        brand reputation: they want a simple way to do it
        so it is important for companies as well


you have an app: registry service for first handshake
from that point on the user loses control
everything is in a central server and out of the users reach

how about instead of centralizing everything
the user and service can trust a network that has cryptographically guaranteed security properties
blockchain: technologically model the user in the center
and he can define who can access his data and in what ways
distributed hash table as storage
with privacy preserving data mining techniques

take the blockchain of bitcoin and use it for privacy
overly simplistic

bitcoin is...


goal: construct a public time stamped log of all valid transactions without using TTPs
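
a toy python sketch of that goal: a hash-chained, timestamped log where tampering with history is detectable without a trusted third party. this illustrates the idea only, not the paper's actual bitcoin-based protocol.

    import hashlib, json, time

    def append(log, tx):
        prev = log[-1]["hash"] if log else "0" * 64
        block = {"tx": tx, "time": time.time(), "prev": prev}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        log.append(block)

    def verify(log):
        prev = "0" * 64
        for block in log:
            body = {k: v for k, v in block.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if block["prev"] != prev or block["hash"] != digest:
                return False
            prev = block["hash"]
        return True

    log = []
    append(log, {"user": "alice", "grant": "app may read location"})
    append(log, {"user": "alice", "revoke": "app may read location"})
    print(verify(log))  # True; editing an earlier entry breaks the chain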


when a user installs a mobile app, she can control and audit what data are stored and how they are used. access should be revokable.


early in the design of the protocol:
UX studies
first thing that they learned: people were confused by the centralization of policies
and the apis being separate from the policy
today it would be easier: there is a separate box that comes up


available, but how do you push: how do you make sure that they pick up on the data: notice that there is something new!

privacy technologies:
what is your wish for them to change in terms of liability?

eve: i have observed the nstic situation
there has been a lot of edge work


surveillance, privacy and infrastructure

nick doty:
reviewing for privacy in internet and web standard setting
ietf and w3c
i will explain the data: it is early research for me
based on this data: i want to introduce the history of security and privacy reviews
reactions to snowden: experimental study
trends: in standard settings

many people have not seen a standard:
html5: w3c recommendations


documents: that is what a standard is, how do you make one


privacy specific standards: on the application layer, p3p, dnt
tools: to make it more systematic, for conducting reviews
at w3c there is work on a self-review questionnaire: similar to the checklists from earlier
i am working on a fingerprinting document but also one on the privacy impact assessment approach

how did we react to snowden


here are some questions i have, and i am curious about your questions:
what tools are effective and can be systematic in the standards setting environment
what can we learn about consideration of values (privacy, security, accessibility, freedom of expression) in multistakeholder groups?


q: there was somebody else in our organization that followed this. the dnt was fun to watch
there was one standard about the collection of data from a light sensor
there was a little piece of work: let's do a privacy review. the trouble was that it was so simple
the discussion never went anywhere.
whether the data from a light sensor is pii or not
and it missed all the piece of context around
at that point, i left 
i realized, we were so naive about what we should go and review
the discussion was not whether it is about pii or not
it was really about what the hell are we talking about
how has the inclusion of a privacy considerations section driven how we think about privacy and how we should review things
i wish i had more time


a few good things:
all the deliberative process is out in the public
you can go read that 
wide ranging debates
i remember this debate, i think we have a similar finding: asking the question is x pii does not seem to be very productive
there is always a debate about it
it is an important thing, but whether it was pii was a distraction
what we have learned from that is getting a common set of questions
the questions we got in that debate
was about correlation


comments:
very enjoyable presentation
i want to say that snowden: when we talk about snowden

meiko:
great to see the quantity of work
what i would be interested in is the quality
from our point of view
if nobody found a problem, that does not mean that there is no problem
if people talked about it doesn't mean that there was a conclusion
we have seen the failures of standardization at w3c
things that did not work as planned


a: how do we know that any standard is any good

q: is there some quality to it, at least some consensus

in terms of success criteria: there is a dearth of those
adoption is a classic one
interoperability
interoperable implementations is a basic requirement
in a way that is a simplistic measure

one thing we may want to do is to follow up after you have done a review

q: how do we do privacy engineering at the level of standards
we know what pieces don't work by themselves
saying you have to have a security consideration without guidance and leadership doesn't work
it didn't work for us and i think it doesn't work in general
in previous paper: expertise
we noticed that there is increased integration of expertise in the standards
people have backgrounds in both areas: that seems to be a trend
it is hard to know if this is causing better outcomes
in terms of tools: i am not sure how to evaluate the effectiveness of tools
a lot of the tools i was showing

RFC 3552, 6973
a lot of this is glossary
it would be interesting to find out if fingerprinting means the same thing for a different group


gina fisk:
i am a systems and cybersecurity person, we have our own vocabulary
a system that grabs data from across sites
some privacy principles that we are using and that guide our work
to keep the network traffic as private as necessary

privacy principles for sharing 

cyber attacks are costing a ton of money
but no one wants to share data
there are organizational risks: everyone will be labeled as insecure
all this network data has private information in it
we are looking at network data, it could contain medical data
governor of new mexico: all of her purchases were leaked

we have human subjects requirements: pii must stay private
we also have open data issues
sharing all this data brings both privacy and organizational risks
and they are all complicated 
we are balancing both

retrospective picture of what happens:
retro-future
a system that allows controlled information sharing across and within organizations

we will balance privacy, risk and the ability to recover from the cyberattacks
there are three privacy principles that guide our work
        the principle of least disclosure
        principle of qualitative evaluation: we have to bring in policy and legal side
        principle of forward progress: not becoming paralyzed by these things; in the government we get paralyzed by these things, we can't share data, and people put their heads in the sand

principle of least disclosure
        we disclose the minimum amount of information that is sufficient to achieve the objective
        internal disclosure: collecting data counts as disclosure, even if it has not been released
        privacy balance: privacy of the person asking and providing data
        inquiry specific release: access to the data must be moderated and limited

the engineering approach for least disclosure:
        steganography
        minimal requisite fidelity...
        fisk et al. introduced 
        in communications: the minimum degree of signal fidelity that is acceptable to end users but destructive to covert communications
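
(an illustrative sketch, not the retro-future implementation: one possible fidelity reduction over a shared flow record; all field names and masking choices here are assumptions)

import ipaddress

def reduce_fidelity(flow: dict) -> dict:
    """return a lower-fidelity copy of a flow record for sharing."""
    src = ipaddress.ip_address(flow["src_ip"])
    # truncate the source address to its /24 so individual hosts are hidden
    src_net = ipaddress.ip_network(f"{src}/24", strict=False)
    return {
        "src_net": str(src_net),            # host identity removed
        "dst_ip": flow["dst_ip"],           # suspected attacker kept at full fidelity
        "dst_port": flow["dst_port"],
        "bytes": round(flow["bytes"], -3),  # coarsen volumes to the nearest kB
        # payload is deliberately dropped: it may carry medical or financial data
    }

print(reduce_fidelity({"src_ip": "192.0.2.77", "dst_ip": "203.0.113.9",
                       "dst_port": 445, "bytes": 13842}))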


data confinement:
        i keep my data and require people to ask questions

query management:
        moderated queries
        poker queries: queries that minimize what information you disclose when making a query
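
(a minimal sketch of data confinement plus a moderated "poker" query; the hashing scheme is my assumption and hides little for small spaces like ipv4, it only illustrates minimizing what both sides disclose)

import hashlib

class ConfinedDataset:
    """the data owner keeps the records and answers moderated queries."""
    def __init__(self, indicators):
        # store digests so even this service never ships raw indicators
        self._digests = {hashlib.sha256(i.encode()).hexdigest() for i in indicators}

    def count_matches(self, digest_queries):
        # answer with a count, never with the underlying records
        return sum(1 for d in digest_queries if d in self._digests)

owner = ConfinedDataset(["203.0.113.9", "198.51.100.4"])
# the asker sends only a hash of the indicator it is hunting for
query = [hashlib.sha256("203.0.113.9".encode()).hexdigest()]
print(owner.count_matches(query))  # -> 1, without exporting the dataset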

qualitative evaluation:
        we have legal constraints
        IRB
        technical limitations:
                there is no single fix for privacy

principle of forward progress:
        organizations must not become paralyzed by least disclosure and qualitative evaluation

controlled disclosure: rate limiting responses
data aging:
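
(another sketch, under assumed thresholds: rate-limited responses and data aging; nothing here is from the talk beyond the two ideas themselves)

import time

MAX_ANSWERS_PER_HOUR = 10            # assumed threshold
RETENTION_SECONDS = 30 * 24 * 3600   # assumed: after a month, coarsen records

class RateLimiter:
    def __init__(self):
        self._answer_times = []

    def may_answer(self) -> bool:
        now = time.time()
        # keep only the answers given within the last hour
        self._answer_times = [t for t in self._answer_times if now - t < 3600]
        if len(self._answer_times) >= MAX_ANSWERS_PER_HOUR:
            return False  # bulk extraction attempts stall here
        self._answer_times.append(now)
        return True

def age(record: dict, now: float) -> dict:
    # data aging: old records keep only coarse, non-identifying fields
    if now - record["ts"] > RETENTION_SECONDS:
        return {"ts": record["ts"], "bytes": record["bytes"]}
    return record

limiter = RateLimiter()
print(limiter.may_answer())  # True, until the hourly budget is exhausted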

retro future


q:
what's the difference between cybersecurity data and security data?
a: what makes data cyber is something like pcap or netflow
data could be regular today and cyber tomorrow


ian:
how much of what you are doing here: how much did you have to do this because of snowden?
a: i see a lot of changes with working with the government, we have new procedures in place
this has not directly affected us, but the gradual trust relationship would depend on how ..
if you are another country, or you are the government, we can graduate the trust in any way we want

q: you are being very proactive about it?
how many times are we on the nyt frontpage, how many times do we have to testify
we take these seriously


nick: i think we are seeing people calling for technical measures post snowden

thomas:
the examples you showed with minimum fidelity:
to what degree have you looked at running machine learning over those datasets for intrusion signatures

a: we are using machine learning to categorize malware families
look at what they are trying to do
q: the question is then, how do you think about minimum fidelity, if your customer has to look at a broad dataset
a: we are just trying to do that, when we define the trust relationships and data transfer
we are just starting to implement that
when you have the three things: 


q: how do the different parts of your work talk to each other
nick: i am interested in the irb and technical limitations
how we can use that more broadly

we get vendors saying you have this solution
we need to have a human in the loop
that is the idea of the principle
no matter what we do, we are going to have sanity checks

nick: that might be similar to the broadening view in standards, who hasn't looked at it yet
a: that is why we want to publish in this venue

deirdre:
have you used public health as a model
there is a lot of emphasis there on minimal datasets
are there particular things you are taking as an inspiration

we have a similar project on biosurveillance
they are looking at data records: clusters of flu outbreaks
they are using the same principles

ian moderator:
a comparison of secure two party computation frameworks:
jan henrik
what is secure two party computation

motivating scenario: genetic testing
how do you do this in a privacy preserving way
you can send your genome
there are data leaks
that could expose you to identification, discrimination
higher rates in your health insurance
the other way around: companies can send us the database and we can do the matching ourselves

how do we do it
secure two party computation
we can do the matching in the middle
nobody has to share the data
rigorous privacy protection, though the output of the computation itself could reveal something: so you can combine it with other protections
and theory tells us that any efficiently computable functionality can be computed securely
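
(to make the "nobody has to share the data" point concrete, a toy sketch using additive secret sharing, a different technique than the garbled-circuit frameworks compared in the talk: two parties learn the sum of their inputs and nothing else)

import secrets

MOD = 2**61 - 1  # all arithmetic is modulo a fixed prime

def share(x):
    """split x into two random-looking shares that sum to x mod MOD."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

a1, a2 = share(23)   # alice's private input: 23, she sends a2 to bob
b1, b2 = share(19)   # bob's private input: 19, he sends b1 to alice

# alice holds (a1, b1), bob holds (a2, b2); each adds its shares locally
alice_partial = (a1 + b1) % MOD
bob_partial = (a2 + b2) % MOD

# combining the partials reveals only the result, never the inputs
print((alice_partial + bob_partial) % MOD)  # -> 42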

garbled circuits:
2-3 talks at the conference
where do the main overheads occur


model
use stc as a black box
if there are so many frameworks, why don't we see this in practice

processing overheads
        crypto ops
        data blow up
        memory
communication overheads
        interaction
        data blow up

development and usability
        language support
        abstractions 
        documentation

if i am a privacy engineer: which one should i use
        dependable benchmarks and comparison?
        the benchmarks are not comparable.
        we did a dependable comparison of stc frameworks

benchmarks
        basic operations
        advanced operations

        some of the stc frameworks did not implement the functionality we wanted to test
        i want to highlight the results

garbled circuits and homomorphic encryption, which one should i use


the answer is: a hybrid backend, agnostic frontend
as a privacy engineer that uses stc as a framework, i want an agnostic framework

if i have to implement a new functionality, how well does it work:
        CBMC performs best

first conclusion is that GC is more promising than HE
improvements on the lower bounds on circuit size
determining the performance of gc
more research on hybrid approaches: still need to be investigated

how can we guide the inexperienced stc developers
a lot of the code is research grade: functionality does not work out of the box
so there is still a lot to be done

q: any intuition about what turns into a fast stc
CBMC draws a lot of performance from optimizing the garbled circuit
they tried that for a couple of examples, but not exhaustively for anything you can do

thomas:
what was the data set, the operation complexity


eva:
development has a lot to do with ecosystems of deployment
it was fiddly to implement, hard to debug, they didn't see the point
you had to implement security, that was friction with the application
there has to be some value in it for an application developer
and that is why oauth2 happened: which took away the need for crypto
now we are adding it back
have you thought about adding it back in dribs and drabs
what would be the value of implementing this, where is the value

a: you need to look for use cases that you cannot do right now, and see if you can do them with stc
q: is it not interesting for calculations of an identity attribute
you can add it where it is valuable at the margin
a: i am a big fan
one application that i have seen is in bitcoin: you can split the key that controls the bitcoin and collaborate on the control of your bitcoin funds

q: i am one of the garbled circuit authors. this is an invaluable part of the process
q: wireless networks: is that fair? we are not there yet
in the experiments we have done, there is no way we can handle communication of circuits with billions of gates
in cloud service, in that kind of application, some of those 

a: quick answer, we looked at wired and wireless. it is fair; not looking at wireless gives a disadvantage for homomorphic encryption
i don't think HE is really dead


tor experimentation tools:


collecTOR:
collecting consensus documents
this is valuable information for researchers
it allows the recreation of the status of the network in the past


experimentation is mandatory to maintain the current state of the network and to ensure that anonymity is preserved into the future
tor: it is open source software
you can download the software, make some modifications in order to measure some statistics
you need a server connected to the internet
extend the software for statistics
realistic environment: the users are there

we are talking about privacy: there may be some concerns
it is not a good idea to experiment with the live tor network
we only have access to the part of the network we are contributing
no control over the experiment

limited to deployed network
results cannot be reproduced
we cannot know what users are transferring through the network at that time
we may threaten users' anonymity
not recommended


what we need is an alternative to the live tor network
we define some requirements:
        realism: the experimentation environment needs to be realistic
        flexibility and control
        safety
        scalability

we come up with a categorization of approaches previously and currently used to experiment with the tor network

analytical and theoretical modeling: it is the basis for simulation and emulation (you need a model of the system that you are trying to emulate, you need to verify the theoretical model)
private tor networks: limited in scalability
a technique used quite widely is the use of overlay networks, like planetlab: it is an overlay network for researchers to perform experiments
it has been used 
it has limitations: scalability
cannot be reproduced: we cannot control what they are doing with the current network

which leads us to simulation and emulation: we compared these (a toy virtual-time sketch follows the tool list below)
the basic comparison is:
        abstract model of the system, assumptions for simplicity
        virtual time
emulation:
        little or no assumptions, all operations performed
        real time

simulators: shadow, torps, cogs
emulators: experimenttor
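
(a toy sketch of the virtual-time idea behind simulators like shadow, as opposed to emulators running in real time; illustrative only, and far simpler than the actual tools)

import heapq

# a discrete-event loop: the clock jumps straight to the next event,
# so a "ten minute" experiment finishes in milliseconds of wall time
events = [(0.0, "client builds circuit"),
          (1.5, "first relay forwards cell"),
          (600.0, "circuit torn down")]
heapq.heapify(events)

while events:
    clock, what = heapq.heappop(events)
    print(f"[t={clock:6.1f}s virtual] {what}")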

evaluation metrics:
        experiment characteristics

        tool characteristics


GENERAL COMMENTS:        
MAKES ME THINK THAT WE SHOULD HAVE MORE CASE STUDIES ABOUT HOW PRIVACY DECISIONS DO AND DON'T GET MADE
FIM: WHY SHOULD WE WORRY ABOUT OBSERVABILITY WHEN ISPS HAVE ALL THE DATA? A RACE TO THE BOTTOM ARGUMENT
ALSO TO UNDERSTAND BETTER THE TRADE OFFS WHICH THE PRIVACY GOALS PAPER RAISED AS A DISCUSSION POINT

QUESTION: WHAT ARE THE ROLES OF TRUST MODELS? HOW DO YOU DEAL WITH THE TRUST OR DISTRUST OF DIFFERENT USER COMMUNITIES OR PEOPLE AFFECTED BY THE SYSTEM (THEY MAY NOT BE USERS)
QUESTION: WHAT IS THE DEFINITION/DEFINITIONS OF PRIVACY THAT YOU ARE USING? HOW DO YOU DEAL WITH THE CONTESTABILITY AND PLURALITY FOR EXAMPLE THAT DEIRDRE WAS TALKING ABOUT?

HOW DO YOU MAKE SURE THAT USER REQUIREMENTS ARE RESPECTED?

WHEN SHOULD YOUR METHODS BE USED?

EVALUATION OF ENGINEERING METHODS/EVALUATION OF PRIVACY:
METRICS: WE NEED RESEARCH ON METHODS THAT CAN ASSIST US IN MAKING QUALITATIVE EVALUATIONS OF PRIVACY ENGINEERING METHODS


A LOT OF EMPHASIS ON MAPPING: HOW DO WE MAP PRIVACY PRINCIPLES, NORMATIVE UNDERSTANDING OF PRIVACY TO (PRIVACY) ARCHITECTURE OR MECHANISMS