Recycling from the original FET proposal

In general, the original FET proposal is much more specific and product-centred than what we have been discussing - it seems to be a subset of what we want to do, mostly specific to VPLs (visual programming languages).

Feature list for a language

Important VPL features (“Simple things should be simple. Complex things should be possible.” - A. Kay)

  • multiparadigm VPL (eg. capable of dataflow, message passing, onion skinning, grids)
  • introspection (eg. language level inspection of active constructs)
  • metaprogramming (eg. programs that write programs - see the sketch below)
  • distributed (eg. cluster or GRID based)
  • high level as well as low level constructs
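
A minimal sketch of what the introspection and metaprogramming features mean in practice - Python stands in purely for illustration (qfwfq itself would be visual, and make_adder is a hypothetical name):

  import inspect

  # Metaprogramming: a program that writes (and then runs) a program.
  def make_adder(n):
      src = "def adder(x):\n    return x + %d\n" % n
      namespace = {}
      exec(src, namespace)              # generate and load code at runtime
      return namespace["adder"]

  add_three = make_adder(3)
  print(add_three(4))                   # -> 7

  # Introspection: inspecting an active construct at the language level.
  print(inspect.signature(add_three))   # -> (x)
  print(inspect.getsource(make_adder))  # the code that wrote the code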

Important editor features (“Things should be as simple as possible, but no simpler.” - A. Einstein)

  • uses graphical metaphors and tools
  • supports interactive, incremental development
  • uses language features for debugging and testing
  • capable of editing textual representations if required
  • multiple views / resolutions
  • automatic layout / presentation (n<10^6 nodes)
  • flexible / extensible

Perhaps it's useful to minimise the distinction between language and IDE?

This is material we haven't focused on or elaborated so far:

Scalability

“One of the major reasons for the lack of adoption has been the scalability of visual programs, as there has been little (if any) successful work in scaling the well documented benefits achievable with small systems (on the level of 1000s of lines of C code), to systems on the scale of a kernel (Linux is approx. 2×10^6 lines of C) or an operating system (Redhat 7 is approx. 3×10^7 lines of text based code). While ‘lines of code’ is not necessarily an effective measure of a program’s (or system’s) complexity it does give an indication as to the relative scale of such code-bases.”

(Automatic) Interfaces with external libraries

“For any language to be of general use currently (and in the future), it requires interfaces to enable interoperation with commonly used libraries and hardware in heterogeneous computer systems.”

“This design would incorporate ideas from existing, large scale networked software distribution projects, such as CPAN (the Comprehensive Perl Archive Network), CTAN (for TeX), and Debian (with over 100,000 software packages running on 11 architectures). While it is beyond the scope of the project to establish or administer such a network, it is a necessary step for future adoption of the language, so warrants some investigation. We anticipate that extending existing software would provide the required features.”

Different application areas
  • Prototyping: “The major strengths that the Qfwfq system has over existing software systems are in rapid prototyping and design. Currently RAD (rapid application development) and RSP (rapid systems prototyping) tools are in use in every major industry and are becoming a more important part of the workflow.”

  • Media industry
  • Distributed computing
  • Sensor networks
  • GIS (geographical information systems)
  • HEP (high energy physics)

Brainstorming

Range of interesting technologies

  • Tangible interface
  • Haptic feedback
  • Touchscreens
  • Keyboard/Mouse

Towards potential concreteness

Spreadsheets → Programmable matter, use cases:

Tangible programming for searching open data
  • Human augmented realtime search (over batch mode search)
  • Is a change in emphasis (human interaction over raw speed) worth the trade off?
  • How can this be realised?
    • Interfaces
    • Suitable algorithms
    • Suitable visualisations
  • Going further - realtime modification of (eg. search) algorithms while they are running (see the sketch after this list)
    • Human shaping/adapting an algorithm as it searches
    • Connection to livecoding
    • Collaboration - allowing new levels of engagement with massive datasets by multiple parties
    • As a form of conflict resolution
    • Performative aspect - Hans Rosling style, but where the algorithms are open to criticism/flexible, more compatible with the spirit of open data
    • Public access to data and interpretation
    • Fiddle with the interpretations for yourself?
    • Online/accessible/cheap tech very important here
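
A minimal sketch of the “modify the algorithm while it is running” idea, assuming (hypothetically - Python stands in for the eventual tangible interface) that the live part of the search sits in a mutable slot which a human, or a livecoding editor, can rewrite mid-run:

  import random
  import threading
  import time

  # The live part: the filter sits in a mutable slot, re-read on every
  # record, so a human can reshape it while the search is running.
  live = {"keep": lambda record: record % 7 == 0}

  def search(stream):
      for record in stream:
          if live["keep"](record):
              print("hit:", record)
          time.sleep(0.01)

  stream = iter(lambda: random.randrange(1000), None)   # endless fake data
  threading.Thread(target=search, args=(stream,), daemon=True).start()

  time.sleep(0.5)
  live["keep"] = lambda record: record < 50   # reshape the search mid-run
  time.sleep(0.5)
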
"Getting a feel" for large datasets
  • Often cited as a problem by many professionals in different areas
    • “Used to be able to look at the numbers in a text editor/excel, now I can't”
  • Practitioners already have an intimate knowledge of their data; this is lost with size/“when it goes into the computer”
  • Why is this a problem?
    • Leads to a loss of control/understanding
    • Need to understand implications of algorithms - perhaps before data is collected
  • How can it be solved?
    • Psychologically (what is missing?)
    • Technically (see the sketch after this list)
  • How domain specific does the solution need to be - is it generalisable?
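
One technical sketch (Python for illustration; feel is a hypothetical helper): rather than eyeballing raw rows, stream the data past running statistics plus a reservoir sample small enough to “look at in a text editor”:

  import random

  def feel(stream, sample_size=10):
      """Running stats plus a small random sample of a huge stream."""
      n, total = 0, 0.0
      lo, hi = float("inf"), float("-inf")
      sample = []
      for x in stream:
          n += 1
          total += x
          lo, hi = min(lo, x), max(hi, x)
          # Reservoir sampling: every item gets an equal chance to stay.
          if len(sample) < sample_size:
              sample.append(x)
          else:
              j = random.randrange(n)
              if j < sample_size:
                  sample[j] = x
      return {"n": n, "mean": total / n, "min": lo, "max": hi,
              "sample": sample}

  print(feel(random.gauss(50, 10) for _ in range(100000)))
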
Understanding of time
  • A problem in social networks
  • Implications in other areas
  • Use of music
  • Human time vs computer time
    • Moving between these for tangibility
Finding appropriate ways of programming with a limited interface
  • Making phones/tablets hackable
    • Lack of tools in this area is a fundamental problem
    • Leads to specialisation of programming, reduces accessibility massively
    • Focus on the Aakash (low-cost Indian tablet), for example
  • Text editors (even with nice features) suck on touch screens
    • Perhaps “making text geometric” (Scratch) is not enough either
  • Does this actually require rethink of programming down to language level?
    • New language axioms, or just new representations of eg. map/reduce/filter/for? (see the sketch after this list)
    • Amorphous programming, focus on parallel, spatially arranged small programs robust to fuzziness
      • Is this a more suitable framework for less discrete/“gestural” interfaces?
      • Multiple levels of process resolution (parallel with overall marshalling - scatter/gather approach?)
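
A sketch of the “just representations” option (Python for illustration): the pipeline below is four composable blocks - source, map, filter, reduce - with no loop syntax or mutable index to type, which is one reason these primitives may suit touch and gestural interfaces better than geometric text:

  from functools import reduce

  readings = [3.2, 51.0, 4.8, 49.7, 5.1]            # source block

  calibrated = map(lambda r: r * 1.1, readings)     # map block
  plausible = filter(lambda r: r < 50, calibrated)  # filter block
  total = reduce(lambda a, b: a + b, plausible)     # reduce block

  print(total)  # each block is also trivially parallel (scatter/gather)
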
Novel approaches to creativity
  • Games as learning environments - well researched area
    • Game world as “safe space” for experimentation/creativity
    • Games as ways for people to see things from different perspectives
  • “Game programming” as a solid existing basis for creative learning
  • Current examples lack integration of programming into the game world itself - treated as separate “layer”
    • When programming “invades” the game world currently, it's a hack - Minecraft/LittleBigPlanet CPUs
    • We can make this hack a feature - designed in from the start
  • Algorithms as world, processes as agents = a very visible/tangible programming model (see the sketch after this list)
    • As a solution to algorithmic malleability
      • Easy to see what's going wrong and where
      • It's realtime
  • Games as environments filled with interacting agents (incl humans)
    • Human level of understanding, rather than machine
    • Current languages abstract machine processes into human-level metaphors (for/while loops etc → assembler)
    • Next languages need to also abstract machine time to human understanding?
    • Remove the write, compile, run cycle - programming as interaction (see above)
    • Debugging techniques
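
A minimal sketch of “algorithms as world, processes as agents”, assuming a hypothetical toy grid world (Python for illustration) where each process is a visible creature and debugging is simply watching the world tick:

  import random

  class Agent:
      """A process rendered as a creature: its program is step(),
      its state is its visible position in the world."""
      def __init__(self, name):
          self.name = name
          self.x = random.randrange(10)

      def step(self):
          self.x = (self.x + random.choice([-1, 1])) % 10

  world = [Agent("a"), Agent("b"), Agent("c")]
  for tick in range(5):                  # the realtime loop
      row = ["."] * 10
      for agent in world:
          agent.step()
          row[agent.x] = agent.name      # draw the world every tick
      print("".join(row))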

Initial 2011 reset

Core motivations

Reasons to reinvigorate qfwfq from a FoAM perspective.

Cross/inter disciplinarity between fields

As an approach to connecting diverse fields (biology, architecture, physics, media art) with some specific common problems:

  • The increase in data volume and complexity, and the requirement to design processes (programming) to deal with it all. Alex says: This is close to the core aims of the OAK group in Sheffield too I think
  • The rise of computational solutions to problems in general has left some areas behind - eg. not all areas of biology have easy access to bioinformatics departments.
  • Not having enough of an understanding of the processes carried out on a data set can lead to problematic interpretations.

Currently the approach to a solution is a myriad of domain-specific tools, languages and environments - is there a way to design tools and practices that can cross these domains? The project needs at least two distant fields or application areas involved to prove this.

One approach is to take lessons learned from visual programming in education, graphics and games design, and apply them in a more general way.

Why is this needed? The Importance of a Code Literate Culture

“the code literate of our society are mostly white men” … “code written today is not representative of our society” http://rarlindseysmash.com/index.php?n=1309736919

With the introduction of algorithms into every part of our lives, the diversification of programming is an important goal in itself (of which this could be seen as a case study):

  • Is the lack of diversity in programmers a self perpetuating situation?
  • Is the specialisation of information technology into separate fields problematic?
  • What is it about programming languages or their culture that is problematic for some?

Alex says: I think we need to find more data on this, will have a look around. Looking at this news item hints at a general problem of non-engagement rather than of lack of diversity. Interest in computing subjects in the UK has plummeted: http://www.bbc.co.uk/news/education-11011564 – If you look at the gender disparity though you see that computing actually has above average gender equality, which surprised me. In bioinformatics I think programmers are mostly female, right? Perhaps the problem isn't so much diversity of programmers but a lack of programmers and a lack of interest in computation in general. That said, there is clearly a lack of diversity in those who write programming languages, which have a strong lineage back to brusque white men on military funds. So is it a problem for our project if we're all white men?

Aims/Unique selling points

Our aims are to design a tool/language/environment that crosses disciplines by:

  • Covering multiple levels of abstraction
  • Embodying multiple forms of representation

We will prove this by evaluating two (or more) use cases in diverse fields.

Alex says: I have a feeling that the aim of covering multiple levels of abstraction could be at odds with the aim of non-domain specificity. Are lower levels of abstraction necessarily domain specific?

Possible Methodologies

Success would be measured through workshops with individuals from the target fields. They could be given problems (perhaps outside their field) to solve - initially studying the ways their approaches differ, later applying the developed software/tool/process and studying the results.

Previous qfwfq/vapour feedback

Good bits:

  • Scalability/differing levels of abstraction (still?) original
  • Use of real application
  • Mixed reality needs sound development methods
  • Broad range of partners (CS, HCI, M/AR)

Missing bits:

  • Why such performance is not currently achieved
  • ST approach vague
  • Measurable goals and targets needed
  • Concrete info on technical approach (proof of concept demonstrator)
  • Generalisation of results needed
  • Detail existing approaches, advantages/limitations
  • Underestimated human resources
  • Impact on application needs justification
  • Scientific impact needs justification

Questions

Why are we better placed to tackle this than CS, bioinformatics or architecture departments? Is there some way to present diversity as a strength?

Summary

By 2020 computer interfaces will have become embedded into the human environment, following continued research and development in pervasive, ubiquitous and cyber-physical systems. Furthermore datasets will have continued to grow in size and complexity across fields of research and practice, surpassing the practical limits of current end-user programming tools such as tabular spreadsheets. These limits have already been broken in Biology, which has spawned the new field of Bioinformatics in reaction, and other fields are undergoing the same process of change (which ones? Digital Humanities? Computational Musicology? …).

Growing datasets allow greater insights, but only where practitioners take a computational approach, dealing with data at a higher level of abstraction than is currently conventional. In other words, practitioners need to become end-user programmers, working with data by describing higher order operations over them.
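
As a concrete (illustrative, not proposed-syntax) example of a higher order operation, in Python - group_mean and the sample records are hypothetical:

  from collections import defaultdict

  # A higher order description of "what I want from the data",
  # applied across the whole dataset in one step, rather than
  # hand-inspection of individual rows.
  def group_mean(records, key, value):
      groups = defaultdict(list)
      for rec in records:
          groups[key(rec)].append(value(rec))
      return {k: sum(v) / len(v) for k, v in groups.items()}

  samples = [{"site": "A", "ph": 6.9}, {"site": "B", "ph": 7.4},
             {"site": "A", "ph": 7.1}]
  print(group_mean(samples, key=lambda r: r["site"],
                   value=lambda r: r["ph"]))   # -> {'A': 7.0, 'B': 7.4}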

The opportunity lies then in developing novel end-user programming environments which take advantage of new modes of embodied HCI; environments designed for end-users outside of traditional computer science and software engineering contexts. To date, embedded interfaces have largely been considered in terms of analogue interactions: direct manipulation through tangible, touchscreen and gestural interfaces. However the present proposal is that new developments in interaction design may also be applied to higher order interactions, allowing professionals to deal better with the upscaling of data complexity. Approaches to this have already been partially explored in the literature (cite the Self language, visual programming languages/tabletop interfaces), but await interaction technology which is only now emerging.

This opportunity centres around the integration of formal programming languages with visuospatial perception, cognition and gesture. This may seem uncanny but is analogous to human modes of natural communication, for example the integration between prosodic and linguistic aspects of speech. (Cite cognitive linguistics, conceptual metaphor and dual coding theory)

The fundamental scientific problem to solve in order to implement the technology and get the benefits by 2020 lies in the mapping between the abstractions of formal language and the embodied interactions which emerging technologies provide. Some steps towards this goal are already well developed in object oriented and visual programming, but need to be extended and applied in the changing contexts of emerging technologies.

Our research questions are: How can linguistic interfaces be integrated with emerging, embodied modes of human-computer interaction? How can we apply these hybrid interfaces to create novel approaches to the design of environments for end user programmers? How can this benefit the cross-disciplinary requirement to understand and process large datasets?

The design of new environments for higher order interaction needs to be led by the needs of end user programmers from the start, through brainstorming and workflow analysis, leading to workshops and experiments to explore and evaluate the design prototypes which result. As part of this process, assumptions in the design of programming language environments need to be enumerated and reconsidered in the light of emerging technologies.

The needs of end user programmers differ strongly from those of the computer scientists and professional programmers who generally lead the design of programming languages. Escaping established norms in software development to address the very different and changing requirements of end-user programming will therefore be a major challenge. By taking a scientifically rigorous and agile approach to the design, with close involvement of prospective end users, we offer a radical alternative to both the lone visionary and the massive crawling consensus approaches to programming language design, towards high impact ends.

There are risks in taking a cross-domain approach, as research into visual programming and tangible interfaces has previously only seen success in particular specialised domains. However the possibility of finding commonalities in the information processing problems that span domains brings the promise of huge returns.
