
Initial import of some writing #97

Open
wants to merge 2 commits into main

Conversation

ajbouh
Owner

@ajbouh ajbouh commented Feb 5, 2024

These were being worked on in Google Docs; moving them into Markdown makes collaboration more explicit and easier to refine.
- M0.5. Eliminate language barriers by translating between any number of supported languages in real-time, across speech and text. [A2] [A3]
A2. ?
A3. ?


A2: Enable access to a greater library of educational content and collaborators from institutions that do not share the student's native language

Example Context: Columbia Business School has a requirement for graduate students to do international seminars. The courses are taught by CBS staff in English, but take place in countries where the native language is not English. By being able to access M0.5 capabilities with Bridge, students would have a more comprehensive exposure to the business ecosystem for their seminars.

Owner Author


This is a great example. Can you be more specific about a situation in which you imagine using Bridge to satisfy this educational requirement?


Examples:

  • As a student, I can talk to Bridge about where my upcoming International Seminar is located and engage with Bridge in English, but have Bridge pull resources from local institutions and recent research related to my course topic and translate them to give me a broader perspective on the material. This will help me identify new collaborators to meet with when I travel, and give me more background understanding of the domain.

  • As a student, Bridge can provide me with a personalized history of the location that I'm traveling to and the institutions that I'm visiting while I am on my international seminar, tailored to the topics that are covered in my class.

Scenario:

My international seminar is in Munich, Germany, and is on the topic of Branding and Marketing. We are visiting several German companies and will also have (English) classroom instruction on the topic. I'd like Bridge to synthesize branding/marketing/news about the companies we're visiting from English and German sources, as well as historical information about Munich's role in German business and economy.


- M0.8. Cast wirelessly to nearby displays. [?Jeff] [A1] [A2] [A3]
A1. ?
A2. ?


A2. [Aspirational] A shared classroom or institutional Substrate system that has access to student and professor lecture materials would significantly improve operational management for classes.

Example context: Switching materials on days where students are participating in sharing out what they've learned (for example, presenting a project) requires hot-swapping devices, classroom IT support, or a professor/TA to consolidate all of the presentations into a single converted format. Casting capabilities from a system that knows the collaborators in the class and has access to their most recent work would streamline this process and reduce overhead time in classes.

Owner Author


Can you give a more specific example of a situation where this might happen in your current course?


- M0.9. Help draft comparative analyses of ideas shared in both documents and live interactions. [A1] [A2]
A1. ?
A2. ?


A2. [Aspirational] As a graduate student, many of my classes require a final paper to demonstrate what we learned in the course and to create our own novel assessments or proposals of new, related ideas. The documents that support the writing of these papers are a combination of supplemental readings (books, articles), verbal notes and slides from lectures, and insights from linking in-course knowledge to other concepts and topics that I've learned before. With a Substrate system, I would like to be able to use the system's knowledge of all of the materials that I'm consuming related to the course, so that as I dynamically build an outline and start filling in my final paper, it can relate this back to the other materials and help me develop related ideas, synthesis, and citations.


- M0.10. Allow you to build and contribute new functionality others can immediately adopt. [?Jeff] [A1] [A2]
A1. ?
A2. ?


In one of my most recent classes, my professor was developing GPTs/agents that acted as companion instructors for the class to engage with. The goal was not inherently to have these agents teach net new concepts, but instead reinforce through repeated practice the ways that we could apply the learning. This allowed the teacher to focus on the live learning, and students had access to a more individualized system based on his pedagogy.

Owner Author

@ajbouh ajbouh Feb 6, 2024


Collaborator


I am suddenly dramatically less worried about my kids growing up in a state with Very Bad Public Schools.

# Substrate

An Open Architecture for Intelligence
(or maybe ... An Open Architecture for Intelligent Systems?)
Collaborator


"Intelligence" could well be interpreted as the kind of activity performed by The Intelligence Community — and in fact, the first search result for this phrase as is links to the defense sector. But it's less of a mouthful than "Intelligent Systems", which for sure will be interpreted to mean something much more fundamental/general than we mean here (e.g., slime molds, starling murmurations, human beings). I think we're better off leaning toward the "spies are cool" region of semantic space than the "physicists and mathematicians arguing about what intelligence means" region.

- M0.4. Keep an at-hand, ever-present record of conversation, tasks, and specific contributions. This captures good ideas as they happen, fosters new connections between them, and helps maintain understanding between collaborators. [Matt] [Liv] [A1] [A2] [A3]
A1. Feature suggestions, observations,
A2. ?
A3. ?
Collaborator


A3. Map, track, and attribute ideas as they emerge and evolve in employee-employee correspondence. I spoke about how bad companies are at this with researcher Lauren Klein: https://complexity.simplecast.com/episodes/70 ...one big takeaway from that conversation is that we could be using techniques from the digital humanities to dramatically improve the circumstances for innovation in organizations, as well as happiness in the workplace. It's not just about encouraging the production of good ideas but also ensuring people are fairly recognized and compensated. This could neutralize one of the most widespread forms of organizational pathology, which is people stealing each other's credit and/or becoming obsessive about preventing this to a degree that harms collaboration.

Owner Author


Which techniques from the digital humanities?

Collaborator


Topic modeling. This kind of visualization is something I would salivate over as a near-future Substrate feature for this specific use case:
https://medium.com/@power.up1163/visualizing-topic-models-with-topicwizard-ee5b4428405e

Owner Author


Can you give a short example in screenplay/narrative form that shows how this might work in any of our specific settings?

Collaborator


A1. At a glance, members of a software development team can see a network map of updates proposed by collaborators without having to scan a huge linear document and check for modifications one by one. Being able to see the "overhead" view of the project draws talent to the problems that need fixing more quickly and efficiently, and creates "attention basins" by comparing different-but-related codebases that can attract talent from other projects into areas where they may be able to make significant updates.
A2. Grad students in a biology lab manage to successfully petition to reorder their names in a major scientific publication based on Substrate's records of lab work and meetings. Consequently, the students who contributed the most to that paper are awarded research opportunities for which they might otherwise have been passed over. (Note: citation bias is a HUGE issue — https://physics.aps.org/articles/v16/15 — and addressing this issue at the headwaters reduces even more pernicious problems downstream.)
A3. It's time for annual bonuses, and thanks to Substrate tracking workplace meetings it is easy for leadership to run a topic model on the best ideas that have come out of working groups this year. As it happens, three of the best ideas came from low-level staff, who were granted season-changing bonuses for being unsung sources of major innovations. (And one of the best ideas had been claimed by someone else, which resolved some major resentment in the office. Justice!)

Owner Author


Let's develop A3 a bit more. Take a look at the script that Ryan put together for the student scenario. Let's make a similar one for A3.

Let's start with a specific background and an outline of features to include, which I'll send over shortly.

- M0.9. Help draft comparative analyses of ideas shared in both documents and live interactions. [A1] [A2]
A1. ?
A2. ?

Collaborator

@venturealtruist venturealtruist Feb 7, 2024


A3. If the business in question is a research enterprise, this is going to dramatically accelerate distributed search of blind spots in the space of possible hypotheses/experiment designs. I spoke to James Evans about doing this (https://complexity.simplecast.com/episodes/55) — in his recent work he has been treating "What questions will scientists be asking in a year?" as a token prediction challenge and finding truly novel opportunities by superimposing ideas from different fields. Deploying the machine as a kind of naïve outsider makes sense, because many if not most scientific breakthroughs come from people with little expertise in the field where they made their breakthrough. Contrasting ideas from, for instance, an ecology paper and an economics paper can reveal opportunities to apply models in new domains, or reveal where the barrier of technical language has obscured a deeper unifying insight independently discovered in multiple areas. You can also "go hipster" and reverse the polarity of the system's suggestions to identify blind spots common to both human and machine, and make a bet on finding something totally unexpected.

Finding negative space in our knowledge graph — formalizing multi-/inter-meta-disciplinary investigation — might be the most promising new frontier for scientific discovery: https://arxiv.org/pdf/2306.01495.pdf

Owner Author


Given the above, imagine you could task a computer to bring a concrete set of questions to each encounter (written or verbal). What would those questions be?

An example question might be:

  • Which parts of this work might still be important in a year? In 5 years? 50 years?

Always answering the same question(s) for every task might get tedious. Feel free to give a context-specific question or a sequence of interrelated questions.

- M0.10. Allow you to build and contribute new functionality others can immediately adopt. [?Jeff] [A1] [A2]
A1. ?
A2. ?
Collaborator

@venturealtruist venturealtruist Feb 7, 2024


My sense is that the best possible version of this builds on M0.3, M0.7, and M0.9 by actively identifying where other people's contributions might help you address a specific use case you care about. If in five years everyone is engaged in real-time collaborative ecosystem development, the current model of "suggested apps" is still going to buckle under information scaling pressures...it'll require a much more advanced understanding of which as-yet-unknown tools can improve your workflow for the thing you are doing at this very moment.

What I have now: The Google Play store recommends apps other people "like me" downloaded, and it's effectively a matter of chance as to whether or not I discover that one of these suggestions addresses a problem I'm actively engaged with.

(In fact this is a whole NEW problem we didn't have twenty years ago because we weren't stupidly rich in apps we don't even know about, and reading people's reviews is a time-consuming and noisy way of trying to assess whether you're actually pulling the one that will suit your needs best out of a zillion variants. This isn't empirical at all; it's a question of whether the designers made a slick sales page, etc. We're back to using good looks and hearsay to establish trust in strangers.)

And God forbid I happen to develop a superb app that nobody ever finds because gaming the attention economy is not my full-time job...

What I want: I suddenly find my machine is capable of amazing new, perfectly relevant things I don't necessarily need to pre-approve, because (1) the machine can inspect code and predict both how new software will alter performance and whether I will be happy about it; and (2) my data is being managed locally so I'm not concerned about baroque, inscrutable surveillance capitalism end-user agreements.

I want automated novel functionality search to be something I can make more conservative when I'm committed to process and solving a problem in a specific way, and I want to be able to raise the search "temperature" when I'm more interested in results and in finding a creative solution (https://en.wikipedia.org/wiki/Simulated_annealing).
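That adjustable-temperature idea can be sketched as softmax sampling over candidate relevance scores, as in simulated annealing: low temperature is near-greedy and conservative, high temperature explores. Everything below is a hypothetical illustration; the tool names and scores are made up:

```python
# Hypothetical sketch: temperature-weighted selection among candidate tools.
import math
import random

def pick_tool(scores: dict, temperature: float, rng=random) -> str:
    """Sample one tool; temperature controls how exploratory the choice is."""
    # Softmax over relevance scores scaled by temperature.
    exps = {name: math.exp(s / temperature) for name, s in scores.items()}
    total = sum(exps.values())
    r = rng.random() * total
    for name, weight in exps.items():
        r -= weight
        if r <= 0:
            return name
    return name  # guard against floating-point rounding

# Made-up relevance scores for undiscovered functionality.
candidates = {"outliner": 2.0, "translator": 1.0, "graph-view": 0.5}

random.seed(0)
conservative = pick_tool(candidates, temperature=0.1)  # near-greedy
exploratory = pick_tool(candidates, temperature=5.0)   # may pick any tool
```

At temperature 0.1 the top-scored candidate dominates almost deterministically; at 5.0 the distribution flattens and lower-scored tools get real probability mass, which is the "raise the temperature when I want a creative solution" knob.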

In the current paradigm where the corporate app store vets developers and the whole thing is built on the exploitation of attention, I wouldn't dream of letting my phone approve foreign app downloads. In the world I want, one function of my local language model is to serve as an immune system and executive assistant that intelligently sorts which new software proposals make it past the "front desk" — at which point I can choose to choose, or choose to let my machine do the choosing, as suits me based on my risk tolerance.

(This is all contingent on networked machines being able to query each other in a kind of collective reasoning-checker exercise where no machine has to take it on faith that the new code works, or works as advertised. We can't reasonably expect a machine to be able to simulate all possible failure modes but it can cross-check "what it thinks the new code will do based on an M0.9 capability" with pools of anonymized data and analysis. Could look something like Goodly Labs' Public Editor, only made of machines: https://www.goodlylabs.org/projects)

Owner Author

@ajbouh ajbouh Feb 7, 2024


The real limiters in any sort of automatic system (much more quickly than you might expect) are always: 1) how much electricity you have, and 2) the finite patience we humans have for false positives.

While solving a problem that will matter in five years can be a valuable activity, I believe it's important to find a sequence of problems which lead us in the right direction.

In fact, I would say problems (like app discovery) that have only just hit us are probably not a good thing to invest time in. They are evidence of the fact that apps are a misfit and should be abandoned.

Instead we should backtrack deeper into history and find the problems from 25 or 100 years ago that are still with us now. With a wider view we can consider what tools we can build today that were once only pipe dreams of the 20th century.

Collaborator


• With the above I'm really just embroidering the question I asked you in the Milestone 0 Google Doc about what this software hub/store is going to look like. I haven't heard a thing about that yet, even though it's a major bullet point in the demo...but given that we are showing off limited reasoning, it seemed safe to assume that this system implements some clever new mechanism for surfacing the right new features at the right time with smart, context-rich recommendations.

Separately: I agree that apps are not the fundamental unit in five years and that what we're really talking about down the line is more like plasmid-based horizontal gene transfer between bacteria. Totally safe to replace every future tense instance of "app" above with "relevant code string" or whatever. Still: How does this system, not in five years but now, help developers push code that others can easily discover and deploy when they need it?

Good point about false positives, but that's why I suggested tolerance for false positives is an adjustable user setting. Moving leverage to the edge = not making that decision for them.

Mining well-established unsolved problems: Totally. Not knowing the right tool for the job, not having it when you need it because you have to sift through the overwhelming panoply of options...that's perennial. :)
