Enhancing SaaS application productivity with generative AI

Today, employees typically switch among six to eight SaaS applications from different vendors to complete tasks, constantly switching context. Application developers see an opportunity to use generative AI to reduce this context switching and provide a richer end-user experience, but they are often limited to data within their own software and cannot include cross-app data. Data silos form, and generative AI results miss important context from other applications that would help end users make better decisions.

AWS AppFabric for productivity reimagines productivity in SaaS applications by generating insights and actions with context from multiple applications. AppFabric fully manages the integrations with multiple SaaS applications and automatically normalizes data for use across apps. This means developers can embed AppFabric productivity features directly in their applications' user interface to maintain a consistent app experience for their users, while surfacing relevant context from other applications. With cross-application context, developers provide a more personalized user experience that increases adoption and loyalty.

End users benefit from accessing insights they need without interrupting their workflow. Embed AWS AppFabric in your application's new or existing generative AI assistant to enable your users to be more productive and more connected across their preferred apps.

AWS AppFabric is a fully managed service that connects multiple SaaS applications together. Since our launch in June, IT and security professionals have been using AWS AppFabric to enhance their companies' security posture across very complex application stacks.

Today, we are excited to continue our journey working with developers to reimagine productivity for employees. While many AI productivity applications are designed to enhance the productivity of their users, the typical employee needs to interact with six to eight applications just to go about their day. Disconnected applications ultimately lead to data silos forming across all these applications, which puts critical information needed to execute workflows at risk of being missed.

The typical employee tends to spend over 12% of their time just trying to locate information that's dispersed across these data silos. That's one hour of every day spent just trying to find what's important to go about your day. But what if we could change this? What if we could create a much simpler experience for employees so that they can stay in the flow of work?

A solution that's emerged over the past 12 months has been to introduce some form of conversational UI to enable users to interact across the broad portfolio of knowledge in their company. We've also realized that this leads employees to add yet another application, yet another user interface they need to familiarize themselves with, which exacerbates the cognitive overload they face as they go about their work.

With AWS AppFabric, SaaS application developers can embed the ability to surface what's relevant for their users directly in their applications, using an interface their users are already familiar with. This creates an opportunity for application developers to create highly personalized, user-centric experiences centered around the user's work.

AWS AppFabric allows application developers to surface actionable insights that reveal the context and content users have across all the applications they need to go about their day. This enables application developers to create very personalized experiences. Ultimately, users benefit from increased productivity and better execution of their cross-functional workflows.

Then it actually gets a lot more challenging. We found, and I think this is common to everyone I've talked to, that the distance from something that works in an alpha or beta to getting it production ready is way bigger than in traditional product development.

So part of the challenge is, once you get started, then what do you do? How do you actually get it to production? A lot of these muscles are ones you're probably not used to. It's stochastic at heart, not deterministic. The technology is new to customers, and they're worried about privacy controls. There are all sorts of trade-offs that your company and your customers aren't used to, especially if you're an enterprise.

So for us, conversation with customers has been key to this. When we were first getting started, I think we were two or three weeks into using LLMs. We didn't have a designer or a PM working on it; I was pretending to be the designer and the PM. We were just getting started building Smart Status.

Smart Status is for when you're working and you have a status update: you need to tell your stakeholders what the risks are, what the blockers are, how progress is going, and what you need help with. When you collaborate, a lot of those insights are hidden throughout the tendrils of your comments, tasks being completed, and decisions being made.

So with LLMs, we've been able to have Asana just intuit those and get you started 80% of the way. We built the first prototype of Smart Status. It was rough around the edges, really bad UX, but we showed it to customers. I thought, maybe this will work, or maybe customers will ask, what is this? But their eyes lit up. They said, this is amazing, this will save me so many hours, when can I get this rolled out? And they gave us a lot of really helpful directional feedback about what they wanted and what they didn't want.

So from that, it's also about getting it into your customers' hands as soon as you can, in a kind of limited pilot, to see what they think and get feedback. And that speaks to the concept of experimentation, which is ultimately fueling this space of innovation we've been seeing with the technology. Matthew, you work with a lot of builders across many segments, and we've seen a lot of changes so far in the product experience. How do you think that's going to evolve over the next two or three years?

Sure. Two or three years is actually a lot of time given the pace of what's happening. What I have seen be successful over the last year, and what I believe will continue to lead to success, is first off that every company needs to develop subject matter expertise in how to work with LLMs, and this comes in various areas. The two that I think are most important to highlight are prompt engineering and MLOps.

We actually have extensive prompt engineering tutorials and interactive tools for quickly leveling people up; those are all available at docs.anthropic.com. On the MLOps side, this is basically learning how to assess the performance of an LLM before you bring it into production as well as after. Given that these models are inherently stochastic, you want to ensure a high level of performance that your product team is happy with.
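
As a rough illustration of that kind of pre-production assessment, here is a minimal evaluation harness: run a small labeled test set through the model several times (because outputs are stochastic) and track a pass rate your product team has agreed on. The `call_llm` helper, the test cases, and the pass criterion are all placeholders, not Anthropic's actual MLOps tooling.

```python
# Minimal, illustrative LLM evaluation harness (placeholders throughout).
# call_llm() stands in for whichever model API you actually use.

TEST_CASES = [
    # (user_input, substring the answer must contain to count as a pass)
    ("Classify the sentiment of 'Loved the workshop!'", "positive"),
    ("Summarize: the launch slipped two weeks due to a vendor delay.", "vendor"),
]

def call_llm(prompt: str) -> str:
    # Placeholder: wrap your real model API call here.
    return "positive: the vendor delay pushed the launch by two weeks."

def evaluate(runs_per_case: int = 3) -> float:
    """Run each case several times (the model is stochastic) and return the pass rate."""
    passes = total = 0
    for user_input, expected in TEST_CASES:
        for _ in range(runs_per_case):
            answer = call_llm(user_input).lower()
            passes += int(expected in answer)
            total += 1
    return passes / total

if __name__ == "__main__":
    rate = evaluate()
    print(f"pass rate: {rate:.0%}")
    # Agree on a bar with the product team (say 90%) before shipping, and re-run after launch.
```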

And this is something where UX is incredibly important, and I believe this will remain true. Suppose you have some tool you've built for your application that works 90% of the time. That could either be incredible magic or utterly frustrating, depending on how it's revealed in the UX and how users can correct its mistakes when they happen.

So as you think about productizing any LLM-driven feature, application, or product, I recommend bringing the product and UX teams in at the beginning and being super clear and upfront about what success looks like, so you can very quickly tell whether what you have is likely to succeed and work well with users.

Then you're going to want to continually update your sense of what is possible every few months based on all of the improvements happening in large language models. For example, nine months ago, doing something with agents, an LLM system that can take multiple actions over a period of time, was incredibly shaky and out there. Whereas we're now seeing agent use cases in production.

So you'll need to continue to pay attention to these improvements as they come and actively experiment with them to get a sense of what is possible, so you know when you can build a product off of them. And it ties back to the quick prototype: push it out to customers, get feedback, and adjust accordingly.

I think there is an overall theme of adaptability across multiple dimensions. And if I were to just add something about the next two to three years: my perspective is that today, from a user experience perspective, gen AI is an additive experience. Just like Asana, we're in the collaboration space.

And one of the challenges we face in collaboration is that there are all these capabilities, these features you keep adding, but there's a diminishing return because the discoverability of those features goes down as more and more are added. So I think a big opportunity is in how you move to a fundamentally reimagined UX as opposed to an additive UX, where it's not one more thing, one more option, one more entry point, but where you actually start to reimagine the user experience.

So my bet for the next three years is that you will see some fundamentally different user experience paradigms start to emerge, powered by these gen AI large language models, that simplify the interaction between the application and the user.

So this whole notion of human-computer interaction, this whole HCI space, I think is up for a lot of interesting work in building these paradigms. Totally agreed on that. I think about very complex applications that normally have quite a long learning curve, where any first-time user is just intimidated by this wall of 100 different control surfaces on the screen and doesn't know where to start.

And I think LLMs, through a combination of a natural language interface for asking questions and learning what's possible, plus anticipating what the user might want to do, will make the learning curves for these applications far shorter. I think large language models really have these reasoning capabilities. We've seen a lot with the advancements, and Claude 2.1 has been really at the forefront of that, enabling a lot of these more proactive experiences.

Eric, I think at Asana you've developed an interesting mental model for how to think about the product experience.

Yeah. I think the phrase we've been saying is that LLMs and AI should help connect your intention to the outcomes of what you're trying to do. You shouldn't need to sign up for a product and learn every button click before you get started.

The product should be built and personalized just for you. So we've been thinking about the future user experience in two large flavors. I don't know exactly what it will mean, but there's something about proactively flagging what needs your attention and how you move work forward. We're calling that air traffic control.

You shouldn't spend hours of your day reading email and catching up on work. You should just get started and be told: based on your role, the context, and what you've been doing, this has changed, this needs your attention, focus here.

The other is to focus on what's on your mind, your intention, and co-create a solution with AI. That's why you're seeing the phrase copilots pop up so much in the industry. But right now those interfaces are all text based, and that's not going to stay the case. When we're collaborating, we don't send paragraphs of text back and forth. We whiteboard, we have diagrams, we have visuals. The medium needs to adapt, and AI is getting really great at doing that as well, and that's going to continue.

So this text back and forth is definitely going to change. And I'm trying to connect the dots here: experimentation is at the core of accelerating and thinking about the product roadmap. Then there is this natural tension in balancing R&D investments, the prototyping and experimentation that may yield returns over a longer time horizon, against investments that may actually unlock customer value today.

So, maybe I'll start with you, Eric: how are you thinking through that?

Yeah, I mean, this is a scary one if you were looking at the news the last few months. OpenAI DevDay happened and there were lots of tweets of AI leaders saying, my six-month roadmap just got demolished, or, my startup, what are we doing now?

So it's really hard to plan ahead when things are shifting so quickly. I think there are two parts: where do you invest your time now, and how do you adapt your teams so you can move faster?

For where to invest your time: I think, as Varun was saying, focus on your customers' jobs to be done first and foremost. Where are you helping customers? Where are your differentiators? Where are your moats? Invest in that; strengthen what makes your company special.

I think some people are forgetting their normal software engineering best practices and are building these mega files and prompts with everything in them. Lean into composition, lean into modular architecture, observable and testable code, evaluation pipelines. When you change something, you should know exactly what happens. It shouldn't be whack-a-mole where you change one thing and 30 things break.
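
One way to read that advice in code is to keep prompt fragments small and composable and put a cheap regression check around them, so a change to one fragment is caught immediately instead of breaking 30 things downstream. A rough sketch, with hypothetical fragment names and a stubbed model call:

```python
# Illustrative sketch of composable prompt fragments plus a regression check.
# Fragment names and the stubbed generate() call are hypothetical.

ROLE = "You are a project assistant summarizing status for stakeholders."
STYLE = "Answer in three short bullet points."
CITE = "Quote the task or comment each point is based on."

def build_status_prompt(project_notes: str) -> str:
    # Compose small, independently testable pieces instead of one mega prompt.
    return "\n\n".join([ROLE, STYLE, CITE, f"Notes:\n{project_notes}"])

def generate(prompt: str) -> str:
    # Placeholder for your real model API call.
    return "- Launch slipped two weeks (see: 'vendor delay' comment)"

def test_prompt_contract():
    """Cheap regression check: a change to any fragment shows up here first."""
    prompt = build_status_prompt("vendor delay pushed launch by two weeks")
    assert "bullet points" in prompt and "Quote the task" in prompt
    assert generate(prompt).startswith("-")

if __name__ == "__main__":
    test_prompt_contract()
    print("prompt contract holds")
```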

And I think people are also forgetting the fundamentals of enterprise software, like access control and scale. These all get more important with AI, not less. A lot of companies say, where do I find the budget to work on AI? Let's take it from access control, let's take it from this. That's the opposite of what you want to do. You actually need to invest more in those as you're working on this.

But obviously privacy and security are really at the core of it.

Yeah, because if your company is already really nervous about which employee can get access to what data, then with autonomous agents getting access to data, it's even more important to be careful.

And I think the last thing is, when you're moving really quickly, we found it really useful to align cross-functionally on which parts of the end-to-end user interface or experience we are confident about. And the answer is that a lot of them we are not.

For those cases, don't spend three months polishing perfect design mocks and building the perfect flow. If you think you're going to throw it out in a month, just do it in a day: unblock yourself, show it to customers, move on. Focus your time where you think it's going to be sticky and where you can keep building.

And Varun, you introduced the launch of Miro Assist, and you talked earlier about anchoring an investment by working backwards from the business problem: what's the customer pain point that you're really trying to address?

So can you take us through how you approached that process as you made the investment and ultimately built Miro Assist?

Yeah. So I think we went through a journey much like many other companies. The first step of the journey was just to make sure you get something out, so that you're not the only company that doesn't have gen AI experiences. The single thing was time to market: make sure there's something out there.

Obviously, there's a secondary benefit to it, which is that you're trying to learn from your end users what they actually use. In our case, we launched something in what I think was a seven-week sprint, and we launched 14 capabilities.

We took 14 canonical use cases and added AI magic, to use Eric's word, to those things. The whole idea was that we didn't know what was going to work or where the end user was going to see a lot of value, so the best thing to do was to ship them really fast and then see what the response was. Much like many of these things, what we found is that in our case there is not an 80/20; there is actually a 95/5: 5% of the features are driving 95% of the usage. Maybe it was just that we rushed a bunch of things and that's why the ratios were what they were. That shapes how we look at our existing plan and strategy going forward.

The unique observation we have right now is that, for the first time, I'm seeing a brand new piece of technology that is still relatively new and being baked, where the customer's willingness to pay is very high. So the fundamental framework we're using at Miro to prioritize our gen AI capabilities and functionality is: what are the features we believe the customer will fundamentally pay additional money for? Over the last 90 days, I did a tour and met with about 15 different CIOs in North America. This is not a scientific study, just discussions with 15 CIOs, but 75% of them told me they are going to have a net new line item in their budget around AI, and a meaningful amount of money is going to be put there.

So the insight for us is that there is an opportunity not only to make the product better in terms of end-user usability, automation, and workflow, but really to accelerate revenue. That's the guidepost we have internally for prioritizing these things. I think it ties into a recent study we published as well, where over 90% of the companies we surveyed across several industries expect some form of AI usage to become more and more prominent in their business processes. So that seems to be consistent.

So as we're thinking about creating roadmaps, we've talked about the pace of innovation, and there is a lot we don't know yet about the technology. It sparks the question of whether we need to approach the product roadmap somewhat differently. Matt, I'd love your perspective on whether we should bring in additional stakeholders and other experts as engineering and product teams start to think about their product experience over the next 6 to 12 months.

Yeah, I would say there are several categories of experts that make sense to bring in. I already talked about the importance of UX and how that can make or break AI-driven features. I would also say, depending on the type of use case you're pursuing, it may make sense to have subject matter experts in the type of intelligent behavior you're trying to get from the LLM come in and do extensive benchmarking of what good performance looks like. And if you're, say, fine-tuning a model, you'll want to bring those experts in to generate reinforcement learning from human feedback (RLHF) data. Essentially, this is data used to show the large language model what a good response looks like, in order to maximize its performance and bring the style of thinking the AI requires, that subject matter expertise, into the model so that you consistently get that kind of performance. Separately, those subject matter experts are great for doing test-driven development and ensuring that performance is good across the full range of potential user inputs.

That's super interesting, and I feel there is heightened attention on the societal impact of some of these experiences. It's super important to have a broad range of partners you can tap to ultimately inform what some of the unknown unknowns are, if you will, that are worth watching out for. So again, it reinforces the focus on understanding the evaluation of these experiences as well.

Yeah, that's a very important point to bring up, to talk a little bit about edge cases and potential risks. There are a few flavors of areas to pay attention to. One is hallucinations. All state-of-the-art language models still hallucinate sometimes; they may make up an answer as opposed to saying "I don't know." For what it's worth, humans also hallucinate, but that's a separate problem. There are several methods of addressing hallucinations. Claude has been trained using something called constitutional AI, which embeds a set of principles in the model, and one of those is to basically only answer if you are confident you have the knowledge to answer. Separately, you can address hallucinations with prompting: you can force the model to show its work. You can say, cite the source material that I gave you in the prompt before answering, and then explain your thinking process. That way users of the model's output can go back and check the model's answers. So that's hallucinations. Bias is another one to be aware of.
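
A minimal sketch of that "cite before you answer" prompting pattern is below: the answer is grounded in supplied source material, and the model is asked to quote its sources and explain its reasoning first. The prompt wording, the example source, and the stubbed `complete()` call are illustrative, not an official template.

```python
# Illustrative "cite before you answer" prompt, as described above.
# The wording and the stubbed complete() call are placeholders; swap in
# whichever model client you actually use.

SOURCE = """Q3 status doc: the mobile launch moved from Oct 2 to Oct 16
because the payment vendor's SDK update slipped."""

def build_prompt(question: str, source: str) -> str:
    return (
        "Answer using only the source material below.\n"
        "First quote the exact sentences you rely on, then explain your reasoning, "
        "then give the answer. If the source does not contain the answer, say 'I don't know.'\n\n"
        f"Source:\n{source}\n\nQuestion: {question}"
    )

def complete(prompt: str) -> str:
    # Placeholder for a real LLM API call.
    return "Quote: '...the payment vendor's SDK update slipped.'\nAnswer: a vendor delay."

if __name__ == "__main__":
    print(complete(build_prompt("Why did the mobile launch move?", SOURCE)))
```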

I actually think the AI community has done a great job of jumping on the bias issue. I recall going to the NeurIPS conference as far back as, I think, 2014, and there was already a whole group looking at bias. And while bias is a potential issue to be aware of, large language models are remarkably good at being less biased when you tell them to avoid certain kinds of bias; I think they actually substantially outperform humans in this case. You can walk up to a model and say, hey, avoid these kinds of bias in your response, and you will get that to actually happen, whereas it's a lot harder to get a human to do that. So while bias is an issue, there are some very mature techniques for dealing with it. Constitutional AI is also used to reduce bias, and Anthropic has several papers where you can look at bias levels against several common bias benchmarks. I think we're going to get into safety; there is a lot to discuss there. But you mentioned involving expert partnerships a couple of times.

So maybe from the engineering perspective, how do you make that decision? At what point do you partner with somebody versus deciding to build and take on the work yourself, that partner-versus-build question?

Yeah. I feel like I'm maybe starting to be a broken record here, but you've all probably worked with partners before. One of the things we had to do early on was remind our people: hey, this isn't that different; a lot of the core things you need to do still apply. Build versus partner is a classic question. You want to look to a partner when they can deliver customer benefits better or faster than you would be able to yourself, so you can focus your innovation tokens on what's unique to you. I think what makes gen AI different is the pace of innovation and the shifting landscape, which we've talked about, as well as customer sentiment and partnerships having a lot more impact on the decision than just pure technical factors.

Because things are moving fast, you have to think in terms of days, not quarters. And that shifts not necessarily which partners you work with, but how you use those partners and how you plan around them. You can't spend six months on an RFP for working with a partner, because in six months it might change; well, I guess it will change. And then on the other end, if it's not purely technical, not just a question of quality, how do you decide which partners to use?

I think of this as two partnership buckets. One is a set of horizontal problems everyone hits; every technology company is going to hit these, and you just need to look purely at whether you can move faster by doing this. Is there a switching cost? How much are you going to be able to pivot and iterate later? An example you were mentioning is tooling, like LLM tooling for evaluation pipelines. That's a clear one: you just want to figure out how to move faster with it, and you want to look for near-term ROI.

The second is deeper partnerships: how do you invest with a company where you have mutually beneficial reasons to help one another with your respective missions? These take longer, but if it works out you achieve bigger outcomes. I like to think about us as an example. We started working together about a year and a half to two years ago, and none of our conversations were about what's the best thing we can do in two months. It was about how we work together long term and help Asana, help AWS, and help the whole ecosystem; how we build across our products, technology, and customers. We've learned and pivoted together as we've gone. For all these cases, though, you just need to be nimble and focus on where to put your innovation tokens.

Absolutely. So I want to double-click on the safety aspect; as I mentioned, there is a lot to discuss there. Matthew, again working with a lot of builders, what do you think the biggest roadblocks are as you see companies of all sizes trying to adopt and embrace gen AI?

Yeah. From a safety perspective, I've already touched on a couple of the areas: we talked some about bias and some about avoiding hallucinations, and there are plenty of techniques there. The next area I'd like to touch on is jailbreaks. Essentially, jailbreaks are attempts by the end user to subvert the model into doing something that the model's creators, the people who wrote the prompt, or the enterprise running the model do not want to have happen. If you think about adopting a large language model into your company, it's almost like hiring an employee, or rather hiring a whole set of employees. If you're going to have AI do your customer service, for example, you want to make sure that AI represents your brand well, and to that end, you want to make sure the model is quite resistant to jailbreaks.

Anthropic has applied a number of techniques to minimize the chance of Claude getting jailbroken. There are a number of studies showing, for example, that while other industry-leading models have had a jailbreak rate of around 50% for the attacks that were tried, Claude's jailbreak rate was around 2%. We take model safety very seriously, and we want to make sure that if the model is deployed in a public-facing way, it's extremely resistant to being jailbroken.

Varun, in your conversations with CIOs, what are some of the themes that have emerged around the concerns and roadblocks in adopting gen AI-powered solutions?

Yeah, I think it's similar to what Matt mentioned; I would echo everything that Matt said. What I would build on top of that is that generally, when we talk to CIOs, they think of two classes of use cases being powered. One is where you're actually generating some information, and when you generate information there is obviously a lot of angst: what is the source of that? Is it correct or not? And what are the implications? Having said that, I think gen AI, and especially LLMs, can be used for activities that are not generative.

I'll give you an example. In our case, of those 14 capabilities I mentioned, half of them are use cases like this. Say retrospectives, which are a big use case in Miro: you're sitting in a room at the end of a session, and you want to say what was great and what could have been improved. So we collect all of your feedback.

It's on a single Miro board, a bunch of stickies. And what you want to do is sentiment analysis: you want to say what's positive, what's negative, and what's neutral. This capability is actually powered by an LLM. And when we talk to CIOs and to our users, they are very happy and usually have no concern whatsoever around these things, because there's no new content being generated.
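
For illustration, here is a minimal sketch of that kind of LLM-backed sentiment pass over sticky notes. The sticky text is made up, and `classify()` stubs the model call with keyword rules; Miro's actual implementation is not public, so treat this purely as a sketch of the pattern.

```python
# Illustrative sticky-note sentiment pass (not Miro's implementation).
# classify() stubs the LLM call; in practice you would send INSTRUCTION
# plus the note text to whichever model API you use.

STICKIES = [
    "Loved the async demo videos",
    "Stand-ups ran too long this sprint",
    "We shipped on the planned date",
]

INSTRUCTION = "Label the note as positive, negative, or neutral. Reply with one word."

def classify(note: str) -> str:
    # Placeholder for the real LLM call, e.g. complete(f"{INSTRUCTION}\n\nNote: {note}")
    lowered = note.lower()
    if "loved" in lowered:
        return "positive"
    if "too long" in lowered:
        return "negative"
    return "neutral"

def group_by_sentiment(notes):
    groups = {"positive": [], "negative": [], "neutral": []}
    for note in notes:
        groups[classify(note)].append(note)
    return groups

if __name__ == "__main__":
    for label, notes in group_by_sentiment(STICKIES).items():
        print(label, notes)
```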

However, when we get to generation of content, there are concerns, which Matt talked about. The way we've thought about it, and the way we engage our CIOs, is that we fundamentally believe there should be a human in the loop.

There is one specific thing about Miro, and I think it's true for Asana as well: we are a collaborative app. The definition of a collaborative app is that at any given time there are two or more people working together on the same canvas. So imagine a scenario where you generate some information and it's not the output you wanted; the other person who's on the board will instantly see that.

So one of the user experience optimizations we've made in our paradigm, and it resonates a lot with our customers, is that we display the generated content on top of the board and ask the user: does this look right? You can say no, discard it, or yes, it looks right, keep it. And that small UX choice is resulting in a lot of users saying, OK, this is something we can start to roll out.

So the concerns are there. To your previous question, we get into security reviews a lot more. And unlike some of our other capabilities, for the LLM-powered capabilities we had to build a toggle switch. For the first time, even though it's a core capability, we had to build that switch for enterprises because some enterprise customers were not comfortable rolling it out enterprise wide.

So that's one thing we had to do. And then there's the role of security and privacy: getting your infra and legal folks involved up front. We had to redo our terms of service, we had to get an explicit opt-in from the end user, and we built a bunch of these things that gave us confidence, and gave our enterprise buyers confidence, around these capabilities.
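
A rough sketch of the gating just described, an enterprise-level toggle combined with an explicit end-user opt-in, is below. The settings objects and field names are hypothetical; real admin and consent plumbing in any product would be considerably more involved.

```python
# Illustrative gate for an LLM-powered feature: the enterprise admin toggle
# and the explicit end-user opt-in must both be on. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class WorkspaceSettings:
    ai_features_enabled: bool  # set by the enterprise admin

@dataclass
class UserSettings:
    ai_opt_in: bool  # explicit consent captured from the end user

def can_use_ai(workspace: WorkspaceSettings, user: UserSettings) -> bool:
    """Both the org-level switch and the individual opt-in must be true."""
    return workspace.ai_features_enabled and user.ai_opt_in

if __name__ == "__main__":
    ws = WorkspaceSettings(ai_features_enabled=True)
    print(can_use_ai(ws, UserSettings(ai_opt_in=False)))  # False: user has not opted in
    print(can_use_ai(ws, UserSettings(ai_opt_in=True)))   # True
```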

Yeah, I'll just build on this. We have an internal think tank at Asana called the Work Innovation Lab, and they did a study that found nine out of 10 people have concerns about using AI ethically at work, which is way higher than I thought.

So yeah, it's clearly easy to feel scared about this. In addition to the toggles and the controls, it's just: well, should I use this? Can I use this? So I think it's really important, across the whole company from builders all the way up to executives, to make sure you think through the principles, design systems, processes, and launch reviews so that you're using AI in an ethical way.

As an example, resource management is really big in enterprise work management, and bias is a real factor in LLMs. So if you use AI to make decisions about how to staff teams and who should be working on what, that can easily feed into performance review feedback, which could lead to hiring and firing decisions, and that could become a really harmful thing.

People often ask, oh, Asana, why is bias important? But this link makes it very clear why we have to be careful about it. Since I joined Asana, it's always been at the forefront that our mission is to help humanity thrive, and that keeps coming up with AI.

Really often, because if we think about resource management through that help-humanity-thrive lens, you can spin it around. Why is it even called resource management? Why are we treating humans like cattle that we're just moving to different pens, as if they're disposable and interchangeable? There's a different way we can think about using AI toward that, which is much more employee first.

How do we help individuals ramp up best for the work context at hand? We can understand their existing skills, their interests, their growth areas, their context on the work. Then there's a lot of magic we can do when they get assigned a project: here's your onboarding project, here's a link to a video to learn more about this domain area, and we can set them up for success.

So I think there are risks here, but there's also a way to turn this around and make it both better for the company and much more magical for employees.

One of the aspects I think is phenomenal in this wave of technologies is that safety is a critical component of it. But there is an overarching theme here around change management. Change is affecting how teams think about the roadmap, how we educate users and companies on the capabilities of this technology, ultimately addressing fear and anxiety, and the necessity of creating these opt-out or opt-in experiences.

AI has been around for a long time; it is not something that happened in the past 12 months. So how is this generative moment of artificial intelligence so different? Matt, maybe you can take that.

Yeah, it has been interesting. I've been working in AI for close to 20 years now, and I would say there are a few very significant differences between the models that have become available in the last year or so and what was present in the late 2010s.

The first is the scaling laws, which basically showed that if you throw more data and more compute at training, you get predictably better intelligence as measured by next-token prediction loss. Those scaling laws gave a handful of companies the confidence to raise enormous sums of money and then plow all of that money into model training.
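
As a rough illustration of what "predictably better" means here, empirical scaling-law papers typically fit next-token prediction loss as a power law in training compute (and analogously in data or parameter count); the form below is generic and the constants are placeholders, not values from any specific paper.

```latex
% Illustrative power-law form of an empirical compute scaling law.
% L(C): next-token prediction loss at training compute C;
% L_inf: irreducible loss; a, b > 0 are fitted constants (placeholders).
L(C) \approx L_{\infty} + a\,C^{-b}
```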

So these models are now fundamentally trained on a good fraction of all human knowledge. And rather than having a specialized model for one task, you have a foundation model that's capable of a wide variety of things, and you elicit the behavior you want through a combination of prompting and maybe some fine-tuning.

I would say the second major difference that's shown up in the last few years is the rise of prompt engineering. In the 2010s, if you wanted particular behavior from a model, you had to train it for that behavior specifically, or at the very least fine-tune it. Whereas now you can actually change the prompt on a daily or even hourly basis as new information becomes available.

That means, in an enterprise context, if the core knowledge that powers the enterprise is changing on a moment-to-moment basis, the model can now adapt to that via techniques like retrieval augmented generation, where essentially the model is getting up-to-date information in its prompt. As a result, these models can be much more tightly tied into the day-to-day activities of the enterprise, or of the users of that enterprise software.
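
For illustration, a bare-bones retrieval augmented generation loop of the kind described here: fetch the most relevant internal snippets, then place them in the prompt. The toy keyword scoring stands in for a real embedding/vector search, and `complete()` stands in for a real LLM API call; the documents are invented.

```python
# Bare-bones RAG sketch: retrieve fresh internal snippets, then prompt with them.
# Keyword scoring and complete() are placeholders for embedding search and a real model call.

DOCS = [
    "2023-11-20: pricing page updated, Pro tier is now $12/seat.",
    "2023-10-02: onboarding checklist moved to the new wiki.",
    "2023-11-27: EU data residency option launched for Enterprise.",
]

def retrieve(question: str, docs, k: int = 2):
    """Rank docs by naive keyword overlap (a stand-in for embedding similarity)."""
    words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def complete(prompt: str) -> str:
    return "(model answer would appear here)"  # placeholder LLM call

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOCS))
    prompt = (
        "Answer using only the context below; say 'I don't know' if it is not there.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return complete(prompt)

if __name__ == "__main__":
    print(answer("What is the current Pro tier price?"))
```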

And so, with technology changing so rapidly, maybe Varun and Eric: how are you thinking about structuring your teams? How are you able to innovate when it's almost like the carpet is shifting under your feet?

Sure. I think of the key question as what you were just saying: how do you adapt your teams to this new way of working? And I think that's going to depend a lot on the company; different teams have different cultures.

But I think the biggest change for us was that development is a lot less linear than you're used to. Every PM leader I talk to says the same thing: my muscle memory of what I'm used to, my intuition, is just off, and I need to relearn it.

So we've introduced two phases that we use when we're doing product development. The names are very literal, not very creative. The first phase we call ship to alpha: we take a strategic direction and ask, what's the fastest path to a working prototype? How do we get something usable in a matter of days, not months?

This means we have to retrain ourselves on how we collaborate and build. We don't have to start with upfront slide decks and docs, and we don't have to start with mocks. We just start with a prompt and maybe a little bit of code in a sandbox and get going.

We also had to train our engineers that throwing away code is OK; code can't be precious with this kind of mindset. Then every week we have demo days where we can show leadership, hey, here's what we're working on, here's the art of the possible, and it really gets the whole company trained up. As you were saying, you can't have this in a silo; the whole company can see what's happening, which is great.

The second phase we call ship to production, which is that last 30% that takes a lot longer than 30% should take, where you have to say, OK, this resonated, this is useful. How do I get it into the hands of customers? How do I make it reliable? How do I make it enterprise grade?

And that's just really hard. As an example, a feature we launched recently is called Smart Answers. It takes the treasure trove of information in Asana, and you can ask it all sorts of questions. My favorite use case is a user feedback project with 10,000 tasks in it, each an individual user feedback item, and asking: what are the themes from this feedback?

Really early on, it pulled out a theme and then an example quote, and the example quote was in Japanese. We didn't ask it to, and it just auto-translated it into English, figuring you probably want English because you're speaking English.

That kind of magic was great. But to get it ready for launch, we had to rebuild it five times, complete rebuilds over and over. And this goes to what we were talking about earlier: you need modular architecture, you need evaluation pipelines, you need to solidify upfront criteria for what you care about, everything from quality to reliability to performance to safety, and then as you iterate you can make progress toward that.

Yeah, I think the only thing I would add is that for us it starts with asking the question, for everything being built: how can LLMs and gen AI be part of it? So for any product review that happens, any new product concept that runs through our process, our first question is: have you thought about it?

So that's a change of mindset we are trying to introduce, because to your point, there was the old way of doing it and there's the new way, and to do the new way you have to do change management, and the change management muscle needs to be built. So you need to exercise it.

So doing that in every single meeting, that's one. The second thing is that we are a big Slack culture, like some of the folks here as well, and we have a channel for great AI things we're seeing out there. We know we're starting to understand the space, but there are a lot more folks and companies out there doing amazing stuff.

So everybody contributes to it, and leadership chimes in on a very frequent basis. That sends a very big signal: of all the channels we could be focused on, this is where a lot of the focus is, and people are like, oh, this is great.

Have you thought about this? Could this be applied to our scenario? That's the second one. The third one is that our data science team acts as the center of excellence for all the amazing work happening around foundation models and other innovation, and they drive a bunch of enablement inside the organization: tech talks, brown bags, and other things to help educate and inform.

And the last thing I would say is, culturally, being very open to taking risks and running experiments, and making sure we're supporting folks in carving out some of their time to try out these new things.

So, do we think that generative AI has the ability to make employees more productive and users more engaged? Oh, for sure. To make an analogy: imagine what daily life in an office was like in 1970. No Slack, no Asana, no Miro, no internet access, no computers, no ability to instantly send a document to all your coworkers. You had to go to the Xerox machine if you wanted to duplicate information. If you imagine what it was like to get work done, to collaborate on a document or a project, everything required walking around and being in the same place, using typewriters and paper. Information was very hard to copy, very hard to disseminate, very hard to edit; cut and paste was literally cutting and pasting things. This was a fundamentally different work world, and people were far less productive.

What happened with the digital revolution is that information became incredibly cheap to produce, to disseminate, and to collaborate on. I would argue that what we're going to see in the next five years is that developments in AI will make cognition equally cheap, in the same way that working with information became cheap over the last 50 years. What that will mean is that intelligence will be embedded in every part of the organization. And there are a lot of tasks in large organizations where our human ability to process information and think about social connections is actually limiting us.

For example, people talk about Dunbar's number as the most social connections you can hold in your head at any given time. I believe a lot of the breakdowns you see in organizations that happen because of Dunbar's number will go away. These are essentially issues of, say, executives not knowing the facts on the ground, or individual contributors not knowing that the problem they're trying to solve was solved by someone on another team a month ago. All of those issues of knowledge transfer will start to go away, because an intelligent system behind the scenes will actually have the intellectual horsepower to track what's going on with 10,000 employees and figure out what connections need to be made, where a team might be going off the rails, where the company's culture is not being respected. All of those things can be detected by an AI and steered toward maximum individual and collective productivity.

So adaptability, again, will be key. Jobs are evolving, arguably at a faster pace than they have in the past, through the advancement of this technology. And there is an interesting dichotomy here: we've seen a lot of applications of generative AI around creating content, and the dichotomy is that, for a typical knowledge worker, that's just going to increase the amount of notifications, Slack pings, and emails hitting your work day.

So from a collaboration perspective, how is Asana thinking about this risk, if you will, of increasing noise? Yeah, one of the top focuses for Asana is clarity in everything you're doing, so it's a huge risk for us. A few months ago, I was reading more science fiction, given that everything is about the future, and I was reading Anathem by Neal Stephenson. They have kind of what you're describing: a large data network they call the Reticulum, everything out there, all information at your fingertips, but it was flooded with seemingly correct documents that were actually incorrect, like hallucinations. And then they had to have this entire new career of people who separate fact from fiction and curate it, which is kind of what we're seeing with this.

So it really resonated with me. Hallucinations are the problem; in early AI adoption, a lot of products weren't doing what you were describing with previewing information, they were just throwing information out there. So we've been thinking about three things to help with this. One is that when we generate content, it's going to get you 80% of the way there, but you need to rely on design principles to get it to 100%: preview the content, don't have it be a black box, show citations, and so on.

The second is focusing on our Work Graph data model. Our co-founder Dustin Moskovitz co-founded Facebook and made the social graph, so you can think of the Work Graph as the work version of that. We use it to make sure we send the most relevant context to LLMs, and this involves not just LLM use but better ML ranking over the nodes, the edges between nodes, embeddings, and so on.

And the third is being mindful of when and how we surface third-party data to LLMs. That's relevant to what we talk about all the time, because you need to be really careful with the scale of data you're pulling into your product and putting in as context. As an example, say you have 500 spam emails that are all slightly incorrect. You don't want to pull those in, call them fact, and have them end up in your status report. That makes your job a lot harder, because you need to help separate fact from fiction, and that's why we're excited to partner on those kinds of use cases.
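
As a rough sketch of that kind of gatekeeping, the snippet below filters and ranks third-party items before any of them become LLM context, so low-trust or irrelevant items (like those spam emails) never reach the prompt. The source trust weights, the naive relevance score, and the item fields are all hypothetical.

```python
# Illustrative filter for third-party context before it reaches an LLM prompt.
# Trust weights, the naive relevance score, and the item fields are hypothetical.

from dataclasses import dataclass

SOURCE_TRUST = {"asana": 1.0, "slack": 0.8, "email": 0.6, "unknown": 0.2}

@dataclass
class Item:
    source: str
    text: str

def relevance(item: Item, topic: str) -> float:
    # Stand-in for embedding similarity: crude keyword overlap, weighted by source trust.
    overlap = len(set(topic.lower().split()) & set(item.text.lower().split()))
    return overlap * SOURCE_TRUST.get(item.source, SOURCE_TRUST["unknown"])

def select_context(items, topic: str, limit: int = 3):
    """Keep only the few items most worth showing the model, not everything."""
    ranked = sorted(items, key=lambda it: relevance(it, topic), reverse=True)
    return [it for it in ranked[:limit] if relevance(it, topic) > 0]

if __name__ == "__main__":
    items = [
        Item("email", "Re: launch blocker resolved, vendor SDK shipped"),
        Item("unknown", "WIN A FREE CRUISE, click now"),
        Item("slack", "Customer worried, pause the launch announcement"),
    ]
    for it in select_context(items, "launch blocker status"):
        print(it.source, "->", it.text)
```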

And just to catch on that, we have a couple of minutes, so I want to ask a couple of fireside questions. What are some of the use cases? You talked about status reports, and that's a bit of a keyword for us right now. Yeah. So for status updates, something we're excited about in working with AppFabric is this: right now, the way we do our status reports is we look at all the collaboration that happened in Asana. If you're a heavy Asana user, it works magically. It pulls out insights and nuance, like we've been collaborating back and forth and a decision was almost made, and it calls that out as a blocker to help move the work forward. But not everything's in Asana.

Maybe there are blockers that were called out but were actually resolved in an email, and we just need to look at that email to see it. Or maybe there are new risks that weren't in Asana but are in Slack, where someone says, oh no, this customer is now worried, don't continue the sales process. So there are a lot of really key insights we want to pull in, and AppFabric helps us connect to the various tools that information might live in. That way we can have a more accurate project health report when we surface it to users.

And Varun and Matt, any specific use cases that you think are really going to move the needle for employee productivity? Yeah, I think there are two I would add to what Eric mentioned. One is around large-scale data synthesis. If you think about a typical organization and all the channels through which customer feedback comes in, I often find that product organizations struggle to really synthesize it and use it to drive the roadmap. So that's one: large amounts of data synthesized into key insights that can start to drive product, business, or company priorities. The second one where I think it's going to have a lot of impact is automation.

Let's take the example of this conference: all of us draw architectures for our AWS environments today. What is the process? It's manual; we put all of these things in by hand and publish them to some system. It should be fully automated: you're connected to all of these environments, the diagram is automatically built, and then there is an AI that starts to drive optimization around it, or questions you can ask about it. So I would say connectors, where things that used to be done manually are now automated, with insights driven on top of them.

Matt, how do you see the role of AppFabric for knowledge workers in the context of generative AI? Where I'm particularly excited about AppFabric is in the cross-application sharing of relevant information. When you mentioned employees spending an hour per day searching for information they know exists, I felt that in my heart; I do that so much. So the idea of not just having relevant information automatically surfaced to me, but also the unknown unknowns, that document I didn't know existed that's relevant to what I'm doing, popping up in my workflows is particularly exciting.

So being able to enable that in all of the enterprise productivity applications at scale, and to enable those app developers to leverage that information as it comes in, that's super exciting. Also the fact that you've solved the first question that's going to come from every security team: hey, wait a minute, this tool is going to be accessing data from these other apps; will it respect the user's preferences and their access control levels? The fact that you've already solved that makes it far easier for this kind of feature to get approved by security orgs. And that has been one of the premises of building AppFabric: really putting, as Adam mentioned, security and privacy at the core of the product and not as an afterthought.

Well, I think we are at time. Please join me in saying a big thank you to today's panelists. We're going to be hanging around here if you have questions or want to engage with Matt, Varun, Eric, or myself. Other than that, if you're curious about AppFabric, we have a session in a couple of hours; you can scan the QR code and join BIZ109 later this afternoon. Beyond that, you can also scan the QR code to visit our product detail page. And if you want to get started, or you're curious about what's available today for app developers using AppFabric, please visit our product page. As I mentioned, we're going to stick around if you have any questions. Besides that, please join me in a big round of applause for the panelists. Thank you.
