What does it look like to connect AI and PKM tools?

After using ChatGPT for the past two weeks, I’ve started thinking about the benefits of integrating an artificial intelligence (AI) tool like ChatGPT with my Personal Knowledge Management (PKM) system.

I envision an AI assistant that does more than learn my regular app usage patterns and set timers (like the AI assistants we already have on our phones). What I’m looking for is an AI designed to continually train on my specific PKM graph so it understands the data that is important to me.

New Connections and Ideas

One potential benefit of this integration is identifying new connections in a user’s data that they may not have considered. These connections could be direct links between existing notes, or they could require creating a new note to connect two or more existing ones. For example, if I have notes about the effects of nutrient deficiencies in the body, the AI might identify that multiple conditions are all caused by a deficiency in a specific vitamin that is not yet included in my data but is present in its own source knowledge.
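To make the idea concrete, here is a minimal sketch of connection discovery. The embed() helper is a naive bag-of-words stand-in for whatever embedding model a real assistant would use, and the similarity threshold is arbitrary; this is an illustration, not anyone’s actual implementation:

```python
import math
from collections import Counter
from itertools import combinations

def embed(text: str) -> Counter:
    # Hypothetical embedding: plain word counts. A real assistant
    # would use a trained embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_connections(notes, links, threshold=0.2):
    # Yield note pairs that look related but are not yet linked in the graph.
    for a, b in combinations(sorted(notes), 2):
        if (a, b) in links or (b, a) in links:
            continue
        score = cosine(embed(notes[a]), embed(notes[b]))
        if score >= threshold:
            yield a, b, round(score, 2)

notes = {
    "fatigue": "chronic fatigue can signal vitamin b12 deficiency",
    "anemia": "b12 deficiency is a common cause of anemia",
    "recipes": "weeknight pasta recipe ideas",
}
print(list(suggest_connections(notes, links=set())))
# -> [('anemia', 'fatigue', 0.27)]
```

Even this toy version surfaces the fatigue/anemia link while ignoring the unrelated recipes note, which is the kind of suggestion I’d want the assistant to hand me for review.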

Another benefit is the suggestion of new topics. The AI could analyze your existing PKM data and suggest topics that may be relevant or interesting based on what you have already documented. This could help you expand your knowledge and stay up to date on relevant topics. Consider the impact this could have on science-based research, where new studies are published regularly!

Customization options would be important for these types of discoveries. For example, the ability to specify which data sources or types of information the AI should prioritize when making suggestions or identifying connections would allow users to trust the recommendations it provides. It would also allow the AI to better understand your interests and focus on the areas that are most relevant to you.
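As a rough idea of what those customization options might look like, here is a hedged sketch of a configuration object. Every field name here is invented for illustration, not taken from any existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class SourceConfig:
    name: str
    weight: float          # relative priority when ranking suggestions
    allow_external: bool   # may the AI blend in data from outside my PKM?

@dataclass
class AssistantConfig:
    sources: list = field(default_factory=list)
    min_external_ratio: float = 0.25   # require some non-PKM sources to limit echo chambers

config = AssistantConfig(sources=[
    SourceConfig("daily-notes", weight=1.0, allow_external=False),
    SourceConfig("research", weight=0.8, allow_external=True),
    SourceConfig("web-clippings", weight=0.3, allow_external=True),
])
```

The point of weighting sources explicitly is that the user, not the model, decides which parts of their graph the assistant should lean on.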

For this to work well, consideration has to be given to how the AI will interact with the other tools or systems being used, like task management or project management apps. Ideally it could provide relevant information or suggestions based on the user’s current workflow and goals, like the following (a sketch of what that integration might look like follows the list):

  1. Task Management – the AI could suggest tasks or projects related to the user’s current PKM data or goals
  2. Project Management – the AI could provide relevant information or recommendations based on the user’s current projects
  3. Other PKM Tools – the AI could help provide a unified view of multiple data sources and information from numerous tools
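One way to picture that integration is a small shared interface that each tool implements so the assistant can merge everything into one view. This is only a hedged sketch, not any real product’s API; the Tool protocol and both tool classes are hypothetical stand-ins:

```python
from typing import Protocol

class Tool(Protocol):
    name: str
    def fetch_items(self) -> list: ...

class TaskManager:
    name = "tasks"
    def fetch_items(self):
        # Stand-in for a real task app's API call.
        return [{"title": "Summarize B12 research", "due": "soon"}]

class ProjectBoard:
    name = "projects"
    def fetch_items(self):
        # Stand-in for a real project board's API call.
        return [{"title": "AI + PKM article", "status": "drafting"}]

def unified_view(tools):
    # Tag each item with its source so any suggestion can cite where it came from.
    return [{"source": t.name, **item} for t in tools for item in t.fetch_items()]

print(unified_view([TaskManager(), ProjectBoard()]))
```

Tagging every item with its source matters later, too: it is the raw material for the provenance review I discuss below.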

By linking to other tools, the AI could provide information or suggestions in real time as the user works on a task or project. This would help the user stay on track and avoid the mental cost of context switching between multiple apps or systems.

Additionally, integration with other tools could help the AI assistant provide accurate and up-to-date information. Imagine the user has a tool that tracks their time spent on different tasks or projects; the AI might use this data to make more accurate recommendations specific to the user’s actual work. The result is a more streamlined workflow.

Concerns

While there are potential benefits to this integration, there are also concerns about data security and unchecked bias. I want to be confident my data is not shared with a central repository and is not at risk of being accessed by others.

Just as important, I want to avoid creating an echo chamber where the AI only reflects my own ideas back at me. I want it to treat outside data as important too. To do this, it will need to weigh my data against other information and flag when my sources have become too one-sided.

To address these concerns, it will be important to build in customizations and safeguards that protect users. That is where human oversight becomes increasingly important!

Human Oversight and AI

Human oversight will be indispensable in this new AI-partnered work environment.

Take, for example, the desire to choose which data sources the AI should prioritize. On one hand, if I’m doing research in a particular area of study, I might want the AI to weigh my research against research outside of my data. On the other hand, there are times when bias might actually be useful; an AI assistant echoing my data back to me might be exactly what someone writing fiction needs when they want recommendations grounded in their own fictional world.

Bias isn’t necessarily bad; it’s just helpful to know when you’ve encountered it. Giving users the ability to oversee where the data comes from helps create a more appropriate AI partner for them. Giving both the researcher and the fiction writer the ability to dial bias up or down is the better choice long term. Initially, however, it might be good to have a default setting that prevents the one-person echo chamber where all recommendations sound like me…because they are.

This ability for human oversight must go hand in hand with the ability to review and verify the AI’s recommendations. Reviewing sources lets a user see when too many of them are their own. Users who review and verify the AI’s recommendations help ensure it is not promoting any particular agenda or bias, and is instead presenting a balanced and fair view of the data. The more your sources come primarily from your own PKM, the closer you get to a one-person echo chamber.
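That review could even be partly automated. Here is a minimal sketch of a provenance check, assuming a hypothetical pkm:// prefix marks the user’s own notes; the 80% cutoff is an arbitrary illustrative default, not a recommendation:

```python
from typing import Optional

def echo_chamber_warning(sources, own_prefix="pkm://", cutoff=0.8) -> Optional[str]:
    # Warn when too large a share of a recommendation's sources are
    # the user's own notes. Prefix convention and cutoff are illustrative.
    own = sum(1 for s in sources if s.startswith(own_prefix))
    ratio = own / len(sources) if sources else 0.0
    if ratio >= cutoff:
        return f"{own}/{len(sources)} sources are your own notes ({ratio:.0%}); consider widening the mix."
    return None

print(echo_chamber_warning([
    "pkm://notes/b12",
    "pkm://notes/fatigue",
    "pkm://notes/anemia",
    "pkm://notes/nutrition",
    "https://pubmed.example/12345",
]))
# -> 4/5 sources are your own notes (80%); consider widening the mix.
```

A researcher might set the cutoff low, while the fiction writer from earlier might turn the warning off entirely; the human stays in charge of the dial.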

Assuming you reach a good balance of sources, there should also be a feedback mechanism in place to identify any errors or unwanted suggestions made by the AI. While the AIs being used today are trained on large amounts of data, it is still possible for them to make mistakes or misunderstand data. Allowing the user to review and provide feedback gives both the human and the AI a chance to catch errors before they become problematic. It would also help with ethical considerations, such as ensuring that the AI assistant is respectful of a multitude of personal beliefs. More in-depth customizations would also help the AI handle sensitive or controversial topics that may arise in a user’s PKM data responsibly, without causing the user to lose trust in its data management approach.
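That feedback mechanism could be as simple as logging a verdict and a reason for each suggestion. A minimal sketch; the Feedback record and the example reasons are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Feedback:
    suggestion_id: str
    verdict: str               # "accept", "reject", or "flag"
    reason: str = ""           # e.g. "factually wrong", "insensitive", "off-topic"
    at: Optional[datetime] = None

feedback_log = []

def record_feedback(suggestion_id, verdict, reason=""):
    # Persisting verdicts lets both the human and the AI audit mistakes
    # before they compound into worse future recommendations.
    feedback_log.append(Feedback(suggestion_id, verdict, reason, datetime.now(timezone.utc)))

record_feedback("conn-042", "flag", "misreads my note on B12 dosage")
print(feedback_log[0].verdict, "-", feedback_log[0].reason)
```

Whether that log feeds back into retraining or just sits there as an audit trail, the point is the same: the user gets a durable record of where the assistant went wrong.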

More human oversight at the user level is needed in order to address concerns about bias or unethical behavior. Currently, this is not even possible with ChatGPT: there’s no way to know where the data came from or how accurate it is without heavy research into what you’ve been provided. This is something I predict will become a larger talking point as we become more comfortable with these tools. Right now we have no way to identify bias or stop the spread of misinformation. That will have to change.

In conclusion

I already enjoy working with AI as a partner in thinking, and I believe it will only get better as the technology advances. Part of that is thinking of new ways to integrate AI partners with our current tools. And part of it is figuring out how to do this in a responsible way.

What are your thoughts? Come join the conversation on Twitter.