Chai & Chips Episode 010: Will AI Replace Lawyers? Navigating Indian Tech Laws & Data Privacy
Guest: Karishma Sundara, Founder – Kintsugi Law
Episode Summary:
India’s technology landscape is evolving at breakneck speed—but are companies truly prepared for the legal and regulatory shifts that come with it?
In this episode of Chai & Chips, I speak with Karishma Sundara, Founder of Kintsugi Law and a leading advisor in technology, media, and telecommunications (TMT). Karishma brings deep clarity to some of the most complex issues facing startups and enterprises today—data protection, AI labelling, consent management, and regulatory compliance in India.
We unpack India’s rapidly changing regulatory environment, including the Digital Personal Data Protection Act (DPDPA) and proposed AI labelling guidelines, and explore how these laws compare with global approaches in the EU and China. Karishma explains why compliance should be viewed not as a cost, but as a long-term investment—and why misunderstanding Indian regulations can quietly become one of the biggest risks for technology companies.
This conversation is essential listening for founders, operators, and leaders navigating AI, data, and digital businesses in India.
YouTube episode link:
Prefer listening on the go? Find us on Spotify, Apple Podcasts, or wherever you get your podcasts.
Key Insights & Takeaways:
1. Compliance Is Not a Cost—It’s a Long-Term Asset
One of the biggest mistakes startups make is treating compliance as an unavoidable expense. Karishma argues that strong compliance frameworks become a source of long-term value, especially during fundraising, acquisitions, or regulatory scrutiny. Weak compliance often surfaces at the worst possible time—when the stakes are highest.
2. India’s Laws Are Inspired Globally—but Fundamentally Distinct
A common misconception is that compliance with GDPR or other global regimes automatically ensures compliance in India. In reality, Indian regulations—especially under the DPDPA—have unique nuances, including a closed list of non-consent-based “legitimate uses.” Assuming equivalence can lead to serious blind spots.
3. Data Protection in India Is Deeply Consent-Driven
India’s data protection framework places consent at the centre, but not in a simplistic way. Valid consent must meet strict conditions, making awareness and education critical. Without widespread understanding, there’s a real risk of either invalid consent or user fatigue that could slow technology adoption.
4. Awareness Building Is as Important as Enforcement
For the DPDPA to succeed, awareness must precede enforcement. Karishma emphasizes that data principals (users) must understand how consent works and what their rights are. India’s diversity makes this challenging—but examples like UPI adoption show that large-scale behavioural change is possible with the right communication.
5. AI Labelling Will Change How We Perceive Digital Content
Proposed AI labelling rules could significantly alter how people consume online content. Explicit and permanent labels for AI-generated or AI-modified media may reshape trust, credibility, and platform responsibility. Even seeing “AI-generated” in print can change user perception dramatically.
6. Platforms, Not Just Creators, Will Bear Responsibility
Under the proposed AI rules, responsibility doesn’t stop with the creator. Platforms will be expected to detect AI-generated content—even independently of user declarations—creating a layered system of accountability between creators, tools, and platforms.
7. Sensitive Sectors Must Go Beyond Minimum Compliance
While the DPDPA does not create separate categories for sensitive data like health information, organizations in health-tech and ed-tech cannot rely on minimum standards alone. “Reasonable security safeguards” must be context-specific, especially when dealing with children’s data or medical records.
8. Indian Regulation Is Trying to Balance Innovation and Control
Rather than introducing a standalone AI law, India appears to be regulating AI through intersecting frameworks—content laws, data protection, and platform governance. This approach signals an attempt to balance innovation with oversight, though its long-term impact will only become clear with enforcement history.
9. New Regulations Will Create New Startup Opportunities
Regulatory change doesn’t just create constraints—it creates new markets. Consent managers, compliance tooling, and legal-tech platforms could emerge as meaningful startup opportunities, provided they meet clearly defined regulatory thresholds.
10. The Biggest Risk Is Thinking You’re Already Compliant
Perhaps the most important takeaway: assuming compliance is often more dangerous than knowing you’re not compliant. Indian regulations demand deliberate interpretation, customization, and ongoing reassessment—not checkbox compliance.
Connect with our Guest:
Karishma Sundara on LinkedIn: https://www.linkedin.com/in/karishma-sundara/
Follow Chai & Chips:
Subscribe on YouTube, Spotify, Apple Podcasts or your favourite podcast platform to never miss an episode.
Leave a review—it helps us reach more listeners!
Follow Prakash on LinkedIn and X for updates.
Episode transcripts on Substack: https://www.prakashmallya.com/s/chai-and-chips-podcast
Episode transcripts on Medium: https://medium.com/@pmallya2411
Episode Transcript
Speakers:
Prakash Mallya (Host)
Karishma Sundara (Guest)
Preview: 00:00:00 - 00:01:19
Prakash Mallya 01:20
Hi everyone! Welcome to Chai & Chips. Today we have with us Karishma Sundara. Karishma is a seasoned legal advisor. She is the founder of Kintsugi Law, a wonderful name, and we will talk all about it. Prior to that, she has been involved in a variety of leading legal firms in India, and she studied at Cambridge, with BA Honours and MA Honours from the University of Edinburgh. She is a renowned legal advisor in the areas of data privacy, data protection, e-commerce, AI-related issues and several other elements of legalese, as we say, in the area of technology, media and telecommunications, which from here on we will term TMT. So, welcome to the show, Karishma.
Karishma Sundara 02:17
Thanks for having me, Prakash.
Prakash Mallya 02:18
Yeah, it is going to be a very interesting conversation on law and regulation. So, let me start with Kintsugi Law. What made you choose Kintsugi as a name? It’s a beautiful name.
Karishma Sundara 02:32
It is a beautiful name. So let me start at the beginning. Kintsugi, I think, represents so many things to so many people. It’s a very layered concept, and often people end up focusing on repair. For me, I think that sustainability is the overarching theme. Kintsugi is, of course, a Japanese art form, and the way it works is that broken pieces of pottery or glass are not discarded. Instead, they’re collected and joined together again with gold. When I heard this concept, and I saw Kintsugi firsthand, it really spoke to me, especially as I was starting something new. And to me, Kintsugi is something new: it creates something new out of something old. And that is the reason why I named the practice this.
Prakash Mallya 03:23
Very cool! So, let me start with just the overall landscape. If you look at law in general in the TMT space, it has evolved very fast. You’ve had an online gaming law very recently, you had, far more recently, an updated data protection law, and AI labelling guidelines came a few weeks back. So how do you look at the landscape today, or how should one look at the landscape today? What could compliance look like going ahead for companies, including startups, and what risks or opportunities do you see for companies given the environment?
Karishma Sundara 04:09
Sure, on that one, you’re right. I think the law has been changing at an absolutely galloping pace. In the past few months alone, we’ve seen an online gaming law which completely reworked the gaming industry. It prohibited real money gaming, and it drew specific attention to what social games really are. We’re looking at the potential for those games to be further regulated, but of course, that’s still in the wings. Now, I think the biggest development, of course, is data protection. We are looking at a brand-new law. I say brand new, but of course, it’s two years old. The reason it’s new is because we finally have the implementing rules. Everyone’s been waiting for this law at least since the first iteration, which came in 2018, and now it’s time to finally implement. The landscape is changing, and I think one of the ways to really approach it is with eyes wide open. You have to know what’s happening. You have to understand the different regulations impacting your business. Often, I’ve noticed with both startups and established companies, there is a tendency to believe that Indian laws are derived entirely from laws of different jurisdictions. Now, while this may be true to an extent, it ignores the nuances that come with Indian law. And the problem there is that you stop yourself from building effective compliance if you assume you’re already compliant. So that’s an important thing to keep in mind: keep your eyes wide open and fully assimilate the requirements of these laws, to understand whether you’re within scope, whether you need to change how you currently operate, and what the risks could be if you don’t.
Prakash Mallya 06:00
What you just said makes a lot of sense, but you’re also operating in an environment where things are changing very quickly. So, does it mean that in the current environment you need skilled expertise, or a different kind of expertise, to make sure you know where the puck is going?
Karishma Sundara 06:20
I think that’s where experts come in, and I think that’s really where their inputs count today. Honestly, it’s about strategic inputs on difficult situations, because at the end of the day, a lot of entities either have mid-sized teams to handle this internally, or they have larger external teams to do this. But what ultimately is going to move the needle is whether you’ve strategically changed things, whether it’s at a macro level, deciding what compliance plan to follow, or whether it’s about dealing with those nuanced issues, issues which you wouldn’t traditionally encounter otherwise, and that will keep you ahead of the curve.
Prakash Mallya 07:05
Yeah, agreed. And you are a trusted advisor both to startups and Fortune 500 companies in all the areas that we talked about. So, you would be tracking what’s happening globally as well, because these rules and regulations are being thought through in different parts of the world. So, let’s consider AI labelling guidelines and data protection guidelines per se. How do you compare and contrast different countries on how they are looking at it, and where does India stand today?
Karishma Sundara 07:34
Sure. I think being aware of what’s happening globally is deeply important, particularly where you may not have regulatory guidance immediately within the country. So, for example, if you look at the AI labelling measures, India’s not the first to think about this. Other jurisdictions have: China, for example, very recently introduced its labelling requirements. You have other jurisdictions like the EU, which has a very specific AI legislation that also considers labelling. But of course, they all do this a little differently, and I think that’s what’s interesting here. For example, China talks about explicit labels for specific kinds of synthetic content, typically high-risk kinds. So, for example, if you have facial manipulation or facial generation, those are the kinds of synthetically generated content that will require an explicit label, whereas you could potentially have an implicit label for other kinds. The EU, on the other hand, talks of machine-readable labels, and these could be very different from human-readable labels. You could be looking at a watermark, for example. You could be looking at embedded metadata. You could be looking at different things that may not be visible to the average viewer. But Indian law is different, so we’re looking at something that sits somewhere in the middle there. And I think that’s why it’s important to understand the nuances here locally. We’re looking, for example, at a label that could apply to all synthetically generated content, anything that is generated or modified using AI.
And that’s a tremendous amount of content online today, so I think it’s going to drastically change not just how the systems that enable this generation function, but also just how we perceive online content. Anything with an explicit label today which tells you it is AI generated, even if we’re somewhat aware that it may be, seeing it in print, whatever form it may take, is likely going to change how we view it.
Prakash Mallya 09:43
Yeah. So, one aspect I want to ask you about is labelling all data versus labelling only the synthetic data. The article I was reading said companies had a view that if you label everything, it’ll add to their cost. What’s your perspective on it?
Karishma Sundara 10:01
Well, first, I wanted to clarify that the labelling requirement doesn’t apply to everything, just anything that is synthetically modified or generated using an AI tool. We’re surrounded by AI-generated content today, which could create the perception that you’re going to end up labelling everything, and that may end up being true across large portions of the internet. But for now, businesses themselves don’t need to worry about having to label everything.
Prakash Mallya 10:31
Okay, so suppose I’m a podcaster. Actually, it’s not “suppose”, I am a podcaster. So how does my life change?
Karishma Sundara 10:40
Well, your life isn’t changing immediately, because these are draft amendments to the rules. And second, the amendment doesn’t impose an obligation on you as long as you’re not the one who is, say, providing an AI tool that enables modification; it’s not going to apply to you. On the other hand, if you are simply a podcaster, as you are here, and you use an AI tool to modify, say, this interview, and then you publish it on, say, a social media platform, if these rules are amended as proposed, this could change things in two ways. One, at the time of you creating this modified content, it could receive a label which is generated at source. And this label is supposed to be permanent, it’s supposed to be indelible, and it’s supposed to meet certain size specifications, like covering 10% of the visual area for visual content. And it’s, of course, something we need to consider in terms of mixed media content. For audio, it’s supposed to cover at least 10% of that initial audio recording, to indicate to the person who is receiving this content that it is somehow AI generated or AI modified. And the other stage at which this will change things is when you post this content. So, one, you’ll be asked to declare whether or not you used an AI tool to modify or generate it, and then, regardless of your declaration, the platform is expected to review this content before it’s published, to determine whether or not an AI tool was used. So, it’s likely that the labelling and disclaimer requirements will work in tandem, because if you have an indelible label that’s applied, it perhaps will make it easier for platforms to detect if AI technology has been used and then be able to enable a disclaimer with that content.
Prakash Mallya 12:38
That’s a good explanation. So, you founded Kintsugi Law as a specialized legal practice in the TMT space. Considering everything that is going on and the regulations that have already come into play, how would you assess India’s position in technology innovation and the legal and regulatory environment? Does it inspire or spur innovation? What’s your point of view on it?
Karishma Sundara 13:10
So, I think the impact on tech innovation is something we’ll have to wait to see. So much has changed recently, and we still don’t have enforcement history or regulatory guidance to understand what that’s going to look like, say, five years from now or ten years from now. So, I think it’s very much a wait and watch. But I do believe that the regulations themselves indicate an interest in balancing innovation with regulation. For example, if you look at the AI governance guidelines, which the government published quite recently: AI is, of course, a massively dynamic space, and it is increasingly being regulated in a lot of other places in the world. But India has taken the stance, at least for now, that it seems unlikely to want to regulate it through a separate legislation. Now this is important, because the path they’re choosing instead is to approach it through different intersecting laws, wherever there could be an overlap, wherever there could be a need to clarify an issue in terms of how AI may change things, whether it’s online content laws or something else. The AI labelling requirements we’re discussing are one of those potential ways in which this could change.
Prakash Mallya 14:31
Right. So, if one looks at the data protection law, the DPDPA, which was also announced very recently, what framework or principles do you believe the government should adopt as it rolls out data protection law in India?
Karishma Sundara 14:54
So, before we get to the principles, one of the very important things which will, I think, enable the success of this legislation is just awareness building. For the system to work properly, all of its stakeholders need to buy into it. For example, you are leading with a deeply consent-focused regime, and consent has to flow from the person whose data it is, and because that’s the case, they need to understand what consent means. So, before we get to the principles in terms of enforcement or regulatory guidance, I think this is going to be the number one step: ensuring that people understand that their lives are going to change considerably, and how. And of course, we now have 18 months to decide what that implementation cycle will look like. But I think having this initial awareness building alongside compliance building is going to be very, very important.
Prakash Mallya 15:53
And how could that awareness building take place in such a large and diverse country like India?
Karishma Sundara 16:01
Honestly, I think it just has to be modelled to suit different audiences. It has to come from perhaps even an overlap or an intersection between businesses and, say, regulators, to be able to understand how you reach people and tell them how this can function. Now, of course, we have a great example in UPI, which is now used across the nation, freely and easily. And of course there are drawbacks and bad actors wherever you have technology adoption and technology development. But this law, I think, is a lot more intricate than simply rolling out even a technology as dynamic as UPI. But we got everyone to buy into that. So, the question really is, can we explain the intricacies of this law to everyone?
Prakash Mallya 16:57
In a simple way.
Karishma Sundara 16:59
In a simple way. So, the law itself, in fact, requires so much of this to be made easily understandable. But one of the issues there is that the requirement waits for implementation, which means it may not happen for 18 months. And that kind of change can’t happen overnight for a very important segment of those stakeholders, which is your data principals. So, I think that awareness building needs to start much earlier.
Prakash Mallya 17:28
So, a couple of segments I feel are going to be more impacted, or are more critical, in the areas of data protection or AI. One of them is healthcare; the other is education, simply because kids are involved in education, and healthcare has so much private data across the board. So how do you see the impact of these regulations on sectors like health-tech or ed-tech, and how companies there operate?
Karishma Sundara 17:57
So, I think it’s going to be an interesting space to look at, because obviously you’re dealing with data that’s more sensitive, in some cases health information, and in other cases, with education, children’s data in particular. The law, of course, very specifically protects children’s data, but with sensitive data like health information, the DPDPA itself does not provide specific protection. But like I said earlier, I don’t think that means you can immediately relegate all the information you handle to the same level of scrutiny, to the same level of protection, because the law, over and above everything else, requires you to apply reasonable security safeguards. There is an “at minimum” list, which is also provided by the government, but because you have to apply reasonable safeguards, you have to determine what’s appropriate in a circumstance.
Prakash Mallya 18:56
And what is the “at minimum” list?
Karishma Sundara 18:59
So, the “at minimum” list is a list that came through with the recently notified data protection rules. So far, under the main statute, the DPDPA, businesses have been told that they can adopt reasonable security safeguards. Now, the word “reasonable” itself, of course, allows for a degree of latitude when you’re interpreting what’s appropriate for your business. So, what came through with the rules is an “at minimum” list, which does include specific requirements like log retention for, say, a period of a year, to enable the investigation of unauthorized access. There is limited clarity around what those logs need to be, but there is, of course, a positive retention requirement for that period of a year. The rest of the “at minimum” requirements, I think, by and large import the same flexibility that the word “reasonable” allows under the DPDPA, which, again, is helpful to the extent that you’re able to assess what’s appropriate for the kinds of data that you handle.
Prakash Mallya 20:07
Understood. So, you can customize, to a certain extent, based on what kind of data and what kind of consumer segments you cater to.
Karishma Sundara 20:17
Absolutely. And I think that data protection will always be a question of understanding what data you handle, what could be the implications if there is a data breach, and all of those questions and those responses will help you build a system that works for you.
Prakash Mallya 20:37
You span both startups and Fortune 500 companies, right? In that regard, what, in your experience, do you think is the biggest misconception people have about India’s regulatory and legal environment?
Karishma Sundara 20:54
I think I touched on this a little while ago when we talked about how people perceive India’s laws as being a derivation of other laws in different jurisdictions. So, I think the biggest misconception is very often related to believing that India’s laws are derived entirely from laws in other jurisdictions. What this ends up doing is that people believe that if they’re compliant with law A in country B, they are automatically compliant with the law in India, or that very little modification or customization would be necessary. And this, by and large, isn’t true. While Indian law, and particularly, say, our data protection law, has drawn inspiration from different regimes, it is distinct from them. One very good example of this is how you process personal data, or rather the lawful basis you rely on. To just take a few steps back, processing includes collecting information, or storing it, or analysing it, or using it. It’s a very broad definition, and this means that you can’t do any of those things without having a lawful basis. So traditionally, our data protection regime has focused on consent and how consent drives this kind of processing. That’s true, I think, even under the DPDPA, but with one difference. There’s a closed list of non-consent-based legitimate uses which you can rely on to process personal data without consent. But these legitimate uses are themselves specific to particular scenarios. They apply, for example, in a medical emergency, or where someone may voluntarily provide their personal data for a particular purpose. So in those circumstances, because it would either be infeasible or impractical or unnecessary to obtain consent, you can rely on an alternative. But very often, because there is a parallel, different basis under, say, the GDPR, which is legitimate interests, there is a belief that the two could be seen as corresponding provisions.
While there could be some overlap in some cases, depending on the specific facts at hand, there isn’t an automatic and complete overlap between the two.
Prakash Mallya 20:54
Okay. So, in that regard, if you’re advising a startup founder in India operating in such an environment, what would your advice be for them?
Karishma Sundara 22:47
So, I think startups have a very specific kind of risk profile. You’re very likely looking at new businesses that, unlike their larger competitors, are unburdened by legacy data that they now have to parse through. So, one good thing with a lot of startups is you get to start either with minimal baggage, or you get to start afresh. The good thing with starting afresh is that you have a clean slate; you get to build a compliant framework from the ground up. And one piece of advice I would offer, and I know it’s hard to do, especially when you’re a startup, is to not view compliance purely as a cost, but rather to see it as an investment in long-term value. Non-compliance becomes a stumbling block at the time of a round of investment, for example, in the future, where investors want to know if you’re compliant with laws, especially laws with high risks and huge penalties involved if you are non-compliant. So, I think looking at that is important. And of course, there is a potential for startups to have an exemption from certain provisions under, say, the data protection law, but we’re still waiting for a notification to that effect.
Prakash Mallya 25:11
Yeah, so a related or slightly different question. Do you have any example of a recent regulation which was intended to protect consumers and businesses, but had an unintended consequence of impacting technology adoption or innovation?
Karishma Sundara 25:30
Well, to look into the crystal ball, there’s at least one that comes to mind. I don’t know how this will ultimately play out, but think of the DPDPA and some specific provisions of it, for example, consent. The new law is very heavily consent driven, but this is a deeply intricate form of consent. It has specific conditions that need to be fulfilled for the consent to be valid. And I think one of the problems with that is being able to communicate what consent means to an audience as diverse and as populous as India’s, to be able to explain it in simple terms. And this goes back to something we discussed earlier: awareness creation. I think that could be one of those stumbling blocks. Either you create an audience that is interacting with these notice-and-consent flows and just doesn’t know what’s happening, and is therefore providing consent that could be technically invalid, or you have an entire group of people who cease to want to use technology that requires them to read voluminous notices and provide consent at every stage. And again, like I said, it’s hard to know how that’s going to play out. And of course, the law does have an inbuilt mechanism which is expected to streamline this, which is consent management: creating a new class of entities called consent managers, who are to provide a sort of interoperable platform through which a data principal can give, review or manage their consent. But of course, leaving aside the issue of awareness we just spoke of, even if we were to assume that somebody was aware of this and of how consent functions, for this to work as a system, all stakeholders need to buy in. You need platforms who want to integrate with consent managers.
You need users and data principals who want to use a consent management platform, because it’s only use, and I think extensive use, that’s going to show us what successfully running a platform like that will look like.
Prakash Mallya 27:51
Yeah. And do you also foresee startup opportunities in new categories generated as a result of some of these recent regulations?
Karishma Sundara 28:02
Startup opportunities, I think there could be some. It really depends on whether or not they meet the regulatory criteria. So, for example, consent managers must meet certain specific criteria under the law. If they meet those requirements, I don’t see why not.
Prakash Mallya 28:27
That’s a good segue into our rapid fire round where I’ll ask you a bunch of questions related to law, or sometimes not related to law. Should we do it?
Karishma Sundara 28:41
Sure, let’s do it.
Prakash Mallya 28:42
Okay. Describe your consulting style in three words, but one must be a food item.
Karishma Sundara 28:50
Oh gosh, this is gonna be a hard one. Okay, I’ll say clear, concise. And for my food item, I’m going to pick a slightly complicated one: a vegan chocolate cake.
Prakash Mallya 29:05
Why a vegan chocolate cake?
Karishma Sundara 29:07
So, one, I recently tried a recipe which has now made it a favourite. And two, because I think that it’s sustainable, and ultimately, it’s a classic with a twist, which is how I’d hope that people who know me would describe me and the practice of law that I offer.
Prakash Mallya 29:28
Very interesting. So, next question, a book or movie that most shaped your thinking on technology or justice?
Karishma Sundara 29:36
I think it’ll be very hard to pick one book or movie that shaped my understanding of technology or justice. So, I’ll settle for something else. I’d like to talk about a recent movie that changed how I think about one particular issue, which is grievance redressal. The movie I’m talking about, of course, is one called Swiped. It is a movie about the rise of a particular dating application, and what struck me the most is how this app was modelled on addressing a common consumer complaint, which is stopping unsolicited contact. And of course, I’m sure there are multiple other grievances, and maybe it doesn’t solve them entirely, but the idea of creating something and baking grievance redressal into its design, I think there are lessons in there for everyone. The reason I say this, of course, is that today we’re surrounded by technology. We live out our lives online, whether it’s shopping or buying your groceries. Ultimately, having effective grievance redressal isn’t just a question of ensuring your users are happy. It’s about making sure that you ultimately save time and effort, build a holistic brand, and, of course, comply with the law. A number of laws require grievance redressal, and I think there is a blind spot here that we’re missing. For example, being able to utilize data that comes through from grievances to, say, automate responses in an intelligent way, or use it to determine who’s a bad actor, or where you may have an operations issue. I think these are blind spots today, which could see a meld between business and law, if appropriately addressed.
Prakash Mallya 31:36
If you weren’t in law, what would you be doing?
Karishma Sundara 31:40
Oh, I’d be teaching English, without a doubt.
Prakash Mallya 31:42
Teaching English. Oh, you have a background there as well.
Karishma Sundara 31:46
I do, I do. So, I enjoyed English literature as a college student. I loved it. I spent four years studying it.
Prakash Mallya 31:54
Okay, what part did you enjoy in that?
Karishma Sundara 31:58
So, I loved being able to see literature stand both on its own and in a particular historical or cultural context. And what really brought this home to me was that I’d be studying a particular period of literature, let’s say medieval literature, and I’d also taken a course in the history of art. So, I would then leave to go to a different lecture, and suddenly I’d be looking at art from the same period of time and thinking, okay, here’s where the influences lie. And I think that enables you to see the world in a much bigger, much broader way. And I think that’s fundamentally important, because it ensures you don’t live a blinkered life.
Prakash Mallya 32:43
Very well said. So, from there, from the history of art, let’s get back to cyber security. One thing I wanted to ask you was: as per PwC 2022 data, India is ranked number two in cyber-attacks globally, right? Until now, such incidents have been addressed under the cybersecurity framework, but with the DPDPA coming in very recently, we could see a different data protection regime. So how do you foresee such issues being addressed in the future? Do you anticipate a change?
Karishma Sundara 33:26
That’s a great question, and I think it’s one a lot of people are asking. Today, you’re right in saying that the cyber security regime is handled by a separate regulator. Its thresholds for notification are different, and yes, we’re going to see a change with the DPDPA. The change is going to involve both inward-looking and outward-looking obligations, because we’re now looking at a regime that’s going to govern personal data breaches. Now, not all data breaches need be personal data breaches, but every personal data breach will likely be a data breach. So, it’s about understanding that these two regimes are going to run in parallel, with different timelines for reporting, different reporting thresholds, and different levels of information to be provided. It’s going to require a lot of streamlining. And the second requirement under the DPDPA is an outward notification to affected persons. That means you’re going to have to craft very specific notices for users who are impacted, again within very, very short timelines. So, I think one of the challenges is going to be understanding where these systems converge, both internally and externally, to ensure that when an event occurs, you know how to identify it, how to report it, to whom to report it, how quickly, and what to tell them when you report it.
Prakash Mallya 35:00
Understand, yeah. India has 5.3 crore cases pending, if I’m not wrong. Does AI present any hope of reducing the cycle of resolution for citizens? That’s part one of the question. And part two is: you talked about ChatGPT and people believing it as gospel. What do you think is the impact of AI on jobs in the legal space?
Karishma Sundara 35:32
Okay, so to answer the first question: yes, I think AI has tremendous value to add there. We do have a tremendous backlog of cases, often due simply to not having enough time to deal with them. And I think that’s where AI can perhaps provide immense value. We’re already seeing certain courts, like the Kerala High Court, looking at implementing it. And I think we just have to wait and see how those instances pan out before we see broader adoption and increased deployment of AI in these spaces. That’s when we’ll really be able to tell what the delta is in the provision of legal services. On the second question: legal tech is absolutely expanding at an unprecedented rate. We have law practices adopting AI in their day-to-day activities, and I think that’s bound to bring some change and definitely add value. But I think what we’re not thinking through well enough is how to blend that with, say, the incoming cohorts of young associates who join practices. And I think that’s a twofold issue. One, it’s about showing them what AI looks like in that workspace when they’re in college, as part of their legal education—showing them how to use it. Now, these are young people who, if I may say so, are very likely using AI every day, but showing them its specific use for work is, I think, going to be very important. I don’t just mean this from the perspective of understanding its research capabilities or its drafting capabilities.
I think it’s equally important to provide them guidance, and that can happen either at the internship stage or when a young associate comes in. Ultimately, a lot of practices are using AI to cut out the initial effort spent on voluminous research, on summarization, on a number of tasks that were traditionally used to build skill sets in young lawyers. So, what you have instead now are very young lawyers who have not yet learned these skill sets but are having to review documents without knowing how to review them. Part of knowing how to review something is knowing how to draft it. So much of this is going to be about understanding where that line is: imbuing them with the skills to draft and the skills to review, and then showing them how they can use those skills in an efficient way, using AI.
Prakash Mallya 38:42
Sorry to interrupt your thought. But in such a situation, what would your advice be for youngsters looking at a career at the intersection of law, technology and policy?
Karishma Sundara 38:58
So, the advice I would give them would be to use these tools, but use them judiciously. And the reason I say this is because there is a very real tendency to outsource things to AI tools, and you should be able to use them smartly and efficiently to outsource the right functions. The function you shouldn’t outsource is thinking, and I think that’s something which a lot of young people may not already be accustomed to when it comes to using these tools in a scenario like legal advice. I think that’s where guidance comes in. That’s where you need internal governance, so to speak—being able to show them how to use these tools in a way that benefits everyone: speeding up, say, research, enabling them to do things faster, quicker, maybe even better. To be very honest, AI is deeply democratic: people who didn’t have the ability to express themselves in a particular way now do. But the downside is using it without learning from it, in some ways, right? Being able to draft something in a particular way is very, very handy, but if you continue to outsource and never learn how to do it yourself, that’s going to be a problem. For example, it’s easy to rely on AI-generated drafts. But how do you deal with an ongoing negotiation? How do you deal with in-person meetings where you’re required to think and respond at that particular point in time, based on your understanding of the law and its implications? A lot of this needs to happen organically, and I think that’s what they have to keep in mind as they enter the space.
Prakash Mallya 40:50
What’s your vision for Kintsugi Law? Now that you are three or four months into it, what do you believe is the impact you can create in the TMT space, or in the legal space in India more generally?
Karishma Sundara 41:03
So, I think it’s a little too early to say, but the hope is that it will be able to provide what clients are looking for: senior practitioners who can give clear, concise advice that draws on years of experience navigating difficult compliance structures.
Prakash Mallya 41:31
So, I have a final question that I ask all my guests, which is as follows: India’s march into areas like AI, semiconductors, or deep tech is about much more than domestic growth. It’s about India’s place on the global map. If you fast forward 10-15 years, where do you believe India’s position is going to be at that point in time, and what needs to happen today, from your vantage point, to make that a reality?
Karishma Sundara 42:02
Interesting question. We’re looking today at an India of over 1.4 billion people, expected to remain the most populous country in the world all the way up until 2070. We still have a window between now and then, and I think that’s a window we’re going to need to utilize to ensure that we grow at the pace that we want to as a nation, at the pace that we want to for a population of this size. And I think one of the ways to do it is to not just use technology. Technology adoption has happened rapidly across the country, at different stages: almost every person you see in an open space has a cell phone; almost every person has a social media account. It is rapidly becoming part of everyone’s life. But I think what we don’t have is a deep-set understanding of how it functions, or what its impact can be. One of the gaps that we can address is on the educational side. For example, you have the government talking about introducing AI-based initiatives or building it into the curriculum, which I think is a very interesting approach, but it’s something we need to consider at every single stage. Where does it have its place in formal education? Where does it have its place in skill-based learning initiatives? And ultimately, where does it have its place across the cities and towns that we live in, so that we can implement it in ways that make a difference? For example, I remember seeing a few data sets that someone had posted on social media a while ago about usage of Bangalore’s Metro. It was interesting, because this was at a time when there were so many complaints—and I know there are often complaints about traffic and traffic jams. He was plotting specific routes and looking at where there was an increased level of traffic and where there wasn’t.
And wouldn’t it be wonderful if we were able to utilize data sets like this and combine them with other data sets to see where we have these jams, where we have issues, and create data-based solutions—solutions that ultimately lead to infrastructure that eases congestion, but that pivot on data.
Prakash Mallya 44:42
Well, that’s a great thought. It’s a great place to close. Thank you, Karishma.
Karishma Sundara 44:46
Thank you for having me.
Prakash Mallya 44:48
Not at all. You’re welcome. And for our viewers, thank you for supporting Chai & Chips. Please subscribe to our channel. We are available on YouTube, Apple Podcasts and Spotify. If you liked an episode, please share it with your family and friends, because that’s the way to spread the word. And lastly, if you have feedback on the kind of guests you want to listen to, the topics you want to learn more about, or your thoughts on any past conversation, please send me an email. So, with that, thank you very much.
