Cyber threat intelligence and analysis providers play an essential role in helping security teams gain real-time insights into unknown threats. While threat intelligence feeds are essential to reducing exposure, they continue to face challenges such as lack of context and difficulty developing consistent standards. SecureDisruptions executive editor Jeremy Seth Davis interviews Grace Chi, co-founder and chief operating officer of Pulsedive Threat Intelligence, a free threat intelligence platform working to deliver more context to intelligence feeds so analysts can gain more reliable insights.
This transcript has been edited slightly for length and clarity.
Could you tell us a bit about yourself, what you do at Pulsedive and how you got started in infosec?
My journey is unconventional. I graduated with a double-honors in behavioral neuroscience and art history back in college, then moved into product marketing and business roles. My first stint with cybersecurity was purely happenstance. A pre-launch Israeli startup working with my employer at the time was developing AI detection technology and seeking market research and launch support.
That’s when I realized just how impressive the industry was, how quickly and how much new knowledge was being generated. Coming from industries like supply chain and real estate, security felt like a bullet train.
After supporting Pulsedive as a friendly outsider, I took the opportunity to join as a co-founder. Now I get to wear many hats as COO: working with users, maturing the product, and making sure the business is running and scaling.
Your background is really interesting. How did you get up to speed in a very technical space coming from such a unique background?
A lot of self-education, keeping myself surrounded by good people, and the internet. Even for other jobs, learning how to Google and leverage search results is a universal skill. I invested time early on in meeting new colleagues and communities who gave me direction on what mattered most foundationally—yes, there’s a lot of noise. Cybersecurity folks can be phenomenal in helping each other and helping new entrants. However, due to the velocity at which security technology and the entire industry develop, it’s also a challenge for everyone to stay up to speed every week without losing sight of the bigger systemic issues.
The other critical aspect is, as a solutions provider, your users know a heck of a lot more than you. If you build basic knowledge, then ask the right questions, they’ll share concrete examples and goldmine insights that are invaluable to understanding what’s really going on in the space.
What is your advice for young people who possess a limited background in infosec and think that they are “not technical”?
I talk to thousands of infosec folks over the course of the year. The majority come from computer science/engineering, IT, law enforcement, or the military. Those backgrounds have long dominated and continue to. But there are increasingly more bootcamps, online courses, programs, college degrees, and even K-12 initiatives aiming to build out that talent pipeline to fill growing gaps. And the value here isn’t just sheer “warm bodies” and skills (which is an issue), but also the diversity of thoughts and experiences that enrich how security teams function.
There’s no avoiding the technical foundations. If you want to work in a SOC, you need to learn networking and the fundamentals of security technologies. However, there is a consistent issue with hiring processes that require 3+ years of experience for an “entry-level” job and more certs than someone with student debt could even consider affording, much less study for and pass while applying to jobs.
A security team in the UK said one of their best interns was someone with essentially no experience but a serious spirit to absorb and learn. Within six months of some training, they were a superstar (albeit at a junior level): they got hired full time, wrote blogs about entry-level security knowledge, and created volunteer efforts with local high schools. While “non-technical” entrants do need to make a proactive effort to study and learn, I hope to see more from the employment side in building out apprenticeships and truly entry-level, on-the-job training opportunities.
What is the starting point that you recommend to young people who are interested in the industry?
Putting yourself in the cyber community—digital or physical—is really helpful. Trying to build an entire career for yourself without guidance is overwhelming and daunting. It’s still up to you to make decisions and apply advice received, not just follow someone’s sage wisdom blindly, but they’ve been there and might even be hiring for someone like you.
I did a short survey last year that discusses this topic. The basics that are needed in cybersecurity are an active knowledge of connected devices, understanding the fundamentals of cyber technologies, Python and Security+. Certifications depend on what path you’re interested in and trying to catch them all can be a big distraction.
I also recommend following news and curating it yourself to know what’s going on and who’s who around your topics of interest. Follow active researchers and influencers on Twitter and LinkedIn.
The survey highlights diverse “soft” skills. There’s a tension between this and the need for more “good geeks” in cybersecurity: highly technical professionals who can dive deep into technical details but might have low EQ and don’t score high on the traditional HR checklist. Are these conflicting qualities both needed?
The continued demand for soft skills can be a bit reactive due to how cybersecurity evolved. But that doesn’t mean extraordinarily technical professionals are excluded whatsoever. Really strong and productive teams have high trust, high openness, and a lot of diversity of skills, approaches, personalities. It’s a difficult mix to get right within the culture but just like any other business or team, it’s about bringing in complementary skills and experiences. I know a bunch of guiding voices who speak to neurodiversity, specifically in security – and how to nurture different types of talents for great reward all around. Nathan Chung created a NeuroSec podcast episode on this issue.
There are many non-technical roles that intersect with security: privacy, legal and compliance, risk management, policy, to name a few. Do you suggest this path for those who don’t (yet) have technical training?
Potentially, but not always. It depends on what the individual is looking to do. CyberSeek has a visualization of typical roles and how each career path lays out with the requisite skills.
GRC is a big area that falls between the lines and helps connect business with the more technical side. Integrating cyber teams within the larger organization yields major benefits. I believe we’ll see more demand for folks that have that holistic view and ability to communicate within and from the more technical teams in the years to come.
Take us through a recent workday. How do you work, and how has your routine changed recently?
My days and weeks, while there are routines, follow a cadence of high-intensity, overarching initiatives and strategies tied to longer-term projects, punctuated by recurring responsibilities. We prepare product launches, conduct user research, and assess new data sources just as we did before, and working from home hasn’t impacted our ability to execute.
Has Pulsedive raised funding, and is the company cash-flow positive?
We’ve been bootstrapped since founding, with a robust free/community offering that we intend to keep free. Consistent with what we’ve heard across the industry, it got a bit quiet in early-to-mid 2020 as the scope of the pandemic was taking shape and everyone was adapting. Around March-May, we hunkered down and focused on our top goals and planning around those. The threat intelligence market is still in very high demand, so we’re in a decent place now and expect to keep that trend going. As a bootstrapped company, revenue that comes in goes right back into Pulsedive to enable further growth and scaling.
The industry continues to suffer from security malaise and breach fatigue. How do we tackle this more effectively?
A big piece of this is building the right mix of people, technology, and process. Often these are seen as silos, or technology gets outsized attention. In truth, it’s all three working in conjunction.
I did a SOC best practices quick survey to find out what team members “on the ground” saw as highly effective ways to avoid burnout and low morale from a human perspective. Here’s what we found.
Simply celebrating the efforts—which are often invisible—and building a culture of talent investment empowers individuals to bring their best to the table. That matched with an attitude towards process improvement, standards, and open thinking about how to make each effort (whether investigation, response, remediation) a little better every time can be a game changer. And then of course, getting more powerful technology or tailoring it – the familiar build/buy/partner conundrum – boosts the team’s capabilities.
High turnover, or demoralized analysts simply “checking boxes” on tickets, demonstrates that something is clearly missing. The result is the same false positives cropping up, a lack of appropriate client-service expectations, or a lack of documentation that creates confusion and prevents revisiting past work, with no sense of agency to address any of it.
Neurologically we tend to think of our future selves as a stranger. Parts of the brain that light up when we have empathy for a friend, relative, or neighbor are inactive when we think of our future selves. It starts at the top—preventing burnout, keeping employees engaged.
Humans can withstand a lot of monotony, tedium, and also challenge if they consider it a valuable effort. You need team members engaged in order to bring the right tools in and get those results. Without thoughtful discovery, feasibility, thinking through requirements and the current environment, even the best tools can fall flat. And even after the “right” decision is made, it takes time and effort to continuously squeeze the most out of a tool with implementation and day to day operation.
A recent report found senior managers are more likely to share their work device outside the company than junior colleagues. This is a difficult population to engage on security issues. What have you learned about communicating and prioritizing cybersecurity as a c-suite issue?
Speaking the same language and framing discussions empathetically from the executive perspective, on why something matters, is what I’ve seen work best. Risk is big here. All the hard work being done in the team, researching and implementing technologies, must be measured and effectively communicated so that it can continue to get investment and buy-in from decision makers.
All this gets directly at the big topic of the day, the SolarWinds breach. We’re now hearing more discussion of the need for focusing on the basics and standardization. How can the industry actually deliver on this?
First off, a major shoutout to all the teams and individuals banding together, sharing knowledge, and helping to detect and remediate at a global scale. So far, FireEye’s been a role model in moving fast and putting what they’re working on out there, even as the situation unfolds every day. I want to emphasize the importance and impact of collaboration, especially at events of this scale.
This is a wake-up call. At minimum, it is validation that highly sophisticated, well-organized supply chain attacks can succeed and will continue to be attempted, even without novel technologies, and even where multiple potential threat actors are involved.
Without broader protections and guidance, the onus is on every small team, medium business, and major enterprise to be on the defensive and set up their security functions as best they can. Many of these teams are relatively new or under-resourced. This will be a serious effort in clean-up and response. It will be resource-intensive and without much coordination or organizational precedent. This will be a lesson in building the plane as it’s taking off, and hopefully playbooks can get reused for future events. The Cyberspace Solarium Commission (CSC) 2020 report is a reference for moving forward.
Researchers have been using Pulsedive along with other data sources in tracking the situation. What are the most interesting discoveries you’ve seen from researchers using your data?
This is one of the best use cases for Pulsedive: enabling teams as they research and produce finished reporting. The community platform helps enrich, contextualize, and share threat intelligence seen by individuals around the world. There are a few distinct moving parts in this recent situation: any new, publicly shared information is aggregated and enriched for our researchers, and our users manipulate the data they submit to and find in Pulsedive in myriad ways.
The most straightforward approach: a researcher searches an indicator (domain, IP, URL) and Pulsedive enriches it with passive or active scanning, pulling back valuable context like geodata, WHOIS records, metadata, SSL certificate details, linked indicators, screenshots, and other information. That first-hand data helps the researcher determine whether something is suspicious and warrants further investigation, or find shared infrastructure or properties.
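The enrichment workflow described above can be sketched in a few lines of Python. This is an illustrative sketch, not official client code: the `info.php` endpoint and `indicator`/`key` parameters reflect Pulsedive's public API documentation around the time of this interview, and the response fields (`risk`, `threats`) mirror the general shape of its JSON responses but should be confirmed against current docs.

```python
import json
from urllib.parse import urlencode

# Assumed endpoint: Pulsedive's documented indicator-lookup API at the time.
API_BASE = "https://pulsedive.com/api/info.php"

def build_enrichment_url(indicator: str, api_key: str) -> str:
    """Construct the lookup URL for a domain, IP, or URL indicator."""
    return f"{API_BASE}?{urlencode({'indicator': indicator, 'key': api_key})}"

def summarize_enrichment(payload: dict) -> dict:
    """Pull out the fields an analyst triages first from an enrichment
    response. Field names here are illustrative and may differ by API
    version."""
    return {
        "indicator": payload.get("indicator"),
        "risk": payload.get("risk", "unknown"),
        "threats": [t.get("name") for t in payload.get("threats", [])],
        "needs_review": payload.get("risk") in ("high", "critical"),
    }

# Example with a mocked (hypothetical) response, so no network call is made:
sample = {
    "indicator": "example-malicious.com",
    "risk": "high",
    "threats": [{"name": "Zeus"}],
}
print(json.dumps(summarize_enrichment(sample)))
```

In practice a researcher would fetch the URL, then feed the parsed JSON into a summary like this to decide whether the indicator warrants deeper investigation.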
If someone knows what threat infrastructure looks like, they can create queries across our database, building a profile and finding hits on a set of datapoints for further pivoting. We’ve seen submissions related to current events by well-known researchers and feeds, as well as community comments with links to reports, cropping up in the effort to share more about this attack and help teams all over the world working nonstop to discover known unknowns.
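The pivoting idea described above, profiling a set of datapoints and finding hits for further pivoting, can be sketched as grouping enriched indicators by a shared property. This is a minimal client-side illustration assuming enrichment results have already been fetched; the `properties` / `ssl_fingerprint` field names are hypothetical placeholders, not Pulsedive's exact schema.

```python
from collections import defaultdict

def pivot_on_property(enriched: list, prop: str) -> dict:
    """Group indicators by a shared property value (e.g. an SSL certificate
    fingerprint or registrant email) to surface possibly shared
    infrastructure."""
    groups = defaultdict(list)
    for item in enriched:
        value = item.get("properties", {}).get(prop)
        if value:
            groups[value].append(item["indicator"])
    # Keep only values shared by two or more indicators: the pivot candidates.
    return {v: inds for v, inds in groups.items() if len(inds) > 1}

# Hypothetical enrichment results for three indicators:
results = [
    {"indicator": "a.example.com", "properties": {"ssl_fingerprint": "ab12"}},
    {"indicator": "b.example.net", "properties": {"ssl_fingerprint": "ab12"}},
    {"indicator": "c.example.org", "properties": {"ssl_fingerprint": "ff99"}},
]
print(pivot_on_property(results, "ssl_fingerprint"))
# {'ab12': ['a.example.com', 'b.example.net']}
```

Here the shared certificate fingerprint ties two otherwise unrelated domains together, which is exactly the kind of lead an analyst would pivot on.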
While threat intelligence solutions have evolved in recent years, there are ongoing challenges of lack of context and transparency, poor benchmarking, and attribution difficulties. How are you working on these?
It’s a major issue which we’re constantly trying to work with teams to address. There’s no silver bullet. Even the largest and most sophisticated providers, who have troves of great data and visibility, are still limited in what they have access to.
Researchers from the Netherlands and Germany recently compared threat intelligence and found minimal overlapping data between the services. The research doesn’t mean there isn’t a need for threat intelligence, but that standards and measurement can only help the whole market of providers improve.
We know that the current state of threat feeds is confusing and hard to measure. We’ve invited teams to put us head-to-head in their environment or otherwise put our feed to the test, which is how we can refine our risk scoring algorithms and data. There’s still a lot of ambiguity. We often hear that feeds can yield false positives, have old data when cross-referenced, or lack context. Every organization is different with unique risks and threats, which makes standardization tougher as well. When we hear Pulsedive is “great” and dig in, it might be on very specific instances where a user discovered something malicious through our services or uncovered something shared across a tracked threat campaign – but all in all, we’re aiming to collect assessments beyond anecdotal notes.
It’s a work in progress and a lot to figure out. We try to make it easy for users to share threat data and information—such as IOCs and comments—so we grow access to current research activities, while ingesting enough high-quality open-source feeds tracking important threats to cover the known threat landscape. By performing scans, we’re pulling first-hand observations without risk to our users and developing historical context over time, which is particularly useful when infrastructure gets cycled quickly in high volumes. Our risk scores account for common risk factors to support a quick temperature check, answering basic questions an analyst might look to answer. We also provide all the raw data for further research. To address transparency of sources, Pulsedive makes it clear when data was added or a threat was linked, and by whom if there is a source feed, so there is no circular confusion or “source amnesia.”
There are many seemingly overlapping cybersecurity associations and coalitions. And yet, so much work remains. Do you see a void in security collaboration?
I wouldn’t say so. This industry is heavily built on the goodwill and efforts of individuals working together. The sheer number of open-source tools, GitHub lists of resources, and content by experts for experts is a testament to how much spirit of collaboration there is. ISACs are big and most industries have one, though each varies in maturity and participation. Countless private trust groups exist in Slacks and Discords for vetted intelligence professionals to more openly share resources or research that is otherwise TLP:AMBER. It’s impressive when you get into a group like Curated Intelligence and see the biggest names and junior analysts tackling issues together, reaching across company and industry boundaries to share best practices, helpful information, or evolving details on an exploit.
Then there are “pop-up”-like groups, for example the Cyber Threat Coalition and CTI League, that cropped up when the threats stemming from COVID were obvious in March/April, yet it was unclear who would step up and help mount a unified defense. What they were, and are, able to put together, all with volunteer coalitions, showcases how the cyber community unifies against cybercrime and cyber threats. I believe in the ability of security participants, likely buttressed by the biggest players and government support, to address gaps and needs in small iterations as we continue to hit challenges.
It’s so easy for companies to do automation poorly. How do you think intentionally about avoiding “shortcut automation”?
I ran a panel called SOC-re Bleu!, where SOC professionals spoke about their nightmare technologies. One speaker mentioned “SOC in a box”—the idea that everything an analyst does can be automated.
At this point, and considering the complexities of the environment, threat landscape, and organizational needs, I agree that this isn’t prudent. There’s a way to empower employees to spend less time on tedious tasks and even to develop AI/ML for internal detections. But a large swath of cyber teams and organizations need the simplest support in accessing usable information to fight the lower-hanging cybercrimes. We create different tiers of tools, starting from the most basic automation up to supporting highly customized automation that teams build out in house for their own needs.
Who are the security professionals you admire the most, and what have you learned from them?
These are a few leaders who I follow for their security expertise and because I’m inspired by their hustle. I really admire Katie Nickels (@likethecoins) for her insights into cyber threat intelligence. Troy Hunt (@troyhunt) provides so much value, not only through Have I Been Pwned?, but his continuous education and coverage of broad-reaching security topics. I’ve always been impressed by how Xena Olsen (@Ch33r10) seems to be involved in everything all the time, while still encouraging the best in the community.
What are you focused on achieving now, and what can we expect from Pulsedive in the future?
No two days are ever the same at a startup. At this exact moment, we’ve been wrapping up 2020—making our metrics list, double-checking our strategy, and preparing to put 2021 plans into action. From a product standpoint, this includes refinements to our interface, updated functional capabilities, and adding highly requested data types and integrations. From a business standpoint, we’re looking to continue working closely with our community and to highlight real user and researcher stories about how they’re leveraging Pulsedive’s various datasets and services, to help others find ways to get the most out of what we have.