Given a list of words, where each word consists of English lowercase letters.
Let's say word1 is a predecessor of word2 if and only if we can add exactly one letter anywhere in word1 to make it equal to word2. For example, "abc" is a predecessor of "abac".
A word chain is a sequence of words [word_1, word_2, ..., word_k] with k >= 1, where word_1 is a predecessor of word_2, word_2 is a predecessor of word_3, and so on.
Return the longest possible length of a word chain with words chosen from the given list of words.
Example 1:
Input: words = ["a","b","ba","bca","bda","bdca"]
Output: 4
Explanation: One of the longest word chains is "a","ba","bda","bdca".
Example 2:
Input: words = ["xbc","pcxbcf","xb","cxbc","pcxbc"] Output: 5
Constraints:
1 <= words.length <= 1000
1 <= words[i].length <= 16
words[i] only consists of English lowercase letters.
Approach:
A naive approach would be to check every word against every other word looking for predecessors, but that would lead to a TLE result. The first important realization is that while a word may have as many as 26 * (word.length + 1) possible successors, it can only have word.length possible predecessors.
So rather than iterating from small to large words and checking every combination for a link, we can store the words in a set and only check the few possible predecessors of each word while iterating from large to small. To aid in that, we can separate the words into an array of sets (W) indexed by word length, so that we can directly access all the words of a given length.
(Note: As we iterate backward through W, if we find that W[i-1] is empty, we don't need to process the words in W[i], since there cannot possibly be a predecessor match.)
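As a rough sketch of those two ideas (the helper names predecessors and build_buckets are illustrative, not taken from any original code), generating a word's predecessors and bucketing the input by length could look like this:

def predecessors(word):
    # A predecessor is formed by deleting exactly one character, so a
    # word of length L has at most L distinct predecessors.
    return {word[:i] + word[i + 1:] for i in range(len(word))}

def build_buckets(words):
    # W[L] is the set of input words of length L, allowing candidate
    # predecessors to be checked with an O(1) set lookup.
    max_len = max(len(w) for w in words)
    W = [set() for _ in range(max_len + 1)]
    for w in words:
        W[len(w)].add(w)
    return W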
Then we can use a dynamic programming (DP) approach to eliminate some common subproblems. We can define a hashmap (dp) where dp[word] is the length of the longest chain found so far that begins at word and extends through successively longer words.
So at each word, we'll iterate through each of its predecessors (pred) and check the appropriate set in W for a match. If we find a match, we can update dp[pred] to dp[word] + 1 whenever that is better than its current value, extending the chain by one. We should also separately keep track of the best chain length we've seen, so that once we reach the end, we can just return best. Tracing Example 1 from longest to shortest: "bdca" sets dp["bca"] = dp["bda"] = 2, "bca" (and "bda") then set dp["ba"] = 3, and "ba" finally sets dp["a"] = dp["b"] = 4, so best = 4.
Python Code:
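A minimal sketch of the full approach, assuming the structure described above (the function name longestStrChain and the variable names are illustrative):

def longestStrChain(words):
    # Bucket the words by length so predecessors can be found with set lookups.
    max_len = max(len(w) for w in words)
    W = [set() for _ in range(max_len + 1)]
    for w in words:
        W[len(w)].add(w)

    dp = {}    # dp[word]: best chain length found so far that begins at word
    best = 1
    # Iterate from the longest words down to length 2; length-1 words have
    # no predecessors worth checking.
    for length in range(max_len, 1, -1):
        if not W[length] or not W[length - 1]:
            continue    # no predecessor match is possible at this length
        for word in W[length]:
            cur = dp.get(word, 1)
            for i in range(length):
                pred = word[:i] + word[i + 1:]    # delete one character
                if pred in W[length - 1] and cur + 1 > dp.get(pred, 1):
                    dp[pred] = cur + 1
                    best = max(best, cur + 1)
    return best

For Example 1 above, longestStrChain(["a","b","ba","bca","bda","bdca"]) returns 4, and for Example 2 it returns 5.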
- Time Complexity: O(N * M^2) where N is the number of words and M is the maximum word length, since each word has up to M predecessors and each one takes O(M) to build and hash; with N <= 1000 and M <= 16 that is at most roughly 1000 * 16 * 16 = 256,000 character operations.
- Space Complexity: O(N) for the word sets (W) and the dp hashmap, since dp only ever stores words from the input list (or O(N * M) if counting the stored characters).